The Pope has delivered his message. The Archbishop of Canterbury has delivered his. Many other patriarchs have aired their self-serving hopes for the year ahead. Now, if you’ll forgive me, it’s my turn, as we come to the end of the second decade of the 21st century and look forward to what the 2020s and beyond may bring. Yes, I’m being a little presumptuous here, placing myself alongside such notable personages, but since they’ve generally only spouted their usual platitudes, it’s fair game. Let me say first of all: happy new year to all of you, not only here in the UK but arrrrround the worrrrld!
I have chosen as my inspiration for this missive Olaf Stapledon’s Last and First Men, a work of science fiction penned nearly one hundred years ago which made a lasting impression on me as a teenager. In a nutshell, it considered the future of humanity not just in the near term (the next ten thousand years, say) but millions of years hence, to the point when whatever humans had by then become had left planet Earth. The singular point of the book, and perhaps what made it revolutionary for me, is that it looked so far into the future, well beyond our temporal parochialism. Today, by way of contrast, as the latest Star Wars iteration demonstrates, popular science fiction has merely modernised eighteenth-century concerns with a dose of souped-up technology. Or take Star Trek: whilst acknowledging ethnic diversity, its starship was still the USS Enterprise, captained of course by an American. Do we think nationalities will survive the next five thousand years? Even another hundred? That’s a good question not just about some long-distant future but about today, when nationalism seems to be the flavour of the month on this little planet of ours, and of course threatens our continued existence for the daftest of reasons.
So, to take the Stapledon long view, what will humans be like in one billion years’ time? To ask the question at all assumes our species will survive climate and biosphere change (many times over), asteroid collisions, earthly eruptions, resource depletion and so on. I feel quite sure our species will not resemble anything like we are today, and our existence will likely be extra-planetary. The resource we will need most, energy, will be found elsewhere. Perhaps ‘humanity’ will have morphed into a nomadic floating gassy cloud, without a destination, pointless in fact, but living in an intelligent form. Naturally I acknowledge that no known physics supports this possibility, but science has a long way to go. If we think we have nailed all the laws of the universe after 10,000 years or so of civilisation, the next billion years are going to be quite boring on that front.
How we develop will be down to sentient evolution rather than the accidental kind posited by Charles Darwin. Darwinian evolution relied on probabilities and connections which were nevertheless purposeless: one development could always be outmatched or destroyed. Dinosaurs were for well over a hundred million years perfectly evolved for their circumstances, but they could not adjust to circumstances they could not foresee, and consequently were wiped out. Sentient evolution makes it possible to conjecture how we could develop, and to make allowances for unintended consequences. We can now envisage how we might avert that asteroid strike (though without Bruce Willis, would it work?). This is not to say every possible retrograde development could be avoided; sentient evolution is struggling with climate change, for example. But if we survive this, and many of our species will, then our evolutionary path will accelerate.
An important component of this, which seems inevitable, is the spread of artificial intelligence (AI). Stephen Hawking and others have seen in AI an existential threat; James Lovelock, by contrast, has seen positives. We are at the very dawn of this revolution, but if it does play out positively then AI will become a chief tool of sentient evolution. It will not remain external to humans but will become part of our very being, just as all sorts of bits and pieces of our bodies can be replaced today. AI will one day make a grand entry into that last bastion of our organic structure: the brain. Not in our lifetimes, though, so no need to worry just yet.
That said, I predicted twenty years ago (to a meeting of the Morley Labour Rooms luncheon club) that the shrinking size of mobile telephony would lead to mobile phone implants inside the skull, in effect creating real telepathy. It hasn’t happened yet (so far as I know); probably the right connections have yet to be sussed out.
Technology, as we know, develops at a much faster pace than the legislation required to regulate it. This is especially so today, when so many politicians simply do not understand the technology and, because of the power of the tech giants, are prone to imagine that ‘self-regulation’ is sufficient. One possible approach to dealing with this ever-present time lag is a ‘public good’ test: any technology brought to market must demonstrate its value (or harm) to the public good. This is no more revolutionary than the rules already in existence to ensure that drugs are safe. As ever (cf. the US opioid crisis), such rules are not a panacea, but they do provide the foundations for enforcement, if indeed they are enforced.
One of the tests humanity will have to grapple with in the not-too-distant future (perhaps in just a few hundred years) will be living in a more mono-cultural world: a world, as is already happening, diminished by the decimation of species, resources, land, crops, water, borders, living space and whatever else we think this planet is super-abundant in. This will be the next big test for sentient evolution, all things being equal. Will serious preparations begin in 2020?
Once again, happy new year!