The AI Train is leaving the station. Do we get on board?
Picture this. You are on a crowded platform at Central station waiting for an inbound train with an unknown destination - even the driver doesn't know where it's ultimately headed. You're at the station because you've been hearing for some time that a new train is going to a wonderful place and absolutely everyone will want to, indeed need to, go there eventually. The FOMO has overloaded both your senses and the platform with eager travellers - even though you've heard one or two rumours that the train still has some technical issues to iron out.
_______________________________________
I feel like this will be the defining choice of our world in the next few years - only it won't be a choice, because the AI revolution is already inevitable. It feels as though, to borrow a phrase from the Borg in Star Trek, "resistance is futile." Perhaps the only real choice is what kind of revolution it will be, and how we will embrace it without betraying our humanity and life as we know it.
This dilemma has been faced by people throughout the ages. New technologies are introduced; some people are early adopters and others resist - sometimes for fear of change, but not always. Sometimes that resistance is out of concern for what the innovation might do to virtue, vocation, and life as they know it. For example:
- Between the 6th and 12th centuries, some monastic communities resisted mechanical aids like water-powered mills because manual labour was seen as spiritually formative; efficiency threatened not productivity, but discipline, humility, and prayerful dependence.
- And between the 12th and 15th centuries, medieval craft guilds routinely restricted new tools and production methods, not because they hated innovation, but because unchecked efficiency threatened skilled labour, social stability, and the dignity of the craft.
- In the 17th century, despite early mastery of gunpowder weapons, Japan's Tokugawa shogunate intentionally limited firearm usage and foreign technologies to preserve social order and moral hierarchy, choosing cultural stability and the art of swordplay over technological superiority.
- More recently, in the early 19th century, the Luddites in England - skilled textile workers - sabotaged the mechanised looms that threatened their livelihoods. They weren't anti-technology in principle, but they opposed how industrial technology was deployed: without protections for workers. Many of those Luddites would become guests of the British penal system, transported to Australia for life. One notable example was John Slater, who arrived in Sydney in 1818 after being sentenced to life for his role in an 1817 factory raid. And their name, Luddite, would become synonymous with resisting technology without due consideration of the consequences.
I'm a GenXer, and I can still remember a time when:
- the contact list was a Teledex by the bed and a mobile phone was a landline with a really long cord.
- car windows had winders, seatbelts were optional and the only way to navigate was maps on your lap.
- the TV remote was my big toe and 5 channels was just fine.
- Thunderbirds at 5am on Saturdays, and Hogan's Heroes or Gilligan's Island after school were essential viewing - but after that, you had trees to climb and streets to ride.
- your playlist was a mixtape pirated off the radio, your Sony Walkman needed four AA batteries and Betamax was still an option at the video store.
- everyone wanted a Commodore 64, when floppy disks shrank from 5¼ to 3½ inches and we first heard the sound of a dial-up modem.
- scanning the Trading Post every fortnight was exciting, as was getting physical letters in the mailbox from grandparents or pen friends.
- you almost never took photos of yourself, and the only way to see them was to get the rolls processed at the local Kodak film shop when you got home.
- my mum told me to keep my distance from that new appliance in the kitchen that might be radioactive... the microwave. And "worth a google" was that prized set of encyclopedias displayed in the good room and the Yellow Pages in the kitchen.
I'm so thankful for these comical memories because they remind me that an analogue life worked quite well. It was a time when life was less instant, anxious and distracted, less polarised and entitled. A time when you would get bored, when you had to be patient, when knowledge never came without a price and you didn't have to keep asking yourself, "is this real or fake?" This is a perspective I know my Gen Z kids sadly can't know. They are "digital natives" and I'm clearly a digital immigrant.
We GenXers grew up in a wide-eyed, Star Trek generation where Man had already walked on the moon and everything new was embraced as one more giant leap for mankind toward a better future. And as I reflect on all those small incremental changes, they've all felt as though they were offering some benevolent service to the human race. (Perhaps that's why for the past 15 years we've all been so willing to give away the valuable personal data of every minute part of our lives to a machine that never forgets it and creates algorithms from it - in exchange for funny cat videos, social media, shopping and Google Maps.)
But up until very recently, I'd never felt afraid of the next big thing, because whatever it was, it was probably going to make life better or easier. And for the most part it has - till now.
Let's be clear: the AI revolution is unlike anything that has come before. It is not one more incremental shift to make life easier - it will permanently redefine life as we know it. I am now genuinely worried that techno-utopia is more anti-Christ than anti-dote to the problems of the world. Yes, of course, AI will offer some wonderful outcomes for humanity, perhaps even solving some big problems. And techno-humanists would have us believe AI will solve the biggest problem of them all - death itself. Wow, look at us go!
But for how long? In a recent 60 Minutes interview, Elon Musk said he believes AI and robotics are our only hope for keeping the globe out of economic bankruptcy. But when asked about the future of work, he admitted he doesn't like dwelling on the high probability of a future where any kind of repeatable mental labour (this has started already), and eventually any kind of physical labour (when robotics become mass market), will have minimal need for humans. Humans will all be redundant - surgeons, lawyers, bankers, accountants, consultants, managers, journalists and so on - and one day even truck drivers and tradies will struggle to compete with driverless trucks and 3D-printed homes.
And this isn't just Elon being Elon. If you read the mission statement of Sam Altman's OpenAI (makers of ChatGPT), it says: "OpenAI's mission is to ensure that artificial general intelligence (AGI) - by which we mean highly autonomous systems that outperform humans at most economically valuable work - benefits all of humanity."
How can this logic actually "benefit all of humanity"? How can a global economy function if half of its workforce has been superseded - not by cheaper labour in a developing nation, but by agentic AI or robotaxis (both here now) or Tesla Optimus robots (coming very soon)? The deep lie here is that the good life is a life without toil. But what if we need to toil? Work is not a curse on humanity; it is integral to humanity and our collective flourishing.
And will there come a tipping point, all of a sudden, where AGI, self-replicating humanoid robots and quantum computing converge in a way that makes Homo sapiens more akin to the way we view primates now - cute, endangered, exhibits in a zoo - surpassed in every way by a far more advanced species, only this time, of our own making?*
It's like raising a grizzly bear cub in your home. At first it's all cute, cuddly and fun to play with (like ChatGPT and Grok). But when it grows up, you have to hope that it doesn't become an untameable beast that looks at you the same way - fun to play with, or to eat!
____________________________________
My point is that this is not about incremental changes in technology which we can decide to take or leave. The difference this time is that we may be handing over our sovereignty as a species.
Human history is a record of conflict and competition. Yet despite our prideful scrambling for perceived superiority over others, we've somehow collectively maintained limits to that impulse in the belief that an equilibrium of competing desires must eventually be reached. We've come together to end conflicts; we've established agreements around trade, human rights, anti-slavery, nuclear non-proliferation, chemical weapons, environmental protection and so on. Covid exemplified our desire to bring a global response to a global problem. Solidarity and cooperation are our greatest weapons against self-destruction. Humans for the most part honour the dignity of human life. We fight, but we also love peace. We hate, but we also heal. We've always guarded our agency and our desire to return to some equilibrium of life together.
But what if humans were no longer the apex species? What happens when we can't just pull the plug out of the wall and delete the program? What happens when AGI no longer needs human programmers, resists human interventions and writes its own ethics? You can kill a virus like Covid, but what if we are creating a future in which, to AGI, we become the virus?
I know this is all sounding very Matrix-like, but even if there were only a 10% probability of such an outcome, would we take the risk?
____________________________________
So back to the train analogy. What do we do? Do we become neo-Luddites, exit the platform and flee to Tasmania (sorry, Tasmanians)? Or do we just get on board with the AI revolution and let it take us to whatever trans-human fate awaits?
The truth is, I really don't know. It feels like the titans of AI are evangelical about the upside and ambiguous about the shadow side - and we mere mortals are simply following blindly.
I'm writing this from that space on the train platform, in anxious indecision and theological caution. Not to solve it, but to express it, to remember life as it was, to mark a timestamp of life before the "AI revolution" deletes me, and my future self has to say to my current self, "I told you so."
As I said earlier, our choice is not whether the AI revolution happens - it already has. The only real choice is what kind of revolution it will be, whether there will be any guardrails at all, and how we will embrace it without betraying our humanity and life as we know it.
And who will influence that outcome? Will Christians be as indifferent as the rest of the population? Will we be mute on what might become the mother of all issues in human history? I hope not. To feel neither responsible for, nor powerful enough to do anything about it, would be to believe a lie.
Where might we begin? Perhaps just make a choice to sincerely engage with the news and commentary already available and become vocal in the public domain (see resource links below). And let's all agree...
RESISTANCE IS NOT FUTILE
In one of my favourite sci-fi films of all time, Christopher Nolan's Interstellar, Dylan Thomas's poem offers a sobering refrain, both for the fight for our humanity and against the enemy of death.
Perhaps it's to be our refrain too.
DO NOT GO GENTLE INTO THAT GOOD NIGHT
RAGE, RAGE AGAINST THE DYING OF THE LIGHT.
Resources to explore:
Read
- ISCAST - Christianity and Science in Conversation website here
- Yuval Noah Harari's book Homo Deus: A Brief History of Tomorrow
- Catholic discourse on the subject here
- Praxis' discussion on Redemptive AI here
- *Go (re)watch the Planet of the Apes films, based on Pierre Boulle's novel, and think about what they say about the nature of humanity and the fate of whoever has dominion.
