Part one of six: a response to two recent viral posts: Something Big is Happening and The 2028 Global Intelligence Crisis. (Translation into other languages available here)
I remember the public information films of the 1970s and 80s, when nuclear annihilation was treated as a practical household problem. We were told: stay indoors, close your curtains, place yourself under a table. The assumption that after a thermonuclear exchange you’d want to get the curtains right was presented without irony. We lived with that terrifying background hum for decades. Thankfully the worst-case scenario didn’t materialise, not because it couldn’t, but because human beings made good choices that prevented it.
That memory came back to me this month, reading two articles that have been doing the rounds, suggesting that AI is an almost existential threat. In the first, AI entrepreneur Matt Shumer describes watching his own job disappear – not to a competitor, but to the very AI tools he helped build. The second, by the financial research group Citrini, is a fictional retrospective written from June 2028, in which AI-driven job displacement triggers a financial crisis that makes 2008 look like a rehearsal.
Both pieces are long, and in their own way, terrifying (if you haven’t already read them, they’re worth your time). But while they deserve to be taken seriously, so does our own track record of panic, because the pattern is familiar: a new technology arrives, early adopters say it changes everything, and people who don’t understand it say that’s what they always say. So far, reality has always landed somewhere in between, but not before a period of genuine fear.
Why this time feels different
What I have written so far may sound reassuring – we’ve seen this before and it’s been OK. But reassurance and complacency are easily confused, and I’m reminded of Bertrand Russell’s fable of the chicken who, having been fed every day of its life, concludes that it will be fed tomorrow too. And it is, until the day the farmer wrings its neck. Russell’s point was that the chicken’s confidence in the future was based on perfectly good evidence – until the day it wasn’t. The alternative to being Russell’s chicken is not, however, to be the ostrich – head buried firmly in the sand, convinced that if you don’t look at it, it isn’t happening. I’ve spent enough years in education to know that this is a popular strategy among teenagers and Heads of College alike. It may not end well.
One reason this feels different is that there is something about AI that cuts in a different way to other panics. Freud famously argued that humanity has suffered three great wounds to its self-image: Copernicus showed we’re not at the centre of the universe; Darwin showed we are animals; and Freud himself (modestly) showed that the animal is sick. Each was met with fierce resistance, because each threatened something fundamental about what it means to be human. AI may be the fourth wound. If a machine can write, reason, create, judge, and even show something that looks remarkably like taste, then it feels like there may be nothing left for us.
The evidence backs the feeling. Data from the research organisation METR shows that the length of tasks AI systems can complete has been doubling roughly every seven months, and suggests the rate itself may be accelerating. AI systems are now contributing to their own development, and the people building them talk seriously about a point at which AI improvement becomes self-sustaining and potentially beyond human control. Whether this materialises or plateaus may be the most important empirical question of our time. Nobody knows the answer. But the possibility that the trajectory holds is what makes this different from previous moral panics. Previous threats – even the nuclear one – were about the destructive application of human intelligence and held out the possibility that, applied more carefully, our own intelligence might solve them. This one is about whether human intelligence itself remains the scarce and special thing we’ve always assumed it to be.
And yet. I gave a talk recently in which I shared some data that I think matters here, drawing on the wonderful Fix the News website:
- Every single country in the world today has a lower child mortality rate than it had in 1950.
- In the early 19th century, 85% of the world lived in extreme poverty; today it’s 9%.
- Global life expectancy has gone from 30 to 72 in two centuries.
- Literacy has risen from 12% to 83%.
- The number of people in extreme poverty has fallen by an average of 118,000 every day for 25 years.
- We may be on the brink of a genuine energy transition – solar is now 41% cheaper than fossil fuels on average, and China alone added more solar capacity in the first half of 2025 than the United States has in its entire history.
These are six facts out of a million equally good ones. Now, you could argue this is Russell’s chicken again. Perhaps. But a track record is still evidence, and ignoring it is to be an ostrich in a different way. These things happened not because the future was guaranteed, or because arriving here was in any way inevitable, but because enough people made good enough choices, often enough, over a long enough period for us to arrive (despite all our flaws and failures) at a genuinely better point than any previous moment in human history.
The Citrini problem
The predictions of the Citrini piece are dramatic, but it’s the structural argument that matters most. The scenario describes a self-reinforcing feedback loop with no natural brake: AI gets better, companies cut staff, displaced workers spend less, companies invest more in AI to protect margins, AI gets better. Each individual decision may be rational, but the spiral has no floor and the outcome is catastrophic. And this time, it is intellectually and morally dishonest to wave it away with the standard ‘technology destroys jobs but creates even more’ argument that has held true for two centuries. However painful the transition (and it has often proven very painful), previously displaced workers could retrain for the new roles that technology created, so the net effect was more and better jobs. But if AI is a general substitute for cognitive work, then the new roles will also be done by machines, and the usual escape hatch may not exist.
Now, the Citrini piece is explicitly a fiction, not a prediction, and it privileges clean narratives over messy reality. Real economies are more resilient, more adaptive, and more surprising than any model. The 2008 financial crisis, for all its devastation, did not produce the collapse that some predicted; COVID-19 was met with a vaccine developed in under a year. Humans are good at muddling through.
But muddling through is not a strategy; it’s a description of what we do when we don’t know what to do, and there is no guarantee it will always work. Nobody will muddle through an asteroid strike. So is AI disruption more like a storm we can weather or an asteroid we need to deflect? The honest answer is that no-one knows. What we do know is that the Shumer piece – with its personal, visceral, this-happened-to-me-on-Monday quality – should unsettle anyone who thinks this is still abstract. The asteroid may or may not be coming, but something is getting brighter in the sky.
So what do we do?
I’m not going to pretend I have a structural answer to the Citrini problem. If the retraining escape hatch is genuinely closed and there is widespread displacement, then we need economic and institutional responses that don’t yet exist – and building them is work for economists, policymakers, and political leaders. What I can speak to is the human piece that any structural solution requires.
The quality we will all require is nerve. Not confidence or optimism, but something harder and more demanding: the agency and determination to act when the outcome is genuinely unknown, because action is precisely what makes good outcomes possible. Nerve in this sense is not the belief that things will work out; it is the discipline to act well whether or not they do.
The Stoics understood this. Their position is not passive acceptance, despite the caricature – it is the most disciplined form of agency there is: radical clarity about what you cannot control, followed by total commitment to what you can. Marcus Aurelius wrote his Meditations while commanding an army and managing an empire in the middle of a plague. He knew a thing or two about holding his nerve, and he did the work not because he knew how things would work out, or even that they would work out – but because the work was his to do regardless. We can’t control what OpenAI or Anthropic build; we can’t reshape the global economy from the outside, but we can make determined choices about how we raise the next generation, and how we think about what it means to live a good life in uncertain times. That is not a small thing; it may even be the whole thing.
And there is evidence that nerve works. Those six facts I cited earlier – child mortality falling in every country, extreme poverty collapsing, life expectancy doubling, literacy rising from 12% to 83%, 118,000 people lifted out of poverty every single day – none of them was inevitable or even obvious beforehand. They happened because enough people, over a long enough period, made good enough choices often enough for the trajectory to hold. That is the track record: not of blind hope, but of human beings repeatedly holding their nerve under pressure, across centuries.
The nuclear threat is the case in point, because the parallels are closer than they might seem. That threat didn’t vanish because it was imaginary. On the contrary, it was sufficiently serious that we acted, and the problem became the raw material for the solution. Slowly, unglamorously, it was contained over the decades by diplomats who sat in rooms with people they didn’t trust, by scientists who argued for restraint when their governments wanted escalation, by activists who marched when marching seemed futile, by politicians who chose negotiation over posturing. None of them knew it would work; many probably doubted that it would. They did it anyway, not because they were optimists but because they understood that the alternative to acting was not safety but surrender. That is what nerve looks like in practice.
The question with AI is whether we can summon the same collective will, and whether our institutions are capable of moving fast enough. That is genuinely uncertain, but the capacity is there, and this is where the work of parents and educators lies. Any crisis must ultimately be navigated by people who think clearly, act collectively, and hold their nerve. Developing those people is, therefore, an essential part of the broader solution to the problem. Every treaty that contained the nuclear threat was negotiated by someone whose parents and teachers helped form their judgment. Every march was joined by someone who had learned that showing up matters even when the odds are long.
This doesn’t mean controlling the next generation or telling them what to think, still less pretending that we have the answers. It means having the kinds of conversations that develop curiosity, judgment, resilience, intellectual humility, and the disposition to contribute to something larger. Not because these qualities are nice to have, but because they are the operational requirements for navigating what’s coming. Education at its best allows younger generations to draw on the hard-won wisdom of older ones, rather than assuming they need to start from scratch – or worse, move fast and break things without understanding them first. We can’t inoculate our children against an uncertain future – but we can develop in them the capacity to meet it.
In his marvellous Culture novels, Iain M. Banks imagined a post-scarcity civilisation where benevolent AI had long ago surpassed human intelligence, solved the resource problem, and liberated people to pursue meaning, art, adventure, and connection. Citrini also imagines a world where AI makes humans economically redundant, but the result is catastrophe. Same starting point, opposite outcomes. So the sky may be falling – it’s fallen before, and we’re still here. But which version of ‘still here’ we get depends on the choices we make, and on us having the nerve to make them.
References
- Aurelius, M. (c. 171-175 CE). Meditations. (M. Hammond, Trans., 2006). Penguin Classics.
- Banks, I. M. (1987-2012). The Culture series. Orbit/Macmillan.
- Citrini Research & Shah, A. (2026, February 22). The 2028 Global Intelligence Crisis: A thought exercise in financial history, from the future. Citrini Research. https://www.citriniresearch.com/p/2028gic
- Freud, S. (1917). A difficulty in the path of psycho-analysis. In The Standard Edition of the Complete Psychological Works of Sigmund Freud (Vol. XVII, pp. 135-144). Hogarth Press.
- Hervey, A. (n.d.). Fix the News. https://fixthenews.com
- METR. (2026). Time Horizon 1.1. https://metr.org/blog/2026-1-29-time-horizon-1-1/
- Russell, B. (1912). The Problems of Philosophy. Williams and Norgate. Chapter VI: On Induction.
- Shumer, M. (2026, February 9). Something big is happening. shumer.dev. https://shumer.dev/something-big-is-happening