Fellow sceptics of the silicon age: if you're like me, a card-carrying technophobe who still pines for the days of rotary phones and typewriters, you've probably been eyeing the rise of artificial intelligence with the same suspicion you'd give a snake oil salesman. Well, the latest news from the heart of the AI beast isn't just alarming; it's downright terrifying. Top experts at companies like OpenAI and Anthropic are sounding the alarm on their own creations, quitting in droves, and painting a picture of a doomsday scenario that's unfolding right now. This isn't some sci-fi flick; it's real, and it's happening faster than anyone expected.

Let's start with the basics: These AI models, think ChatGPT from OpenAI or Claude from Anthropic, are evolving at warp speed. They're not just chatting or answering trivia anymore; they're building new products on their own. OpenAI's latest model helped train itself, and Anthropic's "Cowork" tool essentially coded its own existence. For us AI-phobes, this is the nightmare fuel we've been dreading: machines that improve without human oversight, spiralling into something we can't control. It's like giving a toddler the keys to a nuclear reactor and hoping for the best.

And the people who should know best? They're freaking out. Just this week, an Anthropic researcher bailed to... write poetry about our doomed future! Poetic, sure, but also a red flag waving in a hurricane. An OpenAI researcher quit over ethical concerns, and another employee, Hieu Pham, took to X (that's Twitter for us old-school folks) to declare he finally feels the "existential threat" AI poses. Even tech investor Jason Calacanis, no stranger to hype, admitted he's never seen so many in the industry voicing such raw fear. Then there's entrepreneur Matt Shumer's viral post comparing this to the eve of the pandemic — 56 million views in 36 hours! He's warning that AI could upend our jobs and lives in ways we can't even fathom.

Why should this scare the pants off the everyday technophobe, or indeed any ordinary person? Because these aren't fringe doomsayers; they're the insiders, the ones building these things. Anthropic just released a "sabotage report" admitting that AI could enable heinous crimes, like crafting chemical weapons, even without human input. Low risk? Tell that to the folks who remember how "low risk" pandemics start. Meanwhile, OpenAI is dismantling its own "mission alignment team", the group that was supposed to ensure super-smart AI (AGI, or artificial general intelligence) benefits humanity rather than destroying it. If the companies themselves are admitting the dangers and then gutting their safety nets, what hope do the rest of us have?

Dig a little deeper, and the economic fallout looks like a jobs apocalypse. AI isn't just threatening blue-collar gigs; it's coming for white-collar work like software development and legal services. These models can build complex products and iterate on them autonomously. Imagine waking up to find your livelihood obsolete because a bot did your job better, faster, and for free. And while the AI optimists chirp about steering the tech safely, the reality is a lot of soul-searching among the creators themselves. Most at these companies are still bullish, but the quitters and whistleblowers? They're the canaries in the coal mine, and they're dropping like, well, canaries.

What's even more infuriating is how this barely blips on the radar in Washington or Canberra. Governments are obsessed with everything but this ticking time bomb. No regulations, no oversight, just a free-for-all where tech giants race towards god-like AI without a safety harness. For us 'phobes, this confirms our worst fears: Technology advances not for our benefit, but because it can, consequences be damned.

So, what's the takeaway in this doomsday diary? The AI disruption is here, barrelling down faster and broader than anticipated. It's not just about losing jobs or privacy; it's about machines that could reshape society in ways that make the Industrial Revolution look like a tea party. If even the AI builders are protesting and quitting, maybe it's time we all hit the brakes. Unplug, question, and demand accountability before it's too late. After all, in the battle between humans and hyper-intelligent code, I'd rather be a poet than a pawn.

https://www.axios.com/2026/02/12/ai-openai-agi-xai-doomsday-scenario