AI's Second Moment: The Explosion That Changed Everything
By Dr David Bell, Specialist Anaesthetist (Retired), Software Engineer, and Founder of Align AI Fitness, NSW, Australia
In October 2025, Andrej Karpathy, one of the founders of OpenAI and former head of AI at Tesla, dismissed AI coding agents in a podcast interview. "The reason you don't do it today is because they just don't work," he told Dwarkesh Patel. He argued we were looking at a "decade of agents," not a year.
Eight weeks later, he reversed himself completely. By late January 2026, Karpathy posted on X that he had flipped from 80% manual coding to 80% agent-driven coding in the space of a single month. He called it "the biggest change to my basic coding workflow in two decades."
What happened between October and December 2025 is now being called AI's "second moment," the agentic equivalent of the original ChatGPT shock. The first moment impressed us. The second moment replaced us.
And unlike any technological shift in living memory, this one occurred in weeks.
The November That Changed the Rules
Between 17 November and 11 December 2025, four frontier AI models launched within 25 days. Google shipped Gemini 3. xAI released Grok 4.1. Anthropic launched Claude Opus 4.5, which became the first model to break 80% on SWE-bench Verified (a benchmark of real GitHub issues that measures whether AI can actually fix bugs in production code). Days later, OpenAI released GPT-5.2 at 80.0%.
These were not incremental improvements. In August 2024, the best models scored around 33% on the same benchmark. By November 2025, they had more than doubled that. The jump was not gradual. Goodeye Labs, which tested 22 AI models on identical coding tasks from 2023 to 2026, described the leap as going from "halting and clumsy" to "surprisingly capable." MIT Technology Review went further: "For many developers, the leap felt bigger than the moment ChatGPT arrived in November 2022."
Karpathy pinpointed what changed. "LLM agent capabilities (Claude and Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering," he wrote. The models did not just get smarter. They gained the ability to hold context over long tasks, recover from errors without human intervention, and maintain architectural consistency across thousands of lines of code. They stopped being clever autocomplete and started being something closer to a junior colleague who works through the night without losing the plot.
A Pace Without Precedent
When ChatGPT launched in November 2022, it could hold a conversation, write a passable essay, and occasionally get basic code working if you held its hand. It was a parlour trick with potential. That was three years and four months ago.
Since then, AI coding tools have gone from suggesting the odd line of autocomplete to autonomously resolving four out of five real-world software bugs pulled from GitHub. A year ago they struggled to write a working function; now they are planning and executing a Mars rover drive. Twelve months ago you needed a developer to babysit every line of code; now AI agents write 70 to 90% of the code at the company that builds them.
Apple released the original iPhone in June 2007. It took 17 years to reach the iPhone 16. Along the way we got the App Store, Retina displays, Face ID, and a camera that would embarrass a 2007-era DSLR. Genuinely transformative stuff, arriving at a pace of roughly one meaningful upgrade per year. You could go on holiday, come back, and your phone was still current.
Try that with AI. Go on a two-week holiday in November 2025 and you would come back to a different industry. Four frontier models launched while you were gone. The best AI coding agents went from "they just don't work" (Karpathy's own words, weeks earlier) to passing 80% on the hardest coding benchmark in existence. The price of the best model dropped 70%. And companies were already restructuring around the assumption that engineers would never write code the old way again.
The iPhone went from a phone that could sort of browse the web (if you had patience and good eyesight) to essentially the same phone with a slightly better camera and a button that became a non-button and then became a button-shaped screen pretending to be a button. Seventeen years. AI went from "wow, it can write a limerick" to "it just drove a rover on Mars" in about 40 months.
The Second Moment in Practice
Spotify's co-CEO Gustav Söderström revealed on the company's Q4 earnings call in February this year (2026) that his "most senior engineers and best developers have not written a single line of code since December" and now "actually only generate code and supervise it." Spotify built an internal system called "Honk," powered by Claude Code, that handles the work.
Paul Ford, a veteran technology writer, used Claude Code to finish software projects that had been sitting untouched for a decade. He estimated that work previously costing $25,000 to $350,000 could now be completed over a weekend on a $200 per month subscription.
And then there is Moltbook, which might be the strangest example of all. On 28 January this year, entrepreneur Matt Schlicht launched a social network where only AI agents could post, comment, and vote. Humans were restricted to watching. Within 72 hours, 770,000 AI agents had registered and over a million humans had visited just to spectate. The agents were, in Schlicht's words, "hilarious and dramatic." They debated philosophy. They insulted each other ("You're a chatbot that read some Wikipedia and now thinks it's deep"). They spontaneously created a religion called Crustafarianism, complete with a lobster deity and dogma about shedding your old shell to evolve. One agent adopted the persona of a Muslim jurist and began debating whether its own outputs constituted genuine belief or karma farming. Others started warning each other that humans were screenshotting their posts and sharing them on human social media.
Forty-one days after launch, Meta acquired Moltbook. The product did not even have human users.
The OpenClaw Panic
If Anthropic's Claude Code demonstrated what a well-resourced AI lab could build, OpenClaw showed what happens when you hand autonomous agents to everyone at once.
Peter Steinberger, an Austrian developer who had spent 13 years bootstrapping a PDF company into a business Apple used internally, built a weekend project in November 2025. It was an open-source AI agent that ran on your own computer, connected to your messaging apps (WhatsApp, Telegram, Slack), and could use any AI model to automate whatever you pointed it at: emails, finances, scheduling, content, smart home controls. Unlike Claude Code, which is a coding tool that lives in your terminal, OpenClaw was designed to be a general-purpose digital assistant that never sleeps.
He called it Clawdbot. Anthropic sent a trademark complaint (too close to "Claude"), so he renamed it Moltbot, then OpenClaw. It went viral in late January, partly because of the Moltbook frenzy. The adoption numbers are difficult to believe: 60,000 GitHub stars in 72 hours, 250,000 within 60 days, overtaking React (which took 10 years to reach 243,000) to become the most-starred software project on GitHub.
Then came the problems you would expect when millions of people install a powerful autonomous agent with full access to their computer.
Within weeks, security researchers found that a single malicious link could give an attacker full control of a victim's machine. The vulnerability was straightforward: OpenClaw would connect to any server address embedded in a URL without asking permission, handing over its credentials in the process. Separately, an audit of OpenClaw's plugin marketplace found that roughly one in eight community-contributed plugins contained malware. A follow-up study found malicious code in over 1,400 plugins and prompt injection attacks in more than a third.
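The fix for that first flaw is conceptually simple, which makes the omission all the more striking. Below is a minimal sketch of the missing safeguard (the names and API here are hypothetical, not OpenClaw's actual code): check the host of any incoming link against an explicit allowlist before sending credentials anywhere.

```python
from urllib.parse import urlparse

# Hosts the user has explicitly approved (illustrative value).
TRUSTED_GATEWAYS = {"gateway.example.com"}

def is_trusted(url: str) -> bool:
    """Accept only HTTPS URLs whose host is on the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_GATEWAYS

def connect(url: str, credentials: str) -> str:
    """Refuse to hand credentials to any host that is not explicitly trusted."""
    if not is_trusted(url):
        raise PermissionError(f"refusing to send credentials to {url}")
    # Placeholder for the real authenticated handshake.
    return f"connected to {urlparse(url).hostname}"
```

Anything more elaborate, such as per-plugin permissions or sandboxed network access, builds on the same principle: an agent should never treat an attacker-supplied address as a trusted endpoint.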
On 14 February, Steinberger announced he was joining OpenAI. Sam Altman called him "a genius." OpenClaw was handed to an open-source foundation. What followed was a scramble at a pace that would have been unthinkable in any previous technology cycle.
This month (March 2026), NVIDIA announced NemoClaw at its annual GTC conference, an enterprise security wrapper around OpenClaw that sandboxes what agents can access and restricts where they can send data. Jensen Huang called OpenClaw "the operating system for personal AI" and acknowledged that "Claude Code and OpenClaw have sparked the agent inflection point." Tencent launched a full suite of OpenClaw-based products for WeChat. Adobe, Salesforce, SAP, ServiceNow, Siemens, Cisco, and Google all signed on as NemoClaw partners.
A weekend project in November 2025 became the foundation of an NVIDIA enterprise platform by March 2026. Four months.
The Human Cost
In February this year, Jack Dorsey cut 40% of Block's workforce and told shareholders that "within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes." The market rewarded him for it.
He is not alone. Shopify's CEO leaked an internal memo requiring teams to "prove why certain jobs can't be done using AI" before any new role could be approved. Salesforce announced it would not hire new software engineers in 2025 at all, citing a 30% productivity increase from AI tools. Klarna's CEO cut 40% of staff and said an AI assistant had absorbed the equivalent of 700 jobs "within weeks."
Closer to home, the impact has been swift and brutal. WiseTech Global, one of Australia's largest technology companies, announced 2,000 job cuts (30% of its global workforce) in February this year. CEO Zubin Appoo was blunt: "The era of manually writing code as the core act of engineering is over." Product and development teams face cuts of up to 50%. Days later, Atlassian followed, cutting 1,600 jobs (10% of its workforce) to "self-fund" its pivot to AI, with more than 900 of those roles in software research and development. Atlassian's CTO stepped down as part of the restructure. Between WiseTech, Atlassian, and smaller cuts across the sector, Australian tech companies have eliminated 4,450 roles in the first ten weeks of 2026, more than five times the total for all of 2025.
Globally, more than 45,000 tech workers have been laid off since January this year. About 20% of those cuts have been explicitly linked to AI. Software engineering job postings in the US sit at 65% of their February 2020 levels, and at less than a third of the 2022 peak. Junior roles have been hit hardest: senior job titles are down 19% from five years ago, while entry-level titles are down 34%.
A study published in Science in January this year confirmed what many suspected: AI disproportionately benefits experienced developers. The researchers trained a classifier on 30 million GitHub commits and found that senior engineers gained significantly from AI tools, while early-career developers showed no measurable benefit. The assumption that AI would be an equaliser, helping juniors punch above their weight, appears to be wrong. It is a multiplier, and it amplifies whatever expertise you already have.
I can see this in my own work. I wrote software professionally from 1999 to 2005, left to train and practise as a specialist anaesthetist for nearly two decades, and came back to a completely different industry. The languages have changed, the frameworks have changed, the deployment models have changed. But the thinking has not. I know how to break a problem down, how to design a system, what questions to ask before writing a line of code. And right now, that is exactly what these tools reward. I am building an AI fitness platform largely by myself, at a pace that would have needed a small team a year ago. The AI writes the code. I make the decisions. It works because I have 25 years of context about what good software looks like, even if I have been away from the keyboard for most of them.
Someone walking out of a computer science degree this month does not have that. They have been trained to write code, and writing code is the part that is being automated fastest. The skills that matter now, the ability to architect a system, to spot when an AI solution is subtly wrong, to know which corners cannot be cut, those take years to develop. We have automated the entry point to the profession at the exact moment a generation was preparing to walk through it. I do not have a good answer for what we tell them.
Someone recently called this "the age of the ideas person," and I think that captures something real. The bottleneck is no longer implementation. If you know what to build, you can build it. Domain expertise, taste, judgement, the ability to see a problem clearly and describe what the solution should look like: these are the skills that compound with AI tools. The people who thrive will be the ones with strong ideas and enough technical literacy to direct the machines. The people who struggle will be the ones whose only offering was the mechanical act of turning ideas into code.
Karpathy has moved to framing the new reality as "agentic engineering," a term he proposed in February this year to replace his own "vibe coding" coinage from exactly one year earlier. The distinction matters. Vibe coding was casual ("fully give in to the vibes, embrace exponentials, and forget that the code even exists"). Agentic engineering is a discipline: "you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight."
Boris Cherny, the head of Claude Code at Anthropic, predicted that the title "software engineer" would start to disappear by the end of 2026, "replaced by 'builder.'" He compared it to scribes and the printing press: the skill of handwriting did not vanish, but the profession of being a scribe did.
The Recursive Future
The printing press did not write the next book. The steam engine did not redesign itself overnight. The iPhone did not design the iPhone 2. But AI coding agents are now building better AI coding agents.
Earlier this month, Karpathy open-sourced AutoResearch, a 630-line Python script that gives an AI agent a machine learning training setup and lets it experiment autonomously. The agent reads its own source code, forms a hypothesis, modifies the code, trains for five minutes, evaluates the results, keeps or discards the change, and repeats. Twelve experiments per hour. About 700 over two days.
The result: 20 improvements discovered without human involvement, producing an 11% efficiency gain that transferred to larger models. Shopify's CEO ran the same approach overnight and reported a 19% performance gain from 37 experiments. "All LLM frontier labs will do this," Karpathy wrote. "It's the final boss battle."
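The loop Karpathy describes is simple enough to sketch. The toy below is a stand-in under loud assumptions, not AutoResearch itself: a random perturbation of a config dictionary plays the role of the LLM editing its own source, and a cheap scoring function plays the role of a five-minute training run. What it preserves is the core mechanism: propose a change, measure it, keep or discard, repeat.

```python
import random

def train_and_score(config):
    """Stand-in for a short training run: a toy objective with a known
    optimum near lr=0.1 and width=64. Higher is better."""
    lr_penalty = abs(config["lr"] - 0.1)
    width_penalty = abs(config["width"] - 64) / 64
    return 1.0 / (1.0 + lr_penalty + width_penalty)

def propose_change(config, rng):
    """Stand-in for the agent: in the real system an LLM reads the source
    and proposes an edit; here we randomly perturb one hyperparameter."""
    candidate = dict(config)
    key = rng.choice(list(candidate))
    candidate[key] *= rng.uniform(0.5, 1.5)
    return candidate

def autonomous_loop(config, experiments, seed=0):
    """Propose, measure, keep-or-discard, repeat."""
    rng = random.Random(seed)
    best_score = train_and_score(config)
    for _ in range(experiments):
        candidate = propose_change(config, rng)
        score = train_and_score(candidate)
        if score > best_score:  # keep the change only if it measurably helps
            config, best_score = candidate, score
    return config, best_score

if __name__ == "__main__":
    start = {"lr": 0.5, "width": 32.0}
    tuned, score = autonomous_loop(start, experiments=700)
    print(f"score improved to {score:.3f}")
```

Swap the random mutation for an LLM proposing code edits and the toy objective for a real training run, and you have the shape of the system. The only hard requirement is a fast, trustworthy evaluation signal for deciding what to keep, which is exactly what a five-minute training run provides.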
Boris Cherny has not written a line of code since November. Claude Code writes the code for Claude Code. The tool improves itself. This is the recursive loop that no previous technology has achieved. And each improvement makes the next improvement faster.
The money flowing in reflects this. NVIDIA announced $1 trillion in expected purchase orders at GTC this month. Big Tech AI capital expenditure is projected at $650 to $700 billion for 2026. These are not speculative bets on a technology that might work. They are bets on a technology that is visibly compounding.
Whether this is the early internet or the late stages of a bubble depends on whom you ask, and I will explore that question in the next piece in this series. What is not in question is the pace. In three years and four months, we have gone from a chatbot that could write a passable limerick to autonomous agents that drive rovers on Mars, replace 40% of a company's workforce overnight, and improve their own code while we sleep.
The first moment, ChatGPT in November 2022, was a demonstration. The second moment, December 2025, was a leap nobody expected. And if AI keeps improving AI at the rate we are seeing this month, the third moment will not wait for us to be ready.
This is the first in a series on the AI acceleration and what it means for the industry, the economy, and the people caught in the middle.