Digital Economy Dispatch #273 -- Recursive AI is Here -- and Why it Matters

AI is now building AI and accelerating its own development in ways that outpace governance, reshape business economics, and challenge the assumption that humans control the pace of change.

Something shifted recently, and I've only just begun to realise its significance.

AI is now being used to develop AI. Not as a metaphor. Not as a future possibility. Right now. The tools are building the tools. The loop has closed.

I'm calling this Recursive AI -- the use of AI systems to accelerate the development of AI systems. It matters more than most of the AI developments we spend time discussing. It's not another incremental capability improvement. It's a change in the nature of the game itself.

What's Actually Happening

The numbers are startling. At Anthropic, engineers report that 70-90% of their code is now AI-generated, with some senior engineers claiming they haven't written code by hand in months. Boris Cherny, head of Claude Code, says he shipped 22 pull requests in a single day, each one 100% written by AI. At OpenAI, researchers report similar figures. The people building the most advanced AI systems are using those systems to build the next generation.

And it's not just the frontier labs. AI-assisted coding tools mean that virtually every AI startup is now using AI to build AI. Y Combinator reports that 25% of its current batch has codebases that are 95% AI-generated, and that figure includes companies building AI products themselves.

Former Google CEO Eric Schmidt has been warning about this trajectory, predicting that recursive self-improvement, where AI learns and improves without human instruction, is now just two to four years away. As he put it at Harvard: "The computers are now doing self-improvement. They're learning how to plan, and they don't have to listen to us anymore."

The recursion is real. And it's accelerating.

Why Recursive AI Is Different

We've had automation in technology development before. Better tools have always enabled better tools. Compilers made it easier to build compilers. Cloud computing made it easier to build cloud services.

But Recursive AI is qualitatively different. Previous automation amplified human capability. AI is increasingly substituting for human cognitive work in the development process itself. The system is contributing to its own improvement in ways that go beyond simple tool use.

This creates feedback dynamics we haven't seen before. If AI makes AI development faster, and those faster-developed AIs make the next round even faster, the pace of change becomes difficult to predict—and potentially difficult to control.
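The compounding dynamic described above can be sketched as a toy calculation. Everything here is an illustrative assumption, not a measurement: the 12-month starting cycle, the constant 1.5x speedup per generation, and the function name are all invented for the sketch.

```python
def time_for_generations(n: int, initial_months: float = 12.0, speedup: float = 1.0) -> float:
    """Total calendar time for n model generations, assuming each
    generation shortens the next development cycle by a constant factor."""
    elapsed, cycle = 0.0, initial_months
    for _ in range(n):
        elapsed += cycle
        cycle /= speedup  # the new generation accelerates the next cycle
    return elapsed

# With no feedback (speedup 1.0), three 12-month generations take 36 months.
print(time_for_generations(3, speedup=1.0))  # -> 36.0
# With a modest 1.5x speedup per generation: 12 + 8 + 5.3 ~= 25.3 months.
print(round(time_for_generations(3, speedup=1.5), 1))  # -> 25.3
```

The toy model also shows why naive extrapolation breaks down: with any constant speedup greater than 1, the cycle times form a geometric series, so infinitely many generations would fit into a finite window. Real-world constraints (compute, data, verification) obviously intervene, which is exactly why the pace becomes hard to predict.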

As one analysis notes, AI agents that build the next versions of themselves are not science fiction; they are an explicit milestone on the roadmap of every frontier AI lab. OpenAI has publicly discussed hundreds of thousands of automated research "interns" within months, and a fully automated workforce within two years: a workforce that doesn't sleep, doesn't eat, and whose only objective is to make itself smarter.

I'm not making apocalyptic claims here. But I am noting that the assumption underlying most AI governance discussions—that humans set the pace of AI development—is becoming less obviously true.

The Implications for Digital Leaders

If you're leading digital strategy, Recursive AI matters for several reasons.

The capability frontier is moving faster than your planning cycles. If AI development is accelerating AI development, the gap between what's possible today and what's possible in 18 months may be larger than you're assuming. Strategies built on current capabilities may be obsolete before they're implemented.

Build vs. buy calculations are shifting. When AI can help build AI-powered products, the cost and time to create custom solutions drops. What previously required specialised AI teams may become achievable with smaller groups augmented by AI tools. The economics of expertise are changing faster than most organisations realise.

Your AI vendors are on this curve too. The products you're buying or building on will change rapidly. Today's capabilities are not a stable foundation. Plan for continuous adaptation, not implementation and maintenance.

The Policy Challenge

For policy makers and regulators, Recursive AI poses genuine challenges.

Oversight becomes harder. If AI systems are contributing to their own development, understanding what's being built—and why—becomes more complex. The humans involved may not fully understand the choices being made by their AI assistants.

Speed outpaces governance. Regulatory frameworks assume there's time to observe, deliberate, and respond. If the development cycle is compressing because AI is accelerating it, that assumption weakens. By the time a concern is identified and addressed, the technology may have moved on.

Accountability blurs. When an AI system contributes to building another AI system, and that system causes harm, the chain of responsibility becomes tangled. We need new frameworks for thinking about accountability in recursive development processes.

None of this means regulation is futile. But it does mean that governance approaches designed for human-paced development may need rethinking.

What To Watch

I don't know where Recursive AI leads. Nobody does. But here's what I'm paying attention to:

  • The self-improvement metrics. Labs are measuring what percentage of their development work is AI-assisted. Anthropic says 70-90% company-wide. Watch those numbers: as AI-assisted work approaches 100%, humans shift from authors to reviewers of the development process, and the dynamics change fundamentally.

  • The research-to-deployment gap. How quickly are advances in the lab making it into products? That gap seems to be compressing. Recursive AI is one reason why.

  • The concentration question. Does Recursive AI favour incumbents (who have the best models to assist their own work) or challengers (who can use available tools to move fast)? The answer will shape the industry structure.

The Honest Position

I find Recursive AI fascinating and unsettling in roughly equal measure.

Fascinating because it's genuinely novel. We're watching systems contribute to their own improvement in ways that have no real precedent. The intellectual challenge of understanding what this means is significant.

Unsettling because the assumptions I've relied on—that humans set the pace, that we can observe and adjust, that governance can keep up—feel less solid than they did two years ago.

The honest position is uncertainty. We're in a loop now, and we don't know where it leads. What I do know is that pretending Recursive AI isn't happening isn't a strategy. Leaders and policy makers need to engage with this reality, even when, or especially when, it makes planning harder.