Digital Economy Dispatch #281 -- Can Britain Turn the Power of AI into National Advantage?

Anthropic has built an AI model it considers too dangerous for public release. That’s not a reason for alarm. But it is yet another reason for the UK to shift its focus, urgently, from AI strategy to AI delivery.

Writing a book is a strange experience. You spend months making an argument by assembling evidence, testing the logic, and sharpening the language, only to find that somewhere in the middle of it all, a quiet doubt settles in. Not about whether the argument is right, but about whether it will matter. Whether the moment will catch up with the manuscript. Whether anyone will care. Whether the urgency and passion you feel as you write it will be evident to someone reading it six months later.

I have spent the better part of the past year making the case that the UK's AI challenge is not primarily a technology problem. It is an institutional one. The gap between what AI can do and what Britain is organised to do with it is widening at a pace that our current governance structures are not equipped to match. And I will admit that, as the final proofs went back to my publisher, I wondered whether events might prove me either too pessimistic or too late.

In the past few weeks, I’ve stopped wondering. The story of Anthropic's Mythos model has brought the argument into sharper focus than anything I could have written.

A Model Too Powerful to Release?

One evening in February, an Anthropic researcher sitting at a laptop in Bali set out to test the company's most powerful AI model. What he found stopped him in his tracks. The model, now known as Mythos, had autonomously identified and exploited a 17-year-old vulnerability in a widely used operating system. No human was involved after the initial instruction. The model found the flaw, built the exploit, and demonstrated how an attacker could take complete control of any server running the software from anywhere on the internet.

Anthropic has since confirmed that Mythos identified thousands of previously unknown vulnerabilities across every major operating system and web browser. It has not released the model publicly and doesn’t plan to. Instead, it has made a limited preview available to a small group of technology and security partners, including Amazon, Apple, Cisco, Microsoft, and Palo Alto Networks, under a new initiative called Project Glasswing, with the explicit goal of helping defenders secure critical systems before models with similar capabilities become more widely available.

Today’s Deeper AI Dilemma

It would be easy to read the Mythos story as a cautionary tale about a single unusually powerful model that a responsible company chose not to release. That framing is too narrow. What Mythos illustrates is that we have entered a phase in AI development where the gap between what is technically possible and what society is institutionally prepared to handle is widening at pace.

Anthropic's own frontier red team report describes Mythos as having "improved to the extent that it mostly saturates" existing cybersecurity benchmarks. That is a remarkable statement. It means the standard tools we have developed to measure and govern AI capability in this domain are already insufficient. The model has moved beyond the frame we built to contain it.

Anthropic's decision to restrict Mythos and invest in defensive deployment is a serious and responsible response. Project Glasswing commits up to $100 million in usage credits to help defenders get ahead of the threat. The fact that a frontier AI company identified the risk, disclosed it, and coordinated a response is, on balance, a positive signal about how the industry can behave.

But there is a much harder question. Project Glasswing is a private sector consortium, coordinated by a US company, working primarily with US technology partners. The UK government is not a named participant. UK critical infrastructure operators are not listed among the forty organisations with access to the Mythos preview. At the precise moment when AI capability has crossed a threshold that, in Anthropic's own words, "fundamentally changes the urgency required to protect critical infrastructure", Britain is largely on the outside looking in.

The UK Sovereignty Question in Focus

The week that Mythos became public knowledge, the government announced its response to exactly this kind of challenge. On 16th April, Technology Secretary Liz Kendall launched the £500 million Sovereign AI Unit at Wayve's King's Cross headquarters, describing it as "one of the single most important things this government will do for the future of this country". The fund will invest in British AI startups, provide access to supercomputing infrastructure, fast-track visas for global talent, and help portfolio companies win government contracts.

It is a serious initiative, and it deserves a serious welcome. But notice what it does and does not address. It backs the supply side: building British AI companies, securing compute capacity, and attracting talent. What it does not do is build the demand-side coordination architecture that would allow the UK to respond institutionally when a capability threshold is crossed. There was no mention of a standing mechanism for assessing threats to critical infrastructure. No procurement framework to give UK operators rapid access to defensive AI tools. No answer to the question of who coordinates the national response next time a Mythos-class model emerges. And we all know there will be a next time.

The irony is startling. The Sovereign AI Unit was launched on the same day that OpenAI quietly paused its Stargate UK data centre project, citing energy costs and the regulatory environment. Britain is announcing a fund to build sovereign AI capability at the precise moment the world's most prominent AI company has signalled that the conditions for large-scale AI infrastructure investment are not yet in place. The ambition and the mechanism are still not aligned.

It is worth spelling out what an adequate institutional response to Mythos would require. It would need a body with the authority and technical capability to assess implications for UK critical infrastructure at pace. It would need procurement frameworks that give UK operators a route to defensive AI tools without depending entirely on bilateral relationships with US technology companies. It would need a clear line of responsibility for who coordinates the national response when a new capability threshold is crossed. And it would need all of this to be in place before the moment of need, not assembled in response to it.

None of that infrastructure currently exists in a form that is fit for purpose. The AI Opportunities Action Plan has delivered 38 of its 50 commitments. The AI Security Institute does important work. But neither was designed for the kind of rapid, operationally serious response that a Mythos-class development demands. We are, as a country, still primarily in strategy mode. And strategy mode is not adequate for where we now are.

From AI Strategy to AI Delivery

Project Glasswing, for all its limitations from a UK sovereignty perspective, offers an interesting model. It is not a regulatory framework. It is a coordinated, time-limited, operationally focused initiative that brings together the organisations with both the capability and the responsibility to act. The UK equivalent would be a standing mechanism with real authority that can convene government, industry, and critical infrastructure operators quickly when a capability threshold is crossed. Not a consultation. Not a call for evidence. A response.

The vulnerabilities Mythos identified had survived, in some cases, decades of human review and millions of automated security tests. The systems they affect are not peripheral. They are the operating systems running NHS clinical infrastructure, the browsers processing financial transactions, the networking software underpinning government services. Britain's ability to protect itself from that kind of threat is not just a matter of having good technology. It is a matter of having the governance, the procurement, the skills, and the institutional coordination to deploy that technology effectively and in time.

That is the gap this country needs to close. It is the argument I have spent the past year developing in Making AI Work for Britain, to be published in a few days, on 28th April, by London Publishing Partnership. I wondered, in those final weeks of writing, whether the urgency I felt would translate. I need not have worried. Mythos has made the case more powerfully than I ever could have done myself.