Digital Economy Dispatch #274 -- The 9% Problem: What the Data Says About The UK’s AI Readiness

Before we talk about making AI work for Britain, we need to look honestly at where we're starting from. The government's own data tells a story that deserves far more attention than it has received.

Here's a question worth considering before you read on.

The UK government spends £26 billion every year on digital technology. It employs nearly 100,000 digital and data professionals. It has been running large-scale digital transformation programmes for over thirty years. Given all of that investment and accumulated experience, what percentage of the government's major technology programmes were assessed as "Green" (i.e., successful delivery is considered highly likely) in the government's own review, published in January 2025?

Have a guess. How about 70%? 50%? 30%?

The answer is 9%.

Less than one in ten. And those same technology programmes are 60% more likely to be rated "Red" (i.e., successful delivery is considered to be at high risk) than non-technology projects sitting alongside them in the same portfolio.

This figure comes from the UK’s State of Digital Government Review, presented to Parliament by the Secretary of State for Science, Innovation and Technology. It is one of the most candid official assessments of public sector digital performance this country has ever produced. And it is the essential context for any serious conversation about AI adoption in Britain.

What Else the Data Shows

The 9% headline is striking enough, but it sits within a broader picture that needs careful attention.

The report also notes that 47% of central government services still rely entirely on non-digital methods such as phone calls, paper forms, and in-person visits. Half of all digital and data recruitment campaigns in 2024 failed to fill the role advertised; in 2019, that failure rate was 22%. The pay gap between the public and private sectors for technical architects is 35%, equivalent to around £30,000 per year. The average digital contractor costs three times as much as a permanent employee, and yet headcount restrictions make contractors easier to hire than permanent staff, so that is what organisations do.

Only four central government departments out of more than twenty have a digital leader on their executive committee. Only around 20% of senior civil servants have verified their digital skills against the government's own framework.

And on AI specifically: only 8% of public sector AI projects report measurable benefits, and only 16% report forecast costs.

These are not isolated data points. They form a pattern, and the review's authors are very clear about what that pattern means. The successes that do exist in UK public sector digital delivery have typically been achieved, in their own words, "despite the system rather than because of it", and have depended on the dedication of individuals navigating structures that were not designed for digital-age delivery. This shouldn’t be a surprise. The NAO noted as far back as 2021 that, despite 25 years of government strategies, there is a consistent pattern of underperformance in delivering digital business change.

The Policy Challenge

The State of Digital Government review identifies five root causes for this state of affairs: leadership, structure, measurement, talent, and funding. What is striking about all five is that none of them are technology problems. They are organisational and institutional issues. And, unfortunately, they’re the kind that a more capable AI model or a new AI incubation hub will not fix.

It starts with people. Digital leaders are not consistently represented at executive level. Pay frameworks actively drive technical talent out of the public sector. Funding models are designed for capital projects, not the continuous improvement that digital services require. Governance processes were built for infrastructure delivery, not iterative technology development. And institutional knowledge has been steadily transferred to expensive contractors rather than built into permanent capability.

We need to acknowledge that this is the foundation on which the UK's AI ambitions are being built.

The Leadership Question

There is a natural temptation, when confronted with data like this, to argue that AI is different. This time the technology is powerful enough to cut through institutional inertia and deliver results that previous digital programmes could not. I understand the argument. I have heard it made sincerely by people I respect.

But consider what the data actually shows. The barriers that this review and others identify are not particular to a specific kind of technology. They are barriers to organisational change of any kind. An institution that cannot successfully commission, manage, and embed digital programmes does not automatically get better at doing so because the technology on the table is more impressive. Indeed, the depth and speed of disruption being caused by AI only increase the risks.

The question that matters is not "how do we deploy AI?". It is "what does our organisation need to be able to do differently before AI deployment can succeed?". Those are very different questions, and the gap between them is where most AI programmes quietly founder.

Stepping Back

None of this is written to be discouraging, and it is certainly not a criticism of the many talented and committed people working in digital roles across the public sector. The review itself is full of genuine success stories (e.g., the NHS App, GOV.UK One Login, Hillingdon Council's AI-driven contact system that saved £5 for every pound spent, and DWP's use of AI to improve bereavement notifications). The potential is real, and the commitment is genuine.

But it is worth pausing, stepping back, and considering what this data actually means.

The UK has an ambitious national AI strategy. We have real political will behind it. We have world-class research capability in our universities and a genuine concentration of AI talent. All of that is true and worth celebrating.

But the honest read of the evidence is this: we cannot simply reach for the AI magic wand and expect results to follow. The gap between AI aspiration and AI implementation in the UK is not primarily a technology gap. It is an institutional gap — in capability, in leadership, in incentive structures, in the basic organisational conditions that determine whether a complex programme succeeds or quietly joins the graveyard of previous well-intentioned initiatives.

What can we do to bridge this gap? That is the question I have been exploring in the research behind my forthcoming book Making AI Work for Britain, to be published in April by the London Publishing Partnership.

Over the coming weeks, I will be writing more about the gap itself, about ways the UK can face up to the challenges of delivering AI at scale, and about organisations that are successfully executing a path forward. If you want early insight into what the book says on these themes, sign up at LinkedIn to a parallel series of articles at newsletter.alanbrown.net.