Digital Economy Dispatch #230 -- The Challenges of AI Adoption in the UK Public Sector

Over recent months, I've been engaged with several public sector organizations, experiencing firsthand their digital transformation journeys and early forays into AI adoption. These experiences have provided me with important insights into both the tremendous potential and the significant obstacles facing government entities as they attempt to modernize their operations with technology.

A particularly interesting part of this work has been as a member of the National Audit Office (NAO) team that conducted a comprehensive review of AI implementation across government. Here, I had the opportunity to examine dozens of use cases ranging from experimental pilots to more mature deployments. What struck me most was the contrast between pockets of innovation—where dedicated teams were achieving remarkable results with limited resources—and the broader institutional inertia that often prevented these successes from scaling across departments.

This NAO work laid the groundwork for a Public Accounts Committee (PAC) examination of the UK Government's use of AI. Following a request for input, a public hearing was held, and a report on the PAC's findings has now been published.

Overall, their report reflects many of my own observations: while there's no shortage of enthusiasm for AI's transformative potential, structural barriers continue to impede meaningful progress at scale. The gap between political rhetoric about "mainlining AI into the veins of the nation" and the practical reality of implementation remains stubbornly wide.

The "Use of AI in Government" report identifies several critical challenges. Legacy systems, with approximately 28% of central government technology classified as "end-of-life," create compatibility issues that prevent AI integration. Poor data quality and sharing practices further compound these technical limitations. The slow progress in embedding transparency into AI initiatives—with minimal algorithmic decision-making reporting and few published records—is eroding public trust in government AI applications.

The report also highlights a persistent skills gap that threatens successful implementation, with the PAC expressing scepticism about DSIT's planned reforms to address this deficit. Furthermore, the procurement landscape raises concerns about market concentration, with the risk that a small number of large companies could dominate, stifling innovation and creating dangerous dependencies.

In summary, while the PAC welcomes DSIT's new role as a "digital centre of government," its report questions whether the department has sufficient authority to drive meaningful change across the public sector. The committee recommends placing senior digital officers on departmental boards, prioritizing funding for high-risk legacy technology remediation, strengthening AI spending controls, and creating mechanisms to share intelligence on pilot projects.

A Personal Perspective on AI in Government

As part of this PAC review, I was invited to brief the committee and to submit my comments on AI in government to be placed on the public record. Below I have reproduced those comments in full. They offer a personal view of three key issues that I believe are at the heart of accelerating AI in government, and the questions that must be addressed to make progress.

----------------------------

AI Adoption Challenges in Government

Adopting and scaling AI technology in government faces three interconnected challenges: building effective vendor relationships despite complex procurement constraints, integrating AI with existing digital transformation efforts while addressing infrastructure gaps, and successfully scaling beyond pilots to achieve broader organizational impact. These challenges demand renewed approaches to procurement, infrastructure investment, and change management to deliver value from AI at scale.

Managing AI Technology Vendor Relationships in Complex Government Contexts

The relationship between government and AI technology vendors is characterized by fundamental misalignment in operational rhythms and expectations. Government procurement cycles, built around accountability and risk management, typically span months or years, while AI capabilities evolve at a dramatically faster pace. This misalignment can result in governments procuring solutions that may be outdated by the time they're implemented, or missing opportunities to leverage cutting-edge capabilities.

This tension is further exacerbated by the unique requirements of the public sector, including stringent security protocols, data sovereignty requirements, and the need for extensive accountability measures. Vendors, accustomed to the agility of private sector partnerships, often struggle to adapt their business models and solutions to meet these governmental constraints while maintaining the innovative edge that makes their AI solutions valuable.

Key Question:
How can governments establish procurement and partnership models that balance the need for proper oversight and accountability with the rapid pace of AI innovation?

  • What are the procurement mechanisms that allow for more flexible, iterative engagement with vendors while maintaining necessary controls?

  • Are additional specialized contractual frameworks needed to account for the evolutionary nature of AI systems?

  • How do we ensure a balance between vendor lock-in risks and the benefits of deep, long-term strategic partnerships?

Aligning AI Technology Integration with Ongoing Digital Transformation Efforts

Digital transformation initiatives in government have laid crucial groundwork for modernisation but have also exposed significant systemic challenges. Legacy systems, critical to core operations in many government departments, frequently run on outdated architectures that require significant operational maintenance and support, and consequently struggle to interface with modern AI solutions. These technical debt issues are compounded by years of accumulated data quality problems, inconsistent data standards, and siloed information systems that make it difficult to leverage AI effectively.

The introduction of AI capabilities into this environment presents both opportunities and risks. While AI can potentially accelerate digital transformation by automating manual processes and providing new insights, it also demands a level of technical infrastructure and data quality that many government organizations haven't yet achieved. This creates a chicken-and-egg problem where AI adoption requires modernized systems, but the promise of AI could help justify and drive that very modernization.

Key Question:
How should government agencies prioritize and align AI adoption in relation to broader digital transformation efforts to ensure legacy technology concerns are appropriately addressed?

  • What approach should be taken to evaluate the interdependencies between AI capabilities and existing digital infrastructure improvements?

  • What role does AI play in accelerating or potentially replacing certain aspects of traditional digital transformation? Where will it add complexity?

  • How should we approach the balance between fixing foundational legacy technology issues and pursuing AI-driven innovation?

Ensuring that Scaled Adoption of AI Adds Measurable Value Beyond Pilots

Recent months have seen numerous successful AI pilots and proof-of-concepts across government agencies, demonstrating the potential of these technologies in public sector contexts. However, these successes have typically been achieved in carefully controlled environments with dedicated resources and focused scope. The challenge of scaling these successes across broader government operations introduces much greater complexity in terms of change management, resource allocation, and risk mitigation.

The public sector faces unique challenges in scaling technology initiatives, including rigid organizational structures, complex stakeholder environments, and strict regulatory requirements. These challenges are particularly acute with AI deployment, which often requires significant changes to existing workflows, new skills development across the workforce, and careful consideration of ethical implications. The transformative nature of AI technologies means that scaling efforts must address not just technical implementation, but also organizational culture, workforce adaptation, and public trust.

Key Question:
What are the critical success factors for scaling AI initiatives beyond pilots while managing public sector constraints and stakeholder expectations?

  • What are the best practices for assessing and managing the organizational change implications of AI adoption at scale? Are they appropriate for government?

  • What are the resource and capability requirements for responsible adoption of AI solutions across diverse government contexts?

  • How should we address the need for open standards and shared approaches to ethical AI deployment across different government domains?