Digital Economy Dispatch #263 -- A Review of the Digital Economy in 2025
As we reflect on the year, we find that AI moved from experiment to enterprise reality in 2025, exposing critical gaps in governance, trust, and human-centred leadership that will demand urgent attention in the year to come.
As 2025 draws to a close, it feels like the right moment to step back and reflect on the themes that have dominated our conversations about digital transformation this year. Looking across the 50+ dispatches I've published since January, several interconnected threads emerge. Together, they paint a picture of an extraordinary year in which AI powered the digital economy's shift from fascinating experiment to urgent strategic necessity, bringing with it profound questions about governance, trust, and human agency that we are only beginning to address.
The Year of AI-at-Scale
If I had to choose a single phrase to characterise 2025, it would be "Delivering AI-at-Scale". In the past year, the conversation has evolved dramatically from asking "What can AI do?" to wrestling with "How do we implement AI responsibly at scale while delivering real value?". I've spent countless hours with executives, CIOs, and transformation leaders as they move pilot projects toward mature enterprise initiatives. In every case, the excitement of early wins is soon tempered by the sobering reality of overcoming numerous implementation challenges. This is creating a fundamental shift in how organisations think about AI — from a fascinating technology advance to a business driver demanding serious strategic review.
These are not just personal anecdotes. The data tells a stark story. According to the 2025 McKinsey State of AI survey, 75% of organisations now use AI in at least one business function, yet only 28% have clear executive accountability for governance or oversight. The speed of AI adoption is staggering, with 78% of firms now applying AI across core operations, up sharply from 55% in 2024. Yet nearly half of C-suite executives admit their organisations are "tearing apart" under the strain of unmanaged adoption.
Cutting Through the Noise
In addition, this rapid deployment of AI tools has come at a cost. I began the year with a personal commitment: less noise, more signal. The digital landscape has become overwhelmed with AI-generated content, or "AI slop". Vast quantities of mediocre, AI-generated material are flooding our digital channels and making valuable information harder to find. We're now surrounded by a proliferation of shallow, surface-level analyses and basic explanations, which may be technically accurate but offer few meaningful insights. The impact of "AI slop" runs so deep that for many, the term has become the "phrase of the year". What can we do in response? The antidote is an explicit focus on depth over breadth, curating more selective sources and engaging in forums where meaningful AI discussions flourish.
A key part of this focus is exposing the core elements of what AI is…and what it is not. Throughout the year, I've emphasised that AI is not magic: it's economics. Drawing on Agrawal, Gans, and Goldfarb's framework from "Prediction Machines", I’ve become convinced that the true impact of AI isn't about creating sentient machines but something far more practical: making prediction cheap and accurate. This economic framing helps demystify AI and allows leaders to make more rational decisions about implementation and investment. Understanding AI as fundamentally an economic phenomenon, not a magical one, has been crucial in my attempts this year to cut through the hype.
The Governance Gap
Perhaps the most pressing theme facing leaders and decision makers in 2025 has been the widening gap between AI adoption speed and governance capability. AI sprawl is overtaking us. The rapid, uncontrolled spread of GenAI tools across organisations is creating an urgent governance crisis. The McKinsey survey reminds us that while three quarters of companies have scaled AI extensively, fewer than a third have formal governance policies in place. This mismatch has made many enterprises more dependent and more exposed than ever.
To compound these concerns, the fragility of our digital infrastructure became painfully apparent this year. A simple DNS configuration error inside AWS triggered a cascading failure that silenced half the internet for hours. Meanwhile, ransomware attacks like the one on Jaguar Land Rover demonstrated that cyber resilience cannot be expected, only prepared for. The lesson for digital leaders is clear: in deploying AI, speed without foresight multiplies risk. The challenge is not to slow innovation but to stabilise it to ensure that as we scale AI, we also secure it.
Who Do We Trust?
These operational issues with AI also raised a broader conceptual challenge for the digital economy. A recurring question throughout 2025 has been: Who do we trust with AI? Parmy Olson's "Supremacy" captured the tension well, placing us at the centre of the tech world's most important AI race. The contest between OpenAI and DeepMind, their visionary founders, and the forces of venture capital and Big Tech has shaped the direction of AI development in ways that have profound implications for everyone adopting these tools. The concentration of power in a handful of US and Chinese technology firms is not an abstract business threat. Rather, it has become embedded in our infrastructure, digital services, and strategic decision-making.
For the UK specifically, this has meant wrestling with questions of digital sovereignty and strategic dependency. At the start of 2025, the UK government announced ambitious spending commitments, including a £500 million UK Sovereign AI Unit and £750 million for the Edinburgh supercomputer. These represent attempts to boost the UK economy and reduce dependence on foreign AI capabilities. But later in the year, the UK government also announced significant US AI technology investment, raising questions about how the UK plans to deliver on these promises. Such digital technology decisions are smart only if we maintain clarity on data governance, regulatory independence, and the persistent risks of strategic lock-in. It is a struggle we'll see played out throughout 2026 and beyond.
The Human Dimension
However, technological concerns are only part of the story. Throughout 2025, I've consistently argued that the real issue isn't technology itself but how it's used by those in leadership positions. The Luddite lessons from previous technology revolutions remain relevant. Successful AI integration is not just about implementing new technologies, but about guiding people through significant disruptive change. And AI-driven change is already significant. Microsoft's Work Trend Index revealed that "Frontier Firms" are fundamentally reshaping work through AI integration, with 82% of leaders expecting to leverage agent-based digital labour within the next 12-18 months.
Workplace disruption is a reality. Yet contrary to headlines suggesting AI will replace key roles in middle management layers, I've argued this year that AI may actually liberate them from administrative drudgery, allowing them to focus on what they were hired for: leading people. Every employee is increasingly becoming an "agent boss", responsible for building, delegating to, and managing AI agents to amplify their impact. The gap between leaders and employees in AI readiness (67% versus 40% familiarity with agents) represents both a challenge and an opportunity for organisations willing to invest in upskilling.
Looking Beyond GenAI
For many people in 2025, AI interest has been focused on one aspect of this broad space – Generative AI tools. While the world has been captivated by AI that generates text and images, I've repeatedly pointed out that some of the most significant breakthroughs are happening in areas that generate fewer headlines but potentially far greater impact. Predictive AI and optimisation systems are transforming areas such as weather forecasting, energy grid management, urban planning, healthcare provision, and pharmaceutical research. Our collective fixation on generative AI may be causing us to miss the forest for the trees.
Richard Susskind's framework (from his excellent book, “How to Think about AI”) reminds us that automation of common processes by using AI to computerise existing tasks is just the most obvious and limited application of what AI can do today. Ongoing AI innovation will mean delivering outcomes using radically new processes, often removing the need for certain activities entirely. Leaders trapped in a narrow vision based on today's AI capabilities risk strategic blindness to what not-yet-invented technologies will bring in the coming years.
The Path Forward
As we enter 2026, the challenge for us all is clear. Advances in 2025 have brought us to a phase where we must view AI as "normal technology". It is not some exotic outlier. It is an everyday tool we must use, but one we must remain firmly in control of. This perspective stands in contrast to growing fears of AI as a separate, potentially superintelligent entity that could eclipse human control. For digital leaders, this means embracing realism about AI's capabilities and limitations, prioritising strategies that maintain human control, investing in organisational adaptation, and adopting nuanced approaches to the risks that undoubtedly exist with AI.
The hope for our AI future lies not in the technology itself, but in our collective ability to guide its development and deployment responsibly. We have the experience, frameworks, and wisdom gained from decades of digital transformation. The question is whether we will apply these hard-won lessons with the urgency and seriousness that this moment demands.
Finally, as we look forward to 2026, let’s remind ourselves that AI-driven digital transformation isn't something happening to us. Rather, it's something we collectively shape. By maintaining our critical thinking and human-centric values, we can ensure that AI becomes a tool of empowerment, not constraint. As I've said throughout the year: drill deeper, question boldly, and never lose sight of the human dimension in all of AI’s technological change.
Here's to a thoughtful and transformative 2026.