Digital Economy Dispatch #268 -- The Reality of AI Adoption

While boardrooms debate AI strategy and regulators wrestle with governance frameworks, a very different reality is unfolding across organisations. The actual state of AI adoption looks nothing like the official version.

I spend a lot of time talking to senior leaders about AI strategy. We discuss governance frameworks, enterprise architectures, and the importance of responsible deployment. These are valuable conversations. But I fear that they too often bear little resemblance to what's actually happening in their organisations.

The reality on the ground is messier, more organic, and far more interesting than the sanitised version that appears in boardroom discussions and strategy documents. When I talk to the people doing the work (managers, analysts, and team leaders) I hear a completely different story. AI adoption isn't waiting for the strategy to be finalised or the governance framework to be approved. It's already happening, in three distinct ways that rarely get discussed in the formal reviews.

The Personal Productivity Revolution

The first face of AI adoption is the simplest and most widespread: individuals using AI tools to get their work done faster and better. This isn't happening through enterprise systems or approved platforms. It's people opening ChatGPT, Claude, or Gemini in a browser tab and asking for help with the task in front of them.

They're drafting emails, summarising documents, preparing for meetings, writing first drafts of reports, and getting unstuck when they hit a problem. The prompts are often basic; there's nothing sophisticated about them. But the productivity gains are real. A task that might have taken an hour now takes fifteen minutes. A blank page that once triggered half a day of wondering what to do next now has a working draft within minutes.

What’s impressive is how personal this has become. People have developed their own ways of working with AI. They know which tools they prefer, what kinds of prompts work for them, and where AI helps versus where it gets in the way. None of this is learned in a corporate training programme. It’s figured out individually, often in people’s own time, because it makes their working lives easier.

The scale of this quiet revolution is easy to underestimate. According to Microsoft's Work Trend Index, 75% of knowledge workers now use AI at work, with usage nearly doubling in the past year alone. Perhaps more tellingly, 78% of AI users are bringing their own tools to work, a practice researchers call "Bring Your Own AI", often without their employers' explicit knowledge or approval. It's become as routine as using a search engine, and just as invisible to management.

The Embedded AI Wave

The second face of AI adoption is less visible but arguably more significant: AI capabilities being quietly added to the tools we already use. We’re seeing Microsoft Copilot in Office applications. AI-powered features in Salesforce, Zoom, and Slack. Smart suggestions in email clients. Automated transcription and summaries in video calls.

This kind of AI adoption often doesn't require anyone to make a decision. It just appears, enabled by default, as part of a software update. One day, your email starts suggesting replies. Your CRM begins predicting which leads are most likely to convert. Your video conferencing tool offers to summarise the meeting you just finished.

For organisations, this creates an interesting situation. AI is being adopted at scale without anyone formally adopting it. The tools people already use are becoming AI-powered, whether or not that was part of the plan. The enterprise software vendors have made the decision for you—AI is now part of the package.

This embedded AI wave raises questions that many organisations haven't thought through. Where is the data going when Copilot summarises your document? What happens to the meeting transcript that got automatically generated? Who's responsible for checking whether the AI-suggested response to a customer query is actually correct? These aren't hypothetical concerns. They're happening now, in millions of interactions every day, mainly without oversight.

The Shadow AI Reality

The third face of AI adoption is the one that keeps cybersecurity and compliance officers awake at night: shadow AI. Uncontrolled, unmanaged experiments are happening in almost every corner of the organisation, underneath the surface, often invisible to senior leadership.

Teams are signing up for AI tools using personal email addresses or departmental credit cards. They're uploading company data to free-tier services to see what insights emerge. They're building workflows with AI components that nobody in IT knows about. Marketing is trying one set of tools, finance another, operations a third—none of them coordinated, few of them approved.

This isn't malicious. People aren't trying to circumvent controls for the sake of it. They're trying to do their jobs better, and they've found tools that help. The official channels are too slow, too restrictive, or simply non-existent. So they improvise. The gap between what employees need and what IT has approved creates the perfect conditions for shadow AI to flourish.

The scale of this shadow activity is difficult to measure precisely because, by definition, it's happening outside official view. But the numbers we do have are striking. McKinsey's 2025 State of AI survey found that while 88% of organisations now use AI in at least one business function, nearly two-thirds remain stuck in experimentation or pilot mode. Much of what's happening isn't coordinated but scattered across the organisation in unconnected pockets. Every organisation I talk to, when they look honestly, finds more unsanctioned AI use than they expected. Often much more.

The Gap Between Narrative and Reality

We need to be open about the current state of AI discussions. Senior management, auditors, and regulators are working from a mental model that assumes AI adoption happens through formal channels: procurement processes, governance reviews, approved vendor lists, and controlled rollouts. That model made sense for previous generations of enterprise technology.

But AI isn't following that playbook. It's coming in through the front door, the back door, and every window simultaneously. By the time the governance framework is ready, thousands of AI interactions have already happened. And by the time the risk assessment is complete, the tools have changed, and new ones have appeared.

Of course, all this isn't an argument against governance. Good governance matters more than ever when a technology is this powerful and this easy to misuse. But governance that ignores how AI is actually being adopted will always be playing catch-up. You can't manage what you refuse to see.

What This Means for Leaders

If you're responsible for AI in your organisation, whether formally or informally, the first step is acknowledging this reality. Find out what's actually happening. Talk to people at every level about the AI tools they're using. You'll probably be surprised by what you learn.

The second step is meeting people where they are. The personal productivity uses and the shadow experiments aren't problems to be stamped out. They're signals about what your people need. The question isn't how to stop them but how to channel that energy into approaches that are sustainable and safe.

The third step is getting serious about the embedded AI that's arriving through your existing software. This needs attention now, not when you get around to it. Your enterprise tools are becoming AI-powered, whether you planned for it or not. Understanding what that means for data, privacy, and accuracy is urgent.

Finally, accept that the old model of technology adoption, where IT approves everything before it enters the organisation, isn't coming back. AI is too accessible, too useful, and too fast-moving. The task now is building governance that works with this reality rather than pretending it doesn't exist.

The official version of AI adoption describes a strategic, controlled approach proceeding according to an approved plan. But that's not what's happening. The real story is messier, faster, and already well underway. Leaders who understand this will be better placed to guide it. Those who don't will find themselves governing a fiction while the reality moves on without them.