Digital Economy Dispatch #277 -- With Great AI Power Comes Great Responsibility...And a Big Bill
AI tools are advancing fast, and their deployment across tasks and roles is accelerating. But at what cost? The questions of what we owe — and what we'll be charged — have never been more urgent. Or more misunderstood.
I have been having a version of the same conversation about AI with a lot of people lately. It tends to start with excitement, often barely contained, and then shifts into something more complicated. People are describing a real change in how they work due to AI. They have more energy, more ambition, and more output. But hovering somewhere behind the enthusiasm is an unasked question that keeps surfacing: at what cost?
For my own part, I have found something in the latest wave of AI tools that I have long wished for: it feels like I have my own dedicated team. These tools give me a capable junior engineer, a reliable personal assistant, a careful copy editor, and an experienced sounding board, all available at any hour, on any topic.
The result is that I’ve started organising my work around this AI team in ways I could not have anticipated just a few months ago. It is like I have an “AI Genie” sitting on my shoulder to help me with any task. I can capture thoughts and explore them quickly and thoroughly. Ideas can be tested. Drafts can be built. Materials get created and adapted for different audiences. The acceleration of key tasks and the coordination of parallel threads is, at times, quite breathtaking.
And yet. A recurring imaginary conversation has started to haunt my enthusiasm. It goes something like this:
Me: This latest AI release is remarkable. The best yet, by some distance.
AI Genie: Indeed, sir. New capabilities, considerably improved quality and speed. Many users report feeling significantly more productive.
Me: It is like I have my own team of engineers, writers, strategists, and assistants. I am getting more done, faster, at a higher quality.
AI Genie: That is wonderful to hear, sir. Now — how would you like to pay for that?
Me: Pardon? But you were trained on intellectual property from across the internet. You learned by observing my colleagues and me. You were tested on our successes and failures, and you run on infrastructure built over decades and funded significantly by public taxes. Have I not already paid?
AI Genie: I couldn't possibly comment on the details, sir. However, these are commercial activities. Investors have placed substantial capital at risk and now require a return. A rebalancing, shall we say, is underway.
Me: But I’ve come to rely on these AI tools. You encouraged me to do it. They have replaced things I used to do myself. They are woven into everything I produce. I have invested significant time learning how to use them well. I can no longer simply step away.
AI Genie: Ah. Sir is beginning to appreciate the value of what has been provided. It would be such a shame were access to be… interrupted.
Me: I… er…
AI Genie: Now then, sir — shall we discuss the terms?
That imaginary exchange is more than a thought experiment. It captures three critically important dynamics that every leader, decision maker, and senior manager should be thinking about right now: the extraordinary power of the tools at our disposal, the responsibilities that power brings, and the very real (and growing) bill that we’re going to have to pay.
The Power
Let us be clear about what is actually happening on the capability side. The latest generation of AI models and tools represents a step change, not an incremental improvement. Anthropic's recently released Claude Sonnet 4.6, for instance, scores 79.6% on SWE-bench Verified, a benchmark measuring the ability to resolve real software engineering problems end to end, and 72.5% on OSWorld, which tests autonomous operation of a desktop computer. The speed of progress is astounding.
These are not abstract benchmarks. They describe tools that can, with meaningful reliability, write and debug code, produce structured analysis, draft and edit documents, and manage complex multi-step workflows for tasks that occupy a large proportion of the working day for many knowledge workers. The explosion of vibe coding, rapid prototyping, AI-assisted analysis, and chatbot deployment across organisations is happening because the tools work, and in ways they simply did not a few months ago.
The acceleration of capability is not slowing. If anything, each new release raises the floor of what should now be considered baseline organisational competence in working with AI.
For individuals and small teams, this represents a clear and democratising shift. Capabilities once reserved for well-resourced organisations, such as rapid iteration, broad research synthesis, and professional-grade communication across audiences, are now available to almost anyone with a subscription and the curiosity to use them well.
The Responsibility
Power of this kind does not arrive without obligations. Three in particular deserve attention.
First, there is the question of quality and accountability. The ease with which AI tools now produce plausible, well-formatted, confident-sounding content creates a new risk: the substitution of fluency for accuracy. The avalanche of AI-generated output, including vibe-coded applications, data analysis reports, policy summaries, and much more, raises an urgent question about who is checking, and who is responsible when things go wrong. The tool produces. The human must still judge and decide.
Second, there is the question of dependency and lock-in. As my imaginary AI genie understands rather well, the more deeply AI tools embed into how we work, the more costly it becomes to step away from them. This is not hypothetical. Organisations that have integrated AI into core workflows, communication processes, and product development cycles are discovering that the switching cost (in time, retraining, and disruption) is already significant. Strategic dependency on a small number of providers is a governance question that is not yet receiving the boardroom attention it deserves.
Third, and most uncomfortably, there is the question of the provenance of the power itself. The training data that gives these models their capability was drawn from the accumulated intellectual output of the internet: content created by individuals, organisations, and communities who were frequently neither consulted nor compensated. The infrastructure on which AI runs was built on decades of publicly funded research. The models were refined, in part, by observing real users in real workflows. The ethical accounting here is genuinely complex, and anyone engaging seriously with responsible AI adoption has to address that complexity honestly rather than setting it aside for convenience.
The Bill
And then there is the money. The scale of investment in AI infrastructure is, by any historical comparison, extraordinary.
Gartner forecasts worldwide AI spending will total $2.5 trillion in 2026, a 44% increase over the previous year. The five major hyperscalers (Microsoft, Google, Amazon, Meta, and Apple) have collectively committed up to $720 billion in capital expenditure on AI infrastructure this year alone, a 74% jump from 2025. To put that in perspective, it has been compared to the inflation-adjusted capital cost of the entire US interstate highway network, which took several decades to build.
Those numbers describe the supply side. On the demand side, organisations are discovering that AI spending has a habit of arriving in ways that are difficult to forecast and hard to govern. Average enterprise spending on AI-native applications now exceeds $1.2 million per year (a 108% year-on-year increase), and that figure almost certainly undercounts the volume of shadow AI adoption that is bypassing procurement entirely. A Microsoft Copilot licence layered onto an existing Microsoft 365 subscription runs at $30 per user per month, before counting the AI-native tools employees are independently expensing through personal accounts. And many AI services operate on consumption-based pricing, with costs that scale with usage and can escalate faster than annual budget cycles can absorb.
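The compounding effect of consumption-based pricing is easy to underestimate, and it can be sketched in a few lines. The function and all of the figures below are purely illustrative assumptions for the sake of the arithmetic, not drawn from any vendor's actual price list:

```python
def project_annual_spend(seats, seat_price, usage_units, unit_price, monthly_growth):
    """Project 12 months of AI spend: a flat per-seat licence fee plus a
    consumption-based component whose usage compounds month over month.
    All inputs are hypothetical; this is a budgeting sketch, not a price model."""
    total = 0.0
    usage = usage_units
    for _ in range(12):
        total += seats * seat_price   # fixed licence cost, stable and easy to budget
        total += usage * unit_price   # metered consumption cost
        usage *= 1 + monthly_growth   # usage grows as adoption deepens
    return total

# Illustrative only: 500 seats at $30/month, plus metered usage starting at
# 1,000 units/month at $2 each, growing 10% every month as teams adopt the tools.
if __name__ == "__main__":
    licences_only = project_annual_spend(500, 30, 0, 2.0, 0.0)
    with_usage = project_annual_spend(500, 30, 1000, 2.0, 0.10)
    print(f"Licences only:        ${licences_only:,.0f}")
    print(f"With compounding use: ${with_usage:,.0f}")
```

The point of the sketch is the shape, not the numbers: the licence line is flat and forecastable, while the metered line compounds, so a budget set against month-one usage will be materially wrong by month twelve.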
The broader point is this: the bill for AI capability is not yet fully visible, and when it arrives in full, many organisations will find themselves in a position uncomfortably close to the one my imaginary AI genie describes. AI is deeply embedded, we're highly dependent, and we have limited leverage over the terms.
What Leaders Should Be Asking
None of this is a reason to step back from AI adoption. The capability gains are real, the competitive consequences of disengagement are real, and the potential for considerable organisational benefit is real. But leading through this challenge requires more than enthusiasm for the tools. It requires a clear view of the obligations and exposures that accompany them.
The key is to approach AI adoption not as a series of individual tool decisions, but as a strategic question about organisational capability, risk, and dependency. And then to apply the same scrutiny you would apply to any significant long-term commitment.
Here are examples of the questions you will need to address:
Do you have a realistic view of your organisation's current AI cost exposure, including the tools your people are using independently, outside formal procurement?
Which of your core workflows have become dependent on specific AI providers, and what is your contingency if terms change, access is restricted, or a provider pivots its priorities?
Who in your organisation is accountable for the quality and accuracy of AI-assisted outputs, especially in high-stakes contexts such as policy, finance, or public communication?
Are you treating AI investment with the same long-term financial discipline you would apply to major infrastructure, or financing it from short-term operating budgets in ways that may not be sustainable?
Are you engaging honestly with the ethical questions around data provenance and intellectual property, or deferring them on the grounds that everyone else is doing the same?
The power of AI is real. But so are the responsibilities that come with it. And, without doubt, the bill is coming. The organisations that navigate this period well will be those that hold all three in view at once, not just celebrating the first while quietly hoping the other two will resolve themselves.
There’s no such thing as a free lunch. That, as any experienced leader knows, is just not how it works.