Digital Economy Dispatch #257 -- Who Do We Trust With AI?
Adopting powerful AI tools demands a critical focus on who owns and controls AI -- a tension very well described in Parmy Olson's book, Supremacy.
Perhaps the most important change I have seen in the last couple of years is how digital technology has moved from the sidelines to the core of company business models. So much so that it now defines how they operate. More and more, our society is shaped by these technologies. Marc Andreessen’s now-famous insight that “software is eating the world” has become an accepted business principle and a backbone of today’s digital tech community.
Consequently, as digital technology becomes the foundational layer of modern business and society, it demands we ask deeper, more critical questions about the individuals and organizations behind the key systems we all rely on. Who owns and controls the technology stacks on which businesses and society now depend? How do we choose who to trust with AI’s future? And what are the implications if we get these choices wrong?
A Front-Row Seat to the AI Arms Race
With this in mind, I went back to Parmy Olson’s book, “Supremacy: AI, ChatGPT, and the Race That Will Change the World”, which rightly earned the Financial Times Business Book of the Year in 2024. It confronts this dilemma head-on and provides us with a context to delve deeper into these questions.
In Olson’s account, we’re placed right at the centre of the tech world’s most important AI race: a contest between OpenAI and DeepMind, their visionary founders, and the forces of venture capital and big business that have ultimately shaped their focus and direction. It’s a strong narrative, and I found many parallels with my own experience navigating digital transformation: initial optimism colliding with the stark realities of corporate demands, constantly shifting priorities, and a persistent tension between innovation, governance, and the commercial realities of Big Tech.
I think what appeals most about the book is the way Olson skillfully describes how these labs moved from their initial, idealistic stance toward the commercial compromises required to secure resources, and then morphed into powerful assets of the US tech giants. The book traces the technological impacts of this shift and reveals the human and ethical dramas at the heart of the decisions being made by all those adopting and scaling AI tools.
As Olson explores, this tension has broad consequences for all of us. Not just for the availability of AI tools and services for individuals, but also for organizations of every size, which will struggle to define an appropriate AI strategy that relies on one or more of these companies, and find it even harder to stay aligned with them as their key AI technologies evolve.
Lessons that Hit Close to Home
Three themes from Olson’s book resonate especially strongly with my own work and some of the concerns I’ve been writing about recently:
Concentration of Power: The unchecked dominance of a handful of US and Chinese technology firms is not just an abstract business threat. It’s here, now, embedded in the UK’s infrastructure, digital services, and strategic decision-making. We’ve welcomed vast investments, but what are we risking for that privilege?
Ethics vs. Scale: Olson’s critique of ethical shortcuts and the prevalence of bias reminds me that our rush to deploy AI can outpace our ability to regulate or even understand it. The seductive pace of innovation tempts us to leave awkward questions for later…sometimes too late.
Operational Dependency: As our critical systems rely increasingly on outside providers, we’re constantly evaluating continuity risks. What if these partners pivot, restrict access, face government pressure, or simply fail? In my own advisory roles, I’ve stressed the importance of describing and mapping these technical dependencies to understand the subtle influence these players exert on our autonomy and values (a simple sketch of what such a dependency register might look like follows this list). But how much do we really know or understand about where these AI technology providers will take their solutions?
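To make the idea of dependency mapping a little more concrete, here is a minimal, hypothetical sketch in Python. The provider names, systems, and fields are all invented for illustration; this is just one possible shape for a register that flags providers carrying critical systems with no documented alternative, not a prescribed tool or method.

```python
# A minimal, hypothetical dependency register. All names and fields are
# invented for illustration; real registers will need far more detail.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Dependency:
    system: str                      # internal system relying on the provider
    provider: str                    # external AI provider
    criticality: str                 # "high", "medium", or "low"
    exit_options: list[str] = field(default_factory=list)  # known alternatives

def concentration_risks(deps: list[Dependency]) -> dict[str, list[str]]:
    """Group high-criticality systems with no exit option by provider,
    exposing single points of dependency: what happens if that provider
    pivots, restricts access, or fails?"""
    risks: dict[str, list[str]] = {}
    for d in deps:
        if d.criticality == "high" and not d.exit_options:
            risks.setdefault(d.provider, []).append(d.system)
    return risks

if __name__ == "__main__":
    register = [
        Dependency("customer support chatbot", "ProviderA", "high"),
        Dependency("document summarisation", "ProviderA", "high"),
        Dependency("internal search", "ProviderB", "medium"),
    ]
    for provider, systems in concentration_risks(register).items():
        print(f"{provider}: {len(systems)} critical system(s), no exit option: {systems}")
```

Even a toy register like this makes concentration visible: here, two high-criticality systems depend on a single provider with no documented alternative.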
The UK’s AI Crossroads: A Personal Perspective
Taking a UK perspective, the consequences could be enormous. As I’ve written previously, the UK stands at a strategic AI crossroads. The choices we make in the coming years will define the sovereignty, security, and resilience of our digital economy for a generation. I’ve argued before, and Olson’s book only sharpens my view, that our “third way” strategy, balancing US-style dynamism and EU-style governance, is as fraught with risk as it is with opportunity. Welcoming US AI technology investment is smart, but only if we maintain clarity on data governance, regulatory independence, and the persistent risks of strategic lock-in. My own recent writing cautions that early investment choices can quickly become long-term liabilities if we lose sight of who is ultimately calling the shots, and who will deal with the consequences.
These macro-level concerns are echoed in every organization adopting AI. Digital leaders and policymakers are wrestling with these issues every day. The debate isn’t just policy. It’s deeply personal, shaping how we prepare teams, update frameworks, and educate executives for a world in which control can be illusory and risk ever-present.
Advice from the Trenches
What does this mean in practice? All of us involved in advocating for and supporting digital transformation initiatives must respond to these challenges. Here are a few thoughts on how:
1. Scrutinize Trust, Don’t Assume It
Interrogate every strategic partnership: Who benefits? Who controls the data? What happens when interests diverge?
2. Demand and Demonstrate Transparency
Push for deeper transparency: not just in technical specifications, but in operational processes and commercial terms. It’s not enough to trust; we must verify.
3. Balance Innovation Against Sovereignty
Urge your teams to ask: How much autonomy do we really have, and what are we prepared to trade for speed and innovation?
4. Foster Open, Honest Debate
From executive workshops to project kick-offs, make space for uncomfortable truths. Optimism should not eclipse caution, nor allow us to avoid difficult policy or ethical questions.
5. Prepare for Complexity and Divergence
As regulations and provider policies evolve, coach organizations to expect and adapt to change: maintaining compliance, working across different regulatory regimes, and surviving technological and operational disruptions.
AI Supremacy and Its Implications
Re-reading Olson’s “Supremacy” provides much more than a historical perspective on infighting between today’s AI technology billionaires. It emphasizes the urgency of choices all senior managers must confront today. As the UK pushes forward in AI adoption, my own experience echoes her warning: Be careful who you trust. Every deal, every new system, and every governance compromise helps define the shape of your future autonomy and resilience. And that’s as true for your organization’s AI strategy as it is for the UK as a whole.
In all the AI technology excitement, it’s easy to forget that with digital technology revolutions, it is essential not just to ride the wave, but to guide the direction with care, vigilance, and humility. The AI revolution promises much, but it is our responsibility to make sure it delivers in a way that preserves the trust, sovereignty, and values we hold dear.