Digital Economy Dispatch #249 -- How to Think About AI: A Framework for Leaders
I've been having countless conversations with senior leaders and managers about AI adoption over the past year, and I've started to see a recurring pattern. While everyone acknowledges AI's transformative potential, most are struggling to develop a solid framework for how to think about this technology strategically. They're caught between competing narratives – AI as salvation versus AI as existential threat – and lack the conceptual tools to navigate this complexity thoughtfully and responsibly.
That's why I was so pleased to discover Richard Susskind's latest book, "How to Think About AI: A Guide for the Perplexed". Susskind, who has been working on AI since the early 1980s and wrote his doctorate on artificial intelligence and law at Oxford University, addresses this concern head-on. His book offers precisely what busy executives need: a clear conceptual framework for thinking about AI that cuts through the noise and enables better decision-making.
The Process vs. Outcome Distinction: The Need to Focus on AI Disruption
The most powerful insight from Susskind's book is the distinction between "process-thinking" and "outcome-thinking" when it comes to AI. Process-thinkers are intrigued by the operational details of AI (how it works), while outcome-thinkers are preoccupied with its overall impact. This distinction matters enormously for leaders because it shapes how they approach AI adoption in their organizations.
Most professionals, Susskind argues, are trapped in process thinking. They focus on how they do their work, their processes, knowledge, and expertise, rather than the outcomes their clients actually want. When AI enters the picture, they immediately worry about whether machines can replicate their specific methods and skills. This leads to what Susskind calls "not-us thinking" and the belief that AI might work in other fields, but surely not in ours.
I have seen this constantly in my conversations with leaders about digital transformation. Legal professionals tell me AI can't understand nuance in the law. Medical professionals insist AI lacks empathy. Financial professionals worry AI can't handle the complexity of regulations. Educators are convinced that teaching requires an experienced guide. Each group believes its domain is somehow uniquely resistant to AI disruption.
But as Susskind points out with a hint of mischief, people don't want neurosurgeons – they want health. They don't want time with a lawyer; they want legal certainty and resolution. In most cases, customers don't care about your internal processes; they care about outcomes. Once you shift to outcome-thinking, entirely new possibilities emerge. You start asking different questions: What results do our clients actually want? How might AI help us deliver those outcomes more effectively, efficiently, or accessibly?
Similar to Christensen’s “jobs-to-be-done” approach, this mindset shift is foundational because it opens your thinking to AI's true potential rather than limiting it to automating existing workflows.
Beyond Substitution: Transformation Through Three Pathways
The second key framework from Susskind's work addresses how AI will actually change organizations. Most analysts see automation as the biggest AI threat to jobs, but Susskind argues that innovation and elimination may be bigger game changers.
He identifies three ways AI transforms work, echoing the “digitizing vs. digital transformation” distinction that I have emphasized for many years:
Automation is what most leaders focus on, using AI to computerize existing tasks. This is the most obvious application, but also the most limited. It's essentially taking your current processes and making them faster or cheaper.
Innovation means delivering the outcomes clients want using radically new underlying processes. Instead of automating how you currently solve problems, you find entirely different ways to achieve the same results. For legal services, this might mean using AI to implement different approaches to conflict resolution or prevention of legal problems that could reduce or replace litigation as we know it.
Elimination goes even further by not just solving problems differently but preventing them from arising in the first place. This is the most transformative but also the hardest for leaders to envision because it requires imagining a world where certain types of work simply aren't necessary anymore. Examples can be seen in SwissRe’s move towards parametric insurance and Vitality’s focus on preventative healthcare.
As a leader, you need to think beyond automation. Ask yourself: What entirely new approaches might AI enable? What problems could we eliminate rather than just solve more efficiently? These questions unlock AI's transformative potential rather than just its efficiency gains.
A Balanced Approach to Risk: Moving Beyond Polarization
The third crucial element of Susskind's framework addresses risk. I've watched too many leadership teams get stuck in unproductive debates about whether AI is fundamentally good or bad. Susskind argues that "balancing the benefits and threats of artificial intelligence is the defining challenge of our age", but doing so requires moving beyond polarized thinking.
Rather than engage in unfocused hand-wringing about AI risks, Susskind organizes his analysis using a structured approach to risk categories. While the specific details of his risk taxonomy are complex, the key insight for leaders is that you need a much broader understanding of risk than most organizations currently possess.
Most leadership discussions about AI risk focus narrowly on job displacement and data security, or get lost in endless AI governance checklists. These concerns are important, but they cover just a small part of a much larger risk landscape. Susskind emphasizes measured urgency rather than end-of-the-world hysteria, arguing that policymakers and leaders need to grasp the size and speed of current AI shifts not because disaster is inevitable, but because thoughtful preparation is essential.
The framework requires you to systematically evaluate risks across multiple dimensions while simultaneously assessing potential benefits. This isn't about being optimistic or pessimistic about AI. It's about being comprehensive and analytical. You need processes for identifying risks you haven't yet considered, not just managing the obvious ones.
Thinking Beyond Today's Technology
One final element of Susskind's framework is crucial for leaders: thinking beyond current AI capabilities. He positions ChatGPT and GenAI as just the latest chapter in the ongoing story of AI, arguing that not-yet-invented technologies will have far greater impact in the 2030s than the tools we have today. Too many leaders are trapped in a narrow vision based on today's AI.
This temporal dimension is critical for strategic planning. Most organizations are developing AI strategies based on today's capabilities, but Susskind advocates for "what-if AGI thinking" by proceeding as if artificial general intelligence is the most likely outcome. This doesn't mean panicking about superintelligence, but it does mean thinking about trajectories rather than current states.
As a leader, you need strategies that work with today's AI but remain robust as capabilities advance dramatically. This requires scenario planning and adaptive approaches rather than fixed plans based on current limitations.
The Critical Foundation for AI Adoption
Susskind’s book is a quick read that I thoroughly recommend. What I find most valuable about his approach is that it provides a structured way of thinking about AI that's both grounded and forward-looking. His goal isn't to sell hype or pour cold water on everything. Instead, it's to replace fuzzy impressions with crisp mental models that give us better ways to think about AI.
These frameworks help leaders avoid common traps: getting stuck in process-thinking when they should focus on outcomes, limiting AI to automation when innovation and elimination offer greater potential, falling into polarized debates about AI's goodness when they should be systematically evaluating specific risks and benefits, and planning based on today's capabilities when they should be preparing for rapid advancement.
Having this first step toward a coherent framework for thinking about AI is absolutely critical for successful AI adoption in any organization. Without clear mental models, you'll be reactive rather than strategic, tactical rather than transformative. You'll optimize for today's problems rather than tomorrow's opportunities.
The leaders who will thrive in an AI-enabled world are those who can think clearly about this technology's implications. Susskind's frameworks provide the conceptual foundation that makes everything else possible. Once you have the right way of thinking about AI, you can begin to develop strategies, allocate resources, and make decisions that position your organization for success in a rapidly changing landscape.
We’re all now getting to grips with how AI will transform industry and society. Your progress in this task will depend on whether you have the frameworks to understand and navigate that transformation thoughtfully. That understanding starts with how you think about AI itself.