Digital Economy Dispatch #245 -- What Makes for a Good AI Strategy?

Defining an effective strategy is a messy business. Here are three key frameworks that can help in the disruptive era of AI.

In recent weeks, I've been grappling with a fundamental question that keeps resurfacing in my work with organizations across various sectors: "What makes for a good AI strategy?" I've been helping leaders navigate how their existing approaches should evolve to embrace AI capabilities and how to coordinate disparate activities into aligned priorities that drive meaningful action. In this work, one thing has become clear—creating strategy is a messy business, with many competing views and perspectives on what matters.

In this chaotic time, we're all constantly juggling competing needs and priorities, compromising on urgent requirements to balance financial realities, and investing in future opportunities while delivering on today's necessities. These tensions can tie any leadership team in knots. The arrival of AI tools and capabilities has only amplified this complexity, adding enormous volatility to the strategic development process.

Whether you believe you need a separate "AI strategy" or not, one reality is undeniable: AI will be a core component of every organization's activities for the foreseeable future. The question isn't whether to incorporate AI into your strategic thinking—it's how to do it effectively.

But, where to start? Through my experience working with organizations in vastly different situations, I've found myself repeatedly returning to three foundational frameworks that cut through the strategic chaos. Each addresses a critical dimension of effective strategy formation, and together they provide a robust foundation for navigating the AI-driven transformation ahead.

Situational Analysis: Understanding Your Competitive Landscape

Simon Wardley's approach to situational analysis fundamentally challenges how most organizations approach strategic planning. His central insight is that most strategic failures don't stem from poor execution or lack of good ideas—they occur because organizations don't properly understand their situation. They're essentially navigating without a map.

Wardley's methodology demands that effective strategy begin with situational awareness. This means comprehensively mapping your value chain, understanding the evolutionary characteristics of each component, and recognizing the dynamic forces shaping your business environment. This contrasts sharply with frameworks that jump directly to strategic choices without establishing a clear picture of the current state and change forces.

Wardley Maps are visual representations that plot business value chain components against two key axes: visibility to users (vertical) and evolutionary stage (horizontal). The visibility axis ranges from invisible backend infrastructure to highly visible user-facing components. The evolutionary axis progresses from genesis (novel, uncertain) through custom-built and product stages to commodity (well-understood, standardized).

These maps reveal relationships between components, showing how value flows through your system and where dependencies exist. By positioning each element according to its visibility and evolutionary stage, you can identify which components are ripe for disruption, where inefficiencies exist, and how competitive dynamics will likely shift.

The strategic power lies in revealing patterns invisible in traditional analysis. Maps help you understand when to build versus buy, where to focus innovation efforts, and how to anticipate competitive moves based on evolutionary pressures. For example, a component moving from product to commodity stage signals an opportunity for others to build higher-order capabilities on top of it, while indicating that competing directly on that component may become increasingly difficult.
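The two axes and the build-on-top pattern described above can be made concrete in a few lines of code. This is purely an illustrative sketch, not part of Wardley's own tooling; the component names, visibility scores, and the `build_on_top_candidates` helper are all hypothetical, chosen to show how late-stage components signal "build higher-order capabilities here" rather than "differentiate here".

```python
from dataclasses import dataclass

# Evolution stages from the horizontal axis, ordered left to right.
STAGES = ["genesis", "custom-built", "product", "commodity"]

@dataclass
class Component:
    name: str
    visibility: float  # 0.0 = invisible backend, 1.0 = highly user-facing
    stage: str         # one of STAGES

def build_on_top_candidates(components):
    """Components at or past the product stage: increasingly hard to
    differentiate on directly, but good foundations to build upon."""
    return [c.name for c in components
            if STAGES.index(c.stage) >= STAGES.index("product")]

# A hypothetical three-component value chain.
value_chain = [
    Component("customer portal", 0.9, "product"),
    Component("recommendation model", 0.6, "custom-built"),
    Component("compute infrastructure", 0.1, "commodity"),
]

print(build_on_top_candidates(value_chain))
# prints ['customer portal', 'compute infrastructure']
```

The custom-built recommendation model is the one component still worth competing on directly; the other two are candidates for buying, standardizing, or building on top of.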

Managing Uncertainty: Embracing Complexity

Dave Snowden's approach to understanding uncertainty through evolutionary systems fundamentally reframes how organizations should approach decision-making in complex environments. Rather than applying linear, cause-and-effect thinking to all situations, Snowden argues that different problem types require fundamentally different approaches based on system nature and uncertainty levels.

His work emphasizes that complex systems are characterized by emergent properties, non-linear relationships, and evolutionary dynamics where small changes can have disproportionate effects. This challenges traditional management approaches that assume predictability and control, instead advocating for adaptive strategies that work with inherent uncertainty and the evolutionary nature of complex systems.

The Cynefin framework categorizes problems into five distinct domains:

  • Simple (Obvious): Best practices apply because cause and effect relationships are clear and predictable.

  • Complicated: Good practices and expertise are needed where cause and effect relationships exist but may not be immediately apparent.

  • Complex: Emergent practices are needed because cause and effect can only be understood in retrospect.

  • Chaotic: Novel practices and immediate action are required to establish stability.

  • Disorder: Situations where it's unclear which domain applies.
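The five domains above can be read as a lookup from problem type to decision model. The sense/analyze/probe/act sequences below come from Snowden's published framework; encoding them as a dictionary is just an illustrative way to make the matching explicit, and the function name is my own.

```python
# Cynefin decision models, one per domain, as a simple lookup table.
CYNEFIN_RESPONSES = {
    "simple":      "sense -> categorize -> respond (apply best practice)",
    "complicated": "sense -> analyze -> respond (bring in expertise)",
    "complex":     "probe -> sense -> respond (safe-to-fail experiments)",
    "chaotic":     "act -> sense -> respond (establish stability first)",
    "disorder":    "decompose the situation until a domain becomes clear",
}

def respond_to(problem_domain: str) -> str:
    """Match the response to the problem's nature, as the framework advises."""
    return CYNEFIN_RESPONSES[problem_domain.lower()]

print(respond_to("complex"))
# prints probe -> sense -> respond (safe-to-fail experiments)
```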

The strategic value lies in helping organizations match their response to the problem's nature, preventing the common mistake of applying complicated or simple domain solutions to complex domain problems. In complex systems, Snowden advocates for managing the evolutionary potential rather than trying to control specific outcomes. This involves creating conditions for beneficial emergence, conducting multiple small experiments in parallel, amplifying what works, and dampening what doesn't.

Rationalizing Strategy: The Discipline of "Faking" Rationality

David Parnas and Paul Clements's approach, set out in their classic 1986 paper "A Rational Design Process: How and Why to Fake It", is grounded in software development, yet it offers crucial insights into communication for strategic planning. Their seminal insight addresses a fundamental challenge: while truly rational strategic processes are impossible to achieve in practice, the discipline of "faking" such processes produces significantly better strategic outcomes than proceeding without structure.

Their philosophy acknowledges that strategic planning involves incomplete market information, evolving competitive landscapes, human cognitive limitations, external disruptions, organizational politics, and resource constraints that force suboptimal compromises. Rather than abandoning systematic approaches due to these limitations, the framework advocates for embracing the discipline of rational strategic processes while acknowledging their inherent imperfections.

The core concept of "faking" rational strategy centers on the distinction between the messy reality of how strategic decisions are actually made and the clean, logical presentation of those decisions. While strategy development may involve intuition, political negotiations, trial and error, and serendipitous discoveries, the final strategic framework should be presented as a logical, well-reasoned structure that others can understand, evaluate, and execute.

This "fake" rationality provides essential organizational benefits: it offers guidance for teams facing overwhelming complexity, brings organizations closer to optimal decisions than ad hoc approaches, enables standardized processes across business units, facilitates progress measurement, and makes external review and investment decisions possible.
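One way to practice this discipline is to give every strategic decision a standard, reviewable record, whatever the messy path that led to it. The record structure below is a hypothetical sketch inspired by Parnas and Clements's documentation discipline, not a format they prescribe; the field names and example content are my own.

```python
from dataclasses import dataclass, field

@dataclass
class StrategicDecision:
    """One entry in a 'rationalized' strategy document: however messy the
    real deliberation was, the record presents a clean, reviewable argument."""
    decision: str
    rationale: str
    alternatives_considered: list = field(default_factory=list)
    known_assumptions: list = field(default_factory=list)

# A hypothetical example entry.
entry = StrategicDecision(
    decision="Buy a hosted LLM service rather than train models in-house",
    rationale="Model training is commoditizing; differentiation lies above it",
    alternatives_considered=["fine-tune open models", "build a training team"],
    known_assumptions=["vendor pricing stays stable", "data may leave our estate"],
)
```

Recording alternatives and assumptions alongside the decision is what makes the "fake" rationality useful: reviewers can challenge the argument as presented, even though nobody believes the deliberation happened in this tidy order.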

Key Lessons for AI Strategy

There is no panacea for creating a successful AI strategy, just a lot of hard work. Yet these three frameworks offer crucial insights for how to shape your strategic thinking in the AI era.

First, map before you move. AI is rapidly evolving from its early piloting through custom solutions toward AI-at-Scale and eventual commoditization. Organizations that understand where AI capabilities sit on this evolutionary spectrum—and where their own value chain components are positioned—will make better strategic decisions about when to build AI capabilities internally, when to buy existing solutions, and when to partner with AI providers. The key is avoiding the trap of trying to differentiate on AI components that are becoming commoditized while missing opportunities to build higher-order capabilities on top of emerging AI commodities.

Second, embrace experimental approaches for AI initiatives. Many AI strategy challenges exist in Snowden's complex domain, where cause and effect can only be understood in retrospect. This means traditional strategic planning approaches—which assume predictable outcomes—are fundamentally ill-suited to AI strategy. Instead, organizations need to create portfolios of safe-to-fail experiments, rapidly sense what's working, and adapt based on emergent results. The goal isn't to predict AI's impact but to position your organization to evolve with it.
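The probe-sense-respond loop for such a portfolio can be sketched in a few lines. This is a toy illustration, not a real experimentation platform: the experiment names, the budget multipliers, and the `signal` function (which stands in for outcomes that cannot be predicted in advance, only sensed) are all hypothetical.

```python
def run_experiment_portfolio(experiments, signal, rounds=3):
    """Run many small experiments in parallel, amplify what works,
    dampen what doesn't -- a sketch of probe-sense-respond in code."""
    budgets = {name: 1.0 for name in experiments}
    for _ in range(rounds):
        for name in list(budgets):
            if signal(name) > 0:
                budgets[name] *= 1.5   # amplify what works
            else:
                budgets[name] *= 0.5   # dampen what doesn't
        # retire experiments whose budget has collapsed
        budgets = {n: b for n, b in budgets.items() if b > 0.2}
    return budgets

# Hypothetical observed outcomes for three AI pilots.
outcomes = {"ai-triage pilot": 1, "chatbot rollout": -1, "doc summarizer": 1}
surviving = run_experiment_portfolio(list(outcomes), lambda n: outcomes[n])
print(sorted(surviving))
# prints ['ai-triage pilot', 'doc summarizer']
```

The point of the sketch is the shape of the process, not the numbers: no single experiment is bet-the-company, failures are cheap to retire, and resources flow toward whatever the environment reveals as working.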

Third, rationalize your AI strategy even when the path is unclear. The discipline of creating structured AI strategic frameworks—even knowing they're based on imperfect information—drives better outcomes than ad hoc approaches. This means developing clear AI governance structures, establishing measurable AI objectives, and creating systematic processes for evaluating and scaling AI initiatives. The strategic plans you create may not perfectly capture the messy reality of AI experimentation, but they provide essential organizational guidance and enable better decision-making over time.

The question of "what is strategy" remains challenging, but these frameworks offer a foundation for navigating strategic complexity in the AI age. The organizations that will thrive are those that can map their competitive landscape, embrace uncertainty through experimentation, and maintain the discipline of rational strategic thinking—even when the path forward is far from clear.