Digital Economy Dispatch #264 -- AI Bottlenecks, Jagged Edges, and the Real Barriers to AI-at-Scale
In 2025, AI capability has outpaced institutional readiness. The bottleneck has shifted from what AI can do to what organisations will allow it to do.
As we head into 2026, the conversation about AI adoption has shifted. We're no longer debating whether AI works. Instead, leaders and decision makers are wrestling with a more uncomfortable question: Why isn't it delivering the transformative results we were promised?
Ethan Mollick, the Wharton professor whose work on AI I've referenced many times in these Dispatches, recently offered a compelling framework for understanding this puzzle. His concept of the "Jagged Frontier" describes something that everyone working with AI will recognise: the technology's bewildering pattern of being superhuman at some tasks while remaining stubbornly inadequate at others. So, while AI can now outperform human doctors at many kinds of medical diagnosis and solve complex mathematics problems that would stump most experts, it still struggles with simple visual puzzles and may be getting worse at simple tasks such as solving anagrams.
This jaggedness matters because it creates bottlenecks. A system is only as functional as its weakest components. Even if AI reaches superhuman capability across ninety-nine percent of a task, that remaining one percent can prevent full automation. Consider Mollick's example of AI-powered medical literature reviews. Researchers found that GPT-4.1, when properly prompted, could reproduce and update an entire issue of Cochrane reviews (reviewing evidence for medical tests) in two days, representing approximately 12 work-years of traditional systematic review effort. The AI outperformed human reviewers on accuracy. Yet it is brittle because it cannot access supplementary files or email authors to request unpublished data, things human reviewers do routinely. That means 12 work-years can become two days, but only if a human handles the edge cases.
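A rough calculation shows why the weakest step dominates. If AI dramatically accelerates most of a task but a residual fraction stays at human speed, overall throughput is capped by that residue, much like Amdahl's law in computing. The sketch below is purely illustrative; the numbers are hypothetical and not drawn from the Cochrane study.

```python
# Illustrative only: the overall speedup when a fraction of a task
# remains human-paced (an Amdahl's-law-style bound).

def effective_speedup(automated_fraction: float, ai_speedup: float) -> float:
    """Overall speedup when `automated_fraction` of the work is
    accelerated by `ai_speedup` and the rest stays at human speed."""
    human_fraction = 1.0 - automated_fraction
    return 1.0 / (human_fraction + automated_fraction / ai_speedup)

# Even with a 1000x faster AI on 99% of the work, the remaining
# 1% of human edge-case handling caps the overall gain near 100x.
print(effective_speedup(0.99, 1000))  # ~91x, not 1000x
```

On those assumptions, a thousand-fold faster model yields barely a ninety-fold gain overall, which is why the last one percent matters so much.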
From Intelligence Bottlenecks to Institutional Bottlenecks
However, what strikes me most about Mollick's analysis isn't the technological limitations. It's his observation about a different kind of bottleneck altogether. Keeping with the healthcare example, as Mollick describes, AI can now find promising drug candidates far faster than before. But clinical trials still need real patients, who take significant time to recruit and monitor. Similarly, regulators still require human review before sign-off. So even if AI generates ten times more good drug ideas, the bottleneck shifts from discovery to approval. Intelligence speeds up; institutions don't.
This insight should be required reading for every digital leader and policy maker planning their AI strategies for the coming year. We've spent the past few years focused almost exclusively on the technology itself: which models to deploy, how to prompt them effectively, where to find the best use cases. But in practice the real constraints on AI-at-Scale have nothing to do with AI capability at all.
Think about what this means in practice. Your organisation might successfully pilot an AI system that can process procurement requests in minutes rather than days. The technology works. The pilot succeeds. But then you discover that your procurement regulations require human sign-off at three different levels. Your finance systems weren't designed for this volume of transactions. Your audit processes assume human decision-making at key stages. The AI is ready, but your institution isn't.
This is the uncomfortable truth that rarely makes it into vendor presentations or analyst reports. Organisations aren't just collections of processes waiting to be automated. They're complex institutional structures shaped by regulations, professional standards, liability frameworks, union agreements, cultural norms, and deeply embedded ways of working. These structures exist for reasons. Some are outdated and should be reformed. Others encode hard-won lessons about accountability, safety, and fairness that we ignore at our peril.
For digital leaders, this reframing demands a fundamental shift in how we approach AI strategy. Instead of asking "What can AI do?", we need to ask "What will our institutions allow AI to do?". AI success is not measured by pilot project metrics. Instead, we need to map the institutional pathways that determine whether those pilots can ever scale. Hiring more data scientists and AI engineers won't help. Investing in regulatory expertise, change management capability, and the patient work of institutional reform will.
The Migration of AI Bottlenecks
This isn't insurmountable. Institutions do change. Regulations get updated. Professional standards evolve. But they do so on their own timescale, and that timescale rarely matches the breathless pace of technology announcements. The organisations that succeed with AI-at-Scale will be those that understand this dynamic and plan accordingly.
What we're witnessing is a predictable pattern in how bottlenecks migrate as AI capability advances. Initially, the constraint is capability itself. We ask: Can AI perform the task at all? Can it analyse these documents, generate this code, and identify these patterns? For many tasks, we've now moved past this stage. The technology works.
The next bottleneck is process. Even when AI can perform a task, organisational processes weren't designed for AI-speed execution. Workflows assume human timescales. Handoffs between teams create delays. Legacy systems can't ingest AI outputs. Approval chains remain unchanged. This is where many organisations find themselves today, discovering that their operational infrastructure becomes the limiting factor once AI capability is proven.
But there's a third bottleneck that receives far less attention: verification. Maybe, like me, you’re now finding that you spend a lot of time reviewing AI outputs. As AI takes on more consequential decisions, the question shifts from "Can AI do this?" to "How do I know AI did this correctly?".
In regulated industries, this verification burden is explicit and mandated. Financial services firms must demonstrate model governance. Healthcare organisations must validate clinical AI against established standards. Legal teams must ensure AI-generated contracts meet professional obligations. But in less regulated contexts, the need for human verification is just as important, even if it may not be so obvious.
This verification bottleneck is particularly challenging because it scales poorly. If AI can process a thousand applications in the time a human processes ten, but each AI decision still requires human review, you've simply moved the bottleneck rather than eliminated it. Some organisations respond by sampling, reviewing only a percentage of AI decisions. Others implement exception-based workflows, where AI handles straightforward cases autonomously while flagging edge cases for human attention. Of course, both approaches introduce risk and require sophisticated governance frameworks that most organisations have yet to develop.
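To make the exception-based pattern concrete, here is a minimal sketch of that triage logic. The confidence threshold, the `Decision` fields, and the review queue are all hypothetical placeholders for illustration, not a reference to any particular product or system.

```python
# A minimal sketch of an exception-based review workflow:
# AI applies high-confidence decisions autonomously and routes
# everything else to a human queue. All names and thresholds
# here are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    outcome: str        # e.g. "approve" or "reject"
    confidence: float   # model's self-reported confidence, 0..1

REVIEW_THRESHOLD = 0.95  # below this, a human must look

def triage(decision: Decision, human_queue: list[Decision]) -> str:
    """Auto-apply confident decisions; flag edge cases for review."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"auto-applied: {decision.outcome}"
    human_queue.append(decision)  # exception path: human verification
    return "queued for human review"

queue: list[Decision] = []
print(triage(Decision("A-101", "approve", 0.99), queue))  # auto-applied
print(triage(Decision("A-102", "reject", 0.62), queue))   # queued
```

Where to set that threshold, and how to audit the cases the AI applies on its own, is precisely the governance work most organisations have yet to do.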
The implications are significant. As AI capability continues to advance, the bottleneck will keep migrating. Today's process constraints will eventually be resolved through system modernisation and workflow redesign. But verification challenges may prove more stubborn, particularly in domains where errors carry serious consequences and accountability structures remain anchored in human decision-making.
As you plan your AI rollout, there are several practical implications of this “Jagged Frontier” worth considering. First, audit your institutional constraints with the same rigour you apply to technical assessments. Which regulations govern your AI use cases? Which professional standards apply? Which stakeholders have legitimate authority over process changes? Understanding these factors upfront will save you from the frustration of successful pilots that go nowhere.
Second, invest in institutional capacity alongside technical capability. This means building relationships with regulators, engaging with professional bodies, participating in standards development, and developing internal expertise in governance and compliance. These activities feel slow and unglamorous compared to spinning up AI projects, but they determine your organisation's ability to capture value from AI over the long term.
Third, choose your battles wisely. Some institutional bottlenecks are immovable in the short term. Others are ripe for reform. Focus your AI efforts on areas where institutional constraints are manageable, while working in parallel to shift the constraints in areas with higher strategic value.
Finally, remember that institutional bottlenecks affect everyone equally. Your competitors face the same regulatory requirements, the same professional standards, the same procurement rules. This creates an interesting strategic dynamic. The advantage goes not to whoever deploys AI fastest, but to whoever best understands and navigates the institutional landscape in which AI must operate.
The Long Game of AI-at-Scale
What I’ve learned over more than two decades of digital transformation is that technology rarely delivers value on its own terms. Success comes from the hard work of organisational change, process redesign, and capability building. AI is proving no different. The jagged frontier of AI capability will continue to advance. The more fundamental question is whether our institutions can keep pace and whether we have the wisdom and patience to help them do so.