Digital Economy Dispatch #233 -- AI as Normal Technology: A Pragmatic AI-at-Scale Strategy Approach
Hot on the heels of "AI Snake Oil", Narayanan and Kapoor's forthcoming book advocates viewing AI as "normal technology", emphasizing that maintaining human control and assessing progress realistically are fundamental to an effective AI-at-Scale strategy.
As I’ve discussed previously, I was thoroughly impressed by Arvind Narayanan and Sayash Kapoor's recent book, "AI Snake Oil". In it, they deliver a detailed critique of the inflated promises and doomsday scenarios surrounding AI, meticulously dismantling the hype, describing how many AI applications are oversold, and highlighting the significant challenges in deploying AI solutions effectively.
"AI Snake Oil" has received a lot of commentary, acting as a crucial reality check, urging us to question the narratives that often dominate the AI conversation and to adopt a more critical and grounded perspective. So, I was excited to learn that Narayanan and Kapoor are planning a new book, and they have now released a document providing a sneak peek into some of their emerging ideas.
Unsurprisingly, this latest work extends their efforts to bring a voice of reason to discussions of AI, offering an alternative to the polarised utopian and dystopian dialogues that dominate the popular conversation. They propose the concept of "AI as normal technology", providing a grounded framework for digital leaders and decision-makers navigating the complexities of AI strategy.
A Counterpoint to Superintelligence
The core argument now presented by Narayanan and Kapoor is that AI should be viewed as a tool that we can and should remain in control of. This perspective stands in stark contrast to the idea of AI as a separate concept or a potentially superintelligent entity that could eclipse human control. The authors argue that the "normal technology" framing is not merely a description of the current state of AI but also a prediction about its foreseeable future and a prescription for how we should approach it.
The Pace of Progress: A Dose of Realism
One of the most insightful aspects of this new perspective is its analysis of the speed of AI progress. The authors make a crucial distinction between AI methods, AI applications, and AI adoption, emphasizing that these three areas progress at different timescales. They caution against the assumption that rapid advances in AI methods will automatically translate into equally rapid economic and societal transformation, a point I have been arguing for some time!
I found their discussion of the factors that slow down AI adoption, particularly in safety-critical areas, to be especially pertinent. The authors highlight the "capability-reliability gap" and the limitations of benchmarks in accurately measuring real-world utility.
Rethinking the Division of Labour
They also offer a thought-provoking perspective on the future division of labour between humans and AI. By unpacking the concepts of "intelligence", "capability", and "power", the authors challenge the notion that AI will inevitably render human labour superfluous. They argue that control will remain primarily in the hands of people and organizations, with a growing proportion of human work involving AI control and task specification.
Navigating the Risks: A Pragmatic Approach
In their analysis of AI risks, Narayanan and Kapoor again emphasize the importance of viewing AI as a tool. They examine accidents, arms races, misuse, misalignment, and systemic risks, arguing that a "normal technology" perspective leads to different conclusions about mitigation strategies compared to those arising from "superintelligence" scenarios.
Implications for AI Strategy
For digital leaders and decision-makers, the "AI as normal technology" framework has several important implications that reinforce many of the ideas of delivering AI-at-Scale:
Embrace Realism: Avoid both utopian and dystopian extremes. Adopt a realistic view of AI's capabilities and limitations, recognizing that progress will be gradual and uneven.
Prioritize Control: Focus on strategies that maintain human control over AI systems. Emphasize human-AI collaboration and the importance of human oversight.
Invest in Adaptation: Recognize that the successful integration of AI will require significant adaptation from individuals, organizations, and institutions. Invest in training, education, and organizational change to facilitate this process.
Adopt a Nuanced Approach to Risk: Develop risk management strategies that are tailored to the specific context and potential harms of AI applications. Avoid one-size-fits-all solutions or overly restrictive regulations that stifle innovation.
Foster Collaboration: Encourage dialogue and collaboration between researchers, policymakers, and industry leaders to ensure the responsible development and deployment of AI.
Perhaps most importantly, the “AI as normal technology” worldview provides a valuable lens for understanding and navigating the complexities of AI. By embracing a pragmatic perspective, digital leaders can develop AI strategies that are both innovative and responsible, maximizing the benefits of this powerful technology while mitigating its risks. I’m looking forward to seeing these ideas develop in their new book!