Digital Economy Dispatch #171 -- AI Leadership: Learning to break free of the past
18th February 2024
Sometimes it feels like I’m stuck in the past. Too often when faced with a new challenge, my first inclination is not to face forwards with an open mind, but to look backwards to try to extract lessons from previous experiences that help me to describe and understand it. And while relying on what’s happened before can be very helpful in many circumstances, it also brings the real danger of being too blinkered, biased, or backward. I can’t work out if my past experience is my greatest asset, or the main anchor that holds me back.
It is a challenge that all of us face as we try to solve new problems, whether it is individuals reliving past glories or organizations suppressing innovation that falls outside existing cultural norms. We’ve all experienced it in one way or another: “Sorry, that’s not the way we do things round here!”
Unfortunately, it is also a significant concern when looking to implement and apply AI, a technology that sees the future through the eyes of the past. Despite its futuristic allure, AI's intrinsic strength lies in the analysis of large amounts of historical data to extrapolate future scenarios. This approach raises important questions: Is AI overly reliant on the past in steering a course through an ever-evolving strategic and operational landscape? And if so, what are the implications for how we use AI to take us forward?
The Strengths of AI's Backwards-Looking Approach
The analytical prowess of AI, rooted in processing extensive historical data, shines a light on hidden trends, correlations, and anomalies that often elude human observation. To illustrate, consider AI's role in enhancing many different kinds of forecasting capabilities. By scrutinizing past sales patterns, supply chain movements, customer behaviour, and market fluctuations, AI can predict future demand with an unprecedented level of accuracy. Such predictive insights not only optimize inventory management but also facilitate the personalized tailoring of marketing campaigns to individual preferences, build communities around shared products and services, and influence global trends.
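To make the idea concrete, here is a minimal sketch of demand forecasting from historical sales data. It assumes a hypothetical three-year series of monthly unit sales and decomposes it into a simple linear trend plus an average seasonal pattern, then projects both forward a year. The figures and the model are invented purely for illustration; production forecasting systems are far more sophisticated, but the underlying principle of extrapolating from the past is the same.

```python
# A minimal sketch: forecast next year's demand by extrapolating the trend
# and seasonality found in past sales. All figures are invented.
import numpy as np

# Three years of hypothetical monthly unit sales (rising trend plus seasonality).
sales = np.array([
    120, 115, 130, 140, 150, 170, 180, 175, 160, 150, 140, 190,
    130, 125, 142, 155, 165, 185, 196, 190, 172, 160, 152, 210,
    142, 135, 155, 168, 180, 200, 212, 205, 186, 172, 164, 228,
])
months = np.arange(len(sales))

# Fit a simple linear trend to the history.
slope, intercept = np.polyfit(months, sales, 1)

# Estimate the average seasonal deviation for each calendar month.
detrended = sales - (slope * months + intercept)
seasonal = detrended.reshape(3, 12).mean(axis=0)

# Forecast the next 12 months: extrapolated trend plus seasonal pattern.
future = np.arange(len(sales), len(sales) + 12)
forecast = slope * future + intercept + seasonal
print(np.round(forecast, 1))
```

Everything the model "knows" comes from the three years of history it was given, which is exactly the strength, and the limitation, discussed in the rest of this dispatch.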
Moreover, AI's ability to streamline operations is exemplified through its analysis of historical performance data. This empowers organizations to identify operational bottlenecks, optimize production processes, and predict equipment failures. The result is a tangible improvement in efficiency and a reduction in downtime, underscoring the transformative impact of AI on industrial operations.
The innovation acceleration facilitated by AI is equally noteworthy. The mining of past research papers, patents, and industry trends enables AI to expedite the discovery of novel ideas, materials, and products. From designing new drugs to finding hidden deposits of raw materials, this newfound agility provides organizations with a competitive edge, illustrating how the well-tuned use of historical data can propel organizations and industries forward.
The Pitfalls of Relying on the Past to Predict the Future
Yet, as we have seen all too clearly recently, predicting the future is fraught with challenge. While historical trends offer valuable insights, they can be particularly fragile when faced with the unknown. Of course, black swan events, like pandemics or technological breakthroughs, can shatter established patterns. However, often it is the more routine challenges that are a greater threat. Complex systems like platforms, markets, or societies are inherently dynamic, with countless factors interacting in unpredictable ways. As a result, even minor adjustments in starting conditions or small variations in the operating context can lead to wildly divergent outcomes, making precise predictions near impossible. While data is crucial for understanding the past and present, embracing the inherent uncertainty of the future is key to making informed decisions and navigating the uncharted waters that lie ahead.
Consequently, a nuanced understanding of the limitations inherent in AI's dependence on past data is essential. A clear example is the potential introduction of data bias. AI algorithms trained on skewed or outdated data risk perpetuating existing biases and inequalities. For instance, a recruitment AI system may be trained on past hiring data that embeds cultural and corporate biases concerning candidates’ background, education, ethnicity, and gender. The risk is that the AI system inadvertently replicates this bias in its future recommendations, exacerbating imbalances within the workforce.
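To see how easily this happens, consider a minimal sketch using entirely synthetic data: past hiring outcomes are generated with a deliberate penalty against one group, and a simple logistic regression trained on those outcomes then recommends that group less often, even at identical qualification levels. The dataset, the variables, and the model are all hypothetical and purely illustrative.

```python
# A minimal sketch of how a model trained on biased hiring decisions
# reproduces that bias. The data below is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

qualification = rng.normal(size=n)        # candidate ability, equal across groups
group = rng.integers(0, 2, size=n)        # 0 or 1: a protected characteristic

# Historical decisions: driven by qualification, but group 1 was penalised.
past_hired = (qualification - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, past_hired)

# Two candidates with identical (average) qualification, differing only by group:
candidates = np.column_stack([np.zeros(2), np.array([0, 1])])
print(model.predict_proba(candidates)[:, 1])   # group 1 gets a lower "hire" probability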
Another significant limitation arises from AI's propensity to primarily extrapolate from existing patterns, rendering it less adept at predicting disruptive innovations or unforeseen events. A case in point is a large language model, like ChatGPT, trained on historical news articles. Such a model might struggle to accurately predict groundbreaking scientific discoveries or significant political upheavals due to its limited exposure to alternative possibilities beyond historical data.
Furthermore, an overreliance on AI predictions has been seen to foster a false sense of certainty among decision-makers. It is crucial for leaders to remember that predictions, despite their precision, are still probabilistic in nature, necessitating a balanced approach that considers alternative perspectives.
AI's Black Box
Underlying this challenge is often a poor understanding among leaders and decision makers of the fundamental concepts of AI and data science. Hence, many people beginning to rely on AI systems have little meaningful understanding of what’s inside the “AI black box”. A deeper scrutiny of AI's use of data for prediction exposes several important principles that must be recognised by anyone involved in the responsible use of AI:
Correlation is not causation. AI excels at identifying correlations within data but often falters in comprehending the underlying causal relationships. Whether it is ice cream sales correlating with sunburn, or asthma patients appearing to recover faster from pneumonia, the correlations are real but do not imply a causal relationship, and relying on them for decision-making can lead to erroneous conclusions.
Extrapolation hampers innovation. AI's proficiency in identifying patterns and extrapolating from existing data proves invaluable for short-term predictions. However, this very attribute reduces its capacity to anticipate truly disruptive innovations or paradigm shifts. An AI trained on data from a narrow set of solution approaches will have a limited view of the problem space and may overlook the transformative potential of new ways to address problems.
Missing variables and hidden biases skew data. Even the most comprehensive datasets contain gaps, and these omissions can significantly impact the accuracy of AI predictions. For instance, an AI trained on job applications from a specific region, ethnicity, or culture may inadvertently favour candidates that reflect these characteristics, potentially overlooking qualified individuals from diverse backgrounds.
Garbage in, garbage out. The quality of AI predictions is intrinsically tied to the quality of its training data. Utilizing flawed, incomplete, or outdated data inevitably leads to unreliable and potentially harmful outcomes. Social media is particularly prone to this. For example, AI trained on a dataset that includes extreme language and hate speech can inadvertently amplify harmful narratives, exacerbating social division.
Beware of overfitting. A model may perform exceptionally well on its training data yet fail to generalize accurately to new data, creating an illusion of certainty that leads to misleading conclusions and decisions. Under pressure to obtain precise answers, issues such as overfitting require careful consideration, as the sketch below illustrates.
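As a simple illustration of overfitting, the following sketch fits both a straight line and a high-degree polynomial to a small, noisy synthetic dataset whose true relationship is linear. The flexible model tends to match the training points closely while tracking the underlying signal less well. The numbers, the noise level, and the choice of polynomial degree are all illustrative assumptions.

```python
# A minimal sketch of overfitting on synthetic data. The true relationship
# is linear; a high-degree polynomial is free to chase the noise instead.
import numpy as np

rng = np.random.default_rng(1)

x_train = np.linspace(0, 1, 15)
y_train = 2.0 * x_train + rng.normal(scale=0.5, size=x_train.shape)  # noisy observations

x_eval = np.linspace(0, 1, 200)
y_true = 2.0 * x_eval          # noiseless ground truth, used only for evaluation

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    fit_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)   # error on the data it saw
    true_err = np.mean((np.polyval(coeffs, x_eval) - y_true) ** 2)    # error against the real signal
    print(f"degree {degree}: training error {fit_err:.3f}, error vs true signal {true_err:.3f}")
```

The point is not the specific numbers but the pattern: apparent precision on historical data is no guarantee of accuracy on anything new.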
When the Past Misleads: Lessons from COVID-19
The COVID-19 pandemic serves as an illustrative case study, demonstrating how reliance on pre-pandemic data can lead to misleading predictions. Consider the fragility of AI-supported supply chains as they struggled to cope during the pandemic. Due to drastic swings in production, surges in demand, and the redesign of supply chains, AI predictions during that period diverged widely from the new business reality. Forecasts that seemed perfectly logical were frequently rendered entirely inaccurate, both during and after the pandemic, by shifts in production and consumer behaviour.
This scenario underscores several potential pitfalls. Firstly, the occurrence of unforeseen events, such as the pandemic, can significantly impact markets and behaviours. AI models trained on pre-pandemic data lack the context to understand and predict such shifts, highlighting the limitations of past data in foreseeing unprecedented events.
Secondly, the concept of temporal bias must be addressed. Data collected during specific periods may not be representative of long-term trends. Predictions based on data influenced by the pandemic might not hold true in a post-pandemic world, emphasizing the importance of continuously updating and refreshing training data, as the sketch at the end of this section illustrates.
Finally, the contrast between static and dynamic environments becomes evident. The world is in a constant state of flux, with AI models rigidly reliant on past data potentially failing to adapt to changing market conditions, consumer preferences, and unforeseen disruptions. Similar to technical debt in software, data debt in AI systems can be equally corrosive.
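Here is a minimal sketch of that temporal bias, using invented demand figures: a trend fitted only to pre-shock history keeps projecting the old pattern after an abrupt disruption, while a model refreshed with recent data recovers quickly. The shock, the numbers, and the model are all hypothetical.

```python
# A minimal sketch of temporal bias: a trend fitted to pre-shock history
# keeps projecting the old pattern after a structural break in demand.
# All figures are synthetic and purely illustrative.
import numpy as np

months = np.arange(48)
demand = 100 + 2.0 * months
demand[36:] -= 60                      # an abrupt shock in month 36 (a pandemic-style disruption)

# Model "trained" only on pre-shock history.
pre = slice(0, 36)
slope, intercept = np.polyfit(months[pre], demand[pre], 1)
stale_forecast = slope * months[36:] + intercept

# Model refreshed with the most recent post-shock data.
recent = slice(36, 44)
slope_r, intercept_r = np.polyfit(months[recent], demand[recent], 1)
fresh_forecast = slope_r * months[44:] + intercept_r

print("actual demand, months 44-47:   ", demand[44:])
print("stale model forecast:          ", np.round(stale_forecast[-4:], 1))
print("refreshed model forecast:      ", np.round(fresh_forecast, 1))
```

The stale model is not "wrong" about the past; it is simply answering a question about a world that no longer exists, which is the essence of data debt.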
Breaking Free of the Past
Overcoming AI's data limitations is far from easy. To navigate these intricate challenges, digital leaders must adopt a strategic and proactive stance towards data management, including:
Embracing data diversity and remaining vigilant is paramount. AI systems should be trained on diverse, up-to-date datasets that encapsulate the ever-evolving intricacies of the world in which they operate. Moreover, leaders must actively monitor for potential biases, ensuring fairness and accuracy in the predictions generated by AI models.
Human-AI collaboration is central to effective deployment of AI. Digital leaders should view AI as a powerful tool that complements human judgment rather than a replacement for it. The synergy between AI's predictive capabilities and human ingenuity is key to navigating complex situations and exploring uncharted territories.
Embracing experimentation and agility is crucial. Rather than being tethered to past successes or failures, digital leaders must foster a culture of experimentation and agile decision-making. This adaptability is indispensable for organizations to navigate rapidly changing market dynamics and seize unforeseen opportunities.
To be effective, digital leaders need a deeper understanding of AI's use of historical data. While looking backwards remains a cornerstone of AI's predictive capabilities, it should not dictate our vision for the future. By acknowledging and actively addressing the limitations inherent in AI's reliance on historical data, organizations can unlock its potential and take a more responsible approach to using AI to lead them forwards.