Digital Economy Dispatch #184 -- How Not to Fail with AI and What We Learn When We Do

19th May 2024

AI has become a key topic of conversation in the boardrooms of companies everywhere. The promise of automation, data-driven insights, and enhanced decision-making has fuelled significant investment in this powerful technology. However, the path to AI success is not without its pitfalls.

The initial wave of enterprise enthusiasm for AI has collided with a sobering reality: many large-scale AI projects are failing to deliver on their promises. With expectations for AI set so high, perhaps this is inevitable. It is important, however, not simply to dismiss these experiences.

What can we learn from previous AI adoption mistakes? Are there lessons we can extract from the various ways organizations have misused or misjudged AI? What cautionary tales and practical tips are relevant for digital leaders as they navigate this complex landscape?

Did AI Fail the Covid Test?

Perhaps a good place to start in considering AI’s impact is to take a look back at one of the world’s most intense recent crises, the Covid pandemic, and ask how well AI performed in helping to predict its path and support us through the worst of its effects.

In an interesting review of the use of AI during the pandemic by Bhaskar Chakravorti, he concluded that AI failed to live up to expectations during that time. In his view, the use of AI in diagnosing Covid, predicting its course through a population, and managing the care of those with symptoms all delivered inadequate and misleading results.

Let’s consider one area where AI seemed particularly promising to see what happened: disease diagnosis. According to his review, the evidence is rather damning:

“A systematic review in The BMJ of tools for diagnosis and prognosis of Covid-19 found that the predictive performance was weak in real-world clinical settings. Another study at the University of Cambridge of over 400 tools using deep-learning models for diagnosing Covid-19 applied to chest x-rays and CT scans data found them entirely unusable. A third study reported in the journal, Nature, considered a wide range of applications, including predictions, outbreak detection, real-time monitoring of adherence to public health recommendations, and response to treatments and found them to be of little practical use.”

As we prepare to rebuild a more robust approach to AI, it's important to learn from these setbacks. According to Chakravorti’s review, at the core of the failings was AI’s inability to handle four key issues: flawed datasets, automated discrimination, human errors, and the complex global context. These issues are broadly relevant beyond the Covid crisis.

Four Warnings and a Failing

Although the Covid experiences were relatively recent, it is fair to argue that AI has moved on rapidly since then. From major Graphics Processing Unit (GPU) hardware advances powering sophisticated deep learning algorithms to major software releases of new Generative AI tools, a lot has happened. So, are AI projects more successful today?

Reports of AI failure persist, underscoring the challenges in this rapidly evolving field. Issues have ranged from biased algorithms producing discriminatory outcomes to systems trained on poor-quality and incomplete data. For instance, some facial recognition systems have been criticized for misidentifying individuals, particularly those from minority groups, leading to wrongful accusations and privacy concerns. Autonomous vehicles have also stumbled, with self-driving cars failing to correctly interpret their surroundings, resulting in accidents. Additionally, chatbots and language models have sometimes generated inappropriate or offensive content, reflecting biases present in their training data. These failures highlight the importance of robust testing, ethical considerations, and transparency in AI development to mitigate risks and enhance reliability.

Yet, rather than focus on the downsides and disappointments, these stumbles also hold valuable lessons for digital leaders venturing into this transformative technology. By understanding the pitfalls behind past failures, digital leaders can surface the fundamental issues that help businesses navigate the exciting, yet complex, world of enterprise AI adoption and realize its significant potential. But where to start? Looking at current experiences with AI adoption reveals four key warning signs and one major failing.

One of the most common warning signs is the misunderstanding of AI capabilities. AI is not a magic bullet. It excels at pattern recognition and works best with well-defined tasks and structured data. Organizations that expect AI to solve broad, ambiguous problems or function with messy, incomplete data sets are setting themselves up for disappointment. A classic example is the implementation of AI chatbots in customer service. While chatbots can handle routine inquiries efficiently, complex requests often require human intervention, leading to frustration for both customers and staff.

Another critical challenge is the overestimation of data quality. AI is only as good as the data it is trained on. Biased or incomplete data will lead to biased or flawed outcomes. For instance, an AI system designed for facial recognition trained on a dataset predominantly featuring one ethnicity may struggle to identify individuals of different backgrounds. This highlights the importance of data cleansing and ensuring diverse representation within training datasets. The failure to address data quality issues can have real-world consequences, impacting everything from loan approvals to hiring decisions.
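The representation risk described above can be made concrete with a quick audit before training ever begins. The sketch below is illustrative only: the record structure, group labels, and the 10% threshold are assumptions, not a standard, and a real audit would use categories appropriate to the application.

```python
from collections import Counter

def audit_representation(records, group_key, threshold=0.10):
    """Report each group's share of a training set and flag any
    group whose share falls below the given threshold."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [group for group, share in shares.items() if share < threshold]
    return shares, flagged

# Hypothetical training records for an image-recognition model
records = [
    {"id": 1, "group": "A"}, {"id": 2, "group": "A"},
    {"id": 3, "group": "A"}, {"id": 4, "group": "B"},
    {"id": 5, "group": "A"}, {"id": 6, "group": "A"},
]
shares, flagged = audit_representation(records, "group", threshold=0.25)
print(flagged)  # ['B'] -- group B is under-represented
```

A check like this won't remove bias on its own, but it makes skewed datasets visible before they quietly become skewed decisions.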

Furthermore, organizations can fall prey to the cult of novelty. The allure of cutting-edge technology can overshadow a thorough needs assessment. Implementing AI for the sake of being “innovative” without a clear understanding of how it aligns with business goals is a recipe for failure. Consider the case of a retail chain that deploys AI-powered product recommendations. While the technology is impressive, it must be adjusted to account for seasonal trends and customer loyalty programs. Without this, it can produce irrelevant suggestions and, ultimately, a decline in sales.

The human factor is often underestimated in the rush towards AI adoption. Neglecting the importance of human-AI collaboration can lead to a lack of trust and user resistance. A successful AI implementation requires clear communication and training for human employees who will be working alongside the technology. Imagine a hospital that utilizes AI-assisted diagnostics. While the AI can identify potential problems, the final decision on patient care should always reside with a qualified medical professional who understands the nuances of a patient’s history and condition.

Finally, the biggest failing that organizations must address involves the complex ethical considerations surrounding AI. Algorithms trained on biased data can perpetuate social inequalities. Transparency and explainability of AI decision-making are crucial to ensure fairness and build public trust. Concerns include AI recruitment tools that favour male candidates based on historical hiring patterns. Such incidents highlight the need for ethical oversight and continuous monitoring of AI systems to mitigate potential bias.

Avoiding AI Adoption Pitfalls: A 5-Point Guide for Digital Leaders

What can we take away from these experiences? Each of these challenges highlights a path forward for digital leaders looking to improve their success with AI:

  1. Define a clear problem: Start by identifying a specific business challenge that AI can demonstrably address.

  2. Prioritize data quality: Invest in data cleansing and ensure diverse representation within training datasets.

  3. Build human-AI partnerships: Develop a culture of collaboration and train human employees to work effectively with AI systems.

  4. Embrace ethical AI: Implement transparent and accountable algorithms that are free from bias.

  5. Measure and iterate: Continuously monitor AI performance, gather feedback, and refine your approach based on ongoing analysis.
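Point 5 in particular lends itself to a simple, automatable habit: compare recent performance against the baseline measured at deployment and trigger a review when it drifts too far. The sketch below is a minimal illustration, assuming a single accuracy metric, a three-period rolling window, and a 5-point tolerance; real deployments would track multiple metrics and segment them by user group.

```python
def check_performance(history, baseline, tolerance=0.05, window=3):
    """Compare the rolling average of recent accuracy readings
    against a baseline and signal when drift exceeds tolerance."""
    recent_scores = history[-window:]
    recent = sum(recent_scores) / len(recent_scores)
    drift = baseline - recent
    return {"recent": recent, "drift": drift, "review": drift > tolerance}

# Hypothetical weekly accuracy scores for a deployed model
weekly_accuracy = [0.91, 0.90, 0.89, 0.84, 0.82, 0.80]
status = check_performance(weekly_accuracy, baseline=0.90)
print(status["review"])  # True: recent accuracy has drifted below tolerance
```

The point is not the arithmetic but the discipline: a model that was acceptable at launch can degrade silently as the world it was trained on changes, and only routine measurement catches that.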

By recognizing the potential pitfalls and adhering to these guiding principles, digital leaders can maximize the benefits of AI and avoid the costly trap of failure. AI is a powerful tool, but like any powerful tool, it requires careful consideration and responsible implementation. By understanding the limitations and fostering a human-centric approach, businesses can leverage the power of AI to drive innovation and achieve tangible results.