Digital Economy Dispatch #218 -- Establishing a Path to Secure AI

As AI adoption grows, ensuring that AI systems are resilient is critical. This requires both building user trust and ensuring that deployed AI technology is trustworthy.

We are now all very much aware that digital systems are vital to our lives and livelihoods. It’s become impossible to imagine how we’d get by without them. But we also know that digital systems can fail. Sometimes catastrophically. These failures, such as service outages and data breaches, have significant negative impacts on individuals, businesses, and society as a whole.

It is with these thoughts in mind that I was delighted to provide the foreword to a new book that has been launched this week. “Resilience of Services: Reducing the Impact of IT Failures” brings together data and thinking from more than 200 experts to provide both an understanding of the consequences of failures in digital systems and an approach to reducing these failures and their impacts. Its authors, Gill Ringland and Ed Steinmueller, have deep expertise and many years of experience exploring resilience in IT systems. Their work has been significant in advancing our understanding of building better, more robust digital systems.

The messages contained within the book are important for managing existing software-intensive systems. But, in talking with Gill and Ed, I realize that their work may have an even more significant role as organizations enter the age of AI. The enterprise adoption of AI has reached a critical point. As organizations move beyond experiments and small-scale solutions to integrate AI capabilities into core business operations and strategic decision-making processes, they face a key barrier holding back progress: How to ensure the security of AI.

In talking with a wide range of people about the resilience of AI systems, I have found that two distinct but complementary challenges must be addressed to make progress: building user trust in AI systems and ensuring that deployed AI technology is trustworthy.

Building User Trust in AI Systems

Public trust in AI is declining, with many expressing concerns about AI safety. Building and maintaining user trust is crucial for widespread AI adoption.

While many factors contribute to trust in AI, a new report from the World Economic Forum (WEF) identifies three critical factors in building and maintaining trust in AI systems.

Transparency as Foundation

According to the WEF report, the primary factor contributing to the trust deficit is the lack of transparency surrounding AI systems. Users often perceive AI as a "black box," with limited understanding of how decisions are made, what data is used for training, and where potential biases might exist. Organizations must prioritize transparency in their AI development and deployment strategies, providing clear explanations of model functionality, data sources, and decision-making processes.

Data Privacy and Security

With increasing reliance on AI, organizations are processing larger volumes of sensitive data. The WEF report highlights that this is driving rising user concern about data privacy. As a result, robust data protection measures are critical in ensuring trust, including actions such as the following (the first two are sketched in code after this list):

  • Strong encryption protocols.

  • Comprehensive access controls.

  • Compliance with regulations like GDPR and CCPA.

  • Regular security audits and assessments.
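
To make the first two measures a little more concrete, here is a minimal sketch, assuming Python and the widely used cryptography package. It encrypts a hypothetical sensitive record at rest and gates decryption behind a simple role check; the role names and record contents are illustrative only, and a real deployment would rely on a managed key vault and fine-grained access policies rather than this toy check.

```python
# Minimal sketch: encryption at rest plus a simple access-control gate.
# Assumes the "cryptography" package; roles and record contents are hypothetical.

from cryptography.fernet import Fernet

ALLOWED_ROLES = {"data-steward", "ml-engineer"}  # hypothetical approved roles

def can_access(user_role: str) -> bool:
    """A deliberately simple stand-in for comprehensive access controls."""
    return user_role in ALLOWED_ROLES

# Strong encryption at rest: generate a key (keep it in a key vault in practice).
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer_id=123; consent=granted"   # hypothetical sensitive record
token = cipher.encrypt(record)                 # only the ciphertext is stored

if can_access("ml-engineer"):
    print(cipher.decrypt(token).decode())      # authorised roles can decrypt
```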

Stakeholder Alignment

Finally, misunderstandings and miscommunication are another important reason for the lack of trust in AI, according to the WEF report. Success in AI adoption requires engaging with employees, customers, and other stakeholders throughout the development and deployment process. Organizations that are enhancing trust in AI adoption clearly communicate the benefits of AI, conduct regular surveys to gather stakeholder feedback on AI concerns, and ensure alignment with organizational values and ethical principles.

Ensuring Trustworthiness of AI Systems

Yet, maintaining trust in AI systems is just one side of the AI security coin. Equally important is ensuring the trustworthiness of any deployed AI system. In recent years organizations have made significant investments in cybersecurity infrastructure. However, the technical landscape of AI security presents unique challenges that go beyond traditional cybersecurity approaches.

Emerging Threat Landscape

A recent Booz Allen Hamilton study into securing AI systems highlights that widespread AI deployment has created an expanded attack surface that adversaries actively target. Threat actors both use AI to advance their own activities and increasingly focus on attacking poorly secured enterprise AI systems. They have been among the early adopters of widely available AI capabilities, developing sophisticated mathematical and algorithmic methods specifically designed to compromise AI systems and causing substantial losses, operational disruption, and reputational damage for those affected.

Current Security Limitations

According to the Booz Allen Hamilton study, modern AI systems, particularly those based on Large Language Models (LLMs), present three critical security blind spots:

  1. Monitoring Challenges: The non-deterministic nature and complexity of AI models make traditional anomaly detection largely ineffective.

  2. Third-Party Risks: Pre-trained models often introduce substantial risks that are not aligned with enterprise security requirements.

  3. Perimeter Management: Distributed AI deployment, including shadow AI usage and embedded third-party capabilities, creates a fragmented security perimeter (one simple starting point is sketched after this list).
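
On the third point, the sketch below, in Python with hypothetical domain names, shows one modest starting point for perimeter management: an egress check that only allows calls to AI endpoints the organization has approved, so that shadow AI usage becomes visible and blockable. It is not a substitute for a proper gateway or proxy, just a way of making the idea concrete.

```python
# Minimal sketch: allow-list egress check for AI service calls.
# The approved endpoints are hypothetical; a real deployment would enforce this
# at a gateway or proxy rather than in application code.

from urllib.parse import urlparse

APPROVED_AI_ENDPOINTS = {
    "api.approved-llm.example.com",
    "models.internal.example.org",
}

def is_approved_ai_call(url: str) -> bool:
    """Return True only if the request targets an approved AI endpoint."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_ENDPOINTS

# An unapproved "shadow AI" endpoint is flagged rather than silently allowed.
for url in ("https://api.approved-llm.example.com/v1/chat",
            "https://free-ai-tool.example.net/generate"):
    print(url, "->", "allowed" if is_approved_ai_call(url) else "blocked")
```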

Each of these security concerns is a significant barrier to the adoption of trustworthy AI systems. In combination, they represent a major headache for IT teams already overburdened with a lengthy backlog of ongoing operational concerns.

Furthermore, the threat actors targeting AI systems span a broad spectrum. They include not only individual actors and hacktivists but also financially motivated criminal organizations, nation-states seeking to compromise intelligence and decision-making capabilities, and other opportunistic actors attempting to exploit systems for personal gain.

While catastrophic attacks on AI systems have not yet materialized, many experts view such events as inevitable given the valuable information these systems contain and their overall risk profile. The current limited deployment of AI systems in production environments, while offering temporary protection, is not a sustainable solution as it prevents organizations from realizing AI's full benefits.

Attack Surface Considerations

Some of the most interesting insights from the Booz Allen Hamilton study concern the ways in which AI systems are being targeted. The AI attack surface presents a unique set of vulnerabilities that make it particularly attractive to threat actors. Compared to traditional cybersecurity targets, AI models often prove easier to compromise, and many systems have yet to receive full security certification through established compliance frameworks.

This vulnerability is compounded by the potential for magnified impact from successful attacks, as well as the inadequacy of traditional security measures in protecting against AI-specific threats. Organizations must recognize that conventional cybersecurity approaches, while necessary, are insufficient for securing AI systems, requiring specialized protection mechanisms and security frameworks.

In particular, the study highlights five attacks against AI systems that enterprises deploying AI must guard against:

  • Data poisoning: Adversaries manipulate training data to compromise model behaviours and insert backdoors.

  • Malware: Adversaries package malicious code within model files and libraries (a simple integrity-check mitigation is sketched after this list).

  • Model evasion: Adversaries perturb model inputs to control model outputs.

  • Data leakage and model theft: Adversaries infer and steal sensitive training data, model behaviour, and/or intellectual property.

  • Large language model misuse: Adversaries override an LLM’s instructions and safety alignment.
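
To ground at least one of these, here is a minimal sketch, in Python with a hypothetical model file and digest, of a basic defence against malicious or tampered model artefacts: pin the SHA-256 digest of each vetted third-party model and refuse to load anything that does not match. It does not address the other attacks listed above, and a real pipeline would combine it with scanning, provenance checks, and signed artefacts.

```python
# Minimal sketch: integrity pinning for third-party model files.
# The file name and digest are hypothetical placeholders.

import hashlib
from pathlib import Path

APPROVED_MODEL_DIGESTS = {
    "sentiment-model.bin": "replace-with-the-digest-recorded-at-vetting-time",
}

def verify_model(path: Path) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    expected = APPROVED_MODEL_DIGESTS.get(path.name)
    if expected is None:
        return False                      # unknown artefact: do not load it
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected

model_file = Path("sentiment-model.bin")   # hypothetical download location
if model_file.exists() and verify_model(model_file):
    print("Digest matches: safe to pass to the model loader.")
else:
    print("Digest mismatch or unknown artefact: quarantine for review.")
```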

Addressing these activities is far from easy. Despite widespread recognition of the importance of AI security, a significant gap exists between acknowledged responsibility and actual execution. Organizations must implement clear lines of accountability while fostering cross-functional cooperation between security teams and AI specialists. This is not always easy in today’s complex enterprise organizational structures. Yet, without this collaborative approach, it is almost impossible to ensure security considerations are thoroughly integrated throughout the AI lifecycle, from initial development through deployment and ongoing operations.

Looking Ahead

Building trust in AI systems demands a concerted approach: understanding user needs and building systems that are more secure. Books such as Gill and Ed’s “Resilience of Services: Reducing the Impact of IT Failures” remind us of the fundamental need to ensure that these systems are secure and robust. The rapid evolution of AI technology, and of the threats it faces, demands continuous adaptation of security strategies. Organizations must maintain vigilance while driving innovation, finding the balance that will define successful enterprise AI adoption.

Recognizing the challenges is just the first step. Regular assessment of emerging threats, evolving best practices, and organizational readiness will be critical for maintaining an effective AI security posture. There is no doubt that the journey toward secure AI adoption will be an ongoing challenge, requiring sustained commitment and investment from organizational leadership.