Digital Economy Dispatch #148 -- Demystifying Digital Transformation with the GAO's AI Accountability Model
10th September 2023
For those of us involved with digital transformation activities, it’s so easy these days to get confused, isn’t it? Every time I feel like I am getting my head around the characteristics of digital disruption or making progress deploying the latest digital capabilities with a client, along comes yet another technology twist to throw things back up in the air. Whether they are substantive or mere distractions, the effect is the same: Progress is stalled and confusion reigns.
Just in the past few days I have been reading about why we’ll soon all be 3D printing buildings, how the most successful drones deployed in the Ukraine conflict are made from cardboard, and doom-scrolling endless commentaries on how the latest smart uses of generative AI tools will disrupt all aspects of education, revolutionize real-time language translation, and spit out reams of code to build entire software systems and websites. Some of it for better and some for worse.
While many of these advances are to be applauded as significant for people across every domain, their rapid adoption is adding fuel to the apocalyptic voices predicting the end of humanity. The challenge we face is to try to make sense of it all. How do we place each of these technological marvels into perspective? What frameworks can we use to relate them to technologies currently in use? What are the right ways to adopt them so that we gain the benefits while reducing the risks? Where do they fit in the on-going digital transformation journey for our organizations?
It’s an AI Jungle Out There
Addressing these (and many other) questions is now at the forefront of much of the work for those of us trying to understand more about digital technology adoption and support digital transformation efforts. Many strategists, researchers, and industry commentators are focused on providing ways to consider these issues, and it seems like every day they offer new ways in which we should be looking to adopt digital technologies in our organizations. Some of these ways of thinking are undoubtedly helpful. However, it is difficult not to be overwhelmed by so many different voices and viewpoints.
Of course, the volume has now been “turned up to 11” with the latest wave of generative AI tools. Their wide availability and seemingly unbounded applicability have forced all organizations to rethink their digital transformation plans, reallocate scarce research funding, and review policies for everything from hiring practices to customer service handling procedures.
So where should an organization start in getting its house in order when it comes to the impact of AI on digital transformation? Well, one obvious place would be to set up a strong accountability framework across the organization. That's certainly the advice of the US Government Accountability Office (GAO) to government organizations in the US.
The GAO AI Accountability Framework
Under its mandate “to improve government and save taxpayers billions of dollars”, the GAO has issued an AI accountability framework for government agencies and other groups. It identifies key accountability practices to ensure AI is used responsibly. It offers a very useful foundational structure for all organizations as they consider their AI journey. Indeed, I would suggest that this would be an excellent basis for every digital transformation programme to consider.
Based on a detailed review of the literature and conversations with AI leaders, the GAO has created a very useful review of the core elements to be considered in AI adoption. And while there are some particular concerns it addresses in AI (particularly around data and the training of models), the resulting framework is sufficiently broad that it can be seen to offer a substantial basis for assessing many aspects of responsible digital transformation.
The framework is structured around four interrelated principles: governance, data, performance, and monitoring. Within each of these principles, the framework outlines essential practices for federal agencies and other organizations contemplating, choosing, and executing AI systems. Each practice is accompanied by a series of focus areas to guide teams, managers, and assessors in their evaluations, along with established procedures for auditors and third-party assessors to follow. In a simplified version, and contextualized for a specific situation, they could readily become a guide or checklist that anyone can use as core to their digital transformation process.
Briefly summarized, in this framework AI accountability is broken down into four areas:
Governance – Promote accountability by establishing processes to manage, operate, and oversee implementation.
Data – Ensure quality, reliability, and representativeness of data sources and processing.
Performance – Produce results that are consistent with program objectives.
Monitoring – Ensure reliability and relevance over time.
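To make the checklist idea above concrete, here is a minimal sketch of how the framework's four principles could be turned into a reusable assessment template. The principle names and one-line goals come from the summary above; the template structure, function names, and "reviewed" flag are illustrative assumptions of mine, not part of the GAO's own audit procedures.

```python
# Hypothetical sketch: the GAO framework's four principles as a simple
# checklist. Principle names and goals are from the framework summary;
# everything else here is an illustrative assumption.

PRINCIPLES = {
    "Governance": "Establish processes to manage, operate, and oversee implementation.",
    "Data": "Ensure quality, reliability, and representativeness of data sources and processing.",
    "Performance": "Produce results consistent with program objectives.",
    "Monitoring": "Ensure reliability and relevance over time.",
}

def assessment_template():
    """Return a blank assessment: one unreviewed entry per principle."""
    return {name: {"goal": goal, "reviewed": False, "notes": ""}
            for name, goal in PRINCIPLES.items()}

def outstanding(assessment):
    """List the principles that have not yet been reviewed."""
    return [name for name, entry in assessment.items()
            if not entry["reviewed"]]

# Example: mark one principle as reviewed and see what remains.
a = assessment_template()
a["Data"]["reviewed"] = True
a["Data"]["notes"] = "Training data sources documented and checked for representativeness."
print(outstanding(a))  # → ['Governance', 'Performance', 'Monitoring']
```

In practice each principle would expand into the framework's detailed practices and focus areas, but even a flat structure like this gives a team a shared, auditable record of which areas have been considered.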
The report describes much more about these four areas and makes many useful suggestions on how to understand, review, and assess them. I strongly recommend that you take a look and review each of them.
Of course, there is no "silver bullet" for managing your digital transformation efforts. However, this framework may well provide a meaningful structure on which to build your understanding of the impact of AI, and a foundation for reducing the confusion you will face as you pursue your digital transformation journey.