Digital Economy Dispatch #170 -- Responsible Use of Generative AI: A commentary on the UK Government's Generative AI Framework
11th February 2024
With increased focus on AI in the public sector around the world, there are growing expectations about the role AI will play in transforming government services and driving efficiencies. From healthcare and education through to tax and welfare systems, significant productivity improvements, cost savings, and staff reductions are expected. The UK Deputy Prime Minister recently described AI as the key to billions of pounds in savings.
However, to realize these gains, major concerns must be addressed regarding how to adopt technologies such as AI in an ethical and responsible way in the complex contexts typical of the public sector. Nowhere is this seen more clearly than in the UK government’s use of generative AI, a widely available and rapidly deployed set of AI capabilities that generate text, images, or other data using generative models in response to user-defined prompts. A UK survey of almost a thousand public sector professionals in December 2023 indicated that almost half of them were aware of generative AI and more than one in five were already actively using it. Yet less than a third of respondents felt that there was clear guidance on generative AI usage in their workplaces.
It is in this context that in January 2024 the UK Government published its Generative AI Framework. Aimed at both technical and non-technical audiences, the framework is a timely and crucial step in guiding the controlled use of this powerful technology. Its straightforward approach is organized as 10 principles that form a foundation for ethical and responsible AI deployment. While quite broad and high-level in nature, each of these principles focuses attention on core concerns that must be addressed in any public sector generative AI use case.
The 10 Principles for Using Generative AI
The UK Government’s Generative AI Framework is based around 10 principles. Rather than repeat them here, let's review their key focus:
Understanding and Limits: Principle 1 rightly emphasizes the need for clear knowledge about generative AI's capabilities and limitations. Public sector leaders must be aware of potential biases, inaccuracies, and security risks before diving in headfirst.
Responsible Use: Principles 2 and 3 delve into responsible use, encompassing legal, ethical, and security aspects. Engaging compliance professionals, mitigating bias, and ensuring data security are crucial steps towards building trust and preventing harm.
Human Control and Collaboration: Principles 4 and 7 highlight the importance of human oversight and collaboration. Keeping humans in the loop for quality control and decision-making, and embracing transparency through the Algorithmic Transparency Recording Standard (ATRS), are vital for accountability and public trust.
Lifecycle Management and Skills: Principles 5 and 9 address the full lifecycle of generative AI solutions, from procurement and deployment to maintenance and skills development. Utilizing existing government resources such as the Technology Code of Practice and investing in acquiring the necessary skills will be key to success.
Right Tool for the Job: Principle 6 reminds us to choose the right tool for the specific task. Understanding use cases and evaluating tools like LLMs wisely are critical to achieving desired outcomes.
Beyond the Framework: Additional Considerations
From a conceptual perspective, it is difficult to find fault with the UK Government’s Generative AI Framework. Its 10 principles bring clarity to several concerns that are priorities for those considering generative AI. However, in a rapidly evolving landscape where digital leaders face considerable daily operational challenges to deliver effective public services, the ready application of the framework needs additional elaboration. In my experience, several additional considerations could strengthen its impact:
Focus on Impact, Not Technology: While the framework aptly cautions against technology-driven solutions, emphasizing the need for a clear problem statement and user-centric approach would further solidify this point. The UK government's service manual can be a valuable tool to ensure this focus on solving the right problems.
Addressing High-Risk Use Cases: The framework's list of high-risk use cases is a valuable starting point, but it is far from complete. It should be expanded and regularly updated to emphasize the safety-critical nature of many uses of generative AI, and to reflect the diverse and evolving public sector contexts in which it is being deployed.
Continuous Monitoring and Adaptation: The framework highlights the need for continuous monitoring and review to ensure generative AI solutions remain ethical, unbiased, and effective. However, the costs of providing this flexibility and adaptability can be substantial, and this is an area that is too frequently under-resourced in public sector settings. The framework would be strengthened by establishing clear metrics and feedback mechanisms for ongoing evaluation and adaptation.
Public Awareness and Education: The public needs to be informed and engaged in the use of generative AI in the public sector. There has to be more focus on this, with maximum transparency and accessibility of information about how these tools are used and their potential impact. Our society is only now learning about AI’s disruptive impact on our lives and livelihoods: trust and legitimacy depend on us understanding how we may reap the rewards while managing the risks.
Given these reflections, I would suggest an 11th principle that could usefully be added to the framework:
Principle 11: Building Public Trust through Ongoing Dialogue
Openly discussing the challenges and opportunities of generative AI with the public, through regular town halls, public forums, and interactive platforms, can build trust and encourage constructive dialogue. This continuous engagement with all stakeholders will be essential to ensure the responsible and ethical adoption of this powerful technology in the public sector.
Implementing the Principles: From Theory to Practice
Yet, as always, the value of any framework is whether, where, and how it is used in practice. The UK government's Generative AI Framework provides clear conceptual guidance, but translating its wisdom into tangible action is the true test. It is only by establishing meaningful and robust practices to translate these principles into day-to-day operations that we can deliver on the promise of responsible generative AI use in the public sector.
Effective implementation starts with embedding the framework's principles into the DNA of every project. This requires considerable investment in training public sector professionals on the principles and ensuring they are understood and adhered to from inception to deployment. One useful approach is to create checklists or decision-making matrices that incorporate the principles, making them an immediate reference point when navigating generative AI projects.
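To make this concrete, here is a minimal sketch of such a checklist in Python. It is purely illustrative and not part of the UK framework itself: the labels reuse the principle groupings summarized above rather than the framework's official wording, and the structure is simply one hypothetical way a project team might record evidence against each area before deployment.

```python
from dataclasses import dataclass
from typing import List

# Labels follow the groupings summarized earlier in this article, not the
# framework's official principle wording.
PRINCIPLE_AREAS = [
    "Understanding and limits (Principle 1)",
    "Responsible use: legal, ethical, and security aspects (Principles 2 and 3)",
    "Human control and collaboration (Principles 4 and 7)",
    "Lifecycle management and skills (Principles 5 and 9)",
    "Right tool for the job (Principle 6)",
]

@dataclass
class ChecklistItem:
    area: str                # the principle grouping being assessed
    evidenced: bool = False  # has the team documented how the project meets it?
    notes: str = ""          # links to evidence, caveats, follow-up actions

def new_checklist() -> List[ChecklistItem]:
    """Create a blank checklist covering every principle grouping."""
    return [ChecklistItem(area) for area in PRINCIPLE_AREAS]

def outstanding(checklist: List[ChecklistItem]) -> List[str]:
    """Return the groupings that still lack documented evidence."""
    return [item.area for item in checklist if not item.evidenced]

if __name__ == "__main__":
    checklist = new_checklist()
    checklist[0].evidenced = True
    checklist[0].notes = "Team completed a capabilities and limitations briefing."
    print("Outstanding areas:", outstanding(checklist))
```

In practice, a checklist like this would sit alongside an organization's existing assurance and governance processes rather than replace them, serving as a lightweight prompt at each project review.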
Furthermore, robust governance structures are essential. Public sector agencies are already beginning to appoint dedicated AI leads and ethics committees to oversee generative AI projects. These should now be tasked with ensuring compliance with the framework and fostering a culture of ethical decision-making. Regular assessments and audits should be conducted to identify potential risks and ensure ongoing adherence to the principles. Transparency becomes paramount here, with clear communication channels established to inform stakeholders about how generative AI is being used and its potential impact.
In practice, the success of generative AI in the public sector requires a proactive, multifaceted approach. Embedding the framework's principles into everyday practice, instituting strong governance structures, and prioritizing transparency are key pillars for success. By taking these steps, digital leaders can leverage the power of generative AI for good, while minimizing risks and building trust with citizens and stakeholders.
Best Foot Forward
We all face the challenge of how to use digital technologies such as Generative AI to drive quality improvements and identify cost efficiencies. Nowhere is this more needed than in the provision of our public services. However, doing this successfully requires careful consideration to balance wide adoption of these innovative capabilities with their responsible use. The UK Government’s Generative AI Framework is a great starting point for all digital leaders as a core set of principles for how to achieve this. Yet it is only a first step. Additional effort is needed to understand how it can be practically operationalized to deliver results in your context.
Moving forward, strengthen your leadership in responsible use of Generative AI with the following steps:
Embed the Framework: Make the UK Government’s Generative AI Framework a core resource for all AI projects, integrating its principles into every stage of development and deployment.
Build Governance Structures: Establish dedicated AI leads and ethics committees to oversee projects, monitor compliance, and foster ethical decision-making.
Prioritize Transparency: Communicate openly and proactively about how generative AI is being used, its potential impact, and the measures taken to ensure responsible implementation.
By committing to these steps, digital leaders will be in a much better position to navigate the world of generative AI with confidence, unlocking its potential while ensuring its responsible and ethical use.
(Note: This is an extended version of an article written for Amazon Web Services (AWS) and published here as a contribution to the AWS Institute, which also recently published an AI/ML Masterclass looking at AI in the public sector.)