Digital Economy Dispatch #265 -- What Do We Want From AI?
The latest survey from the Ada Lovelace Institute reveals that while the UK public sees potential in AI for areas like healthcare, many fear its current harms and lack of accountability.
As we enter 2026, the UK begins the year with a great deal of hope and expectation that it can progress rapidly on its ambitious digital transformation journey with AI – a technology expected to unlock growth, transform public services and secure the country’s place in the global digital race. Yet recent research from the Ada Lovelace Institute suggests that, away from ministerial speeches and industry showcases, public expectations are both more grounded and more demanding: people want AI that works in their interests, is properly governed, and comes with real protections when things go wrong.
UK citizens are therefore walking into 2026 with clear, increasingly confident views about AI. But they are not the views UK policymakers seem most keen to hear. People see real benefits, especially in health and some public services, but they are also reporting widespread harms, deep concern about opaque decision-making, and a loud call for stricter rules and real accountability.
What do we actually use AI for?
The Ada Lovelace Institute’s latest survey of 3,500 UK residents shows that AI is no longer an abstract future technology; it is woven into everyday digital habits. Awareness and use of general-purpose large language models (LLMs) such as ChatGPT have grown at a remarkable speed: 61% of people have heard of LLMs, and 40% say they have used them for at least one task. A third of the public already uses these tools to search for answers and recommendations, and around one in five uses them for education or for routine tasks, such as drafting emails.
At the same time, adoption is highly context dependent. Only 11% have used LLMs to support job applications, while nearly four in ten say they would not want to use them for that purpose at all – a strong signal that people draw a line when AI is involved in high-stakes personal decisions. Those with lower incomes or fewer digital skills are more likely to be closed off to using LLMs across the board, underlining a growing divide between those who can make use of AI for their own purposes and those who see it as something being done to them.
Where do people see benefits – and risks?
When the survey moves away from “AI in general” and looks at specific applications, the picture becomes much more nuanced – and more useful. People see clear upside in some uses: 86% think using AI to assess cancer risk from scans will be beneficial, and 91% see benefit in facial recognition for policing, at least in principle. Around two-thirds think LLMs overall are beneficial, suggesting that for many, the promise of speed and efficiency still outweighs the downsides.
Yet concern is never far away. Three-quarters of respondents are concerned about driverless cars, 63% about mental health chatbots, and 59% about the use of AI to assess eligibility for welfare benefits. Even in the “high-benefit” cases, anxiety is substantial: 39% are concerned about facial recognition in policing, and 64% worry that AI-driven cancer diagnostics could lead to over-reliance on technology at the expense of professional judgement. People can clearly hold two ideas at once: AI might make things faster and more accurate, but it also introduces new routes for error, unfairness and unaccountable decisions.
Who feels the downsides most?
The survey is particularly valuable because it does not treat “the public” as a single, homogeneous bloc. It deliberately oversamples people on lower incomes, those with fewer digital skills, and Black and Asian communities, so we get a clearer view of how different groups experience AI.
Several patterns stand out. Black and Asian respondents are more positive than average about some emerging tools such as LLMs and robotics, but significantly more concerned about facial recognition in policing: over half of Black (57%) and Asian (52%) respondents are fairly or very concerned, compared with 39% of the general population, and they are particularly anxious about false accusations. People on lower incomes consistently report lower “net benefit” scores across most AI technologies – their concerns outweigh perceived benefits more often than for higher-income groups, even after controlling for other factors. This should worry anyone deploying AI in welfare, credit scoring or public services, because it is precisely these groups who are most exposed to automated decisions.
AI harms are already a lived experience
These concerns are not hypothetical; they are rooted in lived experience. Two-thirds of the UK public say they have encountered at least one form of possible AI-generated harm online a few times, and nearly four in ten report encountering such harms many times. The most common are false or misleading information (experienced by 61%), financial fraud or scams (58%), and deepfake images or videos (58%).
Unsurprisingly, anxiety about the spread of harmful AI-generated content is almost universal: 94% of respondents say they are very or somewhat concerned. Younger adults report particularly high exposure to deepfakes and misinformation, while older groups report more frequent encounters with financial fraud – a reminder that “online harm” looks different depending on where you stand in the digital ecosystem. Despite this, awareness that AI sits behind many of these experiences remains patchy; around one in five people are unsure whether the harms they experienced were AI-generated or not.
What do people expect from government and governance?
If there is one message that comes through clearly, it is that the public wants more active, visible governance of AI. In this 2024–25 wave, 72% say that laws and regulations would make them more comfortable with AI technologies, up ten percentage points from the previous survey. People support a multi-stakeholder model for AI safety: 58% think an independent regulator should be responsible for ensuring AI is used safely, and 58% also expect responsibility from the companies developing AI.
Crucially, they want regulators and government to have real powers, not just guidance documents. 87% think it is important that government or regulators – not just private companies – can stop the use of an AI product if it poses a risk of serious harm, and similarly high numbers want active monitoring of risks, robust safety standards, and access to information about system safety. At the same time, 83% are concerned about public bodies sharing their data with private companies to train AI systems, and half of respondents say they do not feel their values are represented in decisions being made about AI and how it affects their lives. There is a palpable sense that AI is something being done by powerful institutions and vendors, with citizens largely on the receiving end.
So, what do we want from AI?
Taken together, these findings outline a distinctive public agenda for AI in the UK. People are not asking for “an AI pause”, as was requested in March 2023 following numerous concerns about AI safety. Nor are they blindly embracing whatever comes next, especially if driven by commercial concerns. Instead, several expectations are emerging:
Use AI where it clearly helps, especially in areas such as health diagnostics and behind-the-scenes efficiency – but prove that the benefits are real. Speed and accuracy are attractive, yet they need to be evidenced and audited, not just promised in vendor slide decks.
Keep humans in the loop for consequential decisions. Across welfare, credit scoring, healthcare and policing, people remain deeply uncomfortable with opaque, automated judgement that cannot be questioned or appealed.
Recognise that harms are already here. From scams and deepfakes to biased policing, citizens are living with AI’s downside risks now, not in some distant AGI future.
Build governance that matches the stakes. The public is asking for independent regulation, strong “safe before sale” style powers, clear red lines, and meaningful routes for challenge and redress.
Include those most affected in decisions. Lower-income and marginalised groups are more sceptical, more exposed to harms, and less likely to feel represented in AI decision-making. Any credible AI strategy has to start with their experiences, not treat them as an afterthought.
As 2026 advances, we must recognise that AI is no longer a novel experiment on the edge of the digital economy; it is deeply embedded in the infrastructure of everyday life. The question “What do we want from AI?” is therefore less about speculative futures and more about aligning today’s systems with public expectations of fairness, safety and human dignity.
The Ada Lovelace Institute’s work suggests that UK citizens have already done their homework and are asking sharper, more grounded questions than many of the strategies and press releases produced in their name. The challenge now is whether government, industry and institutional leaders are prepared to listen – and to act accordingly.