Digital Economy Dispatch #276 -- Lies...Damn Lies...And AI

Why the data on AI and jobs refuses to tell a straight story — and why firms are acting on the narrative anyway.

Something is happening to work. Everyone can feel it. But ask for the evidence and you'll get a dozen contradictory answers, each delivered with equal conviction. Welcome to the most confusing labour market story of our time.

In the space of a single week in late February, a fictional scenario from an obscure financial firm spooked Wall Street into a market sell-off, Jack Dorsey cut 40% of Block's workforce citing "intelligence tools", and Anthropic got into a public row with the Pentagon over who controls AI safety guardrails. As Ethan Mollick observed in his latest Substack piece, The Shape of the Thing, each of those stories turned out to be less clear-cut than it first appeared. The Citrini report was speculative fiction. The Block layoffs were almost certainly as much about post-COVID over-hiring as about AI. And the Pentagon's dispute with Anthropic was tangled in governance questions that still haven't been resolved. Yet taken together, they created an overwhelming sense that AI is reshaping the world of work right now, this minute, whether the data agreed or not.

The Data Says… Everything… And Nothing…

If you're a leader trying to make sense of AI's impact on employment, good luck finding a consistent signal. Consider just a small sample of recent findings.

Anthropic's own labour market research found no clear impact on unemployment rates for workers in the most exposed occupations. Yet Brookings, publishing just this week, notes that the research itself is still contradictory: different studies using different AI-exposure measures reach opposing conclusions, and even the timing of job posting declines correlates better with rising interest rates than with the launch of ChatGPT. The CEO of Randstad, the world's largest staffing firm, told CNBC in Davos that the role of AI in recent job cuts is being overstated. And yet in that same CNBC report, Mercer's Global Talent Trends survey finds employee anxiety about AI-related job loss has leapt from 28% in 2024 to 40% in 2026, while the IMF's Kristalina Georgieva warned that AI is hitting the labour market "like a tsunami" and most countries are not prepared.

Then turn the page. The Dallas Fed reports that employment among workers aged 22 to 25 in AI-exposed occupations has fallen by 13% since 2022, but wages in those same sectors are rising faster than the national average. AI is simultaneously substituting for entry-level workers and complementing experienced ones. Even the same data tells two stories at once.

Pick your study. Pick your headline. You can build a case for almost anything.

Acting on the Vibes

The most uncomfortable part of this is that while the evidence remains murky, corporate behaviour is not. Firms are acting, and acting decisively, on the expectation that AI will be transformative regardless of whether the data yet supports the specifics.

Block is the most vivid recent example. Dorsey laid off over 4,000 people (nearly half the company) from a business he described as "strong", with gross profit growing at 26% year-on-year. His stated rationale: AI tools mean smaller teams can outperform larger ones, and this trend is compounding weekly. He predicted most companies would reach the same conclusion within a year. The stock jumped 17%.

But Dorsey's own former head of communications, Aaron Zamost, argued in the New York Times that the cuts look more like standard corporate downsizing dressed up in an AI narrative. Look at the specifics, Zamost suggested (cuts to the policy team, elimination of diversity roles) and it reads like prioritisation and cost management, not AI-driven reinvention. Block had already tripled its headcount during the pandemic and run multiple rounds of layoffs before this one. Even Dorsey admitted he'd over-hired during COVID.

And Block is not alone. Just this week, Atlassian announced 1,600 job cuts (a tenth of its global workforce) citing the need to redirect resources toward AI. And throughout 2025, Microsoft, Amazon, Salesforce, and others all linked major reductions to AI-driven restructuring. Whether AI is genuinely doing the work of those departed employees, or whether it's providing convenient language for what would have happened anyway, is a question nobody can definitively answer.

Deutsche Bank analysts put it bluntly: "AI redundancy washing will be a significant feature of 2026." Companies attributing job cuts to AI should be taken "with a grain of salt."

The Shape We Can Almost See

Mollick's framing is the most honest I've read. He describes a world of "rolling disruption" where AI capability crosses thresholds and unlocks new use cases that change people's views overnight about what's possible. At the same time, organisations experimenting with AI discover new ways of working that lead to sudden announcements about strategy shifts and headcount. The result is an environment that feels perpetually unstable — not because nothing is real, but because everything is moving.

The benchmarks are genuinely impressive. As Mollick details, AI systems now outperform graduate students on knowledge tests, match experienced human professionals on complex tasks over 80% of the time, and can autonomously complete hours of human work in minutes. A three-person team at StrongDM built a "Software Factory" where AI agents write, test, and ship production software without human involvement in the code. These are not hypothetical capabilities.

But there's a critical gap. As Mollick notes, despite these amazing capabilities in tests, companies are still very early in adopting AI. In practice, remarkably little has changed in most organisations. The distance between what AI can do in a benchmark and what it is doing inside the average organisation remains enormous.

The UK Dimension

For UK leaders, this ambiguity matters even more. We have our own version of the data confusion. DSIT's assessment, published in January, found that UK job postings have declined more sharply in AI-exposed occupations. But it also acknowledged that establishing whether AI is actually causing these patterns remains challenging. Parliament's POST report from last week concluded that evidence of widespread AI-driven job loss is still limited, and that jobs are more likely to be partially automated than entirely replaced.

Meanwhile, IPPR's modelling ranges from a worst case of 8 million jobs at risk to a best case of no job losses and significant GDP gains, depending entirely on policy choices. Moreover, the UK economy has its own structural challenges: lower enterprise technology adoption rates, a persistent digital skills gap, and a public sector that struggles to implement large-scale technology change. These make direct extrapolation from Silicon Valley announcements unreliable at best.

When Dorsey says "most companies are late", he's speaking from inside an ecosystem where engineers already use AI coding tools daily and where spending $1,000 a day on AI tokens per engineer is a plausible operating model. That is not the reality for a mid-sized UK professional services firm, an NHS Trust, or a local authority trying to maintain frontline services. The rhetoric of inevitability coming from California doesn't map neatly onto organisations operating with legacy systems, constrained budgets, and workforces that have had little AI exposure at all.

What we need is less prophecy and more evidence. Less "most companies will reach the same conclusion" and more rigorous analysis of where AI is actually changing work, for whom, under what conditions, and with what consequences. That's precisely the kind of grounded, evidence-based thinking I've tried to bring together in my forthcoming book, Making AI Work for Britain, which tackles these questions head-on for UK leaders navigating the gap between Silicon Valley rhetoric and UK organisational reality. As the Brookings team put it this week, research on AI and the labour market is still in the first inning. Yet we're making policy and restructuring decisions as if the game's almost over.

What Leaders Should Take From This

If the data won't give us a clean story, what should leaders actually do? Three things seem clear even through the fog.

First, distinguish between signal and narrative. When a CEO announces AI-driven layoffs and the stock price jumps 17%, that tells you something about investor expectations, not about whether AI is actually doing 4,000 people's jobs. The incentive to frame any restructuring as AI-driven is now extremely strong. Be sceptical of the label. Look at the specifics of what's actually being cut, and what's being kept.

Second, invest in understanding your own organisation's AI readiness rather than reacting to someone else's announcements. The gap between AI capability and organisational absorption is real and varies enormously by sector, by function, and by the nature of the work. A blanket "AI will replace X% of roles" prediction is almost certainly wrong for your specific context. The Dallas Fed's insight is instructive here: AI may substitute for entry-level codified knowledge while complementing experienced workers' tacit expertise. That's a much more nuanced picture than the headlines suggest.

Third, prepare for the instability itself. Mollick is right that the pattern of the last few weeks shows sudden capability revelations, rapid market reactions, dramatic corporate announcements, and growing governance tensions. That single week in February (a fictional scenario moving markets, a profitable company cutting half its workforce, and a government blacklisting its own AI supplier) wasn't an anomaly. It was a preview. Organisations that build adaptive capacity, that treat AI strategy as a continuing experiment rather than a series of one-off crisis responses, will be better positioned than those lurching from headline to headline.

We all feel like something substantial is coming. It's just that every time we think we're beginning to make sense of it, it morphs into something new. The lies, damn lies, and AI statistics will keep coming. The job of a thoughtful leader is to resist the urge to pick the most dramatic version of the story and act on it. Instead, do the harder work of building an organisation that can adapt as the real picture slowly, unevenly, comes into focus.