Digital Economy Dispatch #271 -- Vibe Coding and the Dawn of Disposable AI

While AI coding assistants dramatically lower the barrier to building software, the true shift is the move toward "disposable code", where the traditional value of a permanent codebase gives way to rapid prototyping, new security risks, and the evaporation of intellectual property moats.

For the past month, I've spent time every day creating tools, utilities, and applications using AI coding assistants. Claude Code, Cursor, Copilot, ChatGPT, Gemini: I've tried them all. It's been fun. And frustrating.

I’ve had a blast. But if you're only paying attention to the fun and the frustration, you're missing the bigger picture. Something fundamental is shifting. And the implications go far beyond whether these tools are any good at spitting out code.

I should start with a warning. I'm probably not the typical target user for what Andrej Karpathy called "vibe coding"; that approach where you "fully give in to the vibes, embrace exponentials, and forget that the code even exists." I have over thirty years of software development experience, from BASIC and FORTRAN to Prolog and Haskell. I know quite a bit about what's happening under the hood.

That background gives me a useful vantage point. I can see what these tools get right and where they fall short. And what it all means for individuals, businesses, and society. Let me take you through the journey.

The Magic Show

The first thing that strikes you is how much seems possible. With just a few prompts, these tools produce astonishing results. You describe a contact form with validation, and minutes later you have working code. It feels like magic. And it’s taking hold at many organisations.

I focused on simple web applications that I can host quickly using HTML, CSS, JavaScript, PHP, and MySQL. From idea to running prototype can happen in minutes rather than days. It's seductive. You start thinking: why would I ever code manually again? But don't stop here. The magic is real, but it's also a distraction from what’s really changing.
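To make that concrete, here is a sketch of the kind of validation helper these assistants churn out in seconds for a contact form. The field names and rules are my illustrative assumptions, not output from any one tool, but the shape is typical: plausible, working, and simpler than it should be.

```javascript
// Illustrative sketch: the kind of form-validation helper an AI
// assistant typically generates for a simple contact form.
// Field names and rules here are assumptions for this example.

function validateContactForm(fields) {
  const errors = {};

  if (!fields.name || fields.name.trim().length === 0) {
    errors.name = "Name is required.";
  }

  // A deliberately simple email check, typical of generated code.
  // It accepts many invalid addresses a stricter check would reject.
  const emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!fields.email || !emailPattern.test(fields.email)) {
    errors.email = "Please enter a valid email address.";
  }

  if (!fields.message || fields.message.trim().length < 10) {
    errors.message = "Message must be at least 10 characters.";
  }

  return { valid: Object.keys(errors).length === 0, errors };
}
```

Code like this runs first time and demos beautifully. Whether that naive email pattern, or the absence of any server-side counterpart, is good enough is exactly the question the tool never asks.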

The Cracks Appear

The more you use these tools, the more inconsistencies you notice and the more worrying questions arise. Remarkably smart in some areas, these tools are bafflingly stupid in others. In one session, a tool elegantly solved a complex data transformation. In the next, it couldn't figure out why a simple CSS rule wasn't applying and fell into an endless loop of rewriting the same code.

The intelligence is genuine but uneven. These tools pattern-match brilliantly until they don't.

The "Nearly" Trap

The most insidious problem is how many times the generated code almost does what you want. Nearly right. Just not quite. Followed by endless pushing, pulling, and fiddling.

So, for example, the form validation works, but error messages appear in the wrong place. You're 90% there, and that last 10% becomes maddening. Endless loops of refinement, each prompt fixing one thing while breaking another.

The Hammer Problem

Spend enough time with these tools, and everything starts looking the same. The AI has preferred patterns, favourite libraries, and default approaches. It reaches for React when vanilla JavaScript would suffice.

Every AI-generated project feels like every other. The tool's personality overwrites yours.

The Knowledge Dividend

Here's where my decades of experience proved invaluable. When things go wrong, knowing what's happening under the hood saves enormous time. I can recognise why code is failing. I can give the AI precise instructions. I can spot dead ends.

Why does this matter? Veracode's 2025 research found that 45% of AI-generated code contains security vulnerabilities. If you can't recognise them, there is a strong chance you won't know to ask about them and won’t notice the impact until it is too late.

This matters more than you might think. The tools are democratising code creation. They're also democratising insecurity.
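One vulnerability class that turns up constantly in generated code is SQL injection through string concatenation. The sketch below is my own illustration of the pattern, not code from any particular tool; the table and column names are assumptions, and no real database is involved.

```javascript
// Illustrative sketch of a vulnerability class common in generated
// code: building SQL by string concatenation. Table and column
// names are assumptions; no real database is used here.

// The unsafe pattern assistants often emit:
function unsafeLoginQuery(username) {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// A classic injection payload turns the lookup into "match everyone":
const payload = "admin' OR '1'='1";
// unsafeLoginQuery(payload) produces:
//   SELECT * FROM users WHERE name = 'admin' OR '1'='1'

// The standard fix is a parameterised query, where the driver keeps
// the value separate from the SQL text so the quote cannot escape.
// With a typical driver API it looks something like:
//   db.query("SELECT * FROM users WHERE name = ?", [username]);
```

If you've never seen this attack, the generated code looks fine. That's the point: the vulnerability is invisible precisely to the people the tools are empowering.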

The Prototype Cliff

From idea to prototype to show-and-tell is phenomenal. These tools excel at getting something working that you can demonstrate and iterate on.

But there's a cliff edge. The moment you want to move beyond prototype to something robust, maintainable, secure, everything changes. The quick wins become technical debt. This is fine if you know where the cliff is. Dangerous if you don't. And most people don't.

The Hidden Obligations

Given my interests, almost all my experiments involved storing data, managing sign-ons, and creating new knowledge from multiple sources. The technical parts are tricky but doable.

But creating and sharing apps is much more than coding. Do you really understand the obligations you're taking on when you save a user's email and password? When you collect personal data? When you infer new knowledge from their inputs?

The AI will happily generate a user registration form. It won't ask about GDPR compliance, data retention policies, or breach notification requirements. These aren't technical problems. They're governance problems. And they don't appear in the code.

This is where the bigger picture starts to come into focus. The tools make building easy. They don't make responsibility easy. And that gap is about to cause a lot of pain.

Oh No, Not the Comfy Chair

For individual users: Vibe coding is genuinely useful for personal productivity and experimentation. But the further you venture toward anything involving other people's data, money, or trust, the more you need to understand what the code is actually doing. The barrier to building has collapsed. The barrier to building responsibly hasn't.

For managers: Your people are already using these tools—whether you've sanctioned it or not. You need visibility. What's being built? Where is it deployed? Who's accountable when something breaks? The productivity gains are real. So are the risks you can't see.

For policy makers: The security research is sobering. Between 25% and 45% of vibe-coded applications have security flaws. The democratisation of coding has democratised insecurity. Frameworks around software liability are about to be tested like never before.

The Deeper Shift: Welcome to Disposable AI

But there's a bigger realisation that reframes everything. To vibe code effectively, you need to fundamentally shift how you think about what you're creating. And that shift has consequences far beyond your own productivity.

For my entire career, software development has been about building things that last. You invest in architecture because code will be maintained for years. The economics of traditional software demanded durability. Vibe coding inverts this entirely.

When I stopped to think about this, everything changed. I wasn't failing to build lasting software. I was succeeding at something different: building disposable software, fast.

  • Testing an idea. Does this concept make sense? The code isn't the point—the learning is.

  • Exploring a space. What's possible with this API? Code as a thinking tool.

  • Solving a momentary problem. A utility for a specific task. Something for this context, this moment.

In all these cases, longevity isn't just unnecessary; it's counterproductive.

Clone, Personalise, Move On

What really hit me, though, was the moment I showed someone an idea I'd been working on, curious what they thought. An hour later, they'd used AI to deconstruct it, rebuild it, and add new features personalised to their specific needs.

They didn't ask for the source code. They didn't request documentation. They just took the concept and made their own version. Clone, personalise, move on.

Stop and think about what this means. The competitive advantage of having built something evaporates. The moat you thought you had is gone in an hour. The months of development work are replicable in an afternoon by anyone who sees what you've made. This isn't a small shift. It's a fundamental reordering of how value is created and captured in software.

The old rules were that creativity was hard and copying was harder. Those rules no longer apply. Intellectual property is increasingly meaningless. First-mover advantage has shrunk from years to days. The craftsmanship you invested in is invisible to anyone who clones the idea and rebuilds it their way. New rules are emerging. New forms of value. New winners and losers.

For many kinds of solution (but not all), the losers will be those who cling to the old model of protecting codebases, hoarding technical knowledge, and believing that what they've built can't be replicated.

In these situations, the winners won't be those who build the most. They'll be those who learn the fastest, spot opportunities first, and move on before the crowd arrives.

The New Literacy

We're entering an era where a new kind of literacy matters: knowing when to build to last and when to build to discard. For decades, the cost of creating software meant anything worth building was worth building properly. Now that calculus has shifted. Creating software is cheap. The expensive thing is maintaining it, securing it, and governing it.

The people who thrive will be those who can fluidly move between modes—building disposable tools for learning, then switching to serious engineering when something proves its worth.

They'll understand that sometimes the best code is code that was never meant to last. My experiences have highlighted to me that the magic show is real. The frustrations are real. But neither is the point.

The point is that the economics of software have fundamentally changed. The barriers that protected value have fallen. The skills that mattered are shifting. The rules are being rewritten.

Don't get distracted by whether these tools are amazing or annoying. They're both. What matters is what comes next. The vibes are temporary. The disruption is permanent.