Of all the AI trends that should worry cybersecurity professionals, ‘vibe coding’ deserves our immediate and full attention. Coined in early 2025 by OpenAI co-founder Andrej Karpathy in an X post, the term describes programmers who start a project by telling an AI tool what they want their code to do through a series of prompts – coding on feel, or vibes, if you like.
Once the tool has generated vibe code, the programmer’s job becomes one of refinement and debugging – code first, refine later, as the approach goes. That’s the theory at least, and it’s worth stating that vibe and AI-assisted coding can be used in a wide variety of ways depending on who is wielding the tools.
It is not in itself a terrible idea – why not speed up programming using natural language tools? Nor is the notion of using prompts to generate code even that new.
Unpack vibe coding in its current form, though, and it starts to feel less like a methodology and more like a reckless state of mind. This is what happens when you give something a name: the idea enters the culture and can accelerate out of control in ways that overwhelm caution. It should be obvious to anyone with an eye on cybersecurity that, despite its benefits, vibe coding brings risks – possibly very big ones.
Generation vibe
There are lots of experienced programmers who use AI coding to automate repetitive tasks, prototype rapidly and experiment with different approaches to a problem. For them, it is a huge time saver. But these are professionals with as much as 20 years’ experience who understand what they’re doing. They always review new code, aware of the technology’s limitations and potential for error.
At the other end of the spectrum are less experienced or novice programmers, for whom vibe coding and AI assistance make it possible to attempt things they might not otherwise contemplate. When the AI is doing the legwork, it all seems very easy – too easy, in fact. The risk is that, for programmers in a hurry, code that appears to work is good enough. In this environment, security quickly takes a back seat to speed and excitement.
The danger is that code is built on a deploy-and-forget basis, without proper assessment or peer review. If that code contains a bug or weakness that opens a security vulnerability, it won’t be noticed until much later, if ever.
Once bad vibe code is built into applications, fixing it becomes a lot more challenging, not to mention expensive. Now imagine that careless vibe coding takes hold and starts to affect open-source packages everyone relies on. Even if the volume of code is small, this scales to a huge problem, opening the sort of back doors the industry will struggle with over many years.
In the perfect world we used to assume programming was founded on, this wouldn’t happen. Three decades of major security vulnerabilities in supposedly rock-solid code suggests otherwise. There will always be people who don’t review code carefully because it’s quicker and easier not to. This is already a major headache, but vibe coding could supercharge the problem.
It’s not just programmers who are the problem: the term vibe coding was only weeks old when a security vulnerability was uncovered in Lovable, one of the new online coding platforms that enable it. Tracked as CVE-2025-48757, it was a logic flaw in how the tool verified Row Level Security (RLS) policies, which exposed sensitive data in web applications.
Worse, even when it is working well, vibe coding can inadvertently act as a highway to other weaknesses, such as incorrectly secured databases. This was the case, researchers found, when Lovable was used with a separate but preferred database service offered by the startup Supabase.
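To make that failure mode concrete, consider how a Supabase-backed app exposes a public ‘anon’ key in the browser, leaving the Row Level Security policies on each table as the only barrier between that key and the data. The sketch below uses the real supabase-js client, but the project URL, anon key and ‘profiles’ table are hypothetical – it illustrates the class of mistake rather than the researchers’ actual findings.

    import { createClient } from '@supabase/supabase-js'

    // The anon key ships inside the browser bundle of any Supabase-backed app,
    // so anyone can extract it and build their own client with it.
    const supabase = createClient(
      'https://example-project.supabase.co', // hypothetical project URL
      'public-anon-key'                      // hypothetical anon key
    )

    async function readTableWithAnonKey() {
      // If RLS is disabled (or the policy allows anonymous reads) on this
      // hypothetical 'profiles' table, the query returns every row, including
      // columns the application never meant to expose.
      const { data, error } = await supabase.from('profiles').select('*')
      if (error) {
        console.error('Request blocked or failed:', error.message)
        return
      }
      console.log(`Fetched ${data.length} rows using only the public anon key`)
    }

    readTableWithAnonKey()

The generated application code can look perfectly clean and still leak everything, because the control that matters – the RLS policy – lives in the database, not in anything the AI tool wrote.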
The devil unleashed
Too many tech professionals still view new technologies such as AI through the lens of two fixed beliefs. The first is the old chestnut about moving fast and breaking things – don’t think about effects, just see what they are after the fact and adjust accordingly. The second is that technology is value-neutral. This idea holds that it’s not technology that creates problems but humans who misuse it.
The obvious counter is that technologies create new possibilities with unintended consequences, especially when they are implemented optimistically. The USB thumb drive was invented to make data portable, which seemed like a great idea until people started losing them in car parks and causing data breaches.
These problems arise because humans and the organisations they work for have a persistent habit of underestimating technology’s risks and where innovation might lead. Is this the fault of humans or technology? Arguably it’s both, but what matters is that we are realistic about these risks before unleashing the devil and causing real damage.
Written by
Conor Agnew
Head of professional services
Closed Door Security