Vibe Coding Risks: Security, Quality, and What Can Go Wrong
In February 2026, a smart contract auditor noticed something unusual. The PRs for a DeFi lending protocol showed commits co-authored by Claude — and one of them had set a token price to $1.12 instead of ~$2,200. The result: a $1.78 million exploit. It’s widely considered the first major financial loss directly tied to AI-authored code.
That incident crystallized a debate that had been building for months. Vibe coding is shipping real software, commanding real salaries, and powering 4% of all public GitHub commits. But the risks are also real — and they’re now quantified.
This is the honest accounting.
Is Vibe Coding Risky? The Data Says Yes.
The security numbers are hard to argue with.
Tenzai — a security startup — tested 5 major AI coding tools (Claude Code, Codex, Cursor, Replit, Devin), building 3 identical web apps with each. The result: 69 vulnerabilities across 15 applications. Every tool introduced SSRF vulnerabilities. None built CSRF protection. None set security headers.
Escape.tech went bigger. They analyzed 5,600 publicly deployed vibe-coded apps and found 2,000+ vulnerabilities, 400+ exposed secrets, and 175 instances of personally identifiable information — including medical records, IBANs, and phone numbers.
Aikido Security surveyed 450 developers, AppSec engineers, and CISOs. Their finding: AI-generated code is now the cause of 1 in 5 security breaches. 69% had discovered vulnerabilities introduced by AI code in their own systems. One in five of those incidents caused material business impact.
And the broadest study — Veracode’s 2026 State of Software Security, covering 1.6 million applications — found that 82% of companies now carry security debt, up from 74% a year ago. Their headline: “The velocity of development in the AI era makes comprehensive security unattainable.”
The Carnegie Mellon research may be the most damning single stat. Their SusVibes benchmark tested AI coding agents on 200 real-world tasks: 61% of solutions were functionally correct, but only 10.5% were secure. That means 82.8% of working AI code had security flaws.
Working code isn’t safe code. That’s the core risk.
What Actually Goes Wrong
Security researcher Nagli audited hundreds of enterprise-built vibe-coded apps and found the same four mistakes appearing over and over, a pattern consistent across the broader studies. Here’s what AI-generated code gets wrong most often:
1. Exposed secrets. API keys, database credentials, and authentication tokens hardcoded in client-side code. The Escape.tech study found 400+ exposed secrets across 5,600 apps — Supabase tokens, OpenAI keys, and database URLs sitting in plain text.
2. Missing or broken authentication. AI tools default to client-side auth checks that can be bypassed with a browser’s dev tools. The Base44 vulnerability — discovered by Wiz in the Wix-acquired vibe coding platform — let attackers bypass SSO by providing only a non-secret app_id visible in any URL.
3. Insecure defaults. Firebase buckets configured as publicly accessible. Supabase databases without row-level security. Storage services with no access controls. These are the defaults AI tools generate — and the defaults most vibe coders never change.
4. Hallucinated packages. A 2025 study found that 19.7% of LLM-recommended packages don’t exist — and 43% of those are hallucinated consistently across repeated prompts. Attackers register these phantom package names to deliver malware. It’s called “slopsquatting,” and it’s a supply chain attack vector that didn’t exist before AI coding.
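A cheap defense against slopsquatting is refusing any dependency the model suggests that nobody on the team has vetted. A minimal sketch, assuming an internal allowlist of reviewed package names (the function and list here are illustrative, not from any cited tool; a real pipeline would also check the registry and your lockfile):

```typescript
// Reject AI-suggested dependencies that aren't on a vetted allowlist,
// so a hallucinated name fails CI instead of pulling in a squatted package.
function vetPackages(suggested: string[], allowlist: Set<string>): string[] {
  // Return the unrecognized names so the build can fail loudly.
  return suggested.filter((name) => !allowlist.has(name));
}

const allowlist = new Set(["express", "zod"]);
const flagged = vetPackages(["express", "definitely-real-http-utils"], allowlist);
// flagged → ["definitely-real-http-utils"], the name the model may have invented
```

The point is not the ten lines of code; it is making "this package exists and we chose it" an explicit gate rather than an assumption.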
The Incidents
These aren’t hypothetical scenarios. These are things that happened.
The Tea Dating App (July 2025) — A women-only safety app exposed 72,000+ user images, 13,000+ government-issued IDs, and over a million private messages. The cause: a Firebase bucket configured as publicly accessible with zero authentication. The kind of default AI tools generate.
Moltbook — An AI social network had a misconfigured Supabase database exposing 1.5 million API authentication tokens, 35,000 email addresses, and private messages. The founder had previously tweeted: “I didn’t write a single line of code… AI made it a reality.”
Orchids Platform (2026) — A BBC investigation demonstrated a zero-click hack where a security researcher injected malicious code into a journalist’s project without any user interaction — gaining access to edit code and control the reporter’s laptop.
Lovable Phishing Abuse (2026) — Proofpoint detected tens of thousands of Lovable URLs used for credential phishing, payment harvesting, and crypto wallet draining — affecting 5,000+ organizations. Free users could clone any public site and launch phishing campaigns in minutes.
The Moonwell Exploit (February 2026) — The $1.78M smart contract exploit described above. The PRs showed Claude co-authored commits. It’s the first high-profile case where AI-authored code directly caused a major financial loss.
And these are just the ones that made the news.
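The Tea and Moltbook failures both trace back to default-open storage. Closing that hole is often a one-file change. As a sketch, Firebase storage rules that require a signed-in user look roughly like this (paths and conditions are placeholders; tailor them to your app's actual data model):

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      // Require an authenticated user; a default-open rule here
      // is the kind of misconfiguration behind the Tea exposure.
      allow read, write: if request.auth != null;
    }
  }
}
```

The equivalent step on Supabase is enabling row-level security on every table. Either way, the default the AI generated is the thing to change first.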
Is Vibe Coding Bad for Code Quality?
Security is the headline risk. Code quality is the slow-burning one.
The CMU study of 807 GitHub repositories that adopted Cursor found a 3–5x increase in lines added in the first month — but also a 30% increase in static analysis warnings and a 41% increase in code complexity. The productivity gains were real. So was the quality degradation. And the productivity gains dissipated after two months while the complexity remained.
Steve Krouse, founder of Val Town, named it precisely: vibe code is legacy code, unmaintained from the day it’s written.
The METR study added another uncomfortable data point: experienced open-source developers using AI tools were actually 19% slower than without them. The kicker? Those same developers predicted they’d be 24% faster and believed afterward they had been 20% faster. The perception-reality gap is real.
And the Opsera benchmark puts a number on the rework: AI-generated code requires 15–25 percentage points of rework, eating into the 30–40% productivity gains.
Max Musing, founder of WorkOS, has made the same point about unreviewed AI code compounding into debt.
One developer in our community put it more bluntly: “Vibe coding allows a single developer to generate the technical debt of 50 developers.”
The Cleanup Economy
Where there’s mess, there’s money.
404 Media reported that cleanup specialists now charge $200–$400/hour to untangle AI-generated codebases. One specialist — Hamid Siddiqi — maintains 15–20 cleanup projects simultaneously. VibeCodeFixers.com launched as a dedicated marketplace and signed up 300 specialists within weeks.
The demand grew 300% in six months. It’s now a full category on our job board.
This isn’t a failure story. It’s a market story. Vibe coding creates a two-sided economy: builders who ship fast, and specialists who make it production-ready. Both earn well.
Is Vibe Coding a Bad Habit?
Linus Torvalds said it plainly at the Linux Foundation Open Source Summit: vibe coding is “fine for getting started” but a “horrible, horrible idea from a maintenance standpoint, if you actually tried to make a product.”
Then he went and vibe coded his own personal projects.
That contradiction contains the answer. Vibe coding isn’t inherently bad — it’s contextually risky. Prototyping an internal tool? Low risk. Deploying a financial application with zero security review? Catastrophic risk.
Andrej Karpathy, who coined the term, now draws the same distinction. He prefers “agentic engineering” for professional work, and argues that deep technical expertise is more valuable in the AI era, not less.
The enterprise world shares the caution. Daniel Newman, CEO of The Futurum Group, reports that the hype hasn’t yet reached the server room.
But Gartner predicts 40% of new enterprise production software will use vibe coding techniques by 2028. The question isn’t whether it’ll be used — it’s whether it’ll be used responsibly.
How to Vibe Code Without Getting Hacked
The risks are real. They’re also manageable. Here’s what’s working.
Treat AI code as untrusted by default
Every line of AI-generated code should go through the same review process you’d apply to a pull request from a junior developer you just hired. Access control, authentication, secrets management — these are non-negotiable review checkpoints.
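The most common reviewer catch is authentication that only exists on the client. A hedged sketch of the server-side pattern a reviewer should insist on (names and shapes are illustrative, framework-agnostic):

```typescript
// AI tools often gate access with a client-side flag that browser dev
// tools can flip. The check that matters runs on the server, against a
// session the client cannot forge.
type Session = { userId: string } | null;

function authorize(session: Session): { status: number; body: string } {
  if (!session) {
    // No valid server-side session: reject before touching any data.
    return { status: 401, body: "unauthorized" };
  }
  return { status: 200, body: `data for ${session.userId}` };
}
```

If the only place a permission check appears is in a React component, treat the endpoint as unprotected.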
Run automated security scanning
SAST, SCA, DAST, and secrets scanning in your CI/CD pipeline. Palo Alto Networks’ SHIELD framework provides a structured governance model: Separation of duties, Human in the loop, Input/output validation, Enforce security-focused helper models, Least agency.
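As a concrete starting point, a CI job that wires in secrets scanning and SAST might look like the following GitHub Actions sketch. The tool choices (gitleaks, Semgrep) are common examples rather than a prescription, so check each project's current docs before pinning versions:

```yaml
name: security-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so secret scans cover past commits
      # Secrets scanning: fail the PR if credentials appear anywhere in the repo
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      # SAST: run Semgrep's default rulesets and fail on findings
      - run: pip install semgrep && semgrep scan --config auto --error
```

The shape matters more than the tools: scans run on every pull request, and findings block the merge instead of landing in a report nobody reads.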
Use tools that prevent vulnerabilities at generation time
Vercel’s v0 has prevented token leaks in 16,200+ generated applications. Anthropic launched Claude Code Security — free for open-source maintainers — that scans codebases and suggests patches for human review. Lovable’s Security Checker 2.0 blocks roughly 1,000 malicious projects per day.
Never deploy secrets in client-side code
This is the #1 vulnerability in vibe-coded apps. Use environment variables, server-side API routes, and secret management services. If your AI tool puts an API key in a React component, that’s your cue to intervene.
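The fix is mechanical: the key lives in a server-side environment variable, and the browser only ever talks to your own API route. A sketch of the server-side half (the variable name is a placeholder; use whatever your host's secret store provides):

```typescript
// Server-side only: the key is read from the environment at request time
// and never shipped in the client bundle. If this function ends up
// imported by a React component, something has gone wrong.
function buildUpstreamHeaders(
  env: Record<string, string | undefined>,
): Record<string, string> {
  const key = env["OPENAI_API_KEY"]; // placeholder name, set via host secrets
  if (!key) {
    // Fail fast on the server rather than silently calling without auth.
    throw new Error("OPENAI_API_KEY is not set on the server");
  }
  return { Authorization: `Bearer ${key}` };
}
```

Passing the environment in as a parameter also makes the check easy to test, which is exactly the kind of seam AI-generated code tends to skip.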
Know the 70% rule
Every AI coding tool gets you about 70% of the way to production. That first 70% used to take weeks — now it takes hours. But the remaining 30% — security, edge cases, production hardening — still needs engineering judgment. The risk isn’t in using the tool. It’s in shipping the 70% version and calling it done.
The Bottom Line
Vibe coding risks are real and quantified:
- 82.8% of functionally correct AI code has security flaws (CMU)
- 1 in 5 security breaches now involve AI-generated code (Aikido)
- 2,000+ vulnerabilities found in 5,600 public vibe-coded apps (Escape.tech)
- $1.78M lost in the first major AI-authored code exploit (Moonwell)
- 30% more static analysis warnings after adopting AI tools (CMU)
But these are engineering problems, not existential ones. Every new development paradigm — from web apps to mobile to cloud — introduced a new class of vulnerabilities. The industry built tools, practices, and expertise to manage them. That’s happening now with AI-generated code.
The developers who are getting hired and earning the most in 2026 aren’t the ones who ignore these risks. They’re the ones who understand them — and ship anyway, with the right guardrails.
The question was never “is vibe coding risky?” Of course it is. The question is whether you’re managing the risk or pretending it doesn’t exist.
If you’re looking for roles where managing these trade-offs is the job, browse vibe coding jobs: security-focused engineering, cleanup specialists, and AI-assisted development across every category.
Browse Vibe Coding Jobs →