Vibe coding is a fine way to build a prototype. It is a terrible way to build a product. The workshops do not mention that part.

There is a very specific type of LinkedIn post that has become its own genre in 2025.
It goes something like this: “I built a full SaaS in 3 days with zero coding knowledge. Got my first paying customer. Security is not important at this stage. The pays is important. Here is my free workshop on how you can do the same.”
The likes roll in. The comments fill up with “bro you are an inspiration.” A workshop gets scheduled. Forty non-coders show up to learn how to vibe their way to an MVP. And somewhere down the line, one of those apps ends up with a real user whose real data ends up somewhere it absolutely should not be.
This is not a story about whether AI tools are good or bad. They are good. They are genuinely useful. This is a story about what happens when people who do not understand what they built start teaching others to build the same way, while loudly declaring that understanding the security implications is someone else’s problem.
First, What Vibe Coding Actually Means
Let us be precise, because the term gets misused constantly.
“Vibe coding” was coined by Andrej Karpathy in a post on X on February 2, 2025. Karpathy is a founding member of OpenAI and the former Director of AI at Tesla. He knows how software works at a deeply technical level. His exact words: “There’s a new kind of coding I call vibe coding, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
He also said, in the same post: “It’s not too bad for throwaway weekend projects, but still quite amusing.”
Throwaway. Weekend. Projects.
The person who invented the term specifically framed it as something you do when the stakes are low and nothing matters if it breaks. He was not describing a business model. He was describing a fun experiment. The tech community then took “throwaway weekend project” energy, applied it to production systems handling real user data, and started running workshops about it.
That is the problem we are actually discussing.
The Code That Looks Fine Is Not Fine
Here is what the research actually shows about AI-generated code, because feelings are not data.
A 2024 study published on arXiv (paper 2404.18353) analysed AI-generated code across multiple large language models and found that at least 62% of the generated programs contained vulnerabilities. The differences between models were described as minor. They all showed similar weaknesses with slight variations. This was not a small sample. It was a rigorous analysis using formal verification methods.
Veracode’s 2025 GenAI Code Security Report evaluated over 100 LLMs across 80 curated coding tasks and found that AI-generated code introduced security vulnerabilities in 45% of cases. Veracode’s CTO, Jens Wessling, stated: “The rise of vibe coding, where developers rely on AI to generate code typically without explicitly defining security requirements, represents a fundamental shift in how software is built. This shift leaves secure coding decisions to LLMs, which are making the wrong choices nearly half the time, and it’s not improving.”
A December 2025 analysis by CodeRabbit examined 470 open-source GitHub pull requests and found that AI co-authored code contained approximately 1.7 times more “major” issues compared to human-written code. On specific vulnerability types, the numbers get worse. AI-generated code was 2.74 times more likely to introduce cross-site scripting (XSS) vulnerabilities and 1.88 times more likely to introduce improper password handling than code written by human developers.
By June 2025, AI-generated code was introducing over 10,000 new security findings per month across the repositories studied by security firm Apiiro. That is a 10x spike in just six months compared to December 2024.
None of this means “stop using AI to write code.” It means: the code needs to be reviewed by someone who understands what they are looking at. If that person is you, great. If you have no idea what row-level security, CSRF tokens, SQL injection or rate limiting even are, the AI is not going to save you. It is going to confidently generate something broken and you will accept all changes and ship it.
It Has Already Happened. With Real Users. At Scale.
This is not hypothetical. Here is a documented case from 2025.
Lovable is a Swedish AI-powered vibe coding platform that helps non-developers build web applications. It reached $50 million in annual recurring revenue in about six months. That is a legitimate achievement.
In March 2025, Matt Palmer, a software engineer at Replit, noticed a vulnerability in a Lovable-created website. The Supabase database behind the app was not configured correctly. Palmer and a colleague then scanned 1,645 Lovable-created web applications featured on Lovable’s own site. Of those, 170 allowed anyone to access information about the site’s users, including names, email addresses, financial information and secret API keys for AI services, without logging in.
The vulnerability was registered as CVE-2025-48757. The root cause was missing or misconfigured Row Level Security (RLS) policies, a foundational database security concept that controls which users can access which rows of data. The apps looked fine. They had login screens. They accepted payments. They stored user data. And that data was sitting wide open.
In 47 minutes, a Palantir engineer independently found and publicly demonstrated the same vulnerability on multiple apps from Lovable’s own recommendation page, extracting personal debt amounts, home addresses and API keys. In 47 minutes. Without special tools.
Escape, a security research firm, later analysed over 5,600 publicly available vibe-coded applications across Lovable, Base44, Create.xyz and similar platforms. They found more than 2,000 vulnerabilities, over 400 exposed secrets and 175 instances of personally identifiable information (PII), including medical records, bank account numbers, phone numbers and email addresses, sitting exposed in apps built by people who were told that the platform would handle security for them.
This is what “ship fast, security later” looks like when it meets real users.
The Workshop Problem Is a Multiplier Effect
One person shipping an insecure app damages their own users. That is bad.
One person running a workshop teaching 50 non-coders to ship insecure apps, framing security as unnecessary and sending them away confident that they know what they are doing. That is a different magnitude of damage.
The specific harm is not just the insecure code. The harm is the false confidence. People leave those workshops believing they have learned something complete. They have learned how to make something that runs. They have not learned how to make something that is responsible to the people who use it.
There is a meaningful difference between saying “I built this for fun and I know there are probably security gaps” and “I built this as a product and I am charging money for it. Security is not a priority.” The first person is honest about what they made. The second person is making promises they cannot keep to users who trusted them.
And when those users get breached, it is not the vibe coder who faces the consequences alone. It is the users whose addresses, medical history or payment information ended up in a database dump on a forum somewhere.
What Actually Happens When Your Users Get Breached
Since money is what we are talking about, let us talk about money.
The IBM Cost of a Data Breach Report 2024, based on analysis of 604 real organisations across 16 countries conducted by the Ponemon Institute, found that the global average cost of a data breach reached $4.88 million in 2024. That is a 10% increase from 2023 and the largest single-year jump since the pandemic. These costs include business disruption, lost customers, post-breach customer support, regulatory fines and remediation.
That is the global average across all company sizes and industries. For a small startup with no security team, no incident response plan and no cyber insurance, a breach does not just cost money. It ends the company.
The “pays” people are chasing with vibe coding is measured in thousands or tens of thousands of dollars of early revenue. The liability they are creating can be measured in millions.
For those building products in Malaysia specifically: the Personal Data Protection (Amendment) Act 2024 increased maximum fines for violating data protection principles to RM 1 million, with imprisonment of up to three years. A mandatory data breach notification requirement came into effect in June 2025. Failure to notify the Personal Data Protection Commissioner of a breach can result in a fine of up to RM 250,000. This is the law that applies to the app you built over the weekend. Whether or not you knew it.
What AI Code Gets Wrong by Default
Here is a short, concrete list of what AI will happily generate without you noticing anything is wrong:
No rate limiting on authentication endpoints. Your login form will accept 10,000 password attempts per minute without complaint. An attacker with a common password list will walk through it while your app looks completely fine.
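To make the missing piece concrete, here is a minimal sketch of the check a login endpoint needs before it ever touches the password. The window size, attempt limit and in-process dictionary are all illustrative assumptions; a real application would use tested middleware backed by a shared store such as Redis.

```python
import time
from collections import defaultdict

# Illustrative fixed-window rate limiter. Limits and storage are
# assumptions for the sketch, not production values.
WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5
_attempts = defaultdict(list)  # client key -> timestamps of recent attempts

def allow_login_attempt(client_ip, now=None):
    """Return True if this client is still under the limit for the window."""
    now = time.time() if now is None else now
    window_start = now - WINDOW_SECONDS
    # Drop attempts that fell out of the window, then count what is left.
    recent = [t for t in _attempts[client_ip] if t > window_start]
    _attempts[client_ip] = recent
    if len(recent) >= MAX_ATTEMPTS:
        return False  # too many attempts: reject before checking the password
    _attempts[client_ip].append(now)
    return True
```

The point is not this particular implementation. It is that without some version of this check, the 10,000-attempts-per-minute scenario above is not an attack the app resists. It is just traffic.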
Hardcoded API keys in client-side code. The AI will put your secret keys directly into JavaScript that ships to every user’s browser, where anyone can view it in the developer tools in about three seconds.
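The fix is structural, not clever: secrets live in the server’s environment and never appear in anything shipped to the browser. A hedged sketch, using an illustrative variable name (`AI_SERVICE_KEY` is an assumption, not a real service’s convention):

```python
import os

# Safe pattern: the secret is read from the server environment.
# The insecure pattern the AI tends to generate is pasting the
# literal key into client-side JavaScript.
def get_ai_service_key():
    key = os.environ.get("AI_SERVICE_KEY")
    if not key:
        raise RuntimeError("AI_SERVICE_KEY is not set; refusing to start")
    return key

def public_config_for_browser():
    """Only non-secret values may be sent to the client."""
    return {"app_name": "example-app"}  # deliberately contains no secrets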
Missing row-level security on database tables. As the Lovable case demonstrated at scale: the database accepts queries, but nothing enforces that a user can only see their own data. Query the right endpoint and you get everyone’s data.
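In Supabase and Postgres the real fix is an RLS policy written in SQL, but the principle is language-independent: every read must be scoped to the authenticated user. A toy sketch with an in-memory “table” (the data and field names are invented for illustration):

```python
# Illustrative "table": a list of rows with an owner column.
ORDERS = [
    {"id": 1, "owner_id": "alice", "amount": 120},
    {"id": 2, "owner_id": "bob", "amount": 75},
]

def get_orders_insecure():
    # What a missing RLS policy amounts to: the endpoint returns
    # every row to whoever asks.
    return ORDERS

def get_orders_for(user_id):
    # The fix: the server (or the database policy itself) filters
    # rows by the authenticated user on every query.
    return [row for row in ORDERS if row["owner_id"] == user_id]
```

The 170 exposed Lovable apps were, in effect, serving `get_orders_insecure` while their builders believed they were serving `get_orders_for`.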
No CSRF protection. Cross-site request forgery allows an attacker to trick a logged-in user’s browser into performing actions the user did not intend. The AI generates the form. It does not always generate the token that prevents this.
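What that missing token does can be sketched in a few lines. This is a simplified illustration of the idea, not a framework’s actual mechanism; real apps should use their framework’s built-in CSRF support rather than rolling their own.

```python
import hashlib
import hmac
import secrets

# Server-side secret; illustrative. In a real app this is managed
# configuration, not a value generated at import time.
SECRET = secrets.token_bytes(32)

def issue_csrf_token(session_id):
    # Token is derived from the session, so a third-party site that
    # tricks the browser into submitting a form cannot forge it.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id, submitted):
    expected = issue_csrf_token(session_id)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, submitted)
```

The AI-generated form that omits this token accepts requests from any origin that can get a logged-in user’s browser to submit it.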
Mass assignment vulnerabilities. The AI creates an update endpoint that accepts whatever fields you send it. A user sends an extra field like is_admin: true and the app happily writes it to the database.
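The defence is an allowlist: the server copies only fields it explicitly permits, no matter what the client sends. A sketch with invented field names:

```python
# Fields a user is allowed to edit on their own profile (illustrative).
EDITABLE_FIELDS = {"display_name", "bio"}

def update_profile_insecure(user_record, submitted):
    # The pattern the AI tends to generate: write whatever arrived.
    user_record.update(submitted)  # is_admin: true sails straight through
    return user_record

def update_profile(user_record, submitted):
    # The fix: copy only explicitly allowlisted fields.
    for field in EDITABLE_FIELDS & submitted.keys():
        user_record[field] = submitted[field]
    return user_record
```

One set intersection is the entire difference between a profile editor and a privilege-escalation endpoint.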
These are not exotic attack techniques. Every one of them maps onto the OWASP Top 10, the most widely used reference list of web application security risks, which has been published since 2003. An attacker does not need to be sophisticated to exploit them. They need to be patient and have a script.
The AI generates code that works. It does not always generate code that is safe. Understanding the difference is not optional if you are building something that handles other people’s data.
The Honest Version of the Workshop
None of this means non-coders should not build things. They should. AI has genuinely lowered the barrier to building. That is a good thing for the world.
The honest version of the vibe coding workshop looks like this: here is how to build something fast. Here is also what you do not understand about what you just built. Here are the five security questions you must be able to answer before you put this in front of real users. Here is how to get a security review. Here is what OWASP is and why it matters.
That workshop is harder to run. It takes longer. Fewer people leave feeling like geniuses. But the people who built things in that workshop go home with an honest picture of what they made and what it can and cannot safely do.
The gap between “I built something that runs” and “I built something I can responsibly offer to other people” is not a gatekeeping barrier invented by developers who want to protect their jobs. It is the gap between a prototype and a product. It has always existed. AI has not closed it. It has made it easier to accidentally skip over it without noticing.
The Reframe on “The Pays”
Nobody is arguing against making money. The pays is important. That part is not wrong.
What is wrong is the assumption that security and revenue are in tension: that focusing on one means sacrificing the other. They are not in tension. They are sequential. Security is what allows you to keep the pays. A breach that exposes your users’ data will cost you the product, the reputation, the users and, in some jurisdictions, the freedom to operate. A single misconfigured database table can undo everything.
Karpathy himself, when he coined the term, said vibe coding is “not too bad for throwaway weekend projects.” He did not say it was a business model. He did not say security is someone else’s problem. He was describing what it feels like to experiment with powerful tools on low-stakes personal projects.
The people teaching workshops did not read that part. Or they read it and decided it did not apply to them.
It applies.
References and Citations
- Andrej Karpathy. Original “vibe coding” post on X. February 2, 2025. https://x.com/karpathy/status/1886192184808149383
- Wikipedia. Vibe coding. https://en.wikipedia.org/wiki/Vibe_coding (cites Karpathy (2025) and Simon Willison (2025))
- Bisztray, T. et al. (2024). How secure is AI-generated Code: A Large-Scale Comparison of Large Language Models. arXiv:2404.18353. https://arxiv.org/abs/2404.18353
- Veracode. 2025 GenAI Code Security Report. Cited in SD Times: https://sdtimes.com/security/ai-generated-code-poses-major-security-risks-in-nearly-half-of-all-development-tasks-veracode-research-reveals/ and FutureCISO: https://futureciso.tech/ciso-alert-ai-code-vulnerabilities-on-the-rise/
- CodeRabbit / The Register. AI-authored code needs more attention, contains worse bugs. December 2025. https://www.theregister.com/2025/12/17/ai_code_bugs
- Apiiro. 4x Velocity, 10x Vulnerabilities: AI Coding Assistants Are Shipping More Risks. September 2025. https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/
- Palmer, M. Statement on CVE-2025-48757. May 2025. https://mattpalmer.io/posts/statement-on-CVE-2025-48757/
- Semafor. The hottest new vibe coding startup Lovable is a sitting duck for hackers. May 2025. https://www.semafor.com/article/05/29/2025/the-hottest-new-vibe-coding-startup-lovable-is-a-sitting-duck-for-hackers
- Escape Research. Methodology: 2k+ Vulnerabilities in Vibe-Coded Apps. October 2025. https://escape.tech/blog/methodology-how-we-discovered-vulnerabilities-apps-built-with-vibe-coding/
- IBM. Cost of a Data Breach Report 2024. https://newsroom.ibm.com/2024-07-30-ibm-report-escalating-data-breach-disruption-pushes-costs-to-new-highs
- Malay Mail. Understanding the Personal Data Protection Act and what you can do in case of personal data breach. August 2025. https://www.malaymail.com/news/malaysia/2025/08/28/understanding-the-personal-data-protection-act
- InCorp Malaysia. PDPA Compliance Malaysia 2025: Complete Guide. https://malaysia.incorp.asia/guides/pdpa-compliance-malaysia-complete-guide/
- OWASP Top 10. https://owasp.org/www-project-top-ten/