How to Make AI-Generated Code Safer: A Developer’s Guide
Emma Klinteberg - May 29, 2025

AI tools like Copilot and ChatGPT promise rapid development—but without the right practices, they can introduce serious security, maintenance, and architectural risks. In this post, we explore actionable ways developers can use AI more safely, from code reviews to CI/CD checks and team guidelines. Practical, fact-based, and built for real-world teams.
The rise of tools like GitHub Copilot and ChatGPT has transformed how developers write code. With just a few prompts, you can generate functions, build prototypes, and ship faster than ever before.
But that speed comes with a cost — especially when it’s not backed by the right safeguards.
In our last post, we broke down the risks: AI-generated code often contains vulnerabilities, doesn’t scale well, and can be hard to maintain.
This time, we’re not here to bash AI. We’re here to show you how to use it more safely — without losing the productivity boost.
1. Don’t Trust — Review Every Line
Treat AI like a junior developer: fast, helpful, and full of potential — but not ready to deploy unreviewed code.
A 2023 study found that 29.5% of Copilot’s Python code and 24.2% of its JavaScript code included known security risks, such as hardcoded secrets and weak randomness.
(Fu et al., 2023)
That’s not “slightly flawed.” That’s nearly one in three Python snippets and roughly one in four JavaScript snippets containing vulnerabilities right out of the box.
What to do:
- Always review AI-generated code — even for small tasks
- Use static analysis tools like ESLint, SonarQube, or CodeQL
- Scan for vulnerabilities with tools like Snyk, Checkmarx, or npm audit
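To make the hardcoded-secrets problem concrete, here is a minimal TypeScript sketch of the pattern a reviewer should flag and the fix they should ask for. The names (PAYMENT_API_KEY, callPaymentApi, the endpoint URL) are illustrative assumptions, not anything from a real codebase:

```typescript
// Typical AI-suggested pattern: the secret is baked into the source,
// so anyone with repo access (or the git history) can read it.
// const PAYMENT_API_KEY = "sk_live_EXAMPLE_DO_NOT_COMMIT"; // red flag: hardcoded credential

// Reviewed version: load the secret from the environment and fail fast if it is absent.
const apiKey = process.env.PAYMENT_API_KEY;
if (!apiKey) {
  throw new Error("PAYMENT_API_KEY is not set");
}

async function callPaymentApi(amountCents: number): Promise<Response> {
  // The key travels in a header; it never lives in the URL or the codebase.
  return fetch("https://api.example-payments.com/v1/charges", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ amount: amountCents }),
  });
}
```

A secret scanner or a static analysis rule will catch the commented-out pattern automatically, but a human reviewer should still ask where the value comes from and how it is rotated.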
2. Automate the Checks AI Will Miss
AI won't prevent you from writing insecure code — in fact, it might replicate insecure patterns it has seen in public repositories.
That’s where your CI/CD pipeline becomes essential.
Introduce security checks early using:
- SAST (Static Application Security Testing) tools like Veracode or Fortify — to find vulnerabilities in code before it runs
- DAST (Dynamic Application Security Testing) tools like OWASP ZAP — to catch issues during runtime testing
- SCA (Software Composition Analysis) tools like Black Duck — to identify risks in third-party dependencies
Catching problems in the pipeline means you can stop them before they hit staging or production.
(Source: Black Duck AI Security Insights)
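As one way to wire such a check into a pipeline, here is a hedged sketch of a small Node/TypeScript gate script that runs npm audit and fails the build on high or critical findings. The script name and the exact JSON shape of the audit report are assumptions; recent npm versions report severity counts under metadata.vulnerabilities, but verify against the npm version your pipeline uses:

```typescript
import { execSync } from "node:child_process";

// Run npm audit in JSON mode. It exits non-zero when issues exist,
// so capture the output instead of letting the exception escape.
let report: string;
try {
  report = execSync("npm audit --json", { encoding: "utf8" });
} catch (err: any) {
  report = err.stdout ?? "{}";
}

// Assumption: severity counts live under metadata.vulnerabilities
// (info, low, moderate, high, critical) in recent npm versions.
const counts = JSON.parse(report)?.metadata?.vulnerabilities ?? {};
const blocking = (counts.high ?? 0) + (counts.critical ?? 0);

if (blocking > 0) {
  console.error(`Blocking: ${blocking} high/critical vulnerabilities found.`);
  process.exit(1); // fail the CI job before the code reaches staging
}
console.log("Dependency audit passed.");
```

If you do not need custom reporting or allowlisting, running npm audit with --audit-level=high in the pipeline step achieves the same gate, since it exits non-zero once that severity threshold is hit.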
3. Know the Red Flags in AI Code
AI-generated code can look fine on the surface but still miss key safety practices.
Common red flags include:
- Hardcoded credentials (API keys, passwords, tokens)
- Insecure random number generation (e.g. using Math.random() for security tokens)
- Poor or missing input validation
- No error handling
AI doesn’t know your app’s context or risk model — that’s up to you to define and enforce.
(CSET, 2022)
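To illustrate the randomness and validation red flags, here is a minimal TypeScript sketch contrasting a pattern AI assistants often produce with a safer equivalent. The resetToken and parseTransferAmount names are hypothetical examples, not from the studies cited above:

```typescript
import { randomBytes } from "node:crypto";

// Red flag: Math.random() is predictable; fine for shuffling a list,
// not for anything an attacker might want to guess.
const weakToken = Math.random().toString(36).slice(2);

// Safer: a CSPRNG-backed token with an explicit length.
const resetToken = randomBytes(32).toString("hex");

// Red flag: trusting input shape and skipping error handling.
// Safer: validate before use and fail loudly on bad data.
function parseTransferAmount(raw: unknown): number {
  const amount = Number(raw);
  if (!Number.isFinite(amount) || amount <= 0) {
    throw new Error(`Invalid transfer amount: ${String(raw)}`);
  }
  return amount;
}
```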
4. Use AI for the Right Stuff
AI is excellent for repetitive tasks — but it doesn’t know when the stakes are too high to guess.
Great use cases:
- Boilerplate code
- Regex patterns
- Dummy data
- Basic unit tests
What should always be human-led:
- Authentication and authorization logic
- Financial data processing
- Infrastructure-as-code scripts
- Anything with performance or security trade-offs
Use AI like a Swiss Army knife — a useful tool, not a senior engineer.
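For the "great use cases" side, this is the kind of low-stakes snippet it is reasonable to let AI draft and a human merely review: a small regex helper plus basic unit tests. The isIsoDate name and the use of Node's built-in test runner are illustrative choices, not prescriptions:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Low-stakes helper an AI can reasonably draft: no secrets, money, or auth involved.
export function isIsoDate(value: string): boolean {
  return /^\d{4}-\d{2}-\d{2}$/.test(value);
}

test("accepts YYYY-MM-DD strings", () => {
  assert.equal(isIsoDate("2025-05-29"), true);
});

test("rejects other formats", () => {
  assert.equal(isIsoDate("29/05/2025"), false);
  assert.equal(isIsoDate("2025-5-9"), false);
});
```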
5. Build a Team Culture That Doesn’t Blindly Trust AI
Being personally cautious isn’t enough. You need team-wide awareness to avoid silent failures caused by over-trusting AI.
Ways to build a stronger culture:
- Add AI risk awareness to onboarding and checklists
- Share examples of problematic AI-generated code in retros
- Write internal guidelines for safe AI tool usage
- Keep developers and security teams in sync
If AI is here to stay, our processes need to evolve to match.
Final Thoughts
AI-generated code isn’t inherently unsafe — but it isn’t responsible by default.
Secure architecture, human oversight, and rigorous testing are still essential, especially when you’re shipping code into production.
Use AI to move faster. Just don’t remove the guardrails.
It’s your job to catch what the AI doesn’t even know it’s missing.
References
Fu, Y., Liang, P., Tahir, A., Li, Z., Shahin, M., Yu, J., & Chen, J. (2023). Security Weaknesses of Copilot-Generated Code in GitHub Projects: An Empirical Study. arXiv preprint.
Center for Security and Emerging Technology (CSET). (2022). Machine Learning and Cybersecurity: Hype and Reality.
Synopsys Black Duck. The Hidden Risks of AI and Open Source.