Trust but Verify: The Art of AI Code Review
Baljeet Dogra
Generative AI writes code at superhuman speeds. But speed without accuracy is technical debt. As AI becomes a standard part of our toolchain, the role of the developer shifts from "Writer" to "Reviewer." Here is how to audit AI-generated code effectively.
The Hallucination Trap
LLMs are probabilistic, not deterministic. They don't "know" libraries; they recall patterns. This leads to "Hallucinations"—inventing functions that look real but don't exist.
Spotting Fake APIs
Example: You ask for a function to validate an email in a specific library.
// Hallucination: This method doesn't exist in 'validator.js'
validator.isEmailValid(email, { checkDNS: true });
The Fix: Always verify method signatures against official documentation or your IDE's autocomplete. (In validator.js, the real method is validator.isEmail.)
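Beyond checking the docs, you can fail fast at runtime when a suggested method doesn't exist. A minimal sketch; the validator object here is a local stub standing in for the real library, and the regex is deliberately simplistic:

```javascript
// Stub standing in for a third-party module (illustrative, not the real validator.js):
const validator = {
  isEmail: (s) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(s),
};

// Defensive lookup: prefer the method the AI suggested, fall back to the
// documented one, and throw if neither actually exists on the module.
const fn = validator.isEmailValid ?? validator.isEmail;
if (typeof fn !== 'function') {
  throw new TypeError('Expected an email validation function on validator');
}

console.log(fn('user@example.com')); // true with this stub
```

The `typeof` check costs one line and turns a silent "undefined is not a function" crash deep in production into an explicit failure at startup.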
Subtle Logic Bugs
AI code often runs without errors but fails on edge cases. Off-by-one errors and loop boundary issues are common because the model mimics the average code it has seen, not necessarily the correct code for your context.
The "Almost Correct" Loop
// Task: Iterate through the last 5 items
for (let i = items.length - 1; i > items.length - 5; i--) { ... }
The condition should be i >= items.length - 5; as written, the strict > stops one item early and visits only four. AI frequently misses inclusive/exclusive boundaries like this.
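A corrected version of the loop, sketched as a function. The Math.max clamp is an added assumption: for arrays shorter than five items, we take everything rather than reading past index 0.

```javascript
// Iterate through the last 5 items, newest first.
// >= makes the window inclusive of the fifth-from-last item;
// Math.max guards arrays with fewer than 5 elements (assumption).
function lastFive(items) {
  const result = [];
  for (let i = items.length - 1; i >= Math.max(items.length - 5, 0); i--) {
    result.push(items[i]);
  }
  return result;
}

console.log(lastFive([1, 2, 3, 4, 5, 6, 7])); // [7, 6, 5, 4, 3]
```

A reviewer's quickest defense against off-by-one bugs is exactly this: run the loop mentally on a tiny concrete array and count the iterations.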
Security Blindspots
Copilot and similar tools are trained on public code, including code with bad practices. They may suggest weak hashing, outdated cryptographic primitives, or insecure defaults unless you prompt for security explicitly.
Checklist for Reviewers
- Inputs: Is input validated and sanitized?
- Secrets: Are API keys hardcoded? (They shouldn't be).
- Dependencies: Did it import a malicious or deprecated package?
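The first two checklist items can be partly mechanized. A minimal sketch, assuming you want to fail fast at startup; requireSecret and the allow-list rule for usernames are illustrative patterns, not any particular library's API:

```javascript
// Secrets: read from the environment, never hardcode; fail fast if missing.
function requireSecret(name) {
  const value = process.env[name];
  if (!value) throw new Error(`${name} is not set`);
  return value;
}

// Inputs: validate type and length, then sanitize with an allow-list
// (keep known-good characters) rather than a deny-list of bad ones.
function sanitizeUsername(input) {
  if (typeof input !== 'string' || input.length > 32) {
    throw new RangeError('Invalid username');
  }
  return input.replace(/[^a-zA-Z0-9_-]/g, '');
}
```

The dependency check resists automation: review the diff of package.json (or your lockfile) by hand, and look up any package the AI imported that you don't recognize.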
Conclusion
Treat AI as a junior developer who works incredibly fast but makes rookie mistakes. Trust their output, but verify every line. In the AI era, your value as a senior engineer isn't just in writing code—it's in your ability to discern good code from bad.