Your team does code review on every PR. So why do bugs still reach production? The problem isn't effort — it's the limits of human attention.
Studies consistently estimate that code review catches only 60-70% of defects on average, meaning roughly 1 in 3 bugs makes it past review. For a team shipping 50 PRs per week, that adds up to a steady stream of defects reaching production.
The bugs that slip through aren't random. They follow predictable patterns — the same categories of defects that human reviewers consistently overlook.
Off-by-one errors are the classic: using <= instead of <, or starting at index 1 instead of 0. The code looks correct at a glance and passes most tests, but fails on edge cases.
// Bug: accesses index beyond array bounds
for (let i = 0; i <= items.length; i++) {
process(items[i]); // undefined on last iteration
}
// Fixed: use strict less-than
for (let i = 0; i < items.length; i++) {
process(items[i]);
}
Null and undefined access: reading properties on values that might be null. The code works on the happy path, then crashes in production when data is missing or an API returns an unexpected response.
// Bug: user.address could be null
const city = user.address.city;
// Fixed: optional chaining with fallback
const city = user.address?.city ?? 'Unknown';
Race conditions and async mistakes: a missing await on an async call, shared state accessed concurrently, or operations that depend on timing. These bugs are intermittent, which makes them incredibly hard to reproduce and diagnose.
// Bug: missing await, data is a Promise not the actual value
const data = response.json();
console.log(data.name); // undefined
// Fixed: await the async operation
const data = await response.json();
console.log(data.name);
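The shared-state variant is subtler than a missing await and worth its own sketch. The example below is hypothetical: a withdraw function with a simulated async database call, showing a check-then-act race where two overlapping calls both pass the balance check before either one subtracts.

```javascript
// Hypothetical in-memory balance used for illustration.
let balance = 100;

// Bug: the check and the update are separated by an await, so two
// overlapping calls can both pass the check (a check-then-act race).
async function withdraw(amount) {
  if (balance >= amount) {
    await new Promise((resolve) => setTimeout(resolve, 10)); // simulated DB call
    balance -= amount;
    return true;
  }
  return false;
}

// Fixed: do the check and the update together after the async work,
// in one synchronous block, so no other call can interleave between them.
async function withdrawSafe(amount) {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated DB call
  if (balance >= amount) {
    balance -= amount;
    return true;
  }
  return false;
}

// Two concurrent withdrawals of 60 from a balance of 100: the buggy
// version lets both succeed, driving the balance to -20.
async function demo() {
  const results = await Promise.all([withdraw(60), withdraw(60)]);
  return { results, balance };
}
```

No test will catch this unless it runs the operations concurrently, which is exactly why these bugs slip through review and CI alike.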
Silent error handling: missing try-catch blocks, swallowed exceptions, unhandled promise rejections, and error callbacks that ignore the error parameter. These don't cause visible bugs until something goes wrong; then they cause cascading failures with no useful error messages.
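A minimal sketch of the swallowed-exception pattern and one way to fix it, using a hypothetical async db store for illustration:

```javascript
// Hypothetical async store used for illustration.
const db = {
  async save(user) {
    if (!user.id) throw new Error('user has no id');
  },
};

// Bug: the empty catch swallows the error. The caller never learns the
// save failed, and there is nothing in the logs to diagnose later.
async function saveUserBad(user) {
  try {
    await db.save(user);
  } catch (err) {
    // silently ignored
  }
}

// Fixed: log with context and rethrow so callers can handle the failure.
async function saveUser(user) {
  try {
    await db.save(user);
  } catch (err) {
    console.error(`Failed to save user ${user.id}:`, err);
    throw err;
  }
}
```

The fix is mechanical, which is why reviewers skim past the bug: an empty catch block looks tidy, and nothing fails until production data hits the error path.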
Inverted logic: flipped conditions, incorrect boolean operators, and missing else branches. Reviewers often read what they expect to see rather than what is actually written.
// Bug: inverted condition, deletes active users instead of inactive
if (user.isActive) {
deleteUser(user.id);
}
// Fixed: negate the condition
if (!user.isActive) {
deleteUser(user.id);
}
It's not about skill; it's about how human attention works. Reviewers see i <= length and read it as "loop through all items" without noticing the off-by-one. A missing await on a function call looks identical to a correct synchronous call. The eye pattern-matches on familiar shapes instead of checking the semantics.

Let tooling catch the mechanical bugs. TypeScript's strict mode, ESLint rules, and static analysis tools catch entire categories of bugs automatically. Enable strict null checks, no-unused-vars, and exhaustiveness checks on switch statements. These are free wins that require no human attention.
Before reviewing logic, check that tests cover boundary conditions: empty arrays, null inputs, maximum values, concurrent access. If edge cases aren't tested, they're probably broken.
Research shows optimal review sessions are 200-400 lines at a time, for no more than 60 minutes. Anything beyond that and defect detection drops sharply. Break long reviews into multiple sessions.
Keep a running list of bug patterns your team has shipped. Review each PR against this list. Common items: bounds checking, null safety, error handling, async correctness, and input validation.
AI code review excels at exactly the patterns humans miss. It doesn't get fatigued, doesn't skip edge cases, and checks every line with the same attention. Use it as a first pass to catch the mechanical bugs so human reviewers can focus on design and logic.
CodeSentri reviews every PR for off-by-one errors, null pointer risks, missing awaits, and logic errors. It catches the patterns that slip past tired eyes.
Install Free on GitHub