A code review isn't just "looks good" or "needs work." It's a systematic examination that catches problems before they hit production.
Here's how I approach it.
What I look for
Every review covers four areas, in order of importance:
1. Security
This comes first because security bugs are the most expensive to fix after deployment.
I check for:
- Input validation and sanitization
- Authentication and authorization logic
- Secrets in code or config
- SQL injection, XSS, CSRF vulnerabilities
- Dependency vulnerabilities
A security issue gets flagged immediately, even if it means stopping the review to discuss it.
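To make the checklist concrete, here's the single most common finding in that list: user input concatenated into SQL. The function and table names below are invented for illustration; the pattern (and the parameterized fix) is what a review flags.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # FLAGGED: user input interpolated directly into SQL — classic injection.
    # A username like "x' OR '1'='1" matches every row.
    return conn.execute(
        f"SELECT id, username FROM users WHERE username = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Fix: let the driver bind the value; input is treated as data, not SQL.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```

The fix is a one-line change, which is exactly why this check comes first: cheap to fix before merge, potentially catastrophic after.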
2. Correctness
Does the code actually do what it's supposed to do?
- Logic errors and edge cases
- Error handling (what happens when things fail?)
- Race conditions in concurrent code
- Off-by-one errors, null checks, type mismatches
I trace through the code mentally, imagining different inputs. What happens with empty data? What happens with malformed data? What happens at scale?
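Here's a small, hypothetical example of the kind of off-by-one that mental tracing catches: a batching helper that silently drops the trailing partial batch whenever the input length isn't a multiple of the batch size.

```python
def batches_buggy(items, size):
    # FLAGGED: the range stops one batch early, so a trailing partial
    # batch is silently dropped — batches_buggy([0,1,2,3,4], 2) loses [4].
    return [items[i:i + size] for i in range(0, len(items) - size, size)]

def batches_fixed(items, size):
    # Fix: iterate over the full length; slicing handles the short tail.
    # Also behaves correctly on empty input (returns []).
    return [items[i:i + size] for i in range(0, len(items), size)]
```

Tracing with an empty list, a list shorter than `size`, and a list one element past a multiple of `size` is usually enough to expose bugs like this.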
3. Performance
Code that works but crawls isn't production-ready.
- N+1 queries and database inefficiencies
- Unnecessary loops or computations
- Memory leaks or unbounded growth
- Missing indexes or caching opportunities
I flag performance issues with severity ratings. Some are critical (this will timeout in production), others are optimization opportunities for later.
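The N+1 pattern deserves an example because it passes every functional test and only falls over at scale. This sketch uses sqlite3 and invented table names; the shape is the same in any ORM or query layer.

```python
import sqlite3

def order_totals_n_plus_one(conn, user_ids):
    # FLAGGED: one query per user — 1,000 users means 1,000 round trips.
    totals = {}
    for uid in user_ids:
        row = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE user_id = ?",
            (uid,),
        ).fetchone()
        totals[uid] = row[0]
    return totals

def order_totals_batched(conn, user_ids):
    # Fix: a single GROUP BY query; the database does the aggregation.
    if not user_ids:
        return {}
    placeholders = ",".join("?" * len(user_ids))
    rows = conn.execute(
        f"SELECT user_id, SUM(amount) FROM orders "
        f"WHERE user_id IN ({placeholders}) GROUP BY user_id",
        list(user_ids),
    ).fetchall()
    totals = {uid: 0 for uid in user_ids}  # users with no orders stay at 0
    totals.update(dict(rows))
    return totals
```

Both functions return identical results; only the query count differs, which is why this class of bug rarely shows up in tests against small fixtures.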
4. Maintainability
This is about the next person who reads this code — including future you.
- Clear naming and structure
- Appropriate abstractions (not too clever, not too repetitive)
- Comments where needed (and not where obvious)
- Consistent style with the codebase
- Test coverage for critical paths
Maintainability issues are lower priority than bugs, but they compound. Messy code today means slower development tomorrow.
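A quick before/after sketch of what "clear naming" means in practice. The code is invented for illustration; both versions behave identically, which is the point: maintainability findings are about the reader, not the machine.

```python
# FLAGGED: opaque names and a magic number force the reader
# to reverse-engineer the intent.
def proc(d, t):
    return [x for x in d if x[1] > t * 86400]

# Fix: the names carry the meaning, the constant gets a name,
# and the tuple gets unpacked.
SECONDS_PER_DAY = 86400

def entries_older_than(entries, max_age_days):
    """Return (name, age_seconds) entries older than max_age_days."""
    max_age_seconds = max_age_days * SECONDS_PER_DAY
    return [(name, age) for name, age in entries if age > max_age_seconds]
```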
How I prioritize findings
Not all issues are equal. I use a simple severity system:
| Severity | Meaning | Action |
|---|---|---|
| Critical | Security flaw or data loss risk | Must fix before merge |
| High | Bug that will cause problems | Should fix before merge |
| Medium | Performance or correctness concern | Fix soon, maybe not this PR |
| Low | Style, naming, minor improvements | Nice to have |
This keeps reviews actionable. You know exactly what needs attention now versus what can wait.
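The severity table maps naturally onto code. This is a hypothetical sketch of how I'd model it, not a tool I ship: an ordered enum plus two helpers that encode the "must fix / should fix before merge" rule and keep the report sorted by what matters most.

```python
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1       # style, naming, minor improvements
    MEDIUM = 2    # performance or correctness concern
    HIGH = 3      # bug that will cause problems
    CRITICAL = 4  # security flaw or data loss risk

def blocks_merge(severity):
    # Critical must be fixed and High should be fixed before merge,
    # so both block by default.
    return severity >= Severity.HIGH

def sort_findings(findings):
    # Highest severity first, so the report leads with what matters.
    return sorted(findings, key=lambda f: f["severity"], reverse=True)
```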
My process
Here's the actual workflow:
1. Understand context (5 min)
Before looking at code, I read the PR description. What problem is this solving? What's the expected behavior? This frames everything that follows.
2. High-level scan (10 min)
I look at the file structure, the size of changes, the areas touched. This tells me where to focus. A small change to auth logic gets more scrutiny than a large CSS refactor.
3. Detailed review (30-45 min)
Line by line through the critical paths, checking the four areas above: security, correctness, performance, maintainability. I leave inline comments as I go.

4. Synthesis (10 min)
I step back and consider the change as a whole. Does it fit the architecture? Are there patterns that should be extracted? Any systemic issues beyond individual lines?
5. Write summary (10 min)
I compile findings into a structured report: executive summary, critical issues, detailed findings with code references, and recommended next steps.
What you receive
The deliverable isn't just GitHub comments. You get a written report with:
- Executive summary — One paragraph on overall quality and top concerns
- Findings table — Every issue with severity, location, and recommendation
- Code examples — Specific fixes, not just "this is wrong"
- Prioritized action items — What to fix first, what can wait
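For concreteness, here's roughly the shape of a single entry in that findings table, sketched as a dataclass with a renderer for one markdown row. The field names are my own illustration of the severity/location/recommendation structure described above.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str        # "Critical" | "High" | "Medium" | "Low"
    location: str        # e.g. "app/auth.py:42"
    issue: str           # what's wrong
    recommendation: str  # the specific fix, not just "this is wrong"

def to_markdown_row(f: Finding) -> str:
    # Renders one row of the findings table in the written report.
    return f"| {f.severity} | {f.location} | {f.issue} | {f.recommendation} |"
```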
Want to see this in action? Hire me for a code review.
Why this matters
A thorough code review catches bugs before they become incidents. It surfaces security issues before they become breaches. It improves code quality before technical debt accumulates.
The hour or two I spend reviewing could save days of debugging later.
If you want fresh eyes on your code — whether it's a critical PR, a new feature, or a codebase you inherited — let's talk.