Code review intimidated me at first. What if I miss something obvious? What if my feedback is wrong? What if I'm too harsh—or too soft?

A year in, I've developed a practice that works. Not perfect. Not comprehensive. But consistent, sustainable, and improving.

Here's how I approach code review as a junior engineer still figuring it out.

What I Look For

I used to try reviewing everything at once. That doesn't work. Your eyes glaze over. You miss the important stuff while catching typos.

Now I do multiple passes, each with a specific focus.

Pass 1: Clarity

Can I understand what this code does without running it?

This is the most valuable thing I can check. If I can't follow the logic, either I'm missing context—or the code needs work. Both are worth surfacing.

Specific things I notice:

  • Naming. Does processData tell me what it actually processes? Does temp have a better name hiding inside it?
  • Function length. If I have to scroll, something can probably be extracted.
  • Nesting depth. Three levels of if statements inside a loop? That's a code smell.
  • Comments that explain why, not what. // increment counter is useless. // retry limit prevents infinite loops is useful.

I don't nitpick style when there's a formatter. But I do flag things that made me slow down and re-read.
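To make the clarity pass concrete, here's a hedged sketch of the kind of rename-and-extract I might suggest. The function and field names are invented for illustration; the point is the shape of the comment, not this exact code.

```python
# Before: a vague name (process_data, temp) and three levels of nesting
# force the reader to simulate the loop in their head.
def process_data(items):
    temp = []
    for item in items:
        if item is not None:
            if item.get("active"):
                if item.get("score", 0) > 0:
                    temp.append(item["score"])
    return temp

# After: the name says what comes out, and a small predicate flattens the nesting.
def active_item_scores(items):
    def is_scorable(item):
        return item is not None and item.get("active") and item.get("score", 0) > 0
    return [item["score"] for item in items if is_scorable(item)]
```

Both versions behave identically; the second just costs the reader less. That's the kind of thing I flag only when it actually made me slow down.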

Pass 2: Edge Cases

What happens when the input is weird?

This is where I've caught the most actual bugs. The happy path usually works—developers test that while building. The edges are where things break.

I ask myself:

  • What if this list is empty?
  • What if this string contains special characters?
  • What if this number is negative? Zero? Really large?
  • What if this API call fails?
  • What if the user double-clicks?

I'm not paranoid about every edge case—that way lies over-engineering. But I do note the ones that seem likely in practice.
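Here's a hedged sketch of what that mental checklist looks like against a hypothetical function (the name and fields are mine, not from any real PR). The empty-list and missing-key cases are exactly the kind of thing the happy path hides:

```python
# Hypothetical function under review: average discount across orders.
def average_discount(orders):
    # Empty input is the first edge: without this guard, len(orders) is zero
    # and the division below raises ZeroDivisionError.
    if not orders:
        return 0.0
    return sum(o.get("discount", 0) for o in orders) / len(orders)

# The edges I'd probe, as quick checks:
assert average_discount([]) == 0.0                   # empty list
assert average_discount([{}]) == 0.0                 # missing key
assert average_discount([{"discount": -5}]) == -5.0  # negative: is that valid here?
```

If the author hasn't handled one of these and it seems likely in practice, that becomes a comment. If it's genuinely impossible given the callers, I let it go.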

Pass 3: Tests

Does the test coverage match the complexity?

I check:

  • Are there tests at all? For anything non-trivial, there should be.
  • Do the tests cover the changes? New code paths need new test cases.
  • Do the tests actually test behavior? assert result is not None doesn't tell me much.
  • Are edge cases from Pass 2 tested? If I thought of an edge case, did the author?

I don't demand 100% coverage. I do note when critical paths are untested.
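The gap between a test that exists and a test that tests behavior is easiest to see side by side. A hedged sketch, with a made-up function for illustration:

```python
# Hypothetical function: split a comma-separated string into clean tags.
def parse_tags(raw):
    return [t.strip() for t in raw.split(",") if t.strip()]

def test_weak():
    # Passes for almost any implementation, including broken ones.
    result = parse_tags("a, b")
    assert result is not None

def test_behavior():
    # Pins down the actual contract, including edges from Pass 2.
    assert parse_tags("a, b") == ["a", "b"]  # exact output
    assert parse_tags("") == []              # empty input
    assert parse_tags(" , ,a") == ["a"]      # messy input
```

The weak test would still pass if parse_tags returned the raw string unchanged. When I see assertions like that on a critical path, I ask whether the test could be tightened.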

How I Give Feedback

This is where I've grown the most. Early on, my reviews were either too vague ("this seems off") or accidentally abrasive ("why would you do it this way?").

Three principles guide my feedback now.

Specific

Bad: "This function is confusing."

Better: "I had trouble following the control flow here—specifically, the early return on line 47 surprised me because it's not clear when shouldExit would be true at that point."

Specific feedback is actionable. Vague feedback creates defensive conversations.

I include line numbers. I quote the relevant code. I explain exactly what confused me or concerned me. The author should know precisely what to look at.

Actionable

Bad: "This could be cleaner."

Better: "Consider extracting lines 23-31 into a validateUserInput() function—it would make the main flow easier to follow and make this validation logic reusable."

I suggest a path forward when I can. Not a mandate—they might have good reasons to do it differently. But a concrete option is more helpful than an abstract observation.

When I don't have a specific suggestion, I say so: "I'm not sure what the fix is, but this feels like it could cause issues with concurrent requests." Honest uncertainty is better than vague prescription.

Kind

Bad: "This is wrong."

Better: "I think there might be an issue here—when items is empty, items[0] will throw. Did you mean to handle that case?"

Same information. Different framing.

I assume the author is smart and had reasons. I phrase things as questions when I'm uncertain. I acknowledge what works well, not just what doesn't.

This isn't about being soft or avoiding criticism. It's about being effective. Feedback that triggers defensiveness doesn't get acted on. Feedback that feels collaborative does.

I use "I" statements: "I found this confusing" instead of "this is confusing." The first is my experience. The second is a judgment.

Time Boxing

Unlimited review time produces diminishing returns. After a while, you're not catching real issues—you're inventing concerns.

My approach:

  • Small PRs (< 200 lines): 10-15 minutes max
  • Medium PRs (200-500 lines): 20-30 minutes max
  • Large PRs (500+ lines): Flag for splitting, or schedule a synchronous walkthrough

When I hit my time limit, I submit what I have. Partial feedback now beats perfect feedback tomorrow.

If a PR is too big to review in one sitting, that's feedback in itself. "This is hard to review as one unit—could we split the refactor from the feature addition?" That's a legitimate request.

I also try to review within 24 hours of being assigned. Stale PRs create merge conflicts. They also sit in the author's mental stack, blocking them from moving on. Fast turnaround matters.

Reviewing My Own PRs First

Before I request review, I review my own code.

I literally open the PR diff and read it like I'm reviewing someone else's work. Fresh eyes catch things. The diff view shows code differently than your editor does.

Things I catch in self-review:

  • Debug statements I forgot to remove
  • Comments that are now outdated
  • Naming that made sense while coding but looks weird now
  • Missing test cases that are obvious in hindsight
  • That TODO I left for future-me

Self-review also makes me a better teammate. When I catch my own issues, I don't waste reviewers' time on obvious stuff. They can focus on the deeper questions.

Plus, writing code and reviewing code are different mental modes. Switching modes surfaces things. I've caught bugs in self-review that I'd been looking at for hours while coding.

Learning From Others

The biggest benefit of code review isn't catching bugs. It's exposure.

Every PR I review teaches me something:

  • Patterns I hadn't seen. "Oh, you can use a context manager for that?"
  • Libraries I didn't know. "What's this functools.cache doing?"
  • Approaches I wouldn't have tried. "I would have used inheritance, but composition works better here."

I keep notes on things I learn from reviews. Not formally—just jotting down techniques that surprised me. Over time, this compounds. My code gets better because I've seen more code.

I also learn from how others review my code. When someone leaves great feedback, I notice the pattern. Specific, actionable, kind. When someone leaves unhelpful feedback, that's educational too—it shows me what not to do.

The best reviewers I work with treat review as mentorship. They don't just catch bugs; they teach. I try to do the same, even as a junior. When I learn something from a PR, I say so: "TIL you can use setdefault here instead of checking if key in dict. Nice."
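That setdefault tip, sketched out so the "TIL" makes sense (the grouping example is mine):

```python
words = ["apple", "avocado", "banana"]

# Before: explicit membership check before every append.
groups = {}
for w in words:
    if w[0] not in groups:
        groups[w[0]] = []
    groups[w[0]].append(w)

# After: setdefault returns the existing list, or inserts and returns a new one.
groups2 = {}
for w in words:
    groups2.setdefault(w[0], []).append(w)

assert groups == groups2 == {"a": ["apple", "avocado"], "b": ["banana"]}
```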

A Practice, Not a Checklist

None of this is a rigid system. It's a practice—something I'm developing through repetition and reflection.

Some days I miss things. Some feedback lands wrong despite good intentions. Some PRs still take too long to review.

That's okay. The goal isn't perfection. It's getting a little better at reviewing code each time I do it.

What I've found: the more deliberate I am about how I review, the more valuable my reviews become. Not just for the author—for me.

Code review is a skill. Like any skill, it improves with practice.


What's your approach to code review? I'm still learning—always interested in how others handle it.
