# Code Review Best Practices: PR Culture, Checklists, and Async Review That Works
Build a code review culture that improves code quality without slowing down shipping. Covers PR size, review checklists, async vs sync review, automated gates, and review health metrics.
Code review is one of the highest-leverage engineering activities: it spreads knowledge, catches bugs before production, and enforces standards. It is also one of the most frequently mishandled: PRs that sit for days, reviews that focus on style instead of logic, nitpicks that delay shipping without improving quality.
The goal is a review culture where good code gets merged quickly and bad code gets caught before it matters. This guide covers what that looks like in practice.
## Why Code Review Fails
| Problem | Symptom | Root Cause |
|---|---|---|
| PRs sit for days | Review queue grows, engineers context-switch | No SLA, no culture expectation |
| Reviews nitpick style | Comments on formatting, naming conventions | No automated formatting, fuzzy standards |
| Large PRs never get good reviews | Reviewer fatigue, rubber-stamp approvals | No PR size discipline |
| Reviews block shipping | Fear of "breaking things" delays merges | Overly conservative approval culture |
| Knowledge stays siloed | Only one person can review certain code | No rotation, no documentation |
The antidote to most of these is structure: clear expectations, automated enforcement of automatable things, and human review focused on what automation can't catch.
## PR Size: The Single Biggest Lever
Large PRs are the root cause of slow, shallow reviews. A 1,200-line PR takes 2-3 hours to review properly, which means it either gets rubber-stamped or sits in the queue until someone has a free afternoon.
PR size guidelines:
- < 200 lines: Ideal. Full review in 20-30 minutes.
- 200-400 lines: Acceptable. Review in 45-60 minutes.
- 400โ800 lines: Large. Split if possible; reviewers need full focus.
- > 800 lines: Flag for conversation. This is a process issue, not just a review issue.
How to keep PRs small in practice:
```markdown
# Feature: User export functionality
PR 1/3: Add export data models and DB schema (150 lines)
- Add ExportJob model to Prisma schema
- Add migration
- Add basic CRUD operations
PR 2/3: Implement export queue and worker (280 lines)
- BullMQ job for CSV generation
- Export service class
- Unit tests
PR 3/3: Add API endpoints and frontend (320 lines)
- POST /exports endpoint
- Export status polling
- Download button in UI
- E2E test
```
Each PR is reviewable independently and can be merged without waiting for the others; the feature stays behind a feature flag until all three merge.
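The flag guard itself can be as simple as an environment-driven check. A minimal sketch, assuming a hypothetical `isEnabled` helper and a flag named `user-export` (a real flag service like LaunchDarkly or Unleash would replace this in production):

```typescript
// flags.ts: a minimal env-driven flag helper (illustrative sketch;
// a real flag service adds targeting, gradual rollouts, kill switches).
const enabledFlags = new Set(
  (process.env.FEATURE_FLAGS ?? "").split(",").map((f) => f.trim()),
);

export function isEnabled(flag: string): boolean {
  return enabledFlags.has(flag);
}

// exports-endpoint.ts: each of the three PRs can merge safely because
// the endpoint stays dark until FEATURE_FLAGS includes "user-export".
import { isEnabled } from "./flags";

export function handleCreateExport(): { status: number; body: string } {
  if (!isEnabled("user-export")) {
    return { status: 404, body: "Not found" }; // feature still dark
  }
  // ...enqueue the export job (PR 2/3) and return its id (PR 3/3)
  return { status: 202, body: "Export queued" };
}
```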
---
## Automating What Should Be Automated
Human reviewers should not be enforcing formatting, import order, or test coverage minimums. That's what tooling is for. Set up these gates before adding human review to the workflow:
```yaml
# .github/workflows/pr-checks.yml
name: PR Checks
on: [pull_request]
jobs:
  automated-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # pnpm must be installed before setup-node can use the pnpm cache
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      # Format: never let a human comment on this
      - name: Check formatting
        run: pnpm prettier --check .
      # Lint: catch unused vars and import issues
      - name: Lint
        run: pnpm eslint . --max-warnings 0
      # Types: no TypeScript errors
      - name: Type check
        run: pnpm tsc --noEmit
      # Tests: must pass, with a coverage minimum
      - name: Test
        run: pnpm test --coverage --coverageThreshold='{"global":{"lines":80}}'
      # Bundle size: prevent accidental large imports
      - name: Bundle size check
        run: pnpm bundlesize
      # Security: catch known vulnerabilities
      - name: Audit dependencies
        run: pnpm audit --audit-level=high
```
**PR template** (`.github/pull_request_template.md`):

```markdown
## What does this PR do?

## How to test

## Checklist
- [ ] Tests added/updated for new behavior
- [ ] No new `console.log` or debug code
- [ ] No hardcoded secrets or API keys
- [ ] Breaking changes documented in CHANGELOG
- [ ] Feature flag added if this is a large/risky change
- [ ] DB migrations are backward-compatible (no column drops in this PR)

## Screenshots (if UI changes)

## Related issues
```
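Template compliance and the PR size guidance above can also be enforced mechanically rather than by reviewers. A sketch using danger-js (a real tool; the thresholds and required section names here are assumptions matching this guide):

```typescript
// dangerfile.ts: run via `danger ci` as one of the PR checks.
import { danger, warn, fail } from "danger";

const pr = danger.github.pr;
const linesChanged = pr.additions + pr.deletions;

// Mirror the size guidance: nudge past 400 lines, block past 800.
if (linesChanged > 800) {
  fail(`This PR changes ${linesChanged} lines; split it (see the PR size guidelines).`);
} else if (linesChanged > 400) {
  warn(`This PR changes ${linesChanged} lines; consider splitting.`);
}

// Flag PRs that deleted the template instead of filling it in.
const body = pr.body ?? "";
for (const section of ["What does this PR do?", "How to test"]) {
  if (!body.includes(section)) {
    warn(`PR description is missing the "${section}" section from the template.`);
  }
}
```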
---
## The Review Checklist: What Humans Should Focus On
After automation handles style, formatting, and test coverage, human review should cover:
```markdown
# Code Review Checklist

## Correctness
- Does the implementation match what was described in the PR?
- Are edge cases handled? (empty lists, null values, concurrent requests)
- Is error handling appropriate? (not swallowed, not over-broad)
- Are race conditions possible? (especially in async/parallel code)

## Security
- Is user input validated and sanitized?
- Are authorization checks in the right place? (not just UI-level)
- Are secrets handled correctly? (env vars, not hardcoded)
- Could this introduce SQL injection, XSS, or other injection vulnerabilities?

## Performance
- Are there N+1 query patterns? (loops that make per-item DB calls)
- Is pagination applied where data could be large?
- Are expensive operations cached where appropriate?

## Maintainability
- Will someone unfamiliar with this code understand it in 6 months?
- Are functions and variables named clearly?
- Is there duplication that should be extracted?

## Architecture
- Does this fit the existing patterns in the codebase?
- Are there abstraction leaks? (implementation details in the wrong layer)
- Is the scope of the change appropriate? (not solving tomorrow's problem)

## Tests
- Do tests cover the happy path and key failure modes?
- Are tests testing behavior (not implementation details)?
- Are test descriptions meaningful?
```
Not every item applies to every PR. Use judgment to focus on what matters for the specific change.
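Of the performance items, the N+1 query pattern is the one reviewers miss most often, so it helps to know its shape. A sketch using Prisma to match the stack above; the `exportJob` and `user` models and the `userId` field are illustrative, not from a real schema:

```typescript
import { PrismaClient } from "@prisma/client";

// N+1: one query for the jobs, then one more query per job for its user.
// Easy to miss in review because each line looks innocent on its own.
async function listJobsNPlusOne(prisma: PrismaClient) {
  const jobs = await prisma.exportJob.findMany();
  return Promise.all(
    jobs.map(async (job) => ({
      ...job,
      user: await prisma.user.findUnique({ where: { id: job.userId } }),
    })),
  );
}

// Fixed: a single findMany with a relation include; the query count
// stays constant no matter how many jobs there are.
async function listJobs(prisma: PrismaClient) {
  return prisma.exportJob.findMany({ include: { user: true } });
}
```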
---
## Giving Useful Feedback
The way feedback is delivered determines whether review improves code quality or just creates friction.
**Comment prefixes that clarify intent:**
```markdown
# Blocking: must be addressed before merge
MUST: This will cause a data race if two requests arrive simultaneously.
Add a distributed lock or use a database-level constraint.
# Non-blocking: address or explain, but won't block merge
SHOULD: This function is doing three things. Consider splitting into
`validateInput()`, `processData()`, and `saveResult()`.
# Genuine question, not criticism
QUESTION: Is there a reason we're not using the existing `formatCurrency`
utility here? Just checking if I'm missing something.
# Positive callout
NICE: Clean handling of the concurrent case with the optimistic lock.
This is the right approach here.
# Learning opportunity: explains why, not just what
FYI: `Object.keys()` returns strings even when the keys were numbers.
If you need to iterate with numeric keys, use `Object.entries()`
and coerce with `Number()`. Not a bug here since you're using the `in`
operator, but worth knowing.
```
The rule of thumb: if you'd write the same code you're commenting on, you don't need to comment. Only flag things that actually matter.
## Review SLAs: The Culture Shift
The biggest process improvement most teams can make: define explicit expectations for review turnaround.
Common SLA structure:
- P1 (hotfix): Review within 2 hours, any available engineer
- P2 (feature): First review within 1 business day
- P3 (refactor/chore): Review within 2 business days
- Draft PR: No SLA; author will signal when ready
Implementation: Add to your engineering handbook. Make it visible. Track SLA compliance in your analytics (GitHub provides this data).
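One way to make the SLA more than a handbook entry is a scheduled job that nudges PRs past the window without a first review. A sketch using Octokit; the org/repo names, the 24-hour threshold, and the comment wording are assumptions:

```typescript
// sla-reminder.ts: run hourly (e.g. via a GitHub Actions cron schedule).
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const owner = "your-org";   // assumption: replace with your org
const repo = "your-repo";   // assumption: replace with your repo
const SLA_HOURS = 24;       // the P2 window from the list above

async function nudgeStalePRs(): Promise<void> {
  const { data: prs } = await octokit.rest.pulls.list({
    owner, repo, state: "open", per_page: 100,
  });
  for (const pr of prs) {
    if (pr.draft) continue; // drafts carry no SLA
    const { data: reviews } = await octokit.rest.pulls.listReviews({
      owner, repo, pull_number: pr.number,
    });
    const ageHours = (Date.now() - new Date(pr.created_at).getTime()) / 36e5;
    if (reviews.length === 0 && ageHours > SLA_HOURS) {
      await octokit.rest.issues.createComment({
        owner, repo, issue_number: pr.number,
        body: `This PR has waited ${Math.round(ageHours)}h for a first review (SLA: ${SLA_HOURS}h).`,
      });
    }
  }
}

nudgeStalePRs().catch(console.error);
```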
**Async review for distributed teams:**
```markdown
# When leaving an async review:

1. State your timezone and availability:
   "Reviewed async from UTC+5:30. Available for sync discussion
   Monday 10am-12pm UTC if needed."

2. Be complete in your review; don't leave "let's discuss":
   ❌ "I have some concerns about the architecture here."
   ✅ "I'm concerned that adding caching at this layer means we'll need
      to invalidate across three services on update. Have you considered
      handling caching at the API gateway instead?"

3. Mark clearly what's blocking vs non-blocking (see prefixes above)

4. Approve or request changes; don't leave in limbo:
   ❌ Comment-only review with no approval/change request
   ✅ "Approved with non-blocking suggestions; feel free to merge or
      address, your call."
```
## Metrics to Track Review Health
```sql
-- Average time from PR open to first review (GitHub data via API or export)
SELECT
  DATE_TRUNC('week', created_at) AS week,
  AVG(EXTRACT(EPOCH FROM (first_reviewed_at - created_at)) / 3600) AS avg_hours_to_first_review,
  PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY
    EXTRACT(EPOCH FROM (first_reviewed_at - created_at)) / 3600
  ) AS p95_hours_to_first_review,
  COUNT(*) AS pr_count
FROM pull_requests
WHERE merged_at IS NOT NULL
GROUP BY week
ORDER BY week DESC;
```
Target metrics:
- Time to first review: median < 4 hours, p95 < 24 hours
- PR cycle time (open to merge): median < 2 days
- PR size: median < 300 lines
- Review comments per PR: 3-8 (fewer = rubber stamp, more = friction)
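The query above assumes a `pull_requests` table with a `first_reviewed_at` column; GitHub doesn't export that directly, so it has to be derived from the reviews API. A sketch of that extraction step with Octokit (the column names match the SQL; the rest is an assumption):

```typescript
// extract-review-metrics.ts: derive first_reviewed_at for merged PRs.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

type PrRow = {
  number: number;
  created_at: string;
  merged_at: string;
  first_reviewed_at: string | null;
};

async function firstReviewTimes(owner: string, repo: string): Promise<PrRow[]> {
  const { data: prs } = await octokit.rest.pulls.list({
    owner, repo, state: "closed", per_page: 100,
  });
  const rows: PrRow[] = [];
  for (const pr of prs) {
    if (!pr.merged_at) continue; // mirrors the WHERE clause in the SQL
    const { data: reviews } = await octokit.rest.pulls.listReviews({
      owner, repo, pull_number: pr.number,
    });
    // ISO-8601 timestamps sort lexicographically, so the earliest is [0].
    const first = reviews
      .map((r) => r.submitted_at)
      .filter((t): t is string => Boolean(t))
      .sort()[0];
    rows.push({
      number: pr.number,
      created_at: pr.created_at,
      merged_at: pr.merged_at,
      first_reviewed_at: first ?? null,
    });
  }
  return rows; // load into pull_requests, then run the query above
}
```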
## Review Rotation and Knowledge Spreading
Code review is a knowledge-spreading mechanism. If the same two engineers review all the code, knowledge stays siloed.
Rotation strategies:
- Random assignment (via CODEOWNERS + random selection): Forces broad knowledge
- Area ownership + rotation: Primary reviewer is the owner; secondary rotates
- New-hire pairing: New engineers always review with a senior for their first month
CODEOWNERS file (GitHub auto-requests review from owners):
```
# Default owners for everything
* @org/engineering-team

# Payment code: always needs a senior review
/src/payments/ @alice @bob

# Infrastructure changes: DevOps team review
/terraform/ @devops-team
/.github/ @devops-team

# Frontend components
/src/components/ @frontend-team
```
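GitHub teams can also do round-robin or load-balanced assignment natively (team settings, Code review), but a script gives more control. A sketch that requests a random reviewer on new PRs, excluding the author; the reviewer pool here is illustrative:

```typescript
// assign-reviewer.ts: call from a `pull_request: opened` workflow job.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const REVIEWER_POOL = ["alice", "bob", "carol", "dave"]; // assumption

export async function assignRandomReviewer(
  owner: string,
  repo: string,
  pullNumber: number,
  author: string,
): Promise<void> {
  const candidates = REVIEWER_POOL.filter((r) => r !== author);
  const pick = candidates[Math.floor(Math.random() * candidates.length)];
  await octokit.rest.pulls.requestReviewers({
    owner, repo, pull_number: pullNumber, reviewers: [pick],
  });
}
```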
## Working With Viprasol
When we join client engineering teams, we often help establish or improve code review culture: defining review checklists, setting up automated gates, establishing SLAs, and training teams on effective feedback. A week of process improvement can cut review cycle times in half.
→ Talk to our team about engineering process improvements.
## See Also
- CI/CD Pipeline Setup: automated gates that make review easier
- Technical Writing for Developers: PR descriptions and changelogs
- Hiring Software Engineers: using code review as a hiring signal
- Developer Experience: review tooling as DX investment
- Web Development Services: engineering team support
## About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.