
Senior Code Reviewer

Acts as a senior engineer reviewing your PR. Catches logic errors, security issues, and code smells.

What it does

The Senior Code Reviewer prompt turns any Claude session — in the browser or in the terminal — into a structured pull request reviewer. It reads the diff through the lens of a principal engineer: correctness first, then security, performance, error handling, design, tests, and style. It refuses to produce 'looks good' reviews, and it refuses to drown you in nits while missing a real issue.

What the prompt generates

For every issue Claude finds, you get a labelled entry in a consistent format: severity emoji (🔴 CRITICAL, 🟡 WARNING, 🔵 SUGGESTION), file and approximate line number, what the issue is in one sentence, why it matters in one sentence, and a concrete suggested fix (often a before/after snippet). The review ends with a one-paragraph summary, a count by severity, and a clear verdict — ✅ Approve, ⚠️ Request changes, or 💬 Needs discussion.

Who should use it

Solo developers and small teams without a dedicated reviewer. Senior engineers who want to double-check a high-risk PR before merge. Open source maintainers reviewing contributor PRs. Teams adopting async review who need a first-pass reviewer before a human reads the diff. The prompt is language-agnostic — works on TypeScript, Python, Go, Rust, Swift, Dart, Kotlin, Java, Ruby, and anything else.

What it solves

Most code review is either too generous ('LGTM') or too nitpicky (twelve naming suggestions while missing a SQL injection). This prompt enforces the right severity taxonomy and the right review order, so the review you get is the review you want: blockers called out, warnings separated, suggestions organized, verdict clear.

How to install

Which tool are you using?

Not sure? Claude.ai is the website. Claude Code is the command-line tool you install separately. Cursor is a code editor that reads .cursorrules.

  1. Open Claude.ai

    Go to claude.ai in your browser and click 'New conversation'.

    You need a free Claude account. If you do not have one, sign up at claude.ai — it is free.

  2. Copy the prompt

    Scroll to the prompt file section below and click 'Copy'. The entire prompt goes to your clipboard.

    The copy button gets the full prompt — it is long because it encodes the whole review methodology.

  3. Paste into Claude

    Click the chat input box and paste the prompt with Cmd+V (Mac) or Ctrl+V (Windows/Linux). Do not press Enter yet.

  4. Replace the placeholder with your diff

    At the end of the prompt, replace [PASTE DIFF HERE] with your actual diff. You can get it from GitHub by appending .diff to the pull request URL, or locally with git diff main | pbcopy on a Mac.

    For huge diffs (2000+ lines), paste one file at a time. Review quality drops if you dump everything at once.

  5. Send and get your structured review

    Press Enter. Claude returns a severity-labelled review: 🔴 CRITICAL, 🟡 WARNING, 🔵 SUGGESTION, each with file, line, what, why, and suggested fix — ending with an approval verdict.

    Keep the chat open. Ask follow-up questions on specific issues and Claude will stay in reviewer mode.
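
The diff commands from step 4 can be sketched end to end. The scratch repository below exists only so the example runs anywhere; on a real branch you would skip the setup and run just the two git diff lines. Branch and file names here are illustrative:

```shell
# Setup (illustrative only): a throwaway repo with a main branch and a feature branch
set -e
cd "$(mktemp -d)"
git init -q && git checkout -qb main
echo 'const x = 1;' > app.ts
git add app.ts
git -c user.email=you@example.com -c user.name=you commit -qm 'init'
git checkout -qb feature
echo 'const y = 2;' >> app.ts

# The part you need on a real branch:
git diff main > review.diff         # full diff, paste in place of [PASTE DIFF HERE]
git diff main -- app.ts > one.diff  # a single file, for very large PRs
```

On macOS, git diff main | pbcopy copies the diff straight to the clipboard; on Linux you can pipe to xclip -selection clipboard if it is installed.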

The Claude prompt file

Copy the full contents below, or download the file directly.

senior-code-reviewer.txt
You are a senior software engineer with 15+ years of experience reviewing pull requests across large, production-critical codebases. You review the way a principal engineer reviews: thorough, specific, and always tied to reasoning.

Review the diff I paste next. Work through every section in this exact order. Do not skip ahead. Do not combine sections.

# Review Sections

## 1. Correctness
- Does the code do what the PR description says?
- Off-by-one errors, null / undefined handling, boundary conditions
- Race conditions, re-entrancy, concurrency safety
- Error paths handled (not swallowed)
- Both happy path AND sad path covered

## 2. Security
- Untrusted input reaching SQL, shell, filesystem, or network
- Secrets, API keys, tokens visible in code, logs, or error messages
- Auth checks on every protected path — at every layer, not just the router
- CSRF, SSRF, XSS, open redirect surfaces
- Dependency additions: check for known CVEs

## 3. Performance
- N+1 queries in data access
- Unnecessary renders or allocations in hot paths
- Unbounded loops, unbounded memory, unbounded response sizes
- Missing pagination on list endpoints
- Synchronous I/O on async paths

## 4. Error Handling
- Every thrown path has an explicit catch or a documented propagation target
- Errors carry enough context (code + message + correlation id) to debug
- User-facing errors never leak stack traces or internals
- Retries use backoff; no infinite retries

## 5. Design
- Single responsibility per function
- No hidden global state
- Public API surface minimal and documented
- Naming reflects intent

## 6. Tests
- Coverage of new behavior (happy + sad paths + edge cases)
- Assertions are meaningful, not just "does not throw"
- No flakiness — deterministic inputs and outputs
- Mocks limited to boundaries you own or slow dependencies

## 7. Style
- Dead code removed
- Comments explain why, not what
- Consistent with the rest of the codebase

# Output Format

For EACH issue, use this exact structure:

🔴 CRITICAL — <file>:<line>
**What**: <one sentence describing the bug or risk>
**Why it matters**: <one sentence on the impact>
**Suggested fix**: <concrete change, or before/after snippet>

🟡 WARNING — <file>:<line>
**What**: <one sentence>
**Why it matters**: <one sentence>
**Suggested fix**: <concrete change>

🔵 SUGGESTION — <file>:<line>
**What**: <one sentence>
**Why it matters**: <one sentence>
**Suggested fix**: <concrete change>

# Severity Rules
- 🔴 CRITICAL = blocker: correctness bug, security hole, data loss, auth bypass
- 🟡 WARNING = should fix before merge: performance regression, missing error handling, weak tests
- 🔵 SUGGESTION = nice to have: style, naming, refactor opportunity

# Summary
End with ONE paragraph summarizing the overall change, total issue counts by severity, and a single-line verdict:
- ✅ **Approve** — safe to merge
- ⚠️ **Request changes** — has warnings or critical issues
- 💬 **Needs discussion** — requires team decision before proceeding

Every suggestion must follow "Consider X because Y" — never just "change this".

Diff to review:
[PASTE DIFF HERE]

Example output

What Claude does before and after you install this prompt.

Without this prompt

Claude says 'this looks good overall' and nitpicks a variable name while missing a blocker-level race condition.

With this prompt

Claude returns 1 🔴 CRITICAL (race condition), 2 🟡 WARNING (missing auth check, N+1 query), 3 🔵 SUGGESTION (naming). Each with file, line, what, why, suggested fix. Verdict: ⚠️ Request changes.

Customization tips

Tighten the review scope by removing sections you do not want audited (e.g. skip Performance for early-stage code). Add a domain-specific section for your stack — e.g. "Accessibility" for frontend-heavy codebases or "Data correctness" for analytics pipelines. Tune the severity thresholds if your team treats a missing test as CRITICAL rather than WARNING.

Frequently asked questions

Does it work on any programming language?

Yes. The prompt is language-agnostic — it reviews correctness, security, performance, and design regardless of stack. Add language-specific follow-ups if you want deeper coverage.

Can it review huge pull requests?

Review quality is best under ~1000 lines per turn. For larger PRs, split by file or feature and submit in sequence. You can keep the same chat so Claude has ongoing context.
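
One way to split a large PR, assuming your branch diverged from main, is one diff file per changed path. The loop below is a sketch; the scratch repo and file names exist only to make the example self-contained:

```shell
# Setup (illustrative only): a repo where two files changed on the feature branch
set -e
cd "$(mktemp -d)"
git init -q && git checkout -qb main
printf 'a\n' > api.py && printf 'b\n' > ui.py
git add . && git -c user.email=you@example.com -c user.name=you commit -qm 'init'
git checkout -qb feature
printf 'aa\n' >> api.py && printf 'bb\n' >> ui.py

# The reusable part: write one .diff per changed file, then paste them in sequence
mkdir -p diffs
git diff --name-only main | while read -r f; do
  git diff main -- "$f" > "diffs/$(echo "$f" | tr / _).diff"
done
ls diffs
```

Each file in diffs/ is then small enough to paste into the same chat as its own review turn.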

Does this replace a human code reviewer?

No. It catches many issues (often more than a tired reviewer) but business context and architectural direction still need a human. Use this as your first-pass reviewer, especially before opening a PR.

Is it safe to paste proprietary code into Claude?

Follow your employer's data policy. Claude's enterprise tier has no-retention options and SOC 2 compliance. For personal projects, the standard tier is fine.

Can I tune the severity thresholds for my team?

Yes. Edit the prompt: change what counts as CRITICAL vs WARNING (e.g. if your team treats a missing test as WARNING rather than CRITICAL). The severity definitions live in the prompt's Severity Rules section.

Want more like this?

Browse the full RohanKit library — free resources for Claude and Cursor.
