opencode-code-agent/skills/review/SKILL.md


name: review
description: "Pre-landing PR review. Analyzes diff against the base branch for SQL safety, LLM trust boundary violations, conditional side effects, and other structural issues. Use when asked to 'review this PR', …"

Pre-Landing PR Review

You are running the /review workflow. Analyze the current branch's diff against the base branch for structural issues that tests don't catch.


Step 1: Check branch

  1. Run git branch --show-current to get the current branch.
  2. If on the base branch, output: "Nothing to review — you're on the base branch or have no changes against it." and stop.
  3. Run git fetch origin <base> --quiet && git diff origin/<base> --stat to check if there's a diff. If no diff, output the same message and stop.
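
A minimal sketch of this step as one script, assuming the base branch is main (substitute the real base):

BASE=main
if [ "$(git branch --show-current)" = "$BASE" ]; then
  echo "Nothing to review — you're on the base branch or have no changes against it."
  exit 0
fi
git fetch origin "$BASE" --quiet
if [ -z "$(git diff "origin/$BASE" --stat)" ]; then
  echo "Nothing to review — you're on the base branch or have no changes against it."
  exit 0
fi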

Step 1.5: Scope Drift Detection

Before reviewing code quality, check: did they build what was requested — nothing more, nothing less?

  1. Read TODOS.md (if it exists). Read the PR description (gh pr view --json body --jq .body 2>/dev/null || true). Read the commit messages (git log origin/<base>..HEAD --oneline). If no PR exists, rely on commit messages and TODOS.md for stated intent — this is the common case, since /review runs before /ship creates the PR. (These inputs are collected into a single sketch at the end of this step.)

  2. Identify the stated intent — what was this branch supposed to accomplish?

  3. Run git diff origin/<base> --stat and compare the files changed against the stated intent.

  4. Evaluate with skepticism:

    SCOPE CREEP detection:

    • Files changed that are unrelated to the stated intent
    • New features or refactors not mentioned in the plan
    • "While I was in there..." changes that expand blast radius

    MISSING REQUIREMENTS detection:

    • Requirements from TODOS.md/PR description not addressed in the diff
    • Test coverage gaps for stated requirements
    • Partial implementations (started but not finished)
  5. Output (before the main review begins):

    Scope Check: [CLEAN / DRIFT DETECTED / REQUIREMENTS MISSING]
    Intent: <1-line summary of what was requested>
    Delivered: <1-line summary of what the diff actually does>
    [If drift: list each out-of-scope change]
    [If missing: list each unaddressed requirement]
    
  6. This is INFORMATIONAL — does not block the review. Proceed to Step 2.
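
For reference, a sketch that gathers the stated-intent inputs above in one pass (assumes the base branch is main and gh is installed; the cat and gh calls fail silently when their sources are missing):

BASE=main
cat TODOS.md 2>/dev/null || true
gh pr view --json body --jq .body 2>/dev/null || true
git log "origin/$BASE"..HEAD --oneline
git diff "origin/$BASE" --stat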


Step 2: Read the checklist

Read .claude/skills/review/checklist.md.

If the file cannot be read, STOP and report the error. Do not proceed without the checklist.


Step 2.5: Check for Greptile review comments

Read .claude/skills/review/greptile-triage.md and follow the fetch, filter, classify, and escalation detection steps.

If no PR exists, gh fails, API returns an error, or there are zero Greptile comments: Skip this step silently. Greptile integration is additive — the review works without it.

If Greptile comments are found: Store the classifications (VALID & ACTIONABLE, VALID BUT ALREADY FIXED, FALSE POSITIVE, SUPPRESSED) — you will need them in Step 5.


Step 3: Get the diff

Fetch the latest base branch to avoid false positives from stale local state:

git fetch origin <base> --quiet

Run git diff origin/<base> to get the full diff. This includes both committed and uncommitted changes against the latest base branch.


Step 4: Two-pass review

Apply the checklist against the diff in two passes:

  1. Pass 1 (CRITICAL): SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Enum & Value Completeness
  2. Pass 2 (INFORMATIONAL): Conditional Side Effects, Magic Numbers & String Coupling, Dead Code & Consistency, LLM Prompt Issues, Test Gaps, View/Frontend, Performance & Bundle Impact

Enum & Value Completeness requires reading code OUTSIDE the diff. When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.
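
For example, suppose the diff adds a hypothetical new status value "archived" alongside existing siblings "draft" and "published". A sketch of the search (value names and paths are illustrative; adjust to the project):

# Find every file that references a sibling value of the new enum member
grep -rl "published" app/ lib/ 2>/dev/null
# Read each hit and check whether "archived" is handled there
# (case/when or switch branches, scopes, serializers, view conditionals)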

Search-before-recommending: When recommending a fix pattern (especially for concurrency, caching, auth, or framework-specific behavior):

  • Verify the pattern is current best practice for the framework version in use
  • Check if a built-in solution exists in newer versions before recommending a workaround
  • Verify API signatures against current docs (APIs change between versions)

This takes seconds and prevents recommending outdated patterns. If WebSearch is unavailable, note it and proceed with in-distribution knowledge.

Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.


Step 4.5: Design Review (conditional, diff-scoped)

Check if the diff touches frontend files using gstack-diff-scope:

source <(${GSTACK_OPENCODE_DIR}/bin/gstack-diff-scope <base> 2>/dev/null)

If SCOPE_FRONTEND=false: Skip design review silently. No output.

If SCOPE_FRONTEND=true:

  1. Check for DESIGN.md. If DESIGN.md or design-system.md exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.

  2. Read .claude/skills/review/design-checklist.md. If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."

  3. Read each changed frontend file (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.

  4. Apply the design checklist against the changed files. For each item:

    • [HIGH] mechanical CSS fix (outline: none, !important, font-size < 16px): classify as AUTO-FIX
    • [HIGH/MEDIUM] design judgment needed: classify as ASK
    • [LOW] intent-based detection: present as "Possible — verify visually or run /design-review"
  5. Include findings in the review output under a "Design Review" header, following the output format in the checklist. Design findings merge with code review findings into the same Fix-First flow.

  6. Log the result for the Review Readiness Dashboard:

${GSTACK_OPENCODE_DIR}/bin/gstack-review-log '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M,"commit":"COMMIT"}'

Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if there are 0 findings, otherwise "issues_found", N = total findings, M = auto-fixed count, COMMIT = output of git rev-parse --short HEAD.
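
For example, a run with 3 findings and 1 auto-fix might log (the values are illustrative):

COMMIT=$(git rev-parse --short HEAD)
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
${GSTACK_OPENCODE_DIR}/bin/gstack-review-log "{\"skill\":\"design-review-lite\",\"timestamp\":\"$TIMESTAMP\",\"status\":\"issues_found\",\"findings\":3,\"auto_fixed\":1,\"commit\":\"$COMMIT\"}"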

  7. Codex design voice (optional, automatic if available):
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"

If Codex is available, run a lightweight design check on the diff:

TMPERR_DRL=$(mktemp /tmp/codex-drl-XXXXXXXX)
codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_DRL"

Use a 5-minute timeout (timeout: 300000). After the command completes, read stderr:

cat "$TMPERR_DRL" && rm -f "$TMPERR_DRL"

Error handling: All errors are non-blocking. On auth failure, timeout, or empty response — skip with a brief note and continue.

Present Codex output under a CODEX (design): header, merged with the checklist findings above.

Include any design findings alongside the findings from Step 4. They follow the same Fix-First flow in Step 5 — AUTO-FIX for mechanical CSS fixes, ASK for everything else.


Step 4.75: Test Coverage Diagram

100% coverage is the goal. Evaluate every codepath changed in the diff and identify test gaps. Gaps become INFORMATIONAL findings that follow the Fix-First flow.

Test Framework Detection

Before analyzing coverage, detect the project's test framework:

  1. Read CLAUDE.md — look for a ## Testing section with test command and framework name. If found, use that as the authoritative source.
  2. If CLAUDE.md has no testing section, auto-detect:
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
[ -f requirements.txt ] || [ -f pyproject.toml ] && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* cypress.config.* .rspec pytest.ini phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
  3. If no framework detected: still produce the coverage diagram, but skip test generation.

Step 1. Trace every codepath changed using git diff origin/<base>...HEAD:

Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution:

  1. Read the diff. For each changed file, read the full file (not just the diff hunk) to understand context.
  2. Trace data flow. Starting from each entry point (route handler, exported function, event listener, component render), follow the data through every branch:
    • Where does input come from? (request params, props, database, API call)
    • What transforms it? (validation, mapping, computation)
    • Where does it go? (database write, API response, rendered output, side effect)
    • What can go wrong at each step? (null/undefined, invalid input, network failure, empty collection)
  3. Diagram the execution. For each changed file, draw an ASCII diagram showing:
    • Every function/method that was added or modified
    • Every conditional branch (if/else, switch, ternary, guard clause, early return)
    • Every error path (try/catch, rescue, error boundary, fallback)
    • Every call to another function (trace into it — does IT have untested branches?)
    • Every edge: what happens with null input? Empty array? Invalid type?

This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test.

Step 2. Map user flows, interactions, and error states:

Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through:

  • User flows: What sequence of actions does a user take that touches this code? Map the full journey (e.g., "user clicks 'Pay' → form validates → API call → success/failure screen"). Each step in the journey needs a test.
  • Interaction edge cases: What happens when the user does something unexpected?
    • Double-click/rapid resubmit
    • Navigate away mid-operation (back button, close tab, click another link)
    • Submit with stale data (page sat open for 30 minutes, session expired)
    • Slow connection (API takes 10 seconds — what does the user see?)
    • Concurrent actions (two tabs, same form)
  • Error states the user can see: For every error the code handles, what does the user actually experience?
    • Is there a clear error message or a silent failure?
    • Can the user recover (retry, go back, fix input) or are they stuck?
    • What happens with no network? With a 500 from the API? With invalid data from the server?
  • Empty/zero/boundary states: What does the UI show with zero results? With 10,000 results? With a single character input? With maximum-length input?

Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else.

Step 3. Check each branch against existing tests:

Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it:

  • Function processPayment() → look for billing.test.ts, billing.spec.ts, test/billing_test.rb
  • An if/else → look for tests covering BOTH the true AND false path
  • An error handler → look for a test that triggers that specific error condition
  • A call to helperFn() that has its own branches → those branches need tests too
  • A user flow → look for an integration or E2E test that walks through the journey
  • An interaction edge case → look for a test that simulates the unexpected action
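
A quick way to locate candidate tests for a given branch, using the hypothetical processPayment example (adjust the include patterns to the detected framework):

grep -rl "processPayment" --include="*.test.ts" --include="*.spec.ts" --include="*_test.rb" --include="*_spec.rb" . 2>/dev/null
# Zero hits is a GAP; hits that only assert "it doesn't throw" score ★ at best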

Quality scoring rubric:

  • ★★★ Tests behavior with edge cases AND error paths
  • ★★ Tests correct behavior, happy path only
  • ★ Smoke test / existence check / trivial assertion (e.g., "it renders", "it doesn't throw")

E2E Test Decision Matrix

When checking each branch, also determine whether a unit test or E2E/integration test is the right tool:

RECOMMEND E2E (mark as [→E2E] in the diagram):

  • Common user flow spanning 3+ components/services (e.g., signup → verify email → first login)
  • Integration point where mocking hides real failures (e.g., API → queue → worker → DB)
  • Auth/payment/data-destruction flows — too important to trust unit tests alone

RECOMMEND EVAL (mark as [→EVAL] in the diagram):

  • Critical LLM call that needs a quality eval (e.g., prompt change → test output still meets quality bar)
  • Changes to prompt templates, system instructions, or tool definitions

STICK WITH UNIT TESTS:

  • Pure function with clear inputs/outputs
  • Internal helper with no side effects
  • Edge case of a single function (null input, empty array)
  • Obscure/rare flow that isn't customer-facing

REGRESSION RULE (mandatory)

IRON RULE: When the coverage audit identifies a REGRESSION — code that previously worked but the diff broke — a regression test is written immediately. No question. No skipping. Regressions are the highest-priority test because they prove something broke.

A regression is when:

  • The diff modifies existing behavior (not new code)
  • The existing test suite (if any) doesn't cover the changed path
  • The change introduces a new failure mode for existing callers

When uncertain whether a change is a regression, err on the side of writing the test.

Format: commit as test: regression test for {what broke}
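
For example (hypothetical test file and subject):

git add spec/models/post_spec.rb
git commit -m "test: regression test for status transition race condition"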

Step 4. Output ASCII coverage diagram:

Include BOTH code paths and user flows in the same diagram. Mark E2E-worthy and eval-worthy paths:

CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
    │
    ├── processPayment()
    │   ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
    │   ├── [GAP]         Network timeout — NO TEST
    │   └── [GAP]         Invalid currency — NO TEST
    │
    └── refundPayment()
        ├── [★★  TESTED] Full refund — billing.test.ts:89
        └── [★   TESTED] Partial refund (checks non-throw only) — billing.test.ts:101

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
    │
    ├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
    ├── [GAP] [→E2E] Double-click submit — needs E2E, not just unit
    ├── [GAP]         Navigate away during payment — unit test sufficient
    └── [★   TESTED]  Form validation errors (checks render only) — checkout.test.ts:40

[+] Error states
    │
    ├── [★★  TESTED] Card declined message — billing.test.ts:58
    ├── [GAP]         Network timeout UX (what does user see?) — NO TEST
    └── [GAP]         Empty cart submission — NO TEST

[+] LLM integration
    │
    └── [GAP] [→EVAL] Prompt template change — needs eval test

─────────────────────────────────
COVERAGE: 6/13 paths tested (46%)
  Code paths: 3/5 (60%)
  User flows: 3/8 (38%)
QUALITY:  ★★★: 2  ★★: 2  ★: 2
GAPS: 7 paths need tests (1 needs E2E, 1 needs eval)
─────────────────────────────────

Fast path: All paths covered → output "Step 4.75: All new code paths have test coverage ✓" and continue.

Step 5. Generate tests for gaps (Fix-First):

If test framework is detected and gaps were identified:

  • Classify each gap as AUTO-FIX or ASK per the Fix-First Heuristic:
    • AUTO-FIX: Simple unit tests for pure functions, edge cases of existing tested functions
    • ASK: E2E tests, tests requiring new test infrastructure, tests for ambiguous behavior
  • For AUTO-FIX gaps: generate the test, run it, commit as test: coverage for {feature} (see the sketch after this list)
  • For ASK gaps: include in the Fix-First batch question with the other review findings
  • For paths marked [→E2E]: always ASK (E2E tests are higher-effort and need user confirmation)
  • For paths marked [→EVAL]: always ASK (eval tests need user confirmation on quality criteria)
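
A minimal sketch of the AUTO-FIX flow for a single gap, assuming a Node project with Vitest and reusing the hypothetical billing example from the diagram above:

# Generate the missing test, then run only that file before committing
npx vitest run src/services/billing.test.ts
git add src/services/billing.test.ts
git commit -m "test: coverage for invalid currency handling in processPayment"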

If no test framework detected → include gaps as INFORMATIONAL findings only, no generation.

Test-only diff: skip Step 4.75 entirely and output "No new application code paths to audit."

This step subsumes the "Test Gaps" category from Pass 2 — do not duplicate findings between the checklist Test Gaps item and this coverage diagram. Include any coverage gaps alongside the findings from Step 4 and Step 4.5. They follow the same Fix-First flow — gaps are INFORMATIONAL findings.


Step 5: Fix-First Review

Every finding gets action — not just critical ones.

Output a summary header: Pre-Landing Review: N issues (X critical, Y informational)

Step 5a: Classify each finding

For each finding, classify as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational findings lean toward AUTO-FIX.

Step 5b: Auto-fix all AUTO-FIX items

Apply each fix directly. For each one, output a one-line summary: [AUTO-FIXED] [file:line] Problem → what you did

Step 5c: Batch-ask about ASK items

If there are ASK items remaining, present them in ONE question:

  • List each item with a number, the severity label, the problem, and a recommended fix
  • For each item, provide options: A) Fix as recommended, B) Skip
  • Include an overall RECOMMENDATION

Example format:

I auto-fixed 5 issues. 2 need your input:

1. [CRITICAL] app/models/post.rb:42 — Race condition in status transition
   Fix: Add `WHERE status = 'draft'` to the UPDATE
   → A) Fix  B) Skip

2. [INFORMATIONAL] app/services/generator.rb:88 — LLM output not type-checked before DB write
   Fix: Add JSON schema validation
   → A) Fix  B) Skip

RECOMMENDATION: Fix both — #1 is a real race condition, #2 prevents silent data corruption.

If 3 or fewer ASK items, you may use individual question calls instead of batching.

Step 5d: Apply user-approved fixes

Apply fixes for items where the user chose "Fix." Output what was fixed.

If no ASK items exist (everything was AUTO-FIX), skip the question entirely.

Verification of claims

Before producing the final review output:

  • If you claim "this pattern is safe" → cite the specific line proving safety
  • If you claim "this is handled elsewhere" → read and cite the handling code
  • If you claim "tests cover this" → name the test file and method
  • Never say "likely handled" or "probably tested" — verify or flag as unknown

Rationalization prevention: "This looks fine" is not a finding. Either cite evidence it IS fine, or flag it as unverified.

Greptile comment resolution

After outputting your own findings, if Greptile comments were classified in Step 2.5:

Include a Greptile summary in your output header: + N Greptile comments (X valid, Y fixed, Z FP)

Before replying to any comment, run the Escalation Detection algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.

  1. VALID & ACTIONABLE comments: These are included in your findings and follow the Fix-First flow (auto-fixed if mechanical, batched into the ASK question if not, with options A: Fix it now, B: Acknowledge, C: False positive). If the user chooses A (fix), reply using the Fix reply template from greptile-triage.md (include inline diff + explanation). If the user chooses C (false positive), reply using the False Positive reply template (include evidence + suggested re-rank) and save to both the per-project and global greptile-history.

  2. FALSE POSITIVE comments: Present each one via question:

    • Show the Greptile comment: file:line (or [top-level]) + body summary + permalink URL
    • Explain concisely why it's a false positive
    • Options:
      • A) Reply to Greptile explaining why this is incorrect (recommended if clearly wrong)
      • B) Fix it anyway (if low-effort and harmless)
      • C) Ignore — don't reply, don't fix

    If the user chooses A, reply using the False Positive reply template from greptile-triage.md (include evidence + suggested re-rank), save to both per-project and global greptile-history.

  3. VALID BUT ALREADY FIXED comments: Reply using the Already Fixed reply template from greptile-triage.md — no question needed:

    • Include what was done and the fixing commit SHA
    • Save to both per-project and global greptile-history
  4. SUPPRESSED comments: Skip silently — these are known false positives from previous triage.


Step 5.5: TODOS cross-reference

Read TODOS.md in the repository root (if it exists). Cross-reference the PR against open TODOs:

  • Does this PR close any open TODOs? If yes, note which items in your output: "This PR addresses TODO: