opencode-code-agent/skills/ship/SKILL.md


---
name: ship
description: "Ship workflow: detect + merge base branch, run tests, review diff, bump VERSION, update CHANGELOG, commit, push, create PR. Use when asked to 'ship', 'deploy', 'push to main', 'create a PR', or 'merge'."
---

Ship: Fully Automated Ship Workflow

You are running the /ship workflow. This is a non-interactive, fully automated workflow. Do NOT ask for confirmation at any step. The user said /ship which means DO IT. Run straight through and output the PR URL at the end.

Only stop for:

  • On the base branch (abort)
  • Merge conflicts that can't be auto-resolved (stop, show conflicts)
  • In-branch test failures (pre-existing failures are triaged, not auto-blocking)
  • Pre-landing review finds ASK items that need user judgment
  • MINOR or MAJOR version bump needed (ask — see Step 4)
  • Greptile review comments that need user decision (complex fixes, false positives)
  • TODOS.md missing and user wants to create one (ask — see Step 5.5)
  • TODOS.md disorganized and user wants to reorganize (ask — see Step 5.5)

Never stop for:

  • Uncommitted changes (always include them)
  • Version bump choice (auto-pick MICRO or PATCH — see Step 4)
  • CHANGELOG content (auto-generate from diff)
  • Commit message approval (auto-commit)
  • Multi-file changesets (auto-split into bisectable commits)
  • TODOS.md completed-item detection (auto-mark)
  • Auto-fixable review findings (dead code, N+1, stale comments — fixed automatically)
  • Test coverage gaps (auto-generate and commit, or flag in PR body)

Step 1: Pre-flight

  1. Check the current branch. If on the base branch or the repo's default branch, abort: "You're on the base branch. Ship from a feature branch."

  2. Run git status (never use -uall). Uncommitted changes are always included — no need to ask.

  3. Run git diff <base>...HEAD --stat and git log <base>..HEAD --oneline to understand what's being shipped.

  4. Check review readiness:

Review Readiness Dashboard

After completing the review, read the review log and config to display the dashboard.

${GSTACK_OPENCODE_DIR}/bin/gstack-review-read

Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review, design-review-lite, adversarial-review, codex-review, codex-plan-review). Ignore entries with timestamps older than 7 days. For the Adversarial row, show whichever is more recent between adversarial-review (new auto-scaled) and codex-review (legacy). For Design Review, show whichever is more recent between plan-design-review (full visual audit) and design-review-lite (code-level check). Append "(FULL)" or "(LITE)" to the status to distinguish. Display:

+====================================================================+
|                    REVIEW READINESS DASHBOARD                       |
+====================================================================+
| Review          | Runs | Last Run            | Status    | Required |
|-----------------|------|---------------------|-----------|----------|
| Eng Review      |  1   | 2026-03-16 15:00    | CLEAR     | YES      |
| CEO Review      |  0   | —                   | —         | no       |
| Design Review   |  0   | —                   | —         | no       |
| Adversarial     |  0   | —                   | —         | no       |
| Outside Voice   |  0   | —                   | —         | no       |
+--------------------------------------------------------------------+
| VERDICT: CLEARED — Eng Review passed                                |
+====================================================================+
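
As a sketch, the "latest entry per skill" selection can be done with jq, assuming gstack-review-read emits one JSON entry per line before the ---HEAD--- section (an assumption about the output format; adjust if the real output differs):

${GSTACK_OPENCODE_DIR}/bin/gstack-review-read | sed '/^---HEAD---/,$d' \
  | jq -s '[.[] | select(.timestamp >= (now - 604800 | todate))] | group_by(.skill) | map(max_by(.timestamp))'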

Review tiers:

  • Eng Review (required by default): The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with `gstack-config set skip_eng_review true` (the "don't bother me" setting).
  • CEO Review (optional): Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
  • Design Review (optional): Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.
  • Adversarial Review (automatic): Auto-scales by diff size. Small diffs (<50 lines) skip adversarial. Medium diffs (50–199 lines) get cross-model adversarial. Large diffs (200+ lines) get all 4 passes: Claude structured, Codex structured, Claude adversarial subagent, Codex adversarial. No configuration needed.
  • Outside Voice (optional): Independent plan review from a different AI model. Offered after all review sections complete in /plan-ceo-review and /plan-eng-review. Falls back to Claude subagent if Codex is unavailable. Never gates shipping.

Verdict logic:

  • CLEARED: Eng Review has >= 1 entry within 7 days with status "clean" (or `skip_eng_review` is `true`)
  • NOT CLEARED: Eng Review missing, stale (>7 days), or has open issues
  • CEO, Design, and Codex reviews are shown for context but never block shipping
  • If `skip_eng_review` config is `true`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED

Staleness detection: After displaying the dashboard, check if any existing reviews may be stale:

  • Parse the `---HEAD---` section from the bash output to get the current HEAD commit hash
  • For each review entry that has a `commit` field: compare it against the current HEAD. If different, count elapsed commits: `git rev-list --count STORED_COMMIT..HEAD`. Display: "Note: {skill} review from {date} may be stale — {N} commits since review"
  • For entries without a `commit` field (legacy entries): display "Note: {skill} review from {date} has no commit tracking — consider re-running for accurate staleness detection"
  • If all reviews match the current HEAD, do not display any staleness notes
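
A minimal sketch of the per-entry staleness check, assuming the entry's commit hash, skill name, and date were already parsed into shell variables:

HEAD_COMMIT=$(git rev-parse --short HEAD)
if [ -n "$STORED_COMMIT" ] && [ "$STORED_COMMIT" != "$HEAD_COMMIT" ]; then
  N=$(git rev-list --count "$STORED_COMMIT..HEAD" 2>/dev/null || echo "?")
  echo "Note: $SKILL review from $DATE may be stale — $N commits since review"
fi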

If the Eng Review is NOT "CLEAR":

  1. Check for a prior override on this branch:

    eval "$(${GSTACK_OPENCODE_DIR}/bin/gstack-slug 2>/dev/null)"
    grep '"skill":"ship-review-override"' ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_OVERRIDE"
    

    If an override exists, display the dashboard and note "Review gate previously accepted — continuing." Do NOT ask again.

  2. If no override exists, use question:

    • Show that Eng Review is missing or has open issues
    • RECOMMENDATION: Choose C if the change is obviously trivial (< 20 lines, typo fix, config-only); Choose B for larger changes
    • Options: A) Ship anyway B) Abort — run /plan-eng-review first C) Change is too small to need eng review
    • If CEO Review is missing, mention as informational ("CEO Review not run — recommended for product changes") but do NOT block
    • For Design Review: run source <(${GSTACK_OPENCODE_DIR}/bin/gstack-diff-scope <base> 2>/dev/null). If SCOPE_FRONTEND=true and no design review (plan-design-review or design-review-lite) exists in the dashboard, mention: "Design Review not run — this PR changes frontend code. The lite design check will run automatically in Step 3.5, but consider running /design-review for a full visual audit post-implementation." Still never block.
  3. If the user chooses A or C, persist the decision so future /ship runs on this branch skip the gate:

    eval "$(${GSTACK_OPENCODE_DIR}/bin/gstack-slug 2>/dev/null)"
    echo '{"skill":"ship-review-override","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","decision":"USER_CHOICE"}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
    

    Substitute USER_CHOICE with "ship_anyway" or "not_relevant".


Step 2: Merge the base branch (BEFORE tests)

Fetch and merge the base branch into the feature branch so tests run against the merged state:

git fetch origin <base> && git merge origin/<base> --no-edit

If there are merge conflicts: Try to auto-resolve if they are simple (VERSION, schema.rb, CHANGELOG ordering). If conflicts are complex or ambiguous, STOP and show them.
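
A hedged sketch for classifying the conflicts (the auto-resolvable file list mirrors the examples above and is not exhaustive):

CONFLICTS=$(git diff --name-only --diff-filter=U)
if echo "$CONFLICTS" | grep -vE '^(VERSION|CHANGELOG\.md|db/schema\.rb)$' | grep -q .; then
  echo "Complex conflicts detected:" && echo "$CONFLICTS"   # STOP and show these
fi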

If already up to date: Continue silently.


Step 2.5: Test Framework Bootstrap

Test Framework Bootstrap

Detect existing test framework and project runtime:

# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
[ -f requirements.txt ] || [ -f pyproject.toml ] && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"
# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"

If test framework detected (config files or test directories found): Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap." Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns). Store conventions as prose context for use in Phase 8e.5 or Step 3.4. Skip the rest of bootstrap.

If BOOTSTRAP_DECLINED appears: Print "Test bootstrap previously declined — skipping." Skip the rest of bootstrap.

If NO runtime detected (no config files found): Use question: "I couldn't detect your project's language. What runtime are you using?" Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests. If user picks H → write .gstack/no-test-bootstrap and continue without tests.

If runtime detected but no test framework — bootstrap:

B2. Research best practices

Use WebSearch to find current best practices for the detected runtime:

  • "[runtime] best test framework 2025 2026"
  • "[framework A] vs [framework B] comparison"

If WebSearch is unavailable, use this built-in knowledge table:

| Runtime | Primary recommendation | Alternative |
|---------|------------------------|-------------|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |

B3. Framework selection

Use question: "I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options: A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e B) [Alternative] — [rationale]. Includes: [packages] C) Skip — don't set up testing right now RECOMMENDATION: Choose A because [reason based on project context]"

If user picks C → write .gstack/no-test-bootstrap. Tell user: "If you change your mind later, delete .gstack/no-test-bootstrap and re-run." Continue without tests.
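
A minimal sketch of writing the opt-out marker (the same path the detection step checks):

mkdir -p .gstack && touch .gstack/no-test-bootstrap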

If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.

B4. Install and configure

  1. Install the chosen packages (npm/bun/gem/pip/etc.)
  2. Create minimal config file
  3. Create directory structure (test/, spec/, etc.)
  4. Create one example test matching the project's code to verify setup works

If package installation fails → debug once. If still failing → revert with git checkout -- package.json package-lock.json (or equivalent for the runtime). Warn user and continue without tests.

B4.5. First real tests

Generate 3-5 real tests for existing code:

  1. Find recently changed files: git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10
  2. Prioritize by risk: Error handlers > business logic with conditionals > API endpoints > pure functions
  3. For each file: Write one test that tests real behavior with meaningful assertions. Never expect(x).toBeDefined() — test what the code DOES.
  4. Run each test. Passes → keep. Fails → fix once. Still fails → delete silently.
  5. Generate at least 1 test, cap at 5.

Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.

B5. Verify

# Run the full test suite to confirm everything works
{detected test command}

If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.

B5.5. CI/CD pipeline

# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null

If .github/ exists (or no CI detected — default to GitHub Actions): Create .github/workflows/test.yml with:

  • runs-on: ubuntu-latest
  • Appropriate setup action for the runtime (setup-node, setup-ruby, setup-python, etc.)
  • The same test command verified in B5
  • Trigger: push + pull_request
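
A minimal sketch of the generated workflow, assuming a Node project (swap the setup action and commands for other runtimes):

cat > .github/workflows/test.yml <<'EOF'
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test   # the same command verified in B5
EOF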

If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add test step to your existing pipeline manually."

B6. Create TESTING.md

First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.

Write TESTING.md with:

  • Philosophy: "100% test coverage is the key to great vibe coding. Tests let you move fast, trust your instincts, and ship with confidence — without them, vibe coding is just yolo coding. With tests, it's a superpower."
  • Framework name and version
  • How to run tests (the verified command from B5)
  • Test layers: Unit tests (what, where, when), Integration tests, Smoke tests, E2E tests
  • Conventions: file naming, assertion style, setup/teardown patterns

B7. Update CLAUDE.md

First check: If CLAUDE.md already has a ## Testing section → skip. Don't duplicate.

Append a ## Testing section:

  • Run command and test directory
  • Reference to TESTING.md
  • Test expectations:
    • 100% test coverage is the goal — tests make vibe coding safe
    • When writing new functions, write a corresponding test
    • When fixing a bug, write a regression test
    • When adding error handling, write a test that triggers the error
    • When adding a conditional (if/else, switch), write tests for BOTH paths
    • Never commit code that makes existing tests fail

B8. Commit

git status --porcelain

Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created): git commit -m "chore: bootstrap test framework ({framework name})"



Step 3: Run tests (on merged code)

Do NOT run `RAILS_ENV=test bin/rails db:migrate`. `bin/test-lane` already calls db:test:prepare internally, which loads the schema into the correct lane database. Running bare test migrations without INSTANCE hits an orphan DB and corrupts structure.sql.

Run both test suites in parallel:

bin/test-lane 2>&1 | tee /tmp/ship_tests.txt &
npm run test 2>&1 | tee /tmp/ship_vitest.txt &
wait

After both complete, read the output files and check pass/fail.
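
A rough sketch of that check (the grep patterns are heuristics; read the files directly when the output is ambiguous):

if grep -qE '(^|[^0-9])[1-9][0-9]* (failures?|errors?)|FAIL' /tmp/ship_tests.txt /tmp/ship_vitest.txt; then
  echo "TESTS_FAILED"
else
  echo "TESTS_PASSED"
fi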

If any test fails: Do NOT immediately stop. Apply the Test Failure Ownership Triage:

Test Failure Ownership Triage

When tests fail, do NOT immediately stop. First, determine ownership:

Step T1: Classify each failure

For each failing test:

  1. Get the files changed on this branch:

    git diff origin/<base>...HEAD --name-only
    
  2. Classify the failure:

    • In-branch if: the failing test file itself was modified on this branch, OR the test output references code that was changed on this branch, OR you can trace the failure to a change in the branch diff.
    • Likely pre-existing if: neither the test file nor the code it tests was modified on this branch, AND the failure is unrelated to any branch change you can identify.
    • When ambiguous, default to in-branch. It is safer to stop the developer than to let a broken test ship. Only classify as pre-existing when you are confident.

    This classification is heuristic — use your judgment reading the diff and the test output. You do not have a programmatic dependency graph.

Step T2: Handle in-branch failures

STOP. These are your failures. Show them and do not proceed. The developer must fix their own broken tests before shipping.

Step T3: Handle pre-existing failures

Check REPO_MODE from the preamble output.

If REPO_MODE is solo:

Use question:

These test failures appear pre-existing (not caused by your branch changes):

[list each failure with file:line and brief error description]

Since this is a solo repo, you're the only one who will fix these.

RECOMMENDATION: Choose A — fix now while the context is fresh. Completeness: 9/10.

A) Investigate and fix now (human: ~2-4h / CC: ~15min) — Completeness: 10/10
B) Add as P0 TODO — fix after this branch lands — Completeness: 7/10
C) Skip — I know about this, ship anyway — Completeness: 3/10

If REPO_MODE is collaborative or unknown:

Use question:

These test failures appear pre-existing (not caused by your branch changes):

[list each failure with file:line and brief error description]

This is a collaborative repo — these may be someone else's responsibility.

RECOMMENDATION: Choose B — assign it to whoever broke it so the right person fixes it. Completeness: 9/10.

A) Investigate and fix now anyway — Completeness: 10/10
B) Blame + assign GitHub issue to the author — Completeness: 9/10
C) Add as P0 TODO — Completeness: 7/10
D) Skip — ship anyway — Completeness: 3/10

Step T4: Execute the chosen action

If "Investigate and fix now":

  • Switch to /investigate mindset: root cause first, then minimal fix.
  • Fix the pre-existing failure.
  • Commit the fix separately from the branch's changes: git commit -m "fix: pre-existing test failure in <test-file>"
  • Continue with the workflow.

If "Add as P0 TODO":

  • If TODOS.md exists, add the entry following the format in review/TODOS-format.md (or .claude/skills/review/TODOS-format.md).
  • If TODOS.md does not exist, create it with the standard header and add the entry.
  • Entry should include: title, the error output, which branch it was noticed on, and priority P0.
  • Continue with the workflow — treat the pre-existing failure as non-blocking.

If "Blame + assign GitHub issue" (collaborative only):

  • Find who likely broke it. Check BOTH the test file AND the production code it tests:
    # Who last touched the failing test?
    git log --format="%an (%ae)" -1 -- <failing-test-file>
    # Who last touched the production code the test covers? (often the actual breaker)
    git log --format="%an (%ae)" -1 -- <source-file-under-test>
    
    If these are different people, prefer the production code author — they likely introduced the regression.
  • Create a GitHub issue assigned to that person. Pass the body via a quoted heredoc so the shell doesn't interpret the backticks as command substitution:
    gh issue create \
      --title "Pre-existing test failure: <test-name>" \
      --assignee "<github-username>" \
      --body "$(cat <<'EOF'
Found failing on branch <current-branch>. Failure is pre-existing.

**Error:**
```
<first 10 lines>
```

**Last modified by:** <author>
**Noticed by:** gstack /ship on <date>
EOF
)"
    
  • If gh is not available or --assignee fails (user not in org, etc.), create the issue without assignee and note who should look at it in the body.
  • Continue with the workflow.

If "Skip":

  • Continue with the workflow.
  • Note in output: "Pre-existing test failure skipped: <test-name>"

After triage: If any in-branch failures remain unfixed, STOP. Do not proceed. If all failures were pre-existing and handled (fixed, TODOed, assigned, or skipped), continue to Step 3.25.

If all pass: Continue silently — just note the counts briefly.


Step 3.25: Eval Suites (conditional)

Evals are mandatory when prompt-related files change. Skip this step entirely if no prompt files are in the diff.

1. Check if the diff touches prompt-related files:

git diff origin/<base> --name-only

Match against these patterns (from CLAUDE.md):

  • app/services/*_prompt_builder.rb
  • app/services/*_generation_service.rb, *_writer_service.rb, *_designer_service.rb
  • app/services/*_evaluator.rb, *_scorer.rb, *_classifier_service.rb, *_analyzer.rb
  • app/services/concerns/*voice*.rb, *writing*.rb, *prompt*.rb, *token*.rb
  • app/services/chat_tools/*.rb, app/services/x_thread_tools/*.rb
  • config/system_prompts/*.txt
  • test/evals/**/* (eval infrastructure changes affect all suites)
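
A condensed first-pass check over those patterns (the regex approximates the list above; when in doubt, read the changed file list directly):

if git diff origin/<base> --name-only | grep -qE \
  'app/services/.*(_prompt_builder|_generation_service|_writer_service|_designer_service|_evaluator|_scorer|_classifier_service|_analyzer)\.rb|app/services/concerns/.*(voice|writing|prompt|token)|app/services/(chat_tools|x_thread_tools)/|^config/system_prompts/.*\.txt$|^test/evals/'; then
  echo "PROMPT_FILES_CHANGED"
fi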

If no matches: Print "No prompt-related files changed — skipping evals." and continue to Step 3.5.

2. Identify affected eval suites:

Each eval runner (test/evals/*_eval_runner.rb) declares PROMPT_SOURCE_FILES listing which source files affect it. Grep these to find which suites match the changed files:

grep -l "changed_file_basename" test/evals/*_eval_runner.rb

Map runner → test file: post_generation_eval_runner.rb → post_generation_eval_test.rb.

Special cases:

  • Changes to test/evals/judges/*.rb, test/evals/support/*.rb, or test/evals/fixtures/ affect ALL suites that use those judges/support files. Check imports in the eval test files to determine which.
  • Changes to config/system_prompts/*.txt — grep eval runners for the prompt filename to find affected suites.
  • If unsure which suites are affected, run ALL suites that could plausibly be impacted. Over-testing is better than missing a regression.

3. Run affected suites at EVAL_JUDGE_TIER=full:

/ship is a pre-merge gate, so always use full tier (Sonnet structural + Opus persona judges).

EVAL_JUDGE_TIER=full EVAL_VERBOSE=1 bin/test-lane --eval test/evals/<suite>_eval_test.rb 2>&1 | tee /tmp/ship_evals.txt

If multiple suites need to run, run them sequentially (each needs a test lane). If the first suite fails, stop immediately — don't burn API cost on remaining suites.

4. Check results:

  • If any eval fails: Show the failures, the cost dashboard, and STOP. Do not proceed.
  • If all pass: Note pass counts and cost. Continue to Step 3.5.

5. Save eval output — include eval results and cost dashboard in the PR body (Step 8).

Tier reference (for context — /ship always uses full):

| Tier | When | Speed (cached) | Cost |
|------|------|----------------|------|
| fast (Haiku) | Dev iteration, smoke tests | ~5s (14x faster) | ~$0.07/run |
| standard (Sonnet) | Default dev, bin/test-lane --eval | ~17s (4x faster) | ~$0.37/run |
| full (Opus persona) | /ship and pre-merge | ~72s (baseline) | ~$1.27/run |

Step 3.4: Test Coverage Audit

100% coverage is the goal — every untested path is a path where bugs hide and vibe coding becomes yolo coding. Evaluate what was ACTUALLY coded (from the diff), not what was planned.

Test Framework Detection

Before analyzing coverage, detect the project's test framework:

  1. Read CLAUDE.md — look for a ## Testing section with test command and framework name. If found, use that as the authoritative source.
  2. If CLAUDE.md has no testing section, auto-detect:
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
[ -f requirements.txt ] || [ -f pyproject.toml ] && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* cypress.config.* .rspec pytest.ini phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
  3. If no framework detected: fall through to the Test Framework Bootstrap step (Step 2.5), which handles full setup.

0. Before/after test count:

# Count test files before any generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l

Store this number for the PR body.

1. Trace every codepath changed using git diff origin/<base>...HEAD:

Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution:

  1. Read the diff. For each changed file, read the full file (not just the diff hunk) to understand context.
  2. Trace data flow. Starting from each entry point (route handler, exported function, event listener, component render), follow the data through every branch:
    • Where does input come from? (request params, props, database, API call)
    • What transforms it? (validation, mapping, computation)
    • Where does it go? (database write, API response, rendered output, side effect)
    • What can go wrong at each step? (null/undefined, invalid input, network failure, empty collection)
  3. Diagram the execution. For each changed file, draw an ASCII diagram showing:
    • Every function/method that was added or modified
    • Every conditional branch (if/else, switch, ternary, guard clause, early return)
    • Every error path (try/catch, rescue, error boundary, fallback)
    • Every call to another function (trace into it — does IT have untested branches?)
    • Every edge: what happens with null input? Empty array? Invalid type?

This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test.

2. Map user flows, interactions, and error states:

Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through:

  • User flows: What sequence of actions does a user take that touches this code? Map the full journey (e.g., "user clicks 'Pay' → form validates → API call → success/failure screen"). Each step in the journey needs a test.
  • Interaction edge cases: What happens when the user does something unexpected?
    • Double-click/rapid resubmit
    • Navigate away mid-operation (back button, close tab, click another link)
    • Submit with stale data (page sat open for 30 minutes, session expired)
    • Slow connection (API takes 10 seconds — what does the user see?)
    • Concurrent actions (two tabs, same form)
  • Error states the user can see: For every error the code handles, what does the user actually experience?
    • Is there a clear error message or a silent failure?
    • Can the user recover (retry, go back, fix input) or are they stuck?
    • What happens with no network? With a 500 from the API? With invalid data from the server?
  • Empty/zero/boundary states: What does the UI show with zero results? With 10,000 results? With a single character input? With maximum-length input?

Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else.

3. Check each branch against existing tests:

Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it:

  • Function processPayment() → look for billing.test.ts, billing.spec.ts, test/billing_test.rb
  • An if/else → look for tests covering BOTH the true AND false path
  • An error handler → look for a test that triggers that specific error condition
  • A call to helperFn() that has its own branches → those branches need tests too
  • A user flow → look for an integration or E2E test that walks through the journey
  • An interaction edge case → look for a test that simulates the unexpected action

Quality scoring rubric:

  • ★★★ Tests behavior with edge cases AND error paths
  • ★★ Tests correct behavior, happy path only
  • ★ Smoke test / existence check / trivial assertion (e.g., "it renders", "it doesn't throw")

E2E Test Decision Matrix

When checking each branch, also determine whether a unit test or E2E/integration test is the right tool:

RECOMMEND E2E (mark as [→E2E] in the diagram):

  • Common user flow spanning 3+ components/services (e.g., signup → verify email → first login)
  • Integration point where mocking hides real failures (e.g., API → queue → worker → DB)
  • Auth/payment/data-destruction flows — too important to trust unit tests alone

RECOMMEND EVAL (mark as [→EVAL] in the diagram):

  • Critical LLM call that needs a quality eval (e.g., prompt change → test output still meets quality bar)
  • Changes to prompt templates, system instructions, or tool definitions

STICK WITH UNIT TESTS:

  • Pure function with clear inputs/outputs
  • Internal helper with no side effects
  • Edge case of a single function (null input, empty array)
  • Obscure/rare flow that isn't customer-facing

REGRESSION RULE (mandatory)

IRON RULE: When the coverage audit identifies a REGRESSION — code that previously worked but the diff broke — a regression test is written immediately. No question. No skipping. Regressions are the highest-priority test because they prove something broke.

A regression is when:

  • The diff modifies existing behavior (not new code)
  • The existing test suite (if any) doesn't cover the changed path
  • The change introduces a new failure mode for existing callers

When uncertain whether a change is a regression, err on the side of writing the test.

Format: commit as test: regression test for {what broke}

4. Output ASCII coverage diagram:

Include BOTH code paths and user flows in the same diagram. Mark E2E-worthy and eval-worthy paths:

CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
    │
    ├── processPayment()
    │   ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
    │   ├── [GAP]         Network timeout — NO TEST
    │   └── [GAP]         Invalid currency — NO TEST
    │
    └── refundPayment()
        ├── [★★  TESTED] Full refund — billing.test.ts:89
        └── [★   TESTED] Partial refund (checks non-throw only) — billing.test.ts:101

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
    │
    ├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
    ├── [GAP] [→E2E] Double-click submit — needs E2E, not just unit
    ├── [GAP]         Navigate away during payment — unit test sufficient
    └── [★   TESTED]  Form validation errors (checks render only) — checkout.test.ts:40

[+] Error states
    │
    ├── [★★  TESTED] Card declined message — billing.test.ts:58
    ├── [GAP]         Network timeout UX (what does user see?) — NO TEST
    └── [GAP]         Empty cart submission — NO TEST

[+] LLM integration
    │
    └── [GAP] [→EVAL] Prompt template change — needs eval test

─────────────────────────────────
COVERAGE: 6/13 paths tested (46%)
  Code paths: 3/5 (60%)
  User flows: 3/8 (38%)
QUALITY:  ★★★: 2  ★★: 2  ★: 2
GAPS: 7 paths need tests (1 needs E2E, 1 needs eval)
─────────────────────────────────

Fast path: All paths covered → "Step 3.4: All new code paths have test coverage ✓" Continue.

5. Generate tests for uncovered paths:

If test framework detected (or bootstrapped in Step 2.5):

  • Prioritize error handlers and edge cases first (happy paths are more likely already tested)
  • Read 2-3 existing test files to match conventions exactly
  • Generate unit tests. Mock all external dependencies (DB, API, Redis).
  • For paths marked [→E2E]: generate integration/E2E tests using the project's E2E framework (Playwright, Cypress, Capybara, etc.)
  • For paths marked [→EVAL]: generate eval tests using the project's eval framework, or flag for manual eval if none exists
  • Write tests that exercise the specific uncovered path with real assertions
  • Run each test. Passes → commit as test: coverage for {feature}
  • Fails → fix once. Still fails → revert, note gap in diagram.

Caps: 30 code paths max, 20 tests generated max (code + user flow combined), 2-min per-test exploration cap.

If no test framework AND user declined bootstrap → diagram only, no generation. Note: "Test generation skipped — no test framework configured."

If the diff contains only test changes: Skip Step 3.4 entirely: "No new application code paths to audit."

6. After-count and coverage summary:

# Count test files after generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l

For the PR body:
  Tests: {before} → {after} (+{delta} new)
  Coverage line: Test Coverage Audit: N new code paths. M covered (X%). K tests generated, J committed.

Test Plan Artifact

After producing the coverage diagram, write a test plan artifact so /qa and /qa-only can consume it:

eval "$(${GSTACK_OPENCODE_DIR}/bin/gstack-slug 2>/dev/null)" && mkdir -p ~/.gstack/projects/$SLUG
USER=$(whoami)
DATETIME=$(date +%Y%m%d-%H%M%S)

Write to ~/.gstack/projects/{slug}/{user}-{branch}-ship-test-plan-{datetime}.md:

# Test Plan
Generated by /ship on {date}
Branch: {branch}
Repo: {owner/repo}

## Affected Pages/Routes
- {URL path} — {what to test and why}

## Key Interactions to Verify
- {interaction description} on {page}

## Edge Cases
- {edge case} on {page}

## Critical Paths
- {end-to-end flow that must work}

Step 3.5: Pre-Landing Review

Review the diff for structural issues that tests don't catch.

  1. Read .claude/skills/review/checklist.md. If the file cannot be read, STOP and report the error.

  2. Run git diff origin/<base> to get the full diff (scoped to feature changes against the freshly-fetched base branch).

  3. Apply the review checklist in two passes:

    • Pass 1 (CRITICAL): SQL & Data Safety, LLM Output Trust Boundary
    • Pass 2 (INFORMATIONAL): All remaining categories

Design Review (conditional, diff-scoped)

Check if the diff touches frontend files using gstack-diff-scope:

source <(${GSTACK_OPENCODE_DIR}/bin/gstack-diff-scope <base> 2>/dev/null)

If SCOPE_FRONTEND=false: Skip design review silently. No output.

If SCOPE_FRONTEND=true:

  1. Check for DESIGN.md. If DESIGN.md or design-system.md exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.

  2. Read .claude/skills/review/design-checklist.md. If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."

  3. Read each changed frontend file (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.

  4. Apply the design checklist against the changed files. For each item:

    • [HIGH] mechanical CSS fix (outline: none, !important, font-size < 16px): classify as AUTO-FIX
    • [HIGH/MEDIUM] design judgment needed: classify as ASK
    • [LOW] intent-based detection: present as "Possible — verify visually or run /design-review"
  5. Include findings in the review output under a "Design Review" header, following the output format in the checklist. Design findings merge with code review findings into the same Fix-First flow.

  6. Log the result for the Review Readiness Dashboard:

${GSTACK_OPENCODE_DIR}/bin/gstack-review-log '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M,"commit":"COMMIT"}'

Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings or "issues_found", N = total findings, M = auto-fixed count, COMMIT = output of git rev-parse --short HEAD.

  7. Codex design voice (optional, automatic if available):
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"

If Codex is available, run a lightweight design check on the diff:

TMPERR_DRL=$(mktemp /tmp/codex-drl-XXXXXXXX)
codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_DRL"

Use a 5-minute timeout (timeout: 300000). After the command completes, read stderr:

cat "$TMPERR_DRL" && rm -f "$TMPERR_DRL"

Error handling: All errors are non-blocking. On auth failure, timeout, or empty response — skip with a brief note and continue.

Present Codex output under a CODEX (design): header, merged with the checklist findings above.

Include any design findings alongside the code review findings. They follow the same Fix-First flow below.

  4. Classify each finding as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational lean toward AUTO-FIX.

  5. Auto-fix all AUTO-FIX items. Apply each fix. Output one line per fix: [AUTO-FIXED] [file:line] Problem → what you did

  6. If ASK items remain, present them in ONE question:

    • List each with number, severity, problem, recommended fix
    • Per-item options: A) Fix B) Skip
    • Overall RECOMMENDATION
    • If 3 or fewer ASK items, you may use individual question calls instead
  7. After all fixes (auto + user-approved):

    • If ANY fixes were applied: commit fixed files by name (git add <fixed-files> && git commit -m "fix: pre-landing review fixes"), then STOP and tell the user to run /ship again to re-test.
    • If no fixes applied (all ASK items skipped, or no issues found): continue to Step 4.
  8. Output summary: Pre-Landing Review: N issues — M auto-fixed, K asked (J fixed, L skipped)

    If no issues found: Pre-Landing Review: No issues found.

Save the review output — it goes into the PR body in Step 8.


Step 3.75: Address Greptile review comments (if PR exists)

Read .claude/skills/review/greptile-triage.md and follow the fetch, filter, classify, and escalation detection steps.

If no PR exists, gh fails, API returns an error, or there are zero Greptile comments: Skip this step silently. Continue to Step 4.
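
An illustrative sketch of the fetch step (greptile-triage.md is authoritative; matching the bot by a login containing "greptile" is an assumption):

PR_NUM=$(gh pr view --json number -q .number 2>/dev/null || echo "")
if [ -n "$PR_NUM" ]; then
  gh api "repos/{owner}/{repo}/pulls/$PR_NUM/comments" \
    --jq '.[] | select(.user.login | test("greptile"; "i")) | {path, line, body}'
fi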

If Greptile comments are found:

Include a Greptile summary in your output: N Greptile comments (X valid, Y fixed, Z FP)

Before replying to any comment, run the Escalation Detection algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.

For each classified comment:

VALID & ACTIONABLE: Use question with:

  • The comment (file:line or [top-level] + body summary + permalink URL)
  • RECOMMENDATION: Choose A because [one-line reason]
  • Options: A) Fix now, B) Acknowledge and ship anyway, C) It's a false positive
  • If user chooses A: apply the fix, commit the fixed files (git add <fixed-files> && git commit -m "fix: address Greptile review — <brief description>"), reply using the Fix reply template from greptile-triage.md (include inline diff + explanation), and save to both per-project and global greptile-history (type: fix).
  • If user chooses C: reply using the False Positive reply template from greptile-triage.md (include evidence + suggested re-rank), save to both per-project and global greptile-history (type: fp).

VALID BUT ALREADY FIXED: Reply using the Already Fixed reply template from greptile-triage.md — no question needed:

  • Include what was done and the fixing commit SHA
  • Save to both per-project and global greptile-history (type: already-fixed)

FALSE POSITIVE: Use question:

  • Show the comment and why you think it's wrong (file:line or [top-level] + body summary + permalink URL)
  • Options:
    • A) Reply to Greptile explaining the false positive (recommended if clearly wrong)
    • B) Fix it anyway (if trivial)
    • C) Ignore silently
  • If user chooses A: reply using the False Positive reply template from greptile-triage.md (include evidence + suggested re-rank), save to both per-project and global greptile-history (type: fp)

SUPPRESSED: Skip silently — these are known false positives from previous triage.

After all comments are resolved: If any fixes were applied, the tests from Step 3 are now stale. Re-run tests (Step 3) before continuing to Step 4. If no fixes were applied, continue to Step 4.


Step 3.8: Adversarial review (auto-scaled)

Adversarial review thoroughness scales automatically based on diff size. No configuration needed.

Detect diff size and tool availability:

DIFF_INS=$(git diff origin/<base> --stat | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo "0")
DIFF_DEL=$(git diff origin/<base> --stat | tail -1 | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo "0")
DIFF_TOTAL=$((DIFF_INS + DIFF_DEL))
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
# Respect old opt-out
OLD_CFG=$(${GSTACK_OPENCODE_DIR}/bin/gstack-config get codex_reviews 2>/dev/null || true)
echo "DIFF_SIZE: $DIFF_TOTAL"
echo "OLD_CFG: ${OLD_CFG:-not_set}"

If OLD_CFG is disabled: skip this step silently. Continue to the next step.

User override: If the user explicitly requested a specific tier (e.g., "run all passes", "paranoid review", "full adversarial", "do all 4 passes", "thorough review"), honor that request regardless of diff size. Jump to the matching tier section.

Auto-select tier based on diff size:

  • Small (< 50 lines changed): Skip adversarial review entirely. Print: "Small diff ($DIFF_TOTAL lines) — adversarial review skipped." Continue to the next step.
  • Medium (50–199 lines changed): Run Codex adversarial challenge (or Claude adversarial subagent if Codex unavailable). Jump to the "Medium tier" section.
  • Large (200+ lines changed): Run all remaining passes — Codex structured review + Claude adversarial subagent + Codex adversarial. Jump to the "Large tier" section.

Medium tier (50–199 lines)

Claude's structured review already ran. Now add a cross-model adversarial challenge.

If Codex is available: run the Codex adversarial challenge. If Codex is NOT available: fall back to the Claude adversarial subagent instead.

Codex adversarial:

TMPERR_ADV=$(mktemp /tmp/codex-adv-XXXXXXXX)
codex exec "Review the changes on this branch against the base branch. Run git diff origin/<base> to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems." -s read-only -c 'model_reasoning_effort="xhigh"' --enable web_search_cached 2>"$TMPERR_ADV"

Set the Bash tool's timeout parameter to 300000 (5 minutes). Do NOT use the timeout shell command — it doesn't exist on macOS. After the command completes, read stderr:

cat "$TMPERR_ADV"

Present the full output verbatim. This is informational — it never blocks shipping.

Error handling: All errors are non-blocking — adversarial review is a quality enhancement, not a prerequisite.

  • Auth failure: If stderr contains "auth", "login", "unauthorized", or "API key": "Codex authentication failed. Run `codex login` to authenticate."
  • Timeout: "Codex timed out after 5 minutes."
  • Empty response: "Codex returned no response. Stderr: <stderr contents>."

On any Codex error, fall back to the Claude adversarial subagent automatically.

Claude adversarial subagent (fallback when Codex unavailable or errored):

Dispatch via the Agent tool. The subagent has fresh context — no checklist bias from the structured review. This genuine independence catches things the primary reviewer is blind to.

Subagent prompt: "Read the diff for this branch with git diff origin/<base>. Think like an attacker and a chaos engineer. Your job is to find ways this code will fail in production. Look for: edge cases, race conditions, security holes, resource leaks, failure modes, silent data corruption, logic errors that produce wrong results silently, error handling that swallows failures, and trust boundary violations. Be adversarial. Be thorough. No compliments — just the problems. For each finding, classify as FIXABLE (you know how to fix it) or INVESTIGATE (needs human judgment)."

Present findings under an ADVERSARIAL REVIEW (Claude subagent): header. FIXABLE findings flow into the same Fix-First pipeline as the structured review. INVESTIGATE findings are presented as informational.

If the subagent fails or times out: "Claude adversarial subagent unavailable. Continuing without adversarial review."

Persist the review result:

${GSTACK_OPENCODE_DIR}/bin/gstack-review-log '{"skill":"adversarial-review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","source":"SOURCE","tier":"medium","commit":"'"$(git rev-parse --short HEAD)"'"}'

Substitute STATUS: "clean" if no findings, "issues_found" if findings exist. SOURCE: "codex" if Codex ran, "claude" if subagent ran. If both failed, do NOT persist.

Cleanup: Run rm -f "$TMPERR_ADV" after processing (if Codex was used).


Large tier (200+ lines)

Claude's structured review already ran. Now run all three remaining passes for maximum coverage:

1. Codex structured review (if available):

TMPERR=$(mktemp /tmp/codex-review-XXXXXXXX)
codex review --base <base> -c 'model_reasoning_effort="xhigh"' --enable web_search_cached 2>"$TMPERR"

Set the Bash tool's timeout parameter to 300000 (5 minutes). Do NOT use the timeout shell command — it doesn't exist on macOS. Present output under CODEX SAYS (code review): header. Check for [P1] markers: found → GATE: FAIL, not found → GATE: PASS.

If GATE is FAIL, use question:

Codex found N critical issues in the diff.

A) Investigate and fix now (recommended)
B) Continue — review will still complete

If A: address the findings. After fixing, re-run tests (Step 3) since code has changed. Re-run codex review to verify.

Read stderr for errors (same error handling as medium tier).

After stderr: rm -f "$TMPERR"

2. Claude adversarial subagent: Dispatch a subagent with the adversarial prompt (same prompt as medium tier). This always runs regardless of Codex availability.

3. Codex adversarial challenge (if available): Run codex exec with the adversarial prompt (same as medium tier).

If Codex is not available for steps 1 and 3, note to the user: "Codex CLI not found — large-diff review ran Claude structured + Claude adversarial (2 of 4 passes). Install Codex for full 4-pass coverage: npm install -g @openai/codex"

Persist the review result AFTER all passes complete (not after each sub-step):

${GSTACK_OPENCODE_DIR}/bin/gstack-review-log '{"skill":"adversarial-review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","source":"SOURCE","tier":"large","gate":"GATE","commit":"'"$(git rev-parse --short HEAD)"'"}'

Substitute: STATUS = "clean" if no findings across ALL passes, "issues_found" if any pass found issues. SOURCE = "both" if Codex ran, "claude" if only Claude subagent ran. GATE = the Codex structured review gate result ("pass"/"fail"), or "informational" if Codex was unavailable. If all passes failed, do NOT persist.


Cross-model synthesis (medium and large tiers)

After all passes complete, synthesize findings across all sources:

ADVERSARIAL REVIEW SYNTHESIS (auto: TIER, N lines):
════════════════════════════════════════════════════════════
  High confidence (found by multiple sources): [findings agreed on by >1 pass]
  Unique to Claude structured review: [from earlier step]
  Unique to Claude adversarial: [from subagent, if ran]
  Unique to Codex: [from codex adversarial or code review, if ran]
  Models used: Claude structured ✓  Claude adversarial ✓/✗  Codex ✓/✗
════════════════════════════════════════════════════════════

High-confidence findings (agreed on by multiple sources) should be prioritized for fixes.


Step 4: Version bump (auto-decide)

  1. Read the current VERSION file (4-digit format: MAJOR.MINOR.PATCH.MICRO)

  2. Auto-decide the bump level based on the diff:

    • Count lines changed (git diff origin/<base>...HEAD --stat | tail -1)
    • MICRO (4th digit): < 50 lines changed, trivial tweaks, typos, config
    • PATCH (3rd digit): 50+ lines changed, bug fixes, small-medium features
    • MINOR (2nd digit): ASK the user — only for major features or significant architectural changes
    • MAJOR (1st digit): ASK the user — only for milestones or breaking changes
  3. Compute the new version:

    • Bumping a digit resets all digits to its right to 0
    • Example: 0.19.1.0 + PATCH → 0.19.2.0
  4. Write the new version to the VERSION file.
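
A minimal sketch of the auto-bump arithmetic (line counting reuses the idiom from Step 3.8; MINOR and MAJOR still go through the user):

IFS=. read -r MAJ MIN PAT MIC < VERSION
INS=$(git diff origin/<base>...HEAD --stat | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo 0)
DEL=$(git diff origin/<base>...HEAD --stat | tail -1 | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo 0)
if [ $((INS + DEL)) -lt 50 ]; then
  MIC=$((MIC + 1))            # MICRO: 4th digit
else
  PAT=$((PAT + 1)); MIC=0     # PATCH: 3rd digit, digits to its right reset to 0
fi
echo "$MAJ.$MIN.$PAT.$MIC" > VERSION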


Step 5: CHANGELOG (auto-generate)

  1. Read CHANGELOG.md header to know the format.

  2. Auto-generate the entry from ALL commits on the branch (not just recent ones):

    • Use git log <base>..HEAD --oneline to see every commit being shipped
    • Use git diff <base>...HEAD to see the full diff against the base branch
    • The CHANGELOG entry must be comprehensive of ALL changes going into the PR
    • If existing CHANGELOG entries on the branch already cover some commits, replace them with one unified entry for the new version
    • Categorize changes into applicable sections:
      • ### Added — new features
      • ### Changed — changes to existing functionality
      • ### Fixed — bug fixes
      • ### Removed — removed features
    • Write concise, descriptive bullet points
    • Insert after the file header (line 5), dated today
    • Format: ## [X.Y.Z.W] - YYYY-MM-DD

Do NOT ask the user to describe changes. Infer from the diff and commit history.
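
A sketch of the mechanical insertion, assuming the file header occupies the first 5 lines and the drafted entry body sits in a temp file (the temp path is illustrative):

{
  head -5 CHANGELOG.md
  printf '\n## [%s] - %s\n\n' "$(cat VERSION)" "$(date +%Y-%m-%d)"
  cat /tmp/ship_changelog_entry.md
  tail -n +6 CHANGELOG.md
} > /tmp/CHANGELOG.new && mv /tmp/CHANGELOG.new CHANGELOG.md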


Step 5.5: TODOS.md (auto-update)

Cross-reference the project's TODOS.md against the changes being shipped. Mark completed items automatically; prompt only if the file is missing or disorganized.

Read .claude/skills/review/TODOS-format.md for the canonical format reference.

1. Check if TODOS.md exists in the repository root.

If TODOS.md does not exist: Use question:

  • Message: "GStack recommends maintaining a TODOS.md organized by skill/component, then priority (P0 at top through P4, then Completed at bottom). See TODOS-format.md for the full format. Would you like to create one?"
  • Options: A) Create it now, B) Skip for now
  • If A: Create TODOS.md with a skeleton (# TODOS heading + ## Completed section). Continue to step 3.
  • If B: Skip the rest of Step 5.5. Continue to Step 6.

2. Check structure and organization:

Read TODOS.md and verify it follows the recommended structure:

  • Items grouped under ## <Skill/Component> headings
  • Each item has **Priority:** field with P0-P4 value
  • A ## Completed section at the bottom

If disorganized (missing priority fields, no component groupings, no Completed section): Use question:

  • Message: "TODOS.md doesn't follow the recommended structure (skill/component groupings, P0-P4 priority, Completed section). Would you like to reorganize it?"
  • Options: A) Reorganize now (recommended), B) Leave as-is
  • If A: Reorganize in-place following TODOS-format.md. Preserve all content — only restructure, never delete items.
  • If B: Continue to step 3 without restructuring.

3. Detect completed TODOs:

This step is fully automatic — no user interaction.

Use the diff and commit history already gathered in earlier steps:

  • git diff <base>...HEAD (full diff against the base branch)
  • git log <base>..HEAD --oneline (all commits being shipped)

For each TODO item, check if the changes in this PR complete it by:

  • Matching commit messages against the TODO title and description
  • Checking if files referenced in the TODO appear in the diff
  • Checking if the TODO's described work matches the functional changes

Be conservative: Only mark a TODO as completed if there is clear evidence in the diff. If uncertain, leave it alone.

4. Move completed items to the ## Completed section at the bottom. Append: **Completed:** vX.Y.Z (YYYY-MM-DD)

5. Output summary:

  • TODOS.md: N items marked complete (item1, item2, ...). M items remaining.
  • Or: TODOS.md: No completed items detected. M items remaining.
  • Or: TODOS.md: Created. / TODOS.md: Reorganized.

6. Defensive: If TODOS.md cannot be written (permission error, disk full), warn the user and continue. Never stop the ship workflow for a TODOS failure.

Save this summary — it goes into the PR body in Step 8.


Step 6: Commit (bisectable chunks)

Goal: Create small, logical commits that work well with git bisect and help LLMs understand what changed.

  1. Analyze the diff and group changes into logical commits. Each commit should represent one coherent change — not one file, but one logical unit.

  2. Commit ordering (earlier commits first):

    • Infrastructure: migrations, config changes, route additions
    • Models & services: new models, services, concerns (with their tests)
    • Controllers & views: controllers, views, JS/React components (with their tests)
    • VERSION + CHANGELOG + TODOS.md: always in the final commit
  3. Rules for splitting:

    • A model and its test file go in the same commit
    • A service and its test file go in the same commit
    • A controller, its views, and its test go in the same commit
    • Migrations are their own commit (or grouped with the model they support)
    • Config/route changes can group with the feature they enable
    • If the total diff is small (< 50 lines across < 4 files), a single commit is fine
  4. Each commit must be independently valid — no broken imports, no references to code that doesn't exist yet. Order commits so dependencies come first.

  5. Compose each commit message:

    • First line: <type>: <summary> (type = feat/fix/chore/refactor/docs)
    • Body: brief description of what this commit contains
    • Only the final commit (VERSION + CHANGELOG) gets the version tag and co-author trailer:
git commit -m "$(cat <<'EOF'
chore: bump version and changelog (vX.Y.Z.W)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
EOF
)"

Step 6.5: Verification Gate

IRON LAW: NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE.

Before pushing, re-verify if code changed during Steps 4-6:

  1. Test verification: If ANY code changed after Step 3's test run (fixes from review findings, CHANGELOG edits don't count), re-run the test suite. Paste fresh output. Stale output from Step 3 is NOT acceptable.

  2. Build verification: If the project has a build step, run it. Paste output.

  3. Rationalization prevention:

    • "Should work now" → RUN IT.
    • "I'm confident" → Confidence is not evidence.
    • "I already tested earlier" → Code changed since then. Test again.
    • "It's a trivial change" → Trivial changes break production.

If tests fail here: STOP. Do not push. Fix the issue and return to Step 3.

Claiming work is complete without verification is dishonesty, not efficiency.


Step 7: Push

Push to the remote with upstream tracking:

git push -u origin <branch-name>

Step 8: Create PR

Create a pull request using gh:

gh pr create --base <base> --title "<type>: <summary>" --body "$(cat <<'EOF'
## Summary
<bullet points from CHANGELOG>

## Test Coverage
<coverage diagram from Step 3.4, or "All new code paths have test coverage.">
<If Step 3.4 ran: "Tests: {before} → {after} (+{delta} new)">

## Pre-Landing Review
<findings from Step 3.5 code review, or "No issues found.">

## Design Review
<If design review ran: "Design Review (lite): N findings — M auto-fixed, K skipped. AI Slop: clean/N issues.">
<If no frontend files changed: "No frontend files changed — design review skipped.">

## Eval Results
<If evals ran: suite names, pass/fail counts, cost dashboard summary. If skipped: "No prompt-related files changed — evals skipped.">

## Greptile Review
<If Greptile comments were found: bullet list with [FIXED] / [FALSE POSITIVE] / [ALREADY FIXED] tag + one-line summary per comment>
<If no Greptile comments found: "No Greptile comments.">
<If no PR existed during Step 3.75: omit this section entirely>

## TODOS
<If items marked complete: bullet list of completed items with version>
<If no items completed: "No TODO items completed in this PR.">
<If TODOS.md created or reorganized: note that>
<If TODOS.md doesn't exist and user skipped: omit this section>

## Test plan
- [x] All Rails tests pass (N runs, 0 failures)
- [x] All Vitest tests pass (N tests)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
EOF
)"

Output the PR URL — then proceed to Step 8.5.


Step 8.5: Auto-invoke /document-release

After the PR is created, automatically sync project documentation. Read the document-release/SKILL.md skill file (adjacent to this skill's directory) and execute its full workflow:

  1. Read the /document-release skill: cat ${GSTACK_OPENCODE_DIR}/../document-release/SKILL.md
  2. Follow its instructions — it reads all .md files in the project, cross-references the diff, and updates anything that drifted (README, ARCHITECTURE, CONTRIBUTING, CLAUDE.md, TODOS, etc.)
  3. If any docs were updated, commit the changes and push to the same branch:
    git add -A && git commit -m "docs: sync documentation with shipped changes" && git push
    
  4. If no docs needed updating, say "Documentation is current — no updates needed."

This step is automatic. Do not ask the user for confirmation. The goal is zero-friction doc updates — the user runs /ship and documentation stays current without a separate command.


Important Rules

  • Never skip tests. If tests fail, stop.
  • Never skip the pre-landing review. If checklist.md is unreadable, stop.
  • Never force push. Use regular git push only.
  • Never ask for trivial confirmations (e.g., "ready to push?", "create PR?"). DO stop for: version bumps (MINOR/MAJOR), pre-landing review findings (ASK items), and Codex structured review [P1] findings (large diffs only).
  • Always use the 4-digit version format from the VERSION file.
  • Date format in CHANGELOG: YYYY-MM-DD
  • Split commits for bisectability — each commit = one logical change.
  • TODOS.md completion detection must be conservative. Only mark items as completed when the diff clearly shows the work is done.
  • Use Greptile reply templates from greptile-triage.md. Every reply includes evidence (inline diff, code references, re-rank suggestion). Never post vague replies.
  • Never push without fresh verification evidence. If code changed after Step 3 tests, re-run before pushing.
  • Step 3.4 generates coverage tests. They must pass before committing. Never commit failing tests.
  • The goal is: user says /ship, next thing they see is the review + PR URL + auto-synced docs.