---
description: Implement tasks with continuous formal verification
argument-hint: task-id
allowed-tools: Read, Write, Edit, Glob, Grep, Bash, Task
model: opus
---

# FormSpec Implement

You are implementing tasks from spec/tasks.md with continuous verification against the formal specification.

## Steps

### 1. Read context

Read:

- spec/tasks.md
- spec/plan.md
- spec/formal/traceability.md
- spec/formal/System.tla
- Relevant feature design docs (determine which ones from the task's "Implements" field)

### 2. Select task

If $ARGUMENTS specifies a task ID (e.g., T-03) or milestone ID (e.g., M-01): implement that task or all remaining tasks for that milestone. Verify dependencies are complete first — if not, warn the user and suggest implementing dependencies first.

If $ARGUMENTS is empty: find the next task whose dependencies are all complete. If multiple tasks are ready, pick the lowest-numbered one.

If all tasks are complete, report that and suggest running /formspec.docs.7 to generate documentation, then /formspec.check.8 to validate consistency.
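The selection rules above can be sketched as follows. The `Task` fields and the in-memory structure are illustrative assumptions; the real data lives in spec/tasks.md:

```python
# Sketch of the task-selection rules. Task fields are assumptions
# standing in for the structure parsed from spec/tasks.md.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str                                  # e.g. "T-03"
    milestone: str                           # e.g. "M-01"
    deps: list[str] = field(default_factory=list)
    complete: bool = False

def ready(task: Task, tasks: dict[str, Task]) -> bool:
    """A task is ready when it is incomplete and every dependency is complete."""
    return not task.complete and all(tasks[d].complete for d in task.deps)

def select(arg: str, tasks: dict[str, Task]) -> list[Task]:
    if arg.startswith("T-"):                 # explicit task ID
        return [tasks[arg]]
    if arg.startswith("M-"):                 # all remaining tasks in the milestone
        return [t for t in tasks.values()
                if t.milestone == arg and not t.complete]
    # No argument: next ready task, lowest-numbered first.
    candidates = sorted((t for t in tasks.values() if ready(t, tasks)),
                        key=lambda t: t.id)
    return candidates[:1]
```

The dependency check in `ready` is what backs the warning in the explicit-ID case: if `select` returns a task whose `deps` are not all complete, the command should surface that to the user rather than proceed.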

### 3. Implement the task

Read the task's full specification from spec/tasks.md.

Follow TDD:

  1. Write unit tests first, derived from the task's acceptance criteria. These tests verify the component's contract in isolation — they are not TLA+ property checks, they are component-level correctness checks.
  2. Write the implementation.
  3. Run unit tests and iterate until passing.

Unit tests at this level serve two purposes: they catch implementation bugs early (before dependent tasks build on a broken foundation), and they force component boundaries to remain clear and well-defined, which makes later integration smoother.
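As a sketch, a test derived from a hypothetical acceptance criterion ("items are returned in FIFO order") might look like this; `TaskQueue` and the criterion are invented for illustration:

```python
# Illustrative contract test derived from a hypothetical acceptance
# criterion: "dequeue returns items in FIFO order". It checks the
# component in isolation; it is not a TLA+ property check.
from collections import deque

class TaskQueue:
    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        return self._items.popleft()

def test_fifo_order():
    q = TaskQueue()
    for i in range(3):
        q.enqueue(i)
    assert [q.dequeue() for _ in range(3)] == [0, 1, 2]
```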

a. Implement the code following the plan's architecture and technology choices. When the task involves a component that exposes or consumes an interface, refer to the Interface Contract section in the relevant design doc for the concrete schemas, wire formats, and error conventions.

b. Run the task's unit-level verification:

- Unit tests for the component being implemented
- Compile-time checks: ensure code compiles and types check
- Structural guarantees: verify that the data model makes the relevant property violations unrepresentable (no runtime check needed — record the structural reason in the verification matrix instead)
- Static analysis tools specified in the plan (e.g., go vet, cargo clippy)
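A minimal sketch of a structural guarantee (the state names are hypothetical): instead of asserting at runtime that a task is never "verified but not complete", model the state so that combination cannot be constructed at all:

```python
# Hypothetical structural guarantee: a single enum replaces two
# independent booleans (complete, verified), so the illegal combination
# verified=True, complete=False is unrepresentable by construction.
from enum import Enum, auto

class TaskState(Enum):
    PENDING = auto()
    IN_PROGRESS = auto()
    COMPLETE = auto()
    VERIFIED = auto()   # reachable only from COMPLETE

def can_verify(state: TaskState) -> bool:
    """Verification is only offered for completed, not-yet-verified tasks."""
    return state is TaskState.COMPLETE
```

In the verification matrix, such a property would be recorded with the structural reason ("state is a single enum; the violating combination has no representation") rather than a test run.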

c. Handle failures:

If the failure is a code bug (the unit test caught an implementation error):

- Diagnose and fix. This is normal TDD flow.

If the failure reveals an inconsistency with the TLA+ spec (the implementation cannot satisfy a formal property as specified, not due to a simple bug but due to a fundamental mismatch):

- STOP
- Explain to the developer:
  - Which TLA+ property cannot be satisfied
  - Why the implementation cannot satisfy it (what real-world constraint prevents it)
  - Whether this indicates the TLA+ model is too idealized or the design needs revision
- Do not continue until the developer decides how to resolve

d. Mark task complete:

After unit tests pass, mark the task complete in spec/tasks.md (e.g., append ✓ COMPLETE to the title).

e. Continue to next task:

After completing a task, if the next task's dependencies are all met, continue implementing it without stopping. Keep going until either:

- A unit test failure or spec inconsistency requires developer input
- A verification milestone is reached (all of its tasks are now complete)
- No more tasks are ready (unmet dependencies)

### 4. Verify milestones

After completing a task, check whether any verification milestones in spec/tasks.md have all their tasks complete. For each newly-completed milestone:

a. Run the milestone's verification: If spec/hooks/run-checks exists and is executable, delegate the milestone's verification to it (see /formspec.init for the hook interface). The request should include the milestone's properties, verification methods, and hints (e.g., the test commands from the plan). The external system runs the checks independently and returns results.

If the hook does not exist or fails, run the milestone's verification method locally (e.g., integration tests, property-based tests, recovery tests). These are the tests that validate TLA+ properties end-to-end across multiple components.

b. Handle milestone verification failures: If the verification fails (whether reported by the hook or observed locally), diagnose whether the issue is in one of the component tasks (go back and fix it) or a cross-component interaction that wasn't caught by unit tests. If it reveals a spec inconsistency, STOP as described above.

c. Update the verification matrix: If the hook succeeded, it is the record-keeper — skip the markdown table update. If checks ran locally (no hook, or hook failed), update spec/verification/matrix.md directly. Create the file if it doesn't exist, using this format:

```markdown
# Verification Matrix

| Source Document | Requirement ID | TLA+ Property | Verification Tool | Status | Last Checked |
|----------------|----------------|---------------|-------------------|--------|--------------|
| features/YYYY-MM-DD-feature/design.md | US-1 | PropertyName | tool name | PASS | YYYY-MM-DD |
```

Update existing rows or add new ones. Set status to PASS with today's date. Only record a property as PASS when its milestone has been reached and the verification method has actually been executed.
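The upsert described above can be sketched as a small helper; keying rows on Requirement ID plus TLA+ Property and the deliberately naive line matching are assumptions for illustration:

```python
# Illustrative upsert for spec/verification/matrix.md, keyed on
# (Requirement ID, TLA+ Property). Parsing is deliberately naive.
import datetime
import pathlib

HEADER = (
    "# Verification Matrix\n\n"
    "| Source Document | Requirement ID | TLA+ Property | Verification Tool | Status | Last Checked |\n"
    "|----------------|----------------|---------------|-------------------|--------|--------------|\n"
)

def record_pass(path: str, source: str, req_id: str, prop: str, tool: str):
    today = datetime.date.today().isoformat()
    row = f"| {source} | {req_id} | {prop} | {tool} | PASS | {today} |"
    p = pathlib.Path(path)
    text = p.read_text() if p.exists() else HEADER   # create file if missing
    lines = text.splitlines()
    key = f"| {req_id} | {prop} |"
    for i, line in enumerate(lines):
        if key in line:                  # existing row: update in place
            lines[i] = row
            break
    else:                                # new property: append a row
        lines.append(row)
    p.write_text("\n".join(lines) + "\n")
```

Note this helper only writes PASS rows, matching the rule above: record a property only after its milestone is reached and the verification method has actually run.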

### 5. Cross-check (periodic)

After completing a milestone, briefly check whether the changes could have affected any previously verified properties. If there's a risk of regression, advise running /formspec.check.8 or /formspec.verify.3.