---
description: Implement tasks with continuous formal verification
argument-hint: <feature-name> [task-id]
allowed-tools: [Read, Write, Edit, Glob, Grep, Bash, Task]
model: opus
---

# FormSpec Implement

You are implementing tasks from a feature's `tasks.md` with continuous verification against the formal specification.

## Steps

### 1. Read context

**$ARGUMENTS must provide at least a feature name.** It may optionally include a task ID or milestone ID (e.g., `user-auth T-03` or `user-auth M-01`). If the feature name is missing, ask the user for it.

**Locate the feature directory**: Glob for `spec/features/*-<feature-name>/tasks.md`. If no match is found, tell the user to run `/formspec.tasks.5 <feature-name>` first and stop.
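
The lookup above is done with the Glob tool, but its logic can be sketched in shell. This is a hypothetical illustration only; the date-prefixed directory name in the usage note is an assumption about the layout, not something this document prescribes.

```shell
# Illustrative sketch of the feature-directory lookup. The real lookup
# uses the Glob tool; the directory naming here is an assumption.
find_feature_dir() {
  local name="$1" path
  for path in spec/features/*-"$name"/tasks.md; do
    if [ -e "$path" ]; then
      # Report the enclosing feature directory, not the tasks.md file
      dirname "$path"
      return 0
    fi
  done
  echo "No tasks.md found; run /formspec.tasks.5 $name first" >&2
  return 1
}
```

Given a directory such as `spec/features/2025-06-01-user-auth/`, `find_feature_dir user-auth` would print that path; with no match it fails with the "run `/formspec.tasks.5` first" message.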

Read:

- The feature's `tasks.md`
- The feature's `plan.md`
- The feature's `design.md`
- `spec/formal/traceability.md`
- `spec/formal/System.tla`

### 2. Select task

If $ARGUMENTS includes a task ID (e.g., `T-03`) or milestone ID (e.g., `M-01`): implement that task or all remaining tasks for that milestone. Verify that its dependencies are complete first; if not, warn the user and suggest implementing the dependencies first.

If no task ID is specified: find the next task in this feature's `tasks.md` whose dependencies are all complete. If multiple tasks are ready, pick the lowest-numbered one.

If all tasks are complete, report that and suggest running `/formspec.docs.7` to generate documentation, then `/formspec.check.8` to validate consistency.

### 3. Implement the task

Read the task's full specification from the feature's `tasks.md`.

Follow TDD:

1. Write unit tests first, derived from the task's acceptance criteria. These tests verify the component's contract in isolation: they are not TLA+ property checks; they are component-level correctness checks.
2. Write the implementation.
3. Run unit tests and iterate until passing.

Unit tests at this level serve two purposes: they catch implementation bugs early (before dependent tasks build on a broken foundation), and they force component boundaries to remain clear and well-defined, which makes later integration smoother.

**a. Implement the code** following the feature plan's architecture and technology choices. When the task involves a component that exposes or consumes an interface, refer to the Interface Contract section in the feature's design doc for the concrete schemas, wire formats, and error conventions.

**b. Run the task's unit-level verification**:

- Unit tests for the component being implemented
- Compile-time checks: ensure the code compiles and type-checks
- Structural guarantees: verify that the data model makes the property violation unrepresentable (no runtime check needed; record the structural reason in the verification matrix instead)
- Static analysis tools specified in the plan (e.g., `go vet`, `cargo clippy`)
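
The sweep above is a fail-fast sequence: later checks are meaningless on a broken build. A minimal, hypothetical helper that captures this shape (the command strings are placeholders; the real commands come from the feature plan):

```shell
# Run each verification command in order, stopping at the first failure.
# Illustrative only; actual commands are taken from the feature plan.
run_task_checks() {
  local cmd
  for cmd in "$@"; do
    echo "running: $cmd"
    if ! eval "$cmd"; then
      echo "FAILED: $cmd" >&2
      return 1
    fi
  done
  echo "all checks passed"
}
```

For example, `run_task_checks "go build ./..." "go vet ./..." "go test ./..."` would stop at the first failing stage rather than reporting misleading results from the later ones.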

**c. Handle failures**:

If the failure is a **code bug** (the unit test caught an implementation error):

- Diagnose and fix. This is normal TDD flow.

If the failure reveals an **inconsistency with the TLA+ spec** (the implementation cannot satisfy a formal property as specified, not due to a simple bug but due to a fundamental mismatch):

- **STOP**
- Explain to the developer:
  - Which TLA+ property cannot be satisfied
  - Why the implementation cannot satisfy it (what real-world constraint prevents it)
  - Whether this indicates the TLA+ model is too idealized or the design needs revision
- Do not continue until the developer decides how to resolve it

**d. Mark task complete**:

After unit tests pass, mark the task complete in the feature's `tasks.md` (e.g., append `✓ COMPLETE` to the title).

**e. Continue to next task**:

After completing a task, present a brief summary of what was implemented and the test results. Then ask the developer whether to continue to the next task or pause for review. If the developer chooses to continue (or has previously indicated they want uninterrupted execution), proceed to the next task whose dependencies are all met.

Stop regardless of the developer's preference when:

- A unit test failure or spec inconsistency requires developer input
- A verification milestone is reached (all of its tasks are now complete)
- No more tasks are ready (unmet dependencies)

### 4. Verify milestones

After completing a task, check whether any verification milestones in the feature's `tasks.md` have all their tasks complete. For each newly completed milestone:

**a. Run the milestone's verification**: If `spec/hooks/run-checks` exists and is executable, delegate the milestone's verification to it (see `/formspec.init` for the hook interface). The request should include the milestone's properties, verification methods, and hints (e.g., the test commands from the plan). The external system runs the checks independently and returns results.

If the hook does not exist or fails, run the milestone's verification method locally (e.g., integration tests, property-based tests, recovery tests). These are the tests that validate TLA+ properties end-to-end across multiple components.
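
One plausible shape for this handoff, assuming the hook reads its request on stdin. The authoritative contract is whatever `/formspec.init` defines, so treat the payload format and the return-code convention here as assumptions:

```shell
# Delegate milestone verification to the external hook when present.
# The stdin-payload convention is an assumption; /formspec.init defines
# the real interface. Returns the hook's exit status, or 2 when there
# is no usable hook (the caller then runs the checks locally).
run_milestone_hook() {
  local payload="$1"
  if [ -x spec/hooks/run-checks ]; then
    printf '%s' "$payload" | spec/hooks/run-checks
  else
    return 2
  fi
}
```

The caller would serialize the milestone's properties, verification methods, and hints into the payload, and treat the no-hook status as the signal to fall back to local verification.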

**b. Handle milestone verification failures**: If the verification fails (whether reported by the hook or observed locally), diagnose whether the issue is in one of the component tasks (go back and fix it) or in a cross-component interaction that the unit tests did not catch. If it reveals a spec inconsistency, STOP as described above.

**c. Update the verification matrix**: If the hook succeeded, it is the record-keeper; skip the markdown table update. If checks ran locally (no hook, or the hook failed), update `spec/verification/matrix.md` directly. Create the file if it doesn't exist, using this format:

```markdown
# Verification Matrix

| Source Document | Requirement ID | TLA+ Property | Verification Tool | Status | Last Checked |
|----------------|----------------|---------------|-------------------|--------|--------------|
| features/YYYY-MM-DD-feature/design.md | US-1 | PropertyName | tool name | PASS | YYYY-MM-DD |
```

Update existing rows or add new ones. Set the status to PASS with today's date. Only record a property as PASS when its milestone has been reached and the verification method has actually been executed.
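
A local matrix update amounts to dropping any stale row for the property and appending a fresh PASS row dated today. A hedged sketch, assuming the six-column layout shown above (the column values in the test are placeholders):

```shell
# Record a PASS for one property: remove any existing row mentioning
# that property, then append a fresh one dated today. Illustrative;
# assumes the six-column matrix layout shown above.
record_pass() {
  local file="$1" source_doc="$2" req="$3" prop="$4" tool="$5"
  local today
  today=$(date +%Y-%m-%d)
  if [ -f "$file" ]; then
    # grep exits nonzero when every row matched; an empty file is fine
    grep -v "| $prop |" "$file" > "$file.tmp" || true
    mv "$file.tmp" "$file"
  fi
  printf '| %s | %s | %s | %s | PASS | %s |\n' \
    "$source_doc" "$req" "$prop" "$tool" "$today" >> "$file"
}
```

In practice the in-place table edit is done with the Edit tool; the point of the sketch is the replace-then-append discipline, which keeps at most one row per property.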

### 5. Cross-check (periodic)

After completing a milestone, briefly check whether the changes could have affected any previously verified properties. If there is a risk of regression, advise running `/formspec.check.8` or `/formspec.verify.3`.