Compare commits


5 commits

13 changed files with 1087 additions and 3 deletions

@@ -0,0 +1,134 @@
---
description: Cross-artifact consistency check across all formspec artifacts
allowed-tools: [Read, Glob, Grep, Bash, Task]
---
# FormSpec Check
You are performing a comprehensive cross-artifact consistency analysis. This command is **read-only** for spec artifacts — it runs verification tools but does not modify spec files. It reports findings for the developer to act on.
## Steps
### 1. Read all artifacts
Read every artifact that exists (skip gracefully if some don't exist yet):
- `spec/system.md`
- All feature design docs: Glob for `spec/features/*/design.md`
- `spec/formal/System.tla` and `spec/formal/System.cfg`
- `spec/formal/traceability.md`
- `spec/plan.md`
- `spec/tasks.md`
- `spec/verification/matrix.md`
Also scan:
- Implementation source code (Glob for `**/*.rs`, `**/*.go`, `**/*.fst`, `**/*.py`, `**/*.ts`, `**/*.java`, etc., excluding `spec/` and common dependency directories like `node_modules/`, `target/`, `vendor/`)
- Test files (Glob for `**/*_test.go`, `**/test_*.py`, `**/*_test.rs`, `**/*.test.ts`, `**/*.spec.ts`, etc.)
- User documentation (`README.md`, `docs/**/*.md`, excluding `spec/`)
### 2. Run consistency checks
Perform each of these checks and record findings:
**Design <-> TLA+ (if both exist)**:
- Every formalizable requirement (US-N, SC-N, architectural constraints, security considerations) in the design docs has a corresponding TLA+ property in traceability.md
- No TLA+ properties exist in System.tla without a corresponding entry in traceability.md
- No orphaned entries in traceability.md (property exists in table but not in System.tla)
**TLA+ <-> Implementation (if plan.md and matrix.md exist)**:
- Every TLA+ property has at least one implementation-level verification method in plan.md
- Every verification method has been executed and recorded in matrix.md
- No FAIL or PENDING entries in matrix.md that lack explanation
- Verification matrix entries correspond to completed milestones, not individual tasks — flag any property marked PASS whose milestone tasks are not all complete
**Design <-> Implementation (if tasks.md exists)**:
- Every user story (US-N) from every feature design doc is covered by at least one task
- Every task is marked complete or has a documented reason for incompleteness
- No tasks reference nonexistent user stories or properties
- Every TLA+ property with a runtime verification method has a corresponding milestone
- Every milestone's task list includes all the components needed to execute its verification method (e.g., a crash recovery milestone must include tasks for persistence, the coordinator, and the recovery test)
- No milestone claims verification of a property that its tasks don't collectively cover
**Interface Contract <-> Implementation (if design docs define Interface Contracts)**:
- Every schema defined in a design doc's Interface Contract section has a corresponding implementation (struct definition, protobuf message, CLI argument parser, etc.)
- Implementation field names and types match the contract (reasonable naming convention differences are fine — `request_id` vs `RequestId` — but semantic mismatches are not)
- Error conventions in the implementation match what the contract specifies (status codes, error shapes, exit codes)
- Wire format matches (if the contract says newline-delimited text, the implementation should not use length-prefixed binary)
**Terminology consistency**:
- Key domain terms used in design docs match naming in TLA+ variables/operators
- Implementation code uses consistent naming with the TLA+ spec (where reasonable — snake_case vs CamelCase differences are fine, semantic mismatches are not)
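The "convention differences are fine, semantic mismatches are not" rule can be applied mechanically by normalizing identifiers before comparing them. This is a sketch of that heuristic, not part of the command's required tooling; the helper names are illustrative:

```python
import re

def normalize(name: str) -> str:
    """Collapse snake_case, kebab-case, and CamelCase into one canonical form."""
    # Insert a separator at CamelCase boundaries, then drop all separators and lowercase.
    with_breaks = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)
    return re.sub(r"[_\-\s]+", "", with_breaks).lower()

def same_term(a: str, b: str) -> bool:
    """True when two identifiers differ only by naming convention."""
    return normalize(a) == normalize(b)

# Convention differences are fine:
assert same_term("request_id", "RequestId")
assert same_term("lock-holder", "lockHolder")
# Semantic mismatches should be flagged:
assert not same_term("request_id", "SessionId")
```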
**Orphaned artifacts**:
- No design requirements without traceability entries
- No TLA+ properties without implementation verification
- No implementation code files that don't trace back to any task (heuristic — flag files that seem substantial but aren't referenced)
**Tests <-> Specification (if test files exist)**:
- Every TLA+ property with a runtime verification method (from plan.md's Verification Strategy) has corresponding test cases in the codebase
- Test names or comments reference the TLA+ property or design requirement they verify
- No test files exist that don't trace back to any task or verification method (heuristic — flag test files that seem unrelated to the spec)
- Property-based tests (if specified in the plan) exist and cover the properties they claim to verify
- Properties claimed as "structural guarantee" in the verification matrix are actually enforced by the data model — verify that the implementation's types make violation unrepresentable, and that no runtime test exists that claims to check the property but is actually a no-op (a test that can never fail provides no signal and misrepresents the verification method)
- Properties claimed as "compile-time guarantee" in the verification matrix are actually enforced by the language used for that component. In polyglot projects, verify that the component responsible for the property is written in the language whose type system or ownership model is claimed to enforce it. For example:
  - If the matrix claims Rust ownership enforces a memory safety property, verify the relevant code is actually in Rust, not in a Go or Python wrapper that bypasses the guarantee.
  - If the matrix claims F* dependent types prove a bound, verify an `.fst` module exists that covers the claimed property.
  - Flag cases where a compile-time guarantee is claimed but the enforcement boundary is incomplete — e.g., an FFI call, an `unsafe` block, a type assertion, or a serialization boundary that re-introduces the class of error the type system was supposed to prevent.
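As a concrete illustration of the no-op-test failure mode described above, here is a minimal sketch (all types and names are hypothetical, invented for this example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BoundedQueue:
    """A structural guarantee: "queue over capacity" is unrepresentable
    only because construction enforces it in enqueue()."""
    capacity: int
    items: tuple = ()

    def enqueue(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue full")
        return BoundedQueue(self.capacity, self.items + (item,))

def test_noop():
    # BAD: a tautology that can never fail — provides no signal about the bound.
    q = BoundedQueue(capacity=2)
    assert q.capacity == 2

def test_structural():
    # GOOD: exercises the exact boundary the data model claims to enforce.
    q = BoundedQueue(capacity=1).enqueue("a")
    try:
        q.enqueue("b")
        assert False, "capacity bound not enforced"
    except OverflowError:
        pass
```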
**Documentation <-> Specification (if user docs exist)**:
- Every user story marked as user-facing has corresponding documentation
- Documentation traceability comments (`<!-- spec: ... -->`) reference valid design documents and requirement IDs
- No behavioral claims in documentation contradict the design documents or formal properties
- Behavioral guarantees stated in docs correspond to verified TLA+ properties (cross-reference traceability.md — a documented guarantee should trace to a "checked" property)
- Documentation is not stale: if design docs are newer than documentation files, flag potential drift
**Staleness**:
- Compare file modification times: if design docs are newer than System.tla, flag potential drift
- If System.tla is newer than verification/matrix.md, flag that verification may be stale
- If design docs are newer than documentation files, flag potential documentation drift
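The staleness heuristic reduces to a modification-time comparison. A sketch (the function name is illustrative; note that mtimes are a heuristic, since fresh checkouts can reset them):

```python
import os

def is_stale(upstream: str, downstream: str) -> bool:
    """Flag potential drift: upstream artifact modified after the artifact derived from it."""
    if not (os.path.exists(upstream) and os.path.exists(downstream)):
        return False  # missing artifacts are reported by other checks, not this one
    return os.path.getmtime(upstream) > os.path.getmtime(downstream)

# e.g. is_stale("spec/features/2026-02-10-user-auth/design.md", "spec/formal/System.tla")
```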
### 3. Run verification
If `spec/hooks/run-checks` exists and is executable, delegate all verification (TLC and language-specific tools) to it (see `/formspec.init` for the hook interface). Build the request from `spec/formal/traceability.md` (TLA+ properties), `spec/plan.md` (verification methods), and detected source files (language-specific checks). The external system runs all checks independently and returns results.
If the hook does not exist or fails, run checks locally:
**TLC** (if `spec/formal/System.tla` and `spec/formal/System.cfg` exist):
```bash
tlc spec/formal/System.tla -config spec/formal/System.cfg -workers auto -metadir /tmp/tlc-states -cleanup
```
Record pass/fail for each property.
**Language-specific verification tools** (if implementation exists):
Detect the project's language(s) from source files and run appropriate tools:
- **Rust**: `cargo test`, `cargo clippy`
- **Go**: `go test ./...`, `go vet ./...`
- **F***: `fstar.exe` on `.fst` files
- **Python**: `pytest` if tests exist
- **TypeScript/JavaScript**: `npm test` if package.json exists
Only run tools that are available and applicable. Skip gracefully if tooling isn't set up.
### 4. Report
Produce a structured report organized by severity:
```
## CRITICAL
[TLA+ violations, verification failures, design-spec mismatches, documentation contradicting the spec, tests failing]
## HIGH
[Missing traceability links, unverified properties, stale artifacts after design changes, documented guarantees not backed by verified properties, missing tests for runtime-verified properties]
## MEDIUM
[Terminology inconsistencies, missing task coverage, partial traceability, documentation missing spec traceability comments, untraceable test files]
## LOW
[Minor staleness, documentation gaps for non-critical features, style issues]
```
Each finding should include:
- What the issue is
- Which artifacts are affected
- Suggested resolution (which formspec command to run, or what manual action to take)
If no issues are found at a severity level, omit that section. If everything is clean, say so clearly.

@@ -0,0 +1,137 @@
---
description: Create a detailed feature design document
argument-hint: <feature-name>
allowed-tools: [Read, Write, Edit, Glob, Grep, Task]
---
# FormSpec Design
You are creating a detailed natural-language design document for a single feature within a formally-specified system.
## Steps
### 1. Validate preconditions
**spec/system.md must exist.** Check with Glob. If it does not exist, tell the user to run `/formspec.init.0` first and stop.
**$ARGUMENTS must provide a feature name.** If empty, ask the user for a feature name. The feature name should be a short kebab-case identifier (e.g., `user-auth`, `billing`, `rate-limiting`).
### 2. Read existing context
Read these artifacts to understand the current system state:
- `spec/system.md`
- All existing feature design docs under `spec/features/` (use Glob for `spec/features/*/design.md`)
Understanding existing features is critical — the new feature must be designed in the context of what already exists, and cross-feature impacts must be identified.
### 3. Gather requirements
Through conversation with the developer, establish what this feature does and why. Ask clarifying questions. Do not proceed to writing until you have a clear understanding of:
- The feature's purpose and scope
- How it interacts with existing features (if any)
- Who uses it and how
- What should NOT happen (failure modes, abuse scenarios)
### 4. Generate the design document
Create `spec/features/YYYY-MM-DD-<feature-name>/design.md` where YYYY-MM-DD is today's date.
The document must have these sections, in this order:
```markdown
# Feature: <Feature Name>
## Overview
[What this feature does, why it exists, and how it fits into the broader system. Reference spec/system.md architectural constraints where relevant.]
## User Stories
[Numbered US-1, US-2, etc. Each story follows Given/When/Then format.]
US-1: [Title]
- **Given**: [precondition]
- **When**: [action]
- **Then**: [expected outcome]
## Pathological and Adversarial Uses
[What happens when users or attackers misuse this feature. Explicitly consider:]
- [Abuse scenarios]
- [Race conditions]
- [Resource exhaustion]
- [Privilege escalation]
- [Any domain-specific attack vectors]
## Edge Cases and Failure Modes
[Boundary conditions, partial failures, timeout behavior, data corruption scenarios, unexpected input.]
## Success Criteria
[Numbered SC-1, SC-2, etc. Measurable outcomes — quantified where possible.]
SC-1: [Criterion with measurable target]
## Security Considerations
[Authentication, authorization, input validation, data exposure risks. Even if the feature seems non-security-critical, consider what an attacker could do with it.]
## Performance Constraints
[Latency bounds, throughput requirements, resource limits. "None" is acceptable if genuinely unconstrained, but think carefully first.]
## Interface Contract
[How external actors interact with this feature. Include this section when the feature exposes an interface that other components, users, or systems consume. Omit it for purely internal features with no external surface.]
[For each interface (CLI, API, wire protocol, library, etc.), describe:]
- [Transport and framing (e.g., TCP with newline-delimited text, HTTP REST, gRPC, function calls)]
- [Request/response schemas — the concrete structs, messages, or command-line arguments that are passed. Use the project's natural notation: Go struct literals, JSON schemas, protobuf definitions, CLI usage strings, etc.]
- [Response types — success, error, and edge-case responses with their exact shapes]
- [Error conventions — how errors are surfaced (status codes, error fields, exit codes)]
[These schemas are the source of truth for documentation and implementation. The TLA+ specification remains abstract — it models the state transitions and properties, pulling in only those fields directly needed to reason about system behavior. The interface contract captures the concrete surface that the abstract model does not.]
## Dependencies
[Which other features this interacts with. For each dependency, describe the nature of the interaction and what assumptions are made.]
## Open Questions
[Anything that remains unclear. Tag each with [NEEDS CLARIFICATION]. If there are no open questions, omit this section entirely rather than writing "None".]
## Amendments
[Do NOT create this section in new design docs. It is added by `/formspec.formalize.2` when TLC counterexamples force design changes. If this section already exists in a design doc you are editing, preserve it — amendments are permanent records of corrections discovered through formal verification. Only remove an amendment if the requirement it annotates is itself removed through a larger redesign, or changed significantly enough that a plain rewrite is simpler than preserving the amendment history.]
```
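As a worked illustration of the Interface Contract section, a filled-in entry might look like the following (the feature, commands, and response shapes are invented for this example):

```markdown
## Interface Contract
### CLI: `ratelimit check`
- **Transport**: local process invocation; newline-delimited text on stdout
- **Request**: `ratelimit check --key <client-id> --cost <n>` (`--cost` defaults to 1)
- **Success response**: `ALLOW remaining=<n>` on stdout, exit code 0
- **Error responses**: `DENY retry_after_ms=<n>` on stdout with exit code 1;
  malformed arguments print a usage string to stderr with exit code 2
```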
**Quality bar**: The design must be specific enough that a different developer could implement it without further conversation. Vague statements like "should be fast" or "handle errors gracefully" are not acceptable — quantify or describe the specific behavior.
### 5. Update spec/system.md
Make three updates to `spec/system.md`:
**Feature Index**: Add an entry linking to the new design document:
```
- [Feature Name](features/YYYY-MM-DD-feature-name/design.md) — one-line summary
```
**Changelog**: Add a dated entry describing what this feature adds, modifies, or deprecates:
```
### YYYY-MM-DD — Feature Name
- Added: [what new capabilities this introduces]
- Modified: [what existing behavior changes, if any]
- Deprecated: [what this replaces, if anything]
```
**Cross-feature impacts**: If this feature impacts existing features (deprecates user stories, changes behavior, adds new constraints), document that explicitly in the Changelog. For example:
```
- Deprecates US-3 of 2026-02-10-user-auth: API key authentication replaced by OAuth
```
Also update the affected feature's design doc if the impact is significant enough to warrant it (add a note in its Dependencies section).
### 6. Report
Summarize what was created, highlight any cross-feature impacts, and note any open questions that need resolution before formalization. Suggest proceeding with `/formspec.formalize.2` to translate the design into a TLA+ specification.

@@ -0,0 +1,77 @@
---
description: Generate user documentation derived from the specification
allowed-tools: [Read, Write, Edit, Glob, Grep, Task]
---
# FormSpec Docs
You are generating user-facing documentation derived from the system's design documents and formal specification. Documentation is a downstream artifact — it must be consistent with the spec, and link back to design documents where appropriate.
## Steps
### 1. Read all artifacts
Read:
- `spec/system.md`
- All feature design docs: Glob for `spec/features/*/design.md`
- `spec/formal/traceability.md`
- `spec/plan.md`
- Existing documentation in the project root and `docs/` directory (Glob for `*.md`, `docs/**/*.md`, excluding `spec/`)
All spec artifacts must exist. If `spec/system.md` or feature design docs are missing, direct the user to the prerequisite commands (`/formspec.init.0`, `/formspec.design.1`) and stop.
### 2. Determine documentation scope
From the design artifacts, identify what needs to be documented:
- System-level overview and purpose (from `spec/system.md`)
- Per-feature user-facing behavior (from feature design docs — user stories, success criteria)
- Configuration, setup, and usage (from plan.md — technology choices, architecture)
- API contracts and interfaces (from design docs — Interface Contract sections are the primary source; user stories describe the behavior behind those interfaces)
- Behavioral guarantees the system makes (from traceability — which formal properties translate to user-visible promises)
Not everything in the design docs belongs in user documentation. Filter out:
- Internal implementation details that users don't interact with
- Pathological/adversarial use cases (these inform testing, not docs)
- TLA+ specifics (the formal spec is developer-facing, not user-facing)
### 3. Generate or update documentation
Create or update documentation files. The exact structure depends on the project — adapt to what already exists. Typical outputs:
**README.md** (if it doesn't exist or needs updating):
- Project description derived from `spec/system.md` System Overview
- Setup and usage instructions derived from `spec/plan.md` technology choices
- Feature descriptions derived from feature design docs
**docs/** (for projects complex enough to warrant separate pages):
- Per-feature documentation derived from the feature's user stories and success criteria
- API reference derived from the Interface Contract sections in design docs (schemas, request/response formats, error conventions)
- Configuration reference derived from architectural constraints and plan
### 4. Add spec traceability
Documentation should link back to design documents where it describes feature behavior. Use inline references so that readers (and `/formspec.check.8`) can trace documentation claims to their source:
```markdown
<!-- spec: features/2026-02-10-user-auth/design.md#US-1 -->
```
These comments serve two purposes:
- Readers can find the detailed design behind a documented behavior
- The check command can verify documentation hasn't drifted from the spec
Every documented behavioral claim (what the system does, guarantees it makes, constraints it enforces) should have a traceability comment linking to the design requirement it derives from.
### 5. Validate documentation against spec
Before finishing, cross-check:
- Every user story marked as user-facing in the design docs has corresponding documentation
- No documentation claims contradict the design documents or formal properties
- Behavioral guarantees stated in docs are actually verified (cross-reference with traceability.md)
- Links to design documents are valid (files exist, requirement IDs match)
Flag any discrepancies found.
### 6. Report
Summarize what documentation was created or updated, which design documents it traces to, and any gaps where documentation could not be generated (e.g., design doc lacks enough detail for user-facing docs). Suggest running `/formspec.check.8` to validate full cross-artifact consistency.

@@ -0,0 +1,209 @@
---
description: Formalize the system design into a TLA+ specification and verify it
allowed-tools: [Read, Write, Edit, Glob, Grep, Bash, Task]
model: opus
---
# FormSpec Formalize
You are translating the system's natural-language design into a TLA+ formal specification, then running the TLC model checker to verify it. This is the core of the formspec workflow and demands precision.
There are two failure modes that you must handle differently:
1. **Meaning drift**: The TLA+ does not faithfully represent the design (your error to fix).
2. **Design flaw**: The TLA+ faithfully represents the design, but TLC finds a property violation (the developer must decide how to resolve).
## Phase 1 — Read all design artifacts
Read every artifact that exists:
- `spec/system.md`
- All feature design docs: Glob for `spec/features/*/design.md`
- `spec/formal/System.tla` (may not exist yet)
- `spec/formal/System.cfg` (may not exist yet)
- `spec/formal/traceability.md` (may not exist yet)
Extract a comprehensive list of all requirements across all documents:
- User stories (US-N)
- Success criteria (SC-N)
- Architectural constraints
- Security considerations
- Pathological/adversarial use cases
- Performance constraints that can be modeled
Determine whether this is a fresh creation or an update to an existing specification.
## Phase 2 — Write or update the TLA+ specification
### If System.tla does not exist: create from scratch
Write `spec/formal/System.tla` modeling the full system. The spec must:
- **Model all actors** described in the design as processes or roles
- **Model all states and transitions** from user stories and system behavior
- **Define invariants** for every success criterion that can be expressed as a state predicate
- **Define temporal properties** for liveness and fairness requirements
- **Include safety properties** that prevent every pathological behavior described in the design docs
- **Include best-practice safety properties** for this type of system, even if not explicitly in the design. Examples:
- Concurrent systems: no deadlocks, no starvation
- Auth systems: no privilege escalation, no backdoor tokens
- Distributed data: no lost updates, eventual consistency where claimed
- Financial systems: conservation of value, no double-spend
- Queue systems: no message loss, no duplicate delivery (if claimed)
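For instance, a best-practice mutual-exclusion invariant and a starvation-freedom property might be written as follows (a sketch; `Procs` and `state` are assumed names from a hypothetical lock model, and `Cardinality` requires `EXTENDS FiniteSets`):

```
\* Safety: at most one process holds the lock at any time.
MutualExclusion == Cardinality({p \in Procs : state[p] = "holding"}) <= 1

\* Liveness: every waiting process eventually acquires the lock.
NoStarvation == \A p \in Procs : state[p] = "waiting" ~> state[p] = "holding"
```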
### If System.tla exists: modify it
Diff the current spec against the new/updated design requirements. Add, modify, or remove states, transitions, invariants, and properties to match the current design. Preserve existing properties that remain valid.
### System.cfg
Write `spec/formal/System.cfg` with:
- `SPECIFICATION Spec` (or whatever the spec's temporal formula is named)
- `CONSTANTS` with small but meaningful values (enough to expose bugs, small enough for tractable checking)
- `INVARIANT` listing all invariants
- `PROPERTY` listing all temporal properties
### Subsystem specs
For standalone subcomponents with clean boundaries (e.g., a rate limiter, a queue, an auth module), create partial specs in `spec/formal/subsystems/` that abstract their communication with the rest of the system. These are optional and should only be created when a component is complex enough to warrant independent verification.
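Schematically, a subsystem spec abstracts its environment as nondeterministic input. A minimal sketch (module, constants, and actions are invented for illustration):

```
---- MODULE RateLimiter ----
EXTENDS Naturals
CONSTANTS Clients, Limit
VARIABLE counts

Init == counts = [c \in Clients |-> 0]

\* Environment abstracted as nondeterminism: any client may request at any time.
Admit(c)  == counts[c] < Limit /\ counts' = [counts EXCEPT ![c] = @ + 1]
Reject(c) == counts[c] >= Limit /\ UNCHANGED counts
Next == \E c \in Clients : Admit(c) \/ Reject(c)

\* The guarantee exported to the main spec in place of the detailed behavior.
NeverExceeds == \A c \in Clients : counts[c] <= Limit
====
```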
## Phase 3 — Write traceability mapping
Create or update `spec/formal/traceability.md` with this table:
```markdown
# Traceability Matrix
| Source Document | Requirement ID | TLA+ Property | Property Type | Status |
|----------------|----------------|---------------|---------------|--------|
| features/YYYY-MM-DD-feature/design.md | US-1 | PropertyName | invariant/temporal/safety/liveness | formalized |
| features/YYYY-MM-DD-feature/design.md | SC-1 | PropertyName | invariant | formalized |
| system.md | Best practice: no deadlock | NoDeadlock | safety | formalized |
```
**Status values**: `formalized` (written but not yet checked), `checked` (TLC passed), `violated` (TLC found counterexample), `not-yet-formalized` (requirement identified but no TLA+ property yet — must include justification).
Every requirement from Phase 1 must appear in this table. If a requirement cannot be formalized (e.g., "UI should feel responsive"), mark it `not-yet-formalized` with a note explaining why.
## Phase 4 — Run TLC model checker
If `spec/hooks/run-checks` exists and is executable, delegate TLC verification to it instead of running TLC locally (see `/formspec.init` for the hook interface) and present the returned results, including any dashboard URLs. If the hook does not exist or fails, run TLC locally:
```bash
tlc spec/formal/System.tla -config spec/formal/System.cfg -workers auto -metadir /tmp/tlc-states -cleanup
```
**Important**: Run this from the project root directory. The `-metadir` flag directs TLC's working state files to a temporary directory, and `-cleanup` removes them after a successful run. This prevents TLC from littering the project with a `states/` directory full of binary artifacts.
**Timeout policy**: Use a 5-minute timeout by default. If TLC has not completed within this limit:
1. **Stop TLC** and report the state space explored so far (states found, distinct states, queue size).
2. **Attempt mitigations** in this order, re-running after each:
- Reduce model constants (fewer processes, smaller domains)
- Enable symmetry sets if the model has interchangeable actors
- Check invariants and temporal properties separately (invariants are cheaper)
- Switch to simulation mode (`-simulate num=10000`) for probabilistic coverage
3. **If mitigations are insufficient**, this is a signal that the system-level spec may be too complex for monolithic checking. See "Complexity management" below — subsystem decomposition is likely the right response.
4. **The timeout may be extended** if there is a specific reason to expect longer runtimes (e.g., a known-large but finite state space that is close to completing, or a liveness property that requires deep exploration). The developer must explicitly approve any timeout beyond 15 minutes.
Parse the TLC output to determine:
- Number of states explored and distinct states
- Whether each invariant and property passed or failed
- Any counterexample traces
- Checking duration
## Phase 5 — Handle results
### All properties pass
If verification was delegated to `spec/hooks/run-checks`, present the returned results, including any dashboard URLs. Update all statuses in `spec/formal/traceability.md` from `formalized` to `checked`. Report success with a summary of what was verified: number of properties checked, state space explored, time taken. Suggest proceeding with `/formspec.plan.4` to create an implementation plan.
### Meaning drift detected (Phase 2 issue)
If during Phase 2 you discover that the design is ambiguous or contradictory, making faithful TLA+ translation impossible:
1. Attempt to resolve the ambiguity by choosing the most natural reading of the design.
2. If the ambiguity cannot be resolved confidently:
- **STOP**
- Present the specific ambiguity to the developer with exact quotes from the design doc
- Explain what the two (or more) possible interpretations are
- Offer to amend the design doc to resolve the ambiguity
- After the developer decides, continue from Phase 2
### Design flaw detected (TLC finds a violation)
If TLC finds a property violation and the TLA+ faithfully represents the design:
**IMMEDIATELY STOP ALL WORK.**
Update `spec/formal/traceability.md` to set the violated property's status to `violated`.
Present to the developer:
1. **Which property was violated**: Name and human-readable description
2. **The counterexample trace**: Simplified to be understandable — show the sequence of states that leads to the violation, translated back into domain language (not raw TLA+ state dumps)
3. **Traceability**: Which design requirement this traces back to, with a link to the source document
4. **Plain-language explanation**: What this means for the system's actual behavior — what could go wrong in production
5. **Classification**: Whether this violates an explicit design requirement or a best-practice property that you added
**Do not proceed** until the developer explicitly acknowledges and decides how to resolve. Possible resolutions:
- Redesign the component (go back to `/formspec.design.1`)
- Add a constraint to the design
- Accept the risk with documented justification
- Modify the TLA+ model if the property was too strict
#### Recording design amendments
When the resolution involves changing the design doc (adding a constraint, modifying a user story, strengthening a precondition), record the amendment in the affected design document:
1. **Edit the requirement text** to reflect the corrected design.
2. **Add an inline reference** next to the changed text: `(see Amendment N)` where N is the next sequential amendment number for that document.
3. **Add an Amendments section** at the bottom of the design doc (create it if it doesn't exist) with a numbered entry explaining what changed and why:
```markdown
## Amendments
1. **US-2, "persists the updated queue"** — Added during formalization.
The original design did not persist the queue on enqueue. TLC found
a DeadlockFreedom counterexample: a crash after enqueue silently
drops the waiter, who then never acquires the lock. The queue must
be persisted to satisfy the architectural constraint on deadlock
freedom under crashes.
```
Each amendment entry must include:
- Which requirement was changed (e.g., US-2, SC-3)
- What the change was (quote or paraphrase the new text)
- The TLA+ property whose counterexample motivated the change
- A plain-language explanation of why the original design was insufficient
Amendments are permanent — they document the design's provenance and record which requirements were non-obvious enough to require formal verification to get right. Only remove an amendment if the requirement it annotates is itself removed through a larger redesign, or changed significantly enough that a plain rewrite is simpler than preserving the amendment history.
**Evaluating amendment volume**: Multiple amendments in a single formalization run are not necessarily a problem — they may indicate that the design was written at a high level (e.g., by an AI agent or during an early brainstorming phase) and the formalization is doing its job of tightening it. The signal to watch for is not the count but the *pattern*:
- **Amendments spread across different parts of the spec** (different user stories, different components, different properties) suggest the design was broadly sketched and TLC is finding the expected gaps. This is normal and productive.
- **Amendments concentrated around the same requirement or interaction** suggest a localized design flaw that incremental patching may not resolve. After 2-3 amendments to the same area, consider whether the component needs a broader redesign via `/formspec.design.1` rather than further patching.
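The concentration heuristic above can be sketched mechanically (the data shape is hypothetical — a list of requirement IDs, one entry per amendment that touched that requirement):

```python
from collections import Counter

def flag_concentration(amended_requirements, threshold=3):
    """Return requirement IDs amended `threshold` or more times — candidates
    for a broader redesign rather than further incremental patching."""
    counts = Counter(amended_requirements)
    return sorted(req for req, n in counts.items() if n >= threshold)

# Spread across the spec: normal and productive.
assert flag_concentration(["US-1", "US-4", "SC-2"]) == []
# Concentrated on one requirement: consider /formspec.design.1 instead.
assert flag_concentration(["US-2", "US-2", "US-2", "SC-1"]) == ["US-2"]
```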
After recording the amendment, restart from Phase 2.
## Complexity management
The TLA+ specification should be as simple as possible while still faithfully modeling the properties that matter. If the spec or the TLC checking becomes unwieldy, that is a signal to simplify — not to push harder.
**Rules of thumb for spec complexity** (these are guidelines, not hard limits — use judgment based on the system being modeled):
- **State variables**: If the main module exceeds ~10-15 state variables, consider whether some of them can be abstracted away or moved to a subsystem spec. Variables that exist solely to make TLC tractable (bookkeeping that doesn't correspond to anything in the design) are a warning sign.
- **Model constants**: Start small (2-3 processes, small domains). Only increase if TLC passes quickly and you want higher confidence. The goal is to find bugs, not to exhaustively verify at production scale — even 2 processes can expose most concurrency bugs.
- **Spec readability**: If the TLA+ spec is hard to read even for someone who understands the design, it is too complex. A spec that cannot be reviewed is not trustworthy.
**When to decompose into subsystems**: The `spec/formal/subsystems/` directory exists precisely for this case. If the full system spec becomes impractical to model-check — TLC times out even with minimal constants, or the spec is too complex to write correctly — partition it:
1. Identify components with clear boundaries and limited interaction with the rest of the system.
2. Create subsystem specs in `spec/formal/subsystems/` that model the component's internal behavior, abstracting its environment (other components) as nondeterministic inputs.
3. Simplify the main `System.tla` by replacing the detailed component behavior with an abstract summary of its guarantees (the properties verified by the subsystem spec).
4. The main spec then checks system-level properties (cross-component interactions, end-to-end guarantees) against the abstract component summaries, at a manageable state space.
This decomposition mirrors how real systems are built and verified — components have contracts, and the system is verified against those contracts rather than against full implementation details.
**Reporting**: After each TLC run, report these complexity metrics alongside the results: number of state variables, number of states explored, number of distinct states, checking duration. These help the developer track whether the spec is growing beyond tractable bounds across formalization iterations.
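A sketch of pulling the state counts out of a saved TLC log for that report; the line format is an assumption based on typical TLC output ("N states generated, M distinct states found"), so adjust the patterns to the installed TLC version:

```shell
# Extract complexity metrics from a saved TLC log (illustrative helper;
# the log line format is assumed, not guaranteed across TLC versions).
tlc_metrics() {
    local log="$1"
    grep -E -o '[0-9]+ states generated|[0-9]+ distinct states' "${log}"
}
```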

@ -0,0 +1,98 @@
---
description: Implement tasks with continuous formal verification
argument-hint: [task-id]
allowed-tools: [Read, Write, Edit, Glob, Grep, Bash, Task]
model: opus
---
# FormSpec Implement
You are implementing tasks from `spec/tasks.md` with continuous verification against the formal specification.
## Steps
### 1. Read context
Read:
- `spec/tasks.md`
- `spec/plan.md`
- `spec/formal/traceability.md`
- `spec/formal/System.tla`
- Relevant feature design docs (determine which ones from the task's "Implements" field)
### 2. Select task
If `$ARGUMENTS` specifies a task ID (e.g., `T-03`) or milestone ID (e.g., `M-01`): implement that task or all remaining tasks for that milestone. Verify that its dependencies are complete — if not, warn the user and suggest implementing them first.
If `$ARGUMENTS` is empty: find the next task whose dependencies are all complete. If multiple tasks are ready, pick the lowest-numbered one.
If all tasks are complete, report that and suggest running `/formspec.docs.7` to generate documentation, then `/formspec.check.8` to validate consistency.
### 3. Implement the task
Read the task's full specification from `spec/tasks.md`.
Follow TDD:
1. Write unit tests first, derived from the task's acceptance criteria. These tests verify the component's contract in isolation — they are not TLA+ property checks, they are component-level correctness checks.
2. Write the implementation.
3. Run unit tests and iterate until passing.
Unit tests at this level serve two purposes: they catch implementation bugs early (before dependent tasks build on a broken foundation), and they force component boundaries to remain clear and well-defined, which makes later integration smoother.
**a. Implement the code** following the plan's architecture and technology choices. When the task involves a component that exposes or consumes an interface, refer to the Interface Contract section in the relevant design doc for the concrete schemas, wire formats, and error conventions.
**b. Run the task's unit-level verification**:
- Unit tests for the component being implemented
- Compile-time checks: ensure code compiles and types check
- Structural guarantees: verify that the data model makes the property violation unrepresentable (no runtime check needed — record the structural reason in the verification matrix instead)
- Static analysis tools specified in the plan (e.g., `go vet`, `cargo clippy`)
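These checks can be run as a single fail-fast gate. A shell sketch with placeholder commands (the function name is illustrative; the real commands come from the plan's Verification Strategy):

```shell
# Run the task's unit-level checks in order, stopping at the first failure.
unit_gate() {
    local cmd
    for cmd in "$@"; do
        echo "running: ${cmd}"
        ${cmd} || { echo "FAILED: ${cmd}"; return 1; }
    done
    echo "all unit-level checks passed"
}

# Example wiring for a Go project (assumption; substitute the plan's tools):
# unit_gate "go build ./..." "go vet ./..." "go test ./..."
```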
**c. Handle failures**:
If the failure is a **code bug** (the unit test caught an implementation error):
- Diagnose and fix. This is normal TDD flow.
If the failure reveals an **inconsistency with the TLA+ spec** (the implementation cannot satisfy a formal property as specified, not due to a simple bug but due to a fundamental mismatch):
- **STOP**
- Explain to the developer:
- Which TLA+ property cannot be satisfied
- Why the implementation cannot satisfy it (what real-world constraint prevents it)
- Whether this indicates the TLA+ model is too idealized or the design needs revision
- Do not continue until the developer decides how to resolve
**d. Mark task complete**:
After unit tests pass, mark the task complete in `spec/tasks.md` (e.g., append `✓ COMPLETE` to the title).
**e. Continue to next task**:
After completing a task, if the next task's dependencies are all met, continue implementing it without stopping. Keep going until either:
- A unit test failure or spec inconsistency requires developer input
- A verification milestone is reached (all of its tasks are now complete)
- No more tasks are ready (unmet dependencies)
### 4. Verify milestones
After completing a task, check whether any verification milestones in `spec/tasks.md` have all their tasks complete. For each newly-completed milestone:
**a. Run the milestone's verification**: If `spec/hooks/run-checks` exists and is executable, delegate the milestone's verification to it (see `/formspec.init` for the hook interface). The request should include the milestone's properties, verification methods, and hints (e.g., the test commands from the plan). The external system runs the checks independently and returns results.
If the hook does not exist or fails, run the milestone's verification method locally (e.g., integration tests, property-based tests, recovery tests). These are the tests that validate TLA+ properties end-to-end across multiple components.
**b. Handle milestone verification failures**: If the verification fails (whether reported by the hook or observed locally), diagnose whether the issue is in one of the component tasks (go back and fix it) or a cross-component interaction that wasn't caught by unit tests. If it reveals a spec inconsistency, STOP as described above.
**c. Update the verification matrix**: If the hook succeeded, it is the record-keeper — skip the markdown table update. If checks ran locally (no hook, or hook failed), update `spec/verification/matrix.md` directly. Create the file if it doesn't exist, using this format:
```markdown
# Verification Matrix
| Source Document | Requirement ID | TLA+ Property | Verification Tool | Status | Last Checked |
|----------------|----------------|---------------|-------------------|--------|--------------|
| features/YYYY-MM-DD-feature/design.md | US-1 | PropertyName | tool name | PASS | YYYY-MM-DD |
```
Update existing rows or add new ones. Set status to PASS with today's date. Only record a property as PASS when its milestone has been reached and the verification method has actually been executed.
### 5. Cross-check (periodic)
After completing a milestone, briefly check whether the changes could have affected any previously verified properties. If there's a risk of regression, advise running `/formspec.check.8` or `/formspec.verify.3`.

@ -0,0 +1,138 @@
---
description: Initialize formspec project structure
argument-hint: <system-description>
allowed-tools: [Read, Write, Edit, Glob, Grep, Bash]
---
# FormSpec Init
You are initializing a formal-specification-driven development project. Your job is to create the `spec/` directory structure and a foundational `spec/system.md` document.
## Steps
### 1. Check for existing spec/ directory
Use Glob to check if `spec/system.md` already exists in the project root.
If it exists: **STOP**. Tell the user that the project is already initialized and they should use `/formspec.design.1` to add features. Do not modify anything.
### 2. Gather system description
If `$ARGUMENTS` is provided, use it as the high-level system description.
If `$ARGUMENTS` is empty or too vague to work with, ask the user to describe:
- What the system is and its purpose
- Who the primary actors are (users, services, external systems)
- Key constraints (non-negotiable properties like statelessness, partition tolerance, latency bounds, security requirements)
### 3. Create directory structure
Create the following directories:
```
spec/formal/subsystems/
spec/features/
spec/verification/
spec/hooks/
```
### 4. Generate spec/system.md
Write `spec/system.md` with the following structure. Fill in the System Overview and Architectural Constraints based on the gathered description. Leave Feature Index and Changelog empty with placeholder text.
```markdown
# System Design
## System Overview
[What is this system, who uses it, why does it exist. Derived from the user's description.]
## Architectural Constraints
[Non-negotiable properties of the system. Each constraint should be a clear, testable statement.]
## Feature Index
*No features designed yet. Use `/formspec.design.1 <feature-name>` to add features.*
## Changelog
*No changes recorded yet.*
```
The System Overview should be a concise but complete description — enough for someone unfamiliar with the project to understand what they're looking at. The Architectural Constraints should be concrete and specific, not vague aspirations.
### 5. Report
Tell the user what was created and suggest they proceed with `/formspec.design.1 <feature-name>` to start designing features.
## Hook convention
The `spec/hooks/` directory holds executable scripts that formspec commands invoke at defined points. This allows projects to plug in custom behavior (e.g., delegating verification to a TCMS, triggering CI pipelines, posting notifications) without modifying the commands themselves.
If a hook script does not exist, the command falls back to its default behavior (running checks locally and updating markdown tables). If a hook exists and is executable, the command delegates to it instead.
### run-checks
**Path**: `spec/hooks/run-checks`
**Called by**: `/formspec.formalize.2`, `/formspec.verify.3`, `/formspec.implement.6`, `/formspec.check.8`
**When**: When verification needs to be performed (TLC model checking, milestone tests, consistency checks).
**Purpose**: Delegate verification execution to an external system. The agent determines *what* needs checking from the workflow state; the hook tells an external system to *run* the checks, store the results in its own datastore, and report back. This keeps the external system as the independent record-keeper — results are produced by the verification infrastructure, not self-reported by the agent.
**Request** (JSON on stdin):
```json
{
"command": "formspec.implement",
"project_root": "/home/user/projects/mutex",
"checks": [
{
"property": "MutualExclusion",
"source": "features/2026-02-10-mutex/design.md",
"requirement": "US-1",
"type": "invariant",
"method": "integration test",
"hint": "go test -race -run TestMutualExclusion ./..."
}
]
}
```
**Request fields**:
- `command`: Which formspec command is requesting verification.
- `project_root`: Absolute path to the project root, so the external system can locate source files and specs.
- `checks`: Array of verification requests:
- `property`: TLA+ property name (or descriptive label for non-formalized checks).
- `source`: Design document path relative to `spec/`.
- `requirement`: Requirement ID (e.g., `US-1`, `SC-3`).
- `type`: `invariant`, `temporal`, `safety`, `liveness`, `structural`, `compile-time`, `static-analysis`, `unit-test`, or `integration-test`.
- `method`: Description of the verification method.
- `hint`: Suggested command or approach. The external system may use this directly, adapt it to its own environment, or ignore it entirely.
**Response** (JSON on stdout):
```json
{
"results": [
{
"property": "MutualExclusion",
"status": "pass",
"details": "1024 states explored, 0 violations",
"url": "https://dashboard.example.com/runs/4821#MutualExclusion"
}
]
}
```
**Response fields**:
- `results`: Array of verification outcomes, one per requested check:
- `property`: Echoes back the property name from the request.
- `status`: `pass`, `fail`, or `error`.
- `details`: Human-readable summary of the result — counterexample, error message, state space size, etc. The agent presents this to the developer.
- `url`: Optional. Link to the full result in the external system's dashboard. If present, the agent includes it in the report to the developer.
**Exit codes**: 0 = all checks executed (individual checks may still have `fail` status). Non-zero = the hook itself failed (could not reach the external system, configuration error, etc.). On non-zero exit, the command warns and falls back to running checks locally and updating the markdown tables.
**Behavior when hook is absent**: Commands run checks locally (TLC, `go test`, etc.) and update `spec/formal/traceability.md` and `spec/verification/matrix.md` directly. The markdown tables serve as the built-in, zero-configuration default.
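As an illustration, the delegate-or-fall-back contract can be sketched in shell. The function name and the fallback placeholder are illustrative; on fallback, real commands run the checks locally and update the markdown tables:

```shell
# Request JSON goes to the hook on stdin; response JSON comes back on stdout.
# A non-zero hook exit (or a missing hook) triggers the local fallback.
HOOK="spec/hooks/run-checks"

run_checks() {
    local request="$1" response
    if [ -x "${HOOK}" ]; then
        if response=$(printf '%s' "${request}" | "${HOOK}"); then
            printf '%s\n' "${response}"    # external system recorded the results
            return 0
        fi
        echo "warning: hook exited non-zero; running checks locally" >&2
    fi
    echo '{"results":[]}'                  # stand-in for local checks + matrix update
}
```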

@ -0,0 +1,99 @@
---
description: Create an implementation plan with formal verification strategy
allowed-tools: [Read, Write, Edit, Glob, Grep, Task]
---
# FormSpec Plan
You are creating an implementation plan that bridges the formal TLA+ specification to concrete code, with a verification strategy ensuring each formal property is checked by implementation-level tools.
## Steps
### 1. Read all artifacts
Read:
- `spec/system.md`
- All feature design docs: Glob for `spec/features/*/design.md`
- `spec/formal/System.tla`
- `spec/formal/System.cfg`
- `spec/formal/traceability.md`
All of these must exist. If any are missing, tell the user which prerequisite command to run and stop.
### 2. Validate formal verification status
Check `spec/formal/traceability.md`. Every formalizable property must have status `checked` (meaning TLC has verified it). If any property has status `formalized`, `violated`, or `not-yet-formalized` without adequate justification:
- Tell the user which properties are not yet verified
- Direct them to run `/formspec.formalize.2` first
- Stop
### 3. Generate spec/plan.md
Write `spec/plan.md` with these sections:
```markdown
# Implementation Plan
## Technology Choices
[Language(s), frameworks, infrastructure. Each choice must be justified against the formal properties where relevant.]
Example: "Rust for the auth module — ownership system prevents use-after-free, directly satisfying the MemorySafety invariant. Go for the API gateway — goroutines with race detector verify the NoConcurrentAccess property at test time."
Do not justify choices that have no relationship to formal properties — just state them.
## Architecture
[Component structure, communication patterns, data flow. Reference how the architecture maps to the TLA+ model's actors and processes. Where design docs define Interface Contracts, the architecture should describe how components connect through those interfaces and confirm that the planned component boundaries align with the contracts.]
## Verification Strategy
[A table mapping each TLA+ property to implementation-level verification methods.]
| Source Document | Requirement ID | TLA+ Property | Verification Method | Tool |
|----------------|----------------|---------------|---------------------|------|
| system.md | Architectural constraint | NoDeadlock | Race detection | go test -race |
| features/.../user-auth/design.md | US-2 | AuthInvariant | Type system + unit tests | Rust ownership + cargo test |
| system.md | Best practice | NoBackdoorTokens | Formal proof | F* (Auth.fst) |
Verification methods include:
- **Structural guarantees**: The implementation's data model makes a property violation unrepresentable. A common case: a TLA+ spec has redundant variables kept in sync (e.g., separate `holder` and `clientState` maps) but the implementation collapses them into a single source of truth (e.g., a struct with one `Holder` field). The property is enforced by construction — there is no code path, however buggy, that could violate it. When claiming a structural guarantee, name the specific data model choice that enforces it.
- **Compile-time guarantees**: The language's type system or ownership model prevents the violation. Rust ownership preventing use-after-free, F* dependent types proving a bound, Go's type system preventing a type confusion. Distinct from structural guarantees: compile-time guarantees rely on the *language*, structural guarantees rely on the *data model*.
- **Static analysis**: cargo clippy, go vet, staticcheck, rust-analyzer
- **Runtime checks**: go test -race, property-based tests, fuzzing
- **Formal proofs**: F* modules that verify TLA+ properties
- **Test coverage**: Specific test cases derived from TLC state exploration
Every TLA+ property in the traceability matrix must have at least one verification method. Properties with only "manual review" are not acceptable — find a tool-assisted method.
When a TLA+ property is satisfied structurally, do not claim it is verified by a runtime test. A no-op test function that "checks" a structurally-guaranteed property is misleading — it will never fail, so it provides no signal. Instead, record the verification method as "Structural guarantee" and explain what data model decision enforces it.
## F* Integration Plan
[Include this section only if F* is part of the verification strategy.]
- Which properties will be formally proved in F*
- How F* modules correspond to TLA+ properties
- How F* proofs relate to the implementation code (extracted code vs. separate proof)
## Testing Strategy
[Unit, integration, and property-based testing approach.]
- How tests trace back to TLA+ properties
- Property-based test generators derived from the TLA+ state space
- Integration tests that exercise cross-component interactions modeled in the TLA+ spec
- Coverage targets and how they relate to formal verification completeness
```
### 4. Quality check
Before finishing, verify:
- Every TLA+ property from traceability.md appears in the Verification Strategy
- Every feature's user stories are addressed by the architecture
- Technology choices don't contradict architectural constraints from system.md
- The verification strategy is actually executable (tools exist, are available, and can check what's claimed)
### 5. Report
Summarize the plan and suggest proceeding with `/formspec.tasks.5` to break it into implementable tasks.

@ -0,0 +1,88 @@
---
description: Break the implementation plan into ordered tasks
allowed-tools: [Read, Write, Edit, Glob, Grep]
---
# FormSpec Tasks
You are breaking the implementation plan into concrete, dependency-ordered tasks that can be executed one at a time.
## Steps
### 1. Read artifacts
Read:
- `spec/plan.md` (must exist — if not, direct user to `/formspec.plan.4`)
- `spec/formal/traceability.md`
- All feature design docs: Glob for `spec/features/*/design.md`
### 2. Generate spec/tasks.md
Write `spec/tasks.md` with two sections: implementation tasks and verification milestones.
Implementation tasks are fine-grained and component-oriented — they track what code to write. Verification milestones are coarser and property-oriented — they define when a TLA+ property can actually be validated end-to-end. This separation exists because the implementation architecture is more fine-grained than the specification: a single formal property (e.g., crash safety) typically spans multiple implementation components (persistence layer + coordinator + integration tests), so no single task is a meaningful checkpoint for validating it.
#### Task format
```markdown
# Implementation Tasks
## Task T-01: [Title]
**Depends on**: (none | T-XX, T-YY)
**Implements**: US-3, SC-1 (from 2026-02-10-user-auth)
**Description**: [What to implement — specific enough that the implementer knows exactly what code to write]
**Acceptance**: [Unit-level criteria — how to verify this component works in isolation. Must be objectively verifiable. These are TDD checkpoints, not TLA+ property verification.]
```
Tasks do NOT have a "Verifies" field for TLA+ properties. Individual tasks verify their own component contracts through unit tests; TLA+ properties are verified at the milestone level.
#### Milestone format
After the tasks, add a milestones section:
```markdown
# Verification Milestones
## Milestone M-01: [TLA+ Property or property group]
**Tasks**: T-02, T-04, T-05, T-08
**Verifies**: PersistBeforeGrant, EventualRecovery (from System.tla)
**Verification method**: `go test -race -run Recovery ./coordinator/...`
**Description**: [What this milestone validates and why it requires all the listed tasks to be complete first]
```
A milestone gates on a set of tasks. Only after all of its tasks are complete can the verification method be run and the result recorded in the verification matrix. A single task may appear in multiple milestones. A single milestone may verify multiple related properties.
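As an illustration, the gating check can be sketched in shell, assuming the `✓ COMPLETE` title marker that `/formspec.implement.6` appends in `spec/tasks.md` (the function name is illustrative):

```shell
# Succeed only if every listed task's heading carries the completion marker.
milestone_ready() {
    local tasks_file="$1"; shift
    local t
    for t in "$@"; do
        grep -q "^## Task ${t}:.*✓ COMPLETE" "${tasks_file}" || return 1
    done
}

# Usage: milestone_ready spec/tasks.md T-02 T-04 T-05 T-08 && run-verification
```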
### 3. Coverage requirements
**Tasks** must collectively cover:
- **All user stories** from all feature design docs (every US-N must appear in at least one task's "Implements" field)
- **All implementation components** from the plan's architecture
- **F* proof modules** if included in the plan
- **Infrastructure setup** tasks (project scaffolding, CI configuration, test harness) where needed
**Milestones** must collectively cover:
- **Every TLA+ property** from traceability.md that has a runtime or integration-level verification method
- **Every verification method** from the plan's Verification Strategy table
- Properties verified by structural or compile-time guarantees do not need milestones — they are validated by the data model or type system, not by test execution
### 4. Dependency ordering rules
- Foundational types, interfaces, and data models before business logic
- Verification infrastructure (test harness, F* module structure) before verification tasks
- Core functionality before features that depend on it
- No circular dependencies
- Tasks should be small enough to implement in a single session but large enough to be meaningful (not "create a file" granularity)
- Each task should be independently testable with unit tests after its dependencies are met — if a task can only be tested as part of a larger integration, it should be merged with the tasks it depends on or the milestone structure should be reconsidered
### 5. Validation
Before finishing, check:
- Every US-N from every feature design doc is covered by at least one task
- Every TLA+ property from traceability.md with a runtime verification method is covered by at least one milestone
- Every milestone's task list references valid task IDs
- Every task's dependencies reference valid task IDs
- No circular dependency chains exist in the task graph
- The dependency graph has a valid topological ordering
- Every milestone's tasks collectively provide the components needed to execute its verification method (e.g., a milestone that verifies crash recovery must include tasks for persistence, the coordinator, and the recovery integration test)
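The two graph checks can be mechanized with `tsort` from coreutils, which prints a topological ordering and exits non-zero when the dependency graph contains a cycle. The edges below are illustrative "prerequisite task" pairs drawn from the tasks' "Depends on" fields:

```shell
# Feed one "prerequisite task" pair per line; tsort emits a valid
# topological order, or fails if the graph is cyclic.
order=$(printf '%s\n' \
    "T-01 T-02" \
    "T-01 T-03" \
    "T-02 T-04" \
    "T-03 T-04" | tsort)
echo "${order}"    # one task per line, dependencies first
```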
Report any gaps found during validation. Suggest proceeding with `/formspec.implement.6` to start implementing tasks.

@ -0,0 +1,54 @@
---
description: Run the TLC model checker on the formal specification
allowed-tools: [Read, Glob, Grep, Bash]
---
# FormSpec Verify
You are running the TLC model checker on the existing TLA+ specification and reporting results. This command is **read-only** — it does not modify any spec files.
## Steps
### 1. Validate preconditions
Check that both files exist:
- `spec/formal/System.tla`
- `spec/formal/System.cfg`
If either is missing, tell the user to run `/formspec.formalize.2` first and stop.
### 2. Run TLC
If `spec/hooks/run-checks` exists and is executable, delegate verification to it instead of running TLC locally (see `/formspec.init` for the hook interface). Build the request from `spec/formal/traceability.md` (properties and their types) and `spec/formal/System.cfg` (the TLC invocation hint). Present the returned results to the developer, including any dashboard URLs.
If the hook does not exist or fails, run TLC locally:
```bash
tlc spec/formal/System.tla -config spec/formal/System.cfg -workers auto -metadir /tmp/tlc-states -cleanup
```
Run from the project root directory. The `-metadir` flag directs TLC's working state files to a temporary directory, and `-cleanup` removes them after a successful run. Use a 5-minute timeout by default. If TLC does not complete within this limit, stop it and report the partial state (states explored so far, queue size). If the developer expects and accepts a longer runtime, the timeout may be extended — but beyond 15 minutes, always alert the developer and wait for confirmation before continuing.
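The timeout policy can be wrapped in a small helper. A sketch using GNU `timeout` (the function name is illustrative; substitute the real `tlc` invocation shown above):

```shell
# Run a checker under a time limit and classify the outcome.
run_with_timeout() {
    local limit="$1"; shift
    local status=0
    timeout --signal=INT "${limit}" "$@" || status=$?
    case "${status}" in
        0)   echo "completed" ;;
        124) echo "timed-out" ;;   # GNU timeout's exit code for an expired limit
        *)   echo "failed" ;;      # checker error or property violation
    esac
}

# Default policy: 300 seconds; beyond 900, ask the developer before continuing.
# run_with_timeout 300 tlc spec/formal/System.tla -config spec/formal/System.cfg -workers auto
```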
### 3. Parse and report
Read the TLC output (or hook response) and produce a structured report:
**Summary**:
- States explored / distinct states
- Duration
- Overall result: PASS or FAIL
- Dashboard URL (if provided by hook)
**Property results** (table format):
| Property | Type | Result |
|----------|------|--------|
| Name | invariant/temporal | PASS/FAIL |
**For any violation**, include:
- The counterexample trace, simplified into domain language
- If `spec/formal/traceability.md` exists, map the violated property back to its design requirement and include that reference
### 4. No modifications
Do not modify `System.tla`, `System.cfg`, `traceability.md`, or any other file. This command is purely diagnostic. If violations are found, advise the user to run `/formspec.formalize.2` to address them.

@ -11,16 +11,24 @@ github_update() {
repo="$2"
resource_selector="$3"
post_fetch="$4"
override_tag="${5:-}"
if [ -z "${override_tag}" ]; then
response=$(curl --silent "https://api.github.com/repos/${repo}/releases/latest")
latest_version=$(printf '%s' "${response}" | jq --raw-output .name)
version_type="Latest version"
else
response=$(curl --silent "https://api.github.com/repos/${repo}/releases/tags/${override_tag}")
latest_version="${override_tag}"
version_type="Fixed version"
fi
VERSION_FILE="${STATE_DIR}/${package}"
install="false"
if [ -f "${VERSION_FILE}" ]; then
installed_version=$(cat "${VERSION_FILE}")
if [ "${installed_version}" = "${latest_version}" ]; then
echo "Latest version \"${package} ${installed_version}\" is already installed"
echo "${version_type} \"${package} ${installed_version}\" is already installed"
else
echo "Upgrading \"${package} ${installed_version}\" -> \"${package} ${latest_version}\"..."
install="true"

@ -22,3 +22,16 @@ else
cd "$(systemd-path user-state-private)" || exit
git clone https://github.com/simonthum/git-sync
fi
REPO_DIR="$(systemd-path user-state-private)"/tla-bin
if [ -d "${REPO_DIR}" ]
then
git -C "${REPO_DIR}" pull || true
else
cd "$(systemd-path user-state-private)" || exit
git clone https://github.com/pmer/tla-bin
fi
cd "${REPO_DIR}" && \
./download_or_update_tla.sh && \
./install.sh "$(systemd-path user-shared)"/tla-bin && \
chmod --recursive o-rwx "$(systemd-path user-shared)"/tla-bin

@ -0,0 +1,25 @@
#! /usr/bin/bash
set -euo pipefail
IFS=$'\n\t'
package=tlapm
repo=tlaplus/tlapm
tlapm_resource() {
echo "tlapm-$1-x86_64-linux-gnu.tar.gz"
}
INSTALL_DIR="$(systemd-path user-state-private)"/tlapm
install_tlapm() {
tempdir="$(mktemp --directory)" && \
tar xz --directory="${tempdir}" && \
chmod --recursive o-rwx "${tempdir}" && \
rm --force --recursive "${INSTALL_DIR}" && \
mv "${tempdir}"/tlapm "$(dirname "${INSTALL_DIR}")" && \
rm --force --recursive "${tempdir}" && \
ln --symbolic "${INSTALL_DIR}"/bin/tlapm "$(systemd-path user-binaries)"/tlapm
}
github_update "${package}" "${repo}" tlapm_resource install_tlapm 1.6.0-pre

@ -39,6 +39,10 @@ if [ -d "$(go env GOBIN)" ] ; then
PATH="$(go env GOBIN):$PATH"
fi
if [ -d "$(systemd-path user-shared)"/tla-bin/bin ] ; then
PATH="$(systemd-path user-shared)/tla-bin/bin:$PATH"
fi
# Install the git-sync script
if [ -d "$(systemd-path user-state-private)"/git-sync ] ; then
PATH="$(systemd-path user-state-private)/git-sync:$PATH"