We welcome contributions! Jolt uses a spec-driven development workflow for major features, and standard PRs for everything else.
- Bug fixes, small improvements, documentation: Open a PR directly. No spec needed.
- Major features, architectural changes: Start with a spec (see below).
The rule of thumb: if the implementation will be hard to review (500+ lines of non-trivial changes), write a spec first. The spec is small and reviewable; the implementation follows mechanically from it.
Large PRs are expensive to review. Code generation is cheap — human verification time is the scarce resource. We optimize for that by front-loading review onto small, readable specs.
A single PR carries a feature from spec to implementation:
- Create a spec using `/new-spec <feature-name>` in Claude Code, or copy `specs/TEMPLATE.md` manually.
- Open a PR with just the spec file. A GitHub Action will add the `spec` label.
- Spec analysis: Adding the `claude-spec-review-request` label triggers an external analysis. Claude performs a single-pass analysis, posting all questions at once: ambiguities, missing invariants, unclear evaluation criteria. When satisfied, Claude adds the `claude-spec-approved` label.
- Spec review: Maintainers review and approve the spec.
- Implementation: Run `/implement-spec` in Claude Code cloud or locally. Claude generates a one-shot implementation from the approved spec.
- Merge: Maintainers review the implementation and merge.
A spec has these sections (see `specs/TEMPLATE.md`):
- Summary: One paragraph — what is this feature and why does it matter?
- Intent: Goal (one sentence), Invariants (correctness properties), Non-Goals (explicit scope boundaries)
- Evaluation: Acceptance Criteria (checkboxes), Testing Strategy (host + zk modes), Performance expectations
- Design: Architecture (how it fits the existing system), Alternatives Considered (what was rejected and why)
- Documentation: What changes to the Jolt book (`book/`) are required?
- Execution (optional): Implementation direction, algorithmic approach
- References: Papers, related specs, prior art
Focus on intent and evaluation. Execution is downstream — the implementer should be able to derive most of it from what you want (intent) and how you'll verify it (evaluation).
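To make this concrete, here is a hypothetical spec skeleton built from the section list above. Section names follow this document; the actual `specs/TEMPLATE.md` in the repo is authoritative and may differ in detail:

```markdown
# Spec: <feature-name>

## Summary
One paragraph: what the feature is and why it matters.

## Intent
- Goal: <one sentence>
- Invariants: <correctness properties that must hold>
- Non-Goals: <explicitly out of scope>

## Evaluation
- [ ] Acceptance criterion 1
- [ ] Acceptance criterion 2
- Testing Strategy: <host + zk modes>
- Performance: <expectations>

## Design
Architecture; Alternatives Considered (what was rejected and why).

## Documentation
Required changes to `book/`.

## References
Papers, related specs, prior art.
```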
| Label | Meaning | Applied by |
|---|---|---|
| `spec` | PR contains a spec file | GitHub Action (auto) |
| `no-spec` | PR has no spec file | GitHub Action (auto) |
| `implementation` | PR contains code alongside a spec | GitHub Action (auto) |
| `claude-spec-review-request` | Triggers external Claude spec analysis | Maintainer (manual) |
| `claude-spec-approved` | Claude's analysis found no ambiguities | Claude |
PRs exceeding 500 changed lines without a spec file get an automated warning comment. This is a suggestion, not a gate — use your judgment.
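You can gauge a branch's size against the 500-line rule of thumb before opening the PR. The sketch below sums added and deleted lines from `git diff --numstat` output; a hard-coded sample stands in for real diff data so the snippet is self-contained:

```shell
# Sum added + deleted lines across changed files.
# In practice, feed it real data: git diff --numstat main...HEAD | awk '...'
printf '120\t30\tsrc/a.rs\n400\t10\tsrc/b.rs\n' \
  | awk '{ total += $1 + $2 } END { print total }'
# prints 560 for this sample (over the 500-line threshold)
```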
These Claude Code skills are available in this repo:
| Skill | Description |
|---|---|
| `/new-spec <name>` | Create a new spec file from the template |
| `/analyze-spec` | Interactive Socratic analysis of a spec (local) |
| `/implement-spec` | Autonomous implementation from an approved spec (local/cloud) |
| `/ci-code-review` | Deep PR code review with parallel analysis agents |
Spec analysis can also be triggered externally by adding the `claude-spec-review-request` label to a PR.
- Rust toolchain (see `rust-toolchain.toml`)
- cargo-nextest for running tests
```shell
# Lint (must pass in both modes)
cargo clippy -p jolt-core --features host --message-format=short -q --all-targets -- -D warnings
cargo clippy -p jolt-core --features host,zk --message-format=short -q --all-targets -- -D warnings

# Format
cargo fmt -q

# Test (always use nextest)
cargo nextest run --cargo-quiet

# Primary correctness check
cargo nextest run -p jolt-core muldiv --cargo-quiet --features host
cargo nextest run -p jolt-core muldiv --cargo-quiet --features host,zk
```

- `cargo fmt` + `cargo clippy` with zero warnings
- Performance is critical; profile before optimizing, benchmark changes to hot paths
- The codebase uses `non_snake_case` for math variables: `log_T`, `ram_K`, etc.
- See `CLAUDE.md` for full development guidelines
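As a minimal sketch of that naming convention (the values and function here are illustrative, not taken from the codebase): math-style identifiers like `log_T` would normally trip Rust's `non_snake_case` lint, so code using them carries an `allow` attribute.

```rust
// Illustrative only: math-style names exempted from the snake_case lint.
#[allow(non_snake_case)]
fn main() {
    let T: usize = 1 << 10;         // trace length (illustrative value)
    let log_T = T.trailing_zeros(); // log2(T) for a power-of-two T
    let ram_K: usize = 1 << 16;     // memory size parameter (illustrative)
    println!("log_T = {log_T}, ram_K = {ram_K}");
}
```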