CLI Reference
Complete reference for the jugeo command-line interface — every command,
flag, option, and expected terminal output. Run jugeo --help or
jugeo <command> --help for inline help at any time.
Install with pip install -e . from the repository root; Python ≥ 3.10 is required.
For LLM features, set ANTHROPIC_API_KEY (or pass --model with any
supported slug). Use --no-llm to run entirely offline with heuristic fallbacks.
Global Options
These flags apply to every jugeo invocation and must be placed
before the subcommand name.
| Flag | Alias | Type | Default | Description |
|---|---|---|---|---|
| --verbose | -v | bool | false | Enable verbose output. Prints intermediate judgment sections, solver calls, LLM prompt/response traces, and timing breakdowns. |
| --format | | text\|json | text | Output format. text is human-readable with colour; json is machine-readable and suitable for piping into other tools or CI systems. |
| --output | -o | path | stdout | Write primary output to this directory (or file) instead of stdout. Artefacts such as proof certificates, repaired files, and generated code are also placed here. |
| --no-llm | | bool | false | Skip all LLM calls and use heuristic fallbacks. Useful for offline environments, reproducible CI, or budget-constrained runs. Reduces trust tier to Solver at most. |
| --model | | string | claude-sonnet-4.6 | LLM model slug. Accepts any Anthropic model identifier. Also accepts OpenAI-compatible slugs when OPENAI_API_KEY is set and the slug begins with gpt- or o1. |
jugeo prove
Full sheaf-theoretic program verification. Loads spec and implementation into a semantic site, computes descent obstructions, dispatches to Z3/SMT and LLM evidence channels, and attempts to glue local judgments into a global proof certificate.
| Flag | Type | Default | Description |
|---|---|---|---|
| --spec | path | required | Path to the specification file (Python with JuGeo judgment annotations). |
| --impl | path | required | Path to the implementation file to verify against the spec. |
| --strategy | enum | AUTO | Proof strategy. One of: AUTO, EXHAUSTIVE, FAST, SMT_ONLY, LLM_ONLY, DESCENT. |
| --timeout | int | 120 | Per-judgment solver timeout in seconds. |
| --cert | path | | Write proof certificate JSON to this path. |
jugeo prove --spec spec.py --impl impl.py --strategy EXHAUSTIVE
jugeo bugs
Static bug detection across six well-defined Python bug classes. Combines AST analysis with Z3-backed data-flow reasoning and optional LLM triage to rank findings by severity and fix cost.
| Flag | Type | Default | Description |
|---|---|---|---|
| <file> | path | required | Python source file (or directory — scanned recursively) to analyse. |
| --classes | list | all | Comma-separated subset of bug classes to check, e.g. bare-except,mutable-default. |
| --min-severity | low\|med\|high | low | Suppress findings below this severity threshold. |
jugeo --format json bugs mycode.py
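To make the mutable-default class named above concrete, here is a minimal sketch of the defect and its usual fix in plain Python. The function names are invented for illustration and are not jugeo APIs.

```python
def append_bad(item, acc=[]):          # BUG: the default list is created once
    acc.append(item)                   # at definition time and shared by calls
    return acc

def append_good(item, acc=None):       # FIX: use a None sentinel and allocate
    if acc is None:                    # a fresh list on every call
        acc = []
    acc.append(item)
    return acc

print(append_bad(1), append_bad(2))    # both show shared state: [1, 2] [1, 2]
print(append_good(1), append_good(2))  # independent lists: [1] [2]
```

The second definition is the shape a mutable-default finding steers you toward.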
jugeo spec
Check a Python implementation against a formal specification file. Extracts pre/post conditions and invariants from the spec, encodes them in SMT, and verifies the implementation satisfies every judgment section.
| Flag | Type | Default | Description |
|---|---|---|---|
| --spec | path | required | Specification file. |
| --impl | path | required | Implementation file to check. |
| --report | bool | true | Print a per-judgment compliance report table. |
jugeo spec --spec spec.py --impl impl.py
jugeo equiv
Check semantic equivalence of two Python programs. Uses sheaf-theoretic judgment transport to compare observable behaviour across all reachable inputs, optionally modulo a relation (e.g. output order, floating-point tolerance).
| Flag | Type | Default | Description |
|---|---|---|---|
| <prog1> <prog2> | path path | required | The two programs to compare. |
| --modulo | string | | Equivalence modulo relation, e.g. order, fp-tol=1e-9. |
| --entry | string | main | Entry-point function name to compare. |
jugeo equiv prog1.py prog2.py
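The following plain-Python sketch shows the situation --modulo order is meant for: two deduplication routines that disagree as sequences but agree as multisets. The function names are illustrative, not part of jugeo.

```python
def unique_sorted(xs):
    # Deduplicate, then return in ascending order.
    return sorted(set(xs))

def unique_insertion_order(xs):
    # Deduplicate, preserving first-seen order.
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = [3, 1, 3, 2]
print(unique_sorted(data))            # [1, 2, 3]
print(unique_insertion_order(data))   # [3, 1, 2]
# Unequal as sequences, equal as multisets: equivalent modulo order.
assert sorted(unique_sorted(data)) == sorted(unique_insertion_order(data))
```

A strict equivalence check would reject this pair; with an order relation it should pass.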
jugeo repair
Suggest minimal, semantically-grounded repairs for buggy code. Analyses detected bugs and failing spec judgments, proposes diffs, and optionally applies them in-place after confirmation.
| Flag | Type | Default | Description |
|---|---|---|---|
| <file> | path | required | Buggy Python file to repair. |
| --spec | path | | Optional spec file; repairs are guided toward spec compliance. |
| --apply | bool | false | Apply the highest-confidence repair automatically (writes file in-place). |
| --max-repairs | int | 5 | Maximum number of repair candidates to generate per finding. |
jugeo repair buggy.py --spec spec.py
jugeo evaluate
Evaluate code quality, semantic maturity, and trust tier. Produces a scorecard across multiple dimensions including correctness, robustness, documentation alignment, and cyclic maturity level.
| Flag | Type | Default | Description |
|---|---|---|---|
| <file> | path | required | Python file or package directory to evaluate. |
| --spec | path | | Specification file for spec-compliance sub-score. |
| --maturity | bool | true | Include cyclic maturity level (CML 1–5) in the report. |
jugeo evaluate mycode.py --spec spec.py
jugeo generate
Generate Python code from a formal specification or judgment description. Uses the LLM oracle guided by SMT-checked preconditions to produce implementations that are pre-verified at the judgment level before being returned.
| Flag | Type | Default | Description |
|---|---|---|---|
| --spec | path | required | Specification file to generate from. |
| --target | path | | Output file for generated implementation. |
| --verify | bool | true | Run jugeo prove on the generated code before writing it. |
| --attempts | int | 3 | Number of LLM generation + verification rounds. |
jugeo generate --spec spec.py --target impl_gen.py --verify
jugeo run
Run a full judgment pipeline from a TOML or JSON configuration file. Useful for multi-step workflows (e.g. load → encode → prove → evaluate) defined declaratively and reproducibly.
| Flag | Type | Default | Description |
|---|---|---|---|
| <config> | path | required | Pipeline config file (.toml or .json). |
| --dry-run | bool | false | Parse and validate the config without executing any steps. |
| --step | string | | Run only the named step from the pipeline. |
jugeo run pipeline.toml
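A pipeline config might look like the sketch below. The table and key names here are illustrative assumptions, not a documented schema; consult jugeo run --help for the actual format.

```toml
# Illustrative only: step and key names are assumptions, not a documented schema.
[pipeline]
name = "verify-mycode"

[[pipeline.steps]]
name = "load"
command = "load"
file = "mycode.py"

[[pipeline.steps]]
name = "prove"
command = "prove"
spec = "spec.py"
impl = "mycode.py"
strategy = "AUTO"
```

With --dry-run, a file like this is parsed and validated without executing either step.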
jugeo server
Start a local HTTP API server exposing all JuGeo commands as REST endpoints. Designed for IDE plugin integration, CI pipelines, and the JuGeo web dashboard.
| Flag | Type | Default | Description |
|---|---|---|---|
| --host | string | 127.0.0.1 | Bind address. |
| --port | int | 7800 | TCP port. |
| --reload | bool | false | Auto-reload on code changes (development mode). |
| --workers | int | 4 | Number of async worker threads. |
jugeo server --port 7800
jugeo load
Load and analyse a Python program into judgment sections. Produces a human- and machine-readable breakdown of the program's semantic structure: sections, effects, type constraints, and import graph.
| Flag | Type | Default | Description |
|---|---|---|---|
| <file> | path | required | Python source file or package directory. |
| --sections | bool | true | Print extracted judgment sections. |
| --effects | bool | true | Print Python effect annotations per section. |
jugeo --format json load mycode.py
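The JSON emitted with --format json can be post-processed with ordinary tooling. The sketch below assumes a hypothetical top-level "sections" list with "name" and "effects" keys; the real schema may differ, so treat the shape as an assumption.

```python
import json

# Hypothetical shape of `jugeo --format json load mycode.py` output.
# The "sections"/"name"/"effects" keys are assumptions for illustration.
raw = '''
{
  "file": "mycode.py",
  "sections": [
    {"name": "parse_input", "effects": ["io"]},
    {"name": "total",       "effects": []}
  ]
}
'''

doc = json.loads(raw)
# Select sections with no declared effects (pure sections).
pure = [s["name"] for s in doc["sections"] if not s["effects"]]
print(pure)  # ['total']
```

The same pattern works for any jugeo command run under --format json.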
jugeo encode
Encode a Python program's judgment sections into Z3/SMT. Supports scalar, sequence, tensor, and text encodings. Output is an SMT-LIB2 file or a Z3 Python script.
| Flag | Type | Default | Description |
|---|---|---|---|
| <file> | path | required | Python source to encode. |
| --encoding | enum | scalar | Encoding strategy: scalar, bitvec, sequence, tensor, text, or auto. |
| --smtlib | bool | false | Emit raw SMT-LIB2 instead of Z3 Python API code. |
jugeo encode mycode.py --encoding bitvec --smtlib
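For orientation, a scalar-style SMT-LIB2 encoding of a simple judgment (say, that the result of an absolute-value section is non-negative) looks like the fragment below. This is generic SMT-LIB2 written by hand, not literal jugeo encode output.

```smtlib
; Generic SMT-LIB2 shown for orientation; not literal `jugeo encode` output.
; Judgment: for input x, the result r = |x| satisfies r >= 0.
(declare-const x Int)
(declare-const r Int)
(assert (= r (ite (< x 0) (- x) x)))
; Validity is checked by asserting the negation and expecting unsat.
(assert (not (>= r 0)))
(check-sat)   ; expected: unsat
```

Any SMT-LIB2-compliant solver (e.g. z3 file.smt2) can discharge a fragment of this shape.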
jugeo classify
Classify a problem or code file against the JuGeo problem atlas. Returns the nearest problem families, recommended proof strategies, and analogous theorems from the theorem ecology.
| Flag | Type | Default | Description |
|---|---|---|---|
| <input> | path\|string | required | Python file, spec file, or free-text problem description (quoted). |
| --top-k | int | 5 | Return the top k nearest problem families. |
jugeo classify spec.py --top-k 3
jugeo alignment
Check documentation (docstrings, README, type annotations) for honest projection against the actual code semantics. Detects outdated docs, misleading claims, and missing coverage.
| Flag | Type | Default | Description |
|---|---|---|---|
| <file> | path | required | Python source file with docstrings to check. |
| --readme | path | | External README/markdown file to check in addition. |
| --strict | bool | false | Fail on any misalignment, not just high-severity ones. |
jugeo alignment mycode.py --readme README.md
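To illustrate the kind of misalignment this command targets, here is a docstring that no longer matches the code it describes. The example is made up for illustration, not jugeo output.

```python
def top_scores(scores, n=3):
    """Return the n lowest scores in ascending order."""   # STALE docstring:
    return sorted(scores, reverse=True)[:n]                # code returns the HIGHEST

print(top_scores([10, 50, 30, 20]))  # [50, 30, 20], contradicting the docstring
```

An alignment check should flag the contradiction between the documented ("lowest, ascending") and actual ("highest, descending") behaviour.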
jugeo mixed
Run bugs, spec compliance, and equivalence checks simultaneously in a single pass. More efficient than calling each command separately; shares the loaded semantic site and SMT encoding across all three analyses.
| Flag | Type | Default | Description |
|---|---|---|---|
| --impl | path | required | Implementation file. |
| --spec | path | | Spec file for compliance check (optional). |
| --reference | path | | Reference implementation for equivalence check (optional). |
jugeo mixed --impl impl.py --spec spec.py --reference ref.py
jugeo info
Show package version, installed packs, solver versions, trust tier summary, and environment diagnostics. Useful for debugging and support reports.
jugeo info
jugeo test
Run the JuGeo test and benchmark suite. Can run the full internal test suite, a specific experiment, or a user-supplied benchmark configuration.
| Flag | Type | Default | Description |
|---|---|---|---|
| --suite | string | all | Test suite name or all. |
| --benchmark | path | | User benchmark config file. |
| --fast | bool | false | Skip slow integration tests; run only unit tests. |
jugeo test --suite bugs --fast
jugeo descend
Run descent and gluing on pre-computed judgment data. Takes a set of local judgment sections (from jugeo load or jugeo encode) and attempts to glue them into a global section via the Čech descent algorithm.
| Flag | Type | Default | Description |
|---|---|---|---|
| <data> | path | required | JSON file of judgment sections (output of jugeo --format json load). |
| --obstruction-report | bool | true | Print the full Čech cohomology obstruction report. |
jugeo descend sections.json
jugeo ideate
Mathematical ideation and theorem discovery. Given seed theorems or problem families, generates new conjectures, finds analogies via analogy transport, and ranks candidates by plausibility score.
| Flag | Type | Default | Description |
|---|---|---|---|
| --seeds | path | | File of seed theorem names or IDs (one per line). |
| --domain | string | | Mathematical domain to ideate within, e.g. sheaf-theory, type-theory. |
| --budget | int | 20 | Maximum number of conjectures to generate. |
jugeo ideate --domain sheaf-theory --budget 10
jugeo orchestrate
Synthesize a complete program from a natural language idea. Uses sheaf-theoretic elaboration to plan, generate, test, and refine software.
| Flag | Type | Default | Description |
|---|---|---|---|
| idea | string | required | The software idea to implement (quoted string). |
| --max-iterations | int | 5 | Maximum refinement iterations. |
| --output, -o | path | | Output directory for the generated project. |
jugeo orchestrate "implement a stack data structure"
jugeo foundation
Full synthesis pipeline: mathematical fields → tournament → code generation → textbook generation. Produces a self-contained research artefact including theorems, implementations, and a generated textbook chapter.
| Flag | Type | Default | Description |
|---|---|---|---|
| --fields | path\|string | required | Comma-separated field names or path to fields config. |
| --rounds | int | 3 | Number of tournament rounds for idea selection. |
| --textbook | bool | true | Generate a textbook chapter from the synthesis output. |
| --latex | bool | false | Emit LaTeX source for the textbook chapter. |
jugeo foundation --fields sheaf-theory,type-theory --rounds 3 --latex
jugeo catalog
Browse and search the JuGeo problem catalog. Lists all registered problem families, their recommended strategies, associated theorems, and example code.
| Flag | Type | Default | Description |
|---|---|---|---|
| --search | string | | Filter catalog by keyword (searches name, description, tags). |
| --tag | string | | Filter by tag, e.g. termination, resource, concurrency. |
| --detail | string | | Show full detail for a named problem family. |
jugeo catalog --tag termination
jugeo webapp
Generate a complete Flask web application from a natural-language prompt. Produces models, routes, templates, static assets, and runs cross-layer descent verification to catch mismatches before deployment.
| Flag | Type | Default | Description |
|---|---|---|---|
| --outdir, -o | path | required | Output directory for the generated application. |
| prompt | string | | Natural-language description of the app (optional positional). |
| --type, -t | enum | crud | Application type: crud, api, dashboard, form_workflow, custom. |
| --template | enum | standard | Template complexity: minimal, standard, full, custom. |
| --name | string | app | Application name. |
| --port, -p | int | 5000 | Port number for the Flask dev server. |
| --no-verify | flag | | Skip cross-layer descent verification after generation. |
| --html-only | flag | | Generate an HTML-only static app (no Flask). |
| --include-tests | flag | | Generate test scaffolding alongside the app. |
| --include-docker | flag | | Generate a Dockerfile for containerised deployment. |
jugeo webapp "a recipe sharing app" --outdir ./my-app --type crud
jugeo improve
Improve an existing codebase via an agent-driven descent loop. Scans source files, identifies structural gaps, dispatches an AI agent to make targeted improvements, and verifies each iteration satisfies the stated obligation.
| Flag | Type | Default | Description |
|---|---|---|---|
| directory | path | required | Root directory of the codebase to improve. |
| improvement | string | required | Natural-language description of the improvement to make. |
| --max-iterations | int | 5 | Maximum improvement iterations before stopping. |
jugeo improve ./my-project "add comprehensive type hints"
jugeo research
Directed research pipeline: ideate a novel approach, generate an implementation, benchmark against baselines, refine or pivot, and produce a research paper. Runs until descent succeeds (H¹ = 0) on the research site.
| Flag | Type | Default | Description |
|---|---|---|---|
| prompt | string | required | Natural-language research prompt describing the problem. |
| --max-iterations | int | 30 | Maximum refinement iterations. |
| --max-pivots | int | 3 | Maximum theory pivots before giving up. |
| --seed | int | | Random seed for reproducibility. |
jugeo research "optimal sorting with gradient information"
jugeo research-focused
Evolutionary tournament to find the best implementation for a canonical problem. Tries multiple competing mathematical theories, keeps what works, and evolves the champion to beat itself on the specified metrics.
| Flag | Type | Default | Description |
|---|---|---|---|
| field | string | required | Research domain, e.g. "computational finance". |
| --metrics | string | required | Comma-separated evaluation metrics, e.g. "Sharpe Ratio,MAPE,speed". |
| --dataset | string | required | Canonical dataset to benchmark against. |
| --primary-metric | string | first metric | The single metric used to crown the champion. |
| -g, --generations | int | 5 | Number of evolutionary generations. |
| -t, --theories-per-gen | int | 4 | Competing theories per generation. |
jugeo research-focused "sorting" --metrics "time,memory" --dataset "standard"
jugeo research-and-implement
Iterative descent on a 7-object delivery site: ideates a novel approach, generates a large-scale implementation, benchmarks against competitive baselines on real data, and refines or pivots until all hard obligations are met (H¹ = 0 on the delivery site).
| Flag | Type | Default | Description |
|---|---|---|---|
| prompt | string | required | Natural-language description of the delivery goal. |
| --max-outer | int | 10 | Maximum outer descent iterations. |
| --max-inner | int | 30 | Maximum inner refinement iterations per outer loop. |
| --max-pivots | int | 3 | Maximum theory pivots. |
| --min-kloc | float | 1.0 | Minimum implementation size in thousands of lines of code. |
| --output | path | | Output directory for the generated project. |
jugeo research-and-implement "optimal portfolio allocation" --output ./portfolio
Quick Reference
All commands at a glance.
| Command | One-liner | Min. args | LLM? |
|---|---|---|---|
| jugeo prove | Full sheaf-theoretic verification | --spec --impl | Optional |
| jugeo bugs | Detect 6 bug classes | <file> | Optional |
| jugeo spec | Spec compliance check | --spec --impl | No |
| jugeo equiv | Semantic equivalence | <prog1> <prog2> | No |
| jugeo repair | Suggest / apply code repairs | <file> | Yes |
| jugeo evaluate | Quality / maturity scorecard | <file> | Optional |
| jugeo generate | Generate code from spec | --spec | Yes |
| jugeo run | Run pipeline from config | <config> | Depends |
| jugeo server | HTTP API server | — | Depends |
| jugeo load | Load & analyse program | <file> | No |
| jugeo encode | Encode to Z3/SMT | <file> | No |
| jugeo classify | Problem atlas classification | <input> | Optional |
| jugeo alignment | Doc / code alignment check | <file> | Yes |
| jugeo mixed | Bugs + spec + equiv in one pass | --impl | Optional |
| jugeo info | Package / solver / tier info | — | No |
| jugeo test | Run test / benchmark suite | — | Optional |
| jugeo descend | Descent & gluing on judgment data | <data> | No |
| jugeo ideate | Theorem discovery | — | Yes |
| jugeo orchestrate | Synthesize a program from an idea | <idea> | Yes |
| jugeo foundation | Fields → tournament → code → textbook | --fields | Yes |
| jugeo catalog | Browse problem catalog | — | No |
| jugeo webapp | Generate a Flask web application | --outdir | Yes |
| jugeo improve | Agent-driven codebase improvement | <dir> <goal> | Yes |
| jugeo research | Directed research: ideate → implement → paper | <prompt> | Yes |
| jugeo research-focused | Evolutionary tournament for best implementation | <field> --metrics --dataset | Yes |
| jugeo research-and-implement | Full delivery site descent with hard obligations | <prompt> | Yes |