
CLI Reference

Complete reference for the jugeo command-line interface — every command, flag, option, and expected terminal output. Run jugeo --help or jugeo <command> --help for inline help at any time.

Installation
pip install -e .  —  install from the repository root with Python ≥ 3.10. For LLM features, set ANTHROPIC_API_KEY (or pass --model with any supported slug). Use --no-llm to run entirely offline with heuristic fallbacks.

Global Options

These flags apply to every jugeo invocation and must be placed before the subcommand name.

Flag Alias Type Default Description
--verbose -v bool false Enable verbose output. Prints intermediate judgment sections, solver calls, LLM prompt/response traces, and timing breakdowns.
--format text|json text Output format. text is human-readable with colour; json is machine-readable and suitable for piping into other tools or CI systems.
--output -o path stdout Write primary output to this directory (or file) instead of stdout. Artefacts such as proof certificates, repaired files, and generated code are also placed here.
--no-llm bool false Skip all LLM calls and use heuristic fallbacks. Useful for offline environments, reproducible CI, or budget-constrained runs. Reduces trust tier to Solver at most.
--model string claude-sonnet-4.6 LLM model slug. Accepts any Anthropic model identifier. Also accepts OpenAI-compatible slugs when OPENAI_API_KEY is set and the slug begins with gpt- or o1.
Example
zsh — jugeo global flags
# verbose + JSON output + custom model; all flags before the subcommand
$ jugeo --verbose --format json --model claude-opus-4-5 prove --spec spec.py --impl impl.py

# fully offline: heuristic bug detection only, no API calls
$ jugeo --no-llm bugs mycode.py

# write JSON results to ./results/ with verbose trace
$ jugeo -v --output ./results --format json bugs mycode.py

jugeo prove

Full sheaf-theoretic program verification. Loads spec and implementation into a semantic site, computes descent obstructions, dispatches to Z3/SMT and LLM evidence channels, and attempts to glue local judgments into a global proof certificate.

Flag Type Default Description
--spec path required Path to the specification file (Python with JuGeo judgment annotations).
--impl path required Path to the implementation file to verify against the spec.
--strategy enum AUTO Proof strategy. One of: AUTO, EXHAUSTIVE, FAST, SMT_ONLY, LLM_ONLY, DESCENT.
--timeout int 120 Per-judgment solver timeout in seconds.
--cert path Write proof certificate JSON to this path.
Usage
bash
jugeo prove --spec spec.py --impl impl.py --strategy EXHAUSTIVE
Terminal demo
zsh — jugeo prove
$ jugeo prove --spec spec.py --impl impl.py --strategy EXHAUSTIVE
JuGeo Sheaf-Theoretic Verifier v0.9.1
Loading spec .............. spec.py (3 judgments, 2 invariants)
Loading impl .............. impl.py (147 LOC)
Building semantic site .... 8 open sets, 12 cover relations
Phase 1  Descent obstruction check
  Cech complex computed (H^0 = 0, H^1 = 0)
  No gluing obstructions detected
Phase 2  SMT dispatch (Z3 4.13.0)
  J0: pre/post ........... SAT (0.32 s)
  J1: loop invariant ..... SAT (1.14 s)
  J2: termination ........ SAT (0.87 s)
Phase 3  LLM oracle (claude-sonnet-4.6)
  Semantic coherence check passed
  Edge cases reviewed (0 anomalies)
Gluing
  Local sections glue to global judgment
VERIFIED  Trust tier: Verified (T1) | Elapsed: 4.21 s
Certificate written to ./proof_cert_impl.json
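The annotation format for spec files is not reproduced in this reference. As a rough illustration only — the @pre/@post decorator names below are hypothetical, and real JuGeo judgments are discharged by SMT rather than runtime asserts — a pre/post judgment like J0 can be pictured as:

```python
import functools

def pre(cond):
    """Hypothetical stand-in for a JuGeo precondition annotation."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(x):
            assert cond(x), "precondition violated"
            return fn(x)
        return wrapper
    return deco

def post(cond):
    """Hypothetical stand-in for a JuGeo postcondition annotation."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(x):
            result = fn(x)
            assert cond(result), "postcondition violated"
            return result
        return wrapper
    return deco

@pre(lambda x: x >= 0)    # cf. "pre: x >= 0" in the jugeo load demo
@post(lambda r: r >= 0)   # cf. "post: result >= 0"
def process(x):
    return x * 2

assert process(3) == 6
```

jugeo prove verifies such judgments symbolically for all inputs; the runtime checks here only convey the shape of the contract.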

🐛 jugeo bugs

Static bug detection across six well-defined Python bug classes. Combines AST analysis with Z3-backed data-flow reasoning and optional LLM triage to rank findings by severity and fix cost.

Detected bug classes
bare-except
identity-literal
late-binding-closure
mutable-default
open-without-close
shadow-builtin
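Two of these classes are easy to reproduce in plain Python. The snippet below (illustrative, not JuGeo output) shows the faulty behaviour and the standard fixes:

```python
def append_bad(item, items=[]):     # mutable-default: one list shared by all calls
    items.append(item)
    return items

append_bad("a")
assert append_bad("b") == ["a", "b"]   # state leaks between calls

def append_good(item, items=None):  # the standard fix
    if items is None:
        items = []
    items.append(item)
    return items

append_good("a")
assert append_good("b") == ["b"]       # fresh list per call

# late-binding-closure: every lambda sees the final value of i
handlers_bad = [lambda: i for i in range(3)]
assert [h() for h in handlers_bad] == [2, 2, 2]

# capture by value with a default argument
handlers_good = [lambda i=i: i for i in range(3)]
assert [h() for h in handlers_good] == [0, 1, 2]
```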
Flag Type Default Description
<file> path required Python source file (or directory — scanned recursively) to analyse.
--classes list all Comma-separated subset of bug classes to check, e.g. bare-except,mutable-default.
--min-severity low|med|high low Suppress findings below this severity threshold.
Usage
bash
jugeo --format json bugs mycode.py
Terminal demo
zsh — jugeo bugs
$ jugeo --format json bugs mycode.py
{
  "file": "mycode.py",
  "bugs_found": 3,
  "findings": [
    {
      "class": "mutable-default",
      "line": 14,
      "col": 12,
      "severity": "high",
      "message": "Mutable default argument `[]` — shared across all calls.",
      "fix_hint": "Use `None` as default; initialise inside function body."
    },
    {
      "class": "late-binding-closure",
      "line": 38,
      "col": 5,
      "severity": "med",
      "message": "Loop variable `i` captured by reference in lambda.",
      "fix_hint": "Use `lambda i=i: ...` to capture by value."
    },
    {
      "class": "open-without-close",
      "line": 57,
      "col": 9,
      "severity": "high",
      "message": "`open(...)` not wrapped in `with` statement — resource may leak.",
      "fix_hint": "Rewrite as `with open(...) as f:`."
    }
  ],
  "elapsed_s": 0.41,
  "trust_tier": "Solver"
}
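With --format json the report can be piped into other tooling. A minimal sketch of consuming it in Python — the findings literal is abridged from the sample output above:

```python
import json

# Abridged from the sample `jugeo --format json bugs` output above.
raw = """
{
  "file": "mycode.py",
  "bugs_found": 3,
  "findings": [
    {"class": "mutable-default",      "line": 14, "severity": "high"},
    {"class": "late-binding-closure", "line": 38, "severity": "med"},
    {"class": "open-without-close",   "line": 57, "severity": "high"}
  ]
}
"""

report = json.loads(raw)

# Keep only findings at a chosen severity, mirroring --min-severity.
high = [f for f in report["findings"] if f["severity"] == "high"]
for f in high:
    print(f"{report['file']}:{f['line']} {f['class']}")
# → mycode.py:14 mutable-default
# → mycode.py:57 open-without-close
```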

📋 jugeo spec

Check a Python implementation against a formal specification file. Extracts pre/post conditions and invariants from the spec, encodes them in SMT, and verifies the implementation satisfies every judgment section.

Flag Type Default Description
--spec path required Specification file.
--impl path required Implementation file to check.
--report bool true Print a per-judgment compliance report table.
Usage
bash
jugeo spec --spec spec.py --impl impl.py
Terminal demo
zsh — jugeo spec
$ jugeo spec --spec spec.py --impl impl.py
Spec Compliance Report
Spec: spec.py    Impl: impl.py

Judgment           Status  Trust       Evidence
------------------------------------------------------
J0 pre_condition   PASS    Solver      Z3 SAT (0.28 s)
J1 post_condition  PASS    Solver      Z3 SAT (0.51 s)
J2 type_invariant  PASS    Verified    Z3 + LLM agree
J3 resource_bound  FAIL    Unverified  Countermodel found

Countermodel for J3:
  x = 1000000 → alloc_bytes = 134217728 (exceeds bound 67108864)

PARTIAL  3/4 judgments satisfied | Elapsed: 1.82 s

↔️ jugeo equiv

Check semantic equivalence of two Python programs. Uses sheaf-theoretic judgment transport to compare observable behaviour across all reachable inputs, optionally modulo a relation (e.g. output order, floating-point tolerance).

Flag Type Default Description
<prog1> <prog2> path path required The two programs to compare.
--modulo string Equivalence modulo relation, e.g. order, fp-tol=1e-9.
--entry string main Entry-point function name to compare.
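What --modulo fp-tol=… means can be seen with two pure-Python summation routines (hypothetical stand-ins, not JuGeo code): they disagree bit-for-bit but are equivalent once a tolerance relation is applied.

```python
import math

def prog_a(xs):
    total = 0.0
    for x in xs:            # naive left-to-right summation
        total += x
    return total

def prog_b(xs):
    return math.fsum(xs)    # compensated summation

xs = [0.1] * 10
assert prog_a(xs) != prog_b(xs)                            # not bit-identical
assert math.isclose(prog_a(xs), prog_b(xs), abs_tol=1e-9)  # equivalent modulo fp-tol=1e-9
```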
Usage
bash
jugeo equiv prog1.py prog2.py
Terminal demo
zsh — jugeo equiv
$ jugeo equiv prog1.py prog2.py
Semantic Equivalence Check
Program A: prog1.py (entry: main)
Program B: prog2.py (entry: main)
Building judgment sections ........ A: 5 sections  B: 5 sections
Computing transport map ........... 25 pairs
SMT product encoding .............. Z3 bitvector + array theory
  I/O behaviour equivalent on common domain
  Effect ordering preserved
  Exception semantics agree
EQUIVALENT  Trust tier: Solver (T2) | Elapsed: 2.09 s

🔧 jugeo repair

Suggest minimal, semantically grounded repairs for buggy code. Analyses detected bugs and failing spec judgments, proposes diffs, and optionally applies them in-place after confirmation.

Flag Type Default Description
<file> path required Buggy Python file to repair.
--spec path Optional spec file; repairs are guided toward spec compliance.
--apply bool false Apply the highest-confidence repair automatically (writes file in-place).
--max-repairs int 5 Maximum number of repair candidates to generate per finding.
Usage
bash
jugeo repair buggy.py --spec spec.py
Terminal demo
zsh — jugeo repair
$ jugeo repair buggy.py --spec spec.py
Repair Suggestions — buggy.py

Finding 1  mutable-default  line 14  [HIGH]
  - def process(items=[]):
  + def process(items=None):
  +     if items is None: items = []
  Confidence: 0.97 | Trust: Solver

Finding 2  late-binding-closure  line 38  [MED]
  - handlers = [lambda x: x + i for i in range(n)]
  + handlers = [lambda x, i=i: x + i for i in range(n)]
  Confidence: 0.93 | Trust: Solver

Spec compliance improvement
  Before: 3/4 judgments pass
  After:  4/4 judgments pass (projected)

Run with --apply to write repairs. Always review diffs before applying.

📊 jugeo evaluate

Evaluate code quality, semantic maturity, and trust tier. Produces a scorecard across multiple dimensions including correctness, robustness, documentation alignment, and cyclic maturity level.

Flag Type Default Description
<file> path required Python file or package directory to evaluate.
--spec path Specification file for spec-compliance sub-score.
--maturity bool true Include cyclic maturity level (CML 1–5) in the report.
Usage
bash
jugeo evaluate mycode.py --spec spec.py
Terminal demo
zsh — jugeo evaluate
$ jugeo evaluate mycode.py --spec spec.py
JuGeo Evaluation Report — mycode.py

Dimension            Score  Trust Tier
------------------------------------------
Correctness          0.87   Solver
Spec compliance      0.75   Solver
Bug density          0.60   Runtime
Documentation align  0.91   Oracle
Effect safety        0.82   Solver
------------------------------------------
Overall              0.79   Runtime (T3)

Cyclic Maturity Level
  CML-3 (Structured — solver-backed invariants, partial spec)
Elapsed: 3.44 s

jugeo generate

Generate Python code from a formal specification or judgment description. Uses the LLM oracle guided by SMT-checked preconditions to produce implementations that are pre-verified at the judgment level before being returned.

Flag Type Default Description
--spec path required Specification file to generate from.
--target path Output file for generated implementation.
--verify bool true Run jugeo prove on the generated code before writing it.
--attempts int 3 Number of LLM generation + verification rounds.
Usage
bash
jugeo generate --spec spec.py --target impl_gen.py --verify
Terminal demo
zsh — jugeo generate
$ jugeo generate --spec spec.py --target impl_gen.py --verify
Reading spec ................. 3 judgments extracted
Round 1  Generating candidate ...
  LLM generation ............. 1.23 s (claude-sonnet-4.6)
  SMT verification ........... 0.89 s
  Result: FAIL  J2 termination not satisfied
Round 2  Regenerating with feedback ...
  LLM generation ............. 1.41 s
  SMT verification ........... 1.05 s
  Result: PASS  all 3 judgments satisfied
Generated impl_gen.py
Trust tier: Solver (T2) | Elapsed: 4.68 s

▶️ jugeo run

Run a full judgment pipeline from a TOML or JSON configuration file. Useful for multi-step workflows (e.g. load → encode → prove → evaluate) defined declaratively and reproducibly.

Flag Type Default Description
<config> path required Pipeline config file (.toml or .json).
--dry-run bool false Parse and validate the config without executing any steps.
--step string Run only the named step from the pipeline.
Usage
bash
jugeo run pipeline.toml
Terminal demo
zsh — jugeo run
$ jugeo run pipeline.toml
Pipeline pipeline.toml (4 steps)
Step 1  load      src/core.py ......... OK (0.14 s)
Step 2  encode    z3 bitvec ........... OK (0.38 s)
Step 3  prove     EXHAUSTIVE .......... OK (3.91 s)
Step 4  evaluate  scorecard ........... OK (1.22 s)
Pipeline complete  4/4 steps passed | Total: 5.65 s
Artefacts written to ./pipeline_output/
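The config schema is not documented in this reference. As a hypothetical sketch only — the [[step]] table name and field names are illustrative, not JuGeo's actual format — a four-step pipeline like the demo's might look like:

```toml
# Hypothetical schema — field names are illustrative, not JuGeo's documented format.
[[step]]
name = "load"
file = "src/core.py"

[[step]]
name = "encode"
encoding = "bitvec"

[[step]]
name = "prove"
strategy = "EXHAUSTIVE"

[[step]]
name = "evaluate"
report = "scorecard"
```

Validate a config without running it via jugeo run pipeline.toml --dry-run.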

🌐 jugeo server

Start a local HTTP API server exposing all JuGeo commands as REST endpoints. Designed for IDE plugin integration, CI pipelines, and the JuGeo web dashboard.

Flag Type Default Description
--host string 127.0.0.1 Bind address.
--port int 7800 TCP port.
--reload bool false Auto-reload on code changes (development mode).
--workers int 4 Number of async worker threads.
Usage
bash
jugeo server --port 7800
Terminal demo
zsh — jugeo server
$ jugeo server --port 7800
JuGeo HTTP API server
Listening on http://127.0.0.1:7800
Workers: 4
Endpoints:
  POST /prove
  POST /bugs
  POST /spec
  POST /equiv
  POST /repair
  GET  /health
  GET  /docs (OpenAPI)
Press Ctrl+C to stop.
2026-03-23 12:04:31 INFO GET /health 200 0.3ms

📥 jugeo load

Load and analyse a Python program into judgment sections. Produces a human- and machine-readable breakdown of the program's semantic structure: sections, effects, type constraints, and import graph.

Flag Type Default Description
<file> path required Python source file or package directory.
--sections bool true Print extracted judgment sections.
--effects bool true Print Python effect annotations per section.
Usage
bash
jugeo --format json load mycode.py
Terminal demo
zsh — jugeo load
$ jugeo load mycode.py
Program Analysis — mycode.py
LOC: 147 | Functions: 8 | Classes: 2 | Imports: 5

Judgment Sections
  J0 process()          pre: x >= 0         post: result >= 0
  J1 validate()         pre: items != None  post: valid_items subset items
  J2 Pipeline.__init__  pre: cfg is dict    post: self.ready == True

Effect Annotations
  process()       [PURE]
  validate()      [IO_READ]
  run_pipeline()  [IO_READ, IO_WRITE, ASYNC]

Import Graph
  mycode.py -> pathlib, asyncio, dataclasses, jugeo.packs.core
Elapsed: 0.09 s

🧮 jugeo encode

Encode a Python program's judgment sections into Z3/SMT. Supports scalar, sequence, tensor, and text encodings. Output is an SMT-LIB2 file or a Z3 Python script.

Flag Type Default Description
<file> path required Python source to encode.
--encoding enum scalar Encoding strategy: scalar, bitvec, sequence, tensor, text, auto.
--smtlib bool false Emit raw SMT-LIB2 instead of Z3 Python API code.
Usage
bash
jugeo encode mycode.py --encoding bitvec --smtlib
Terminal demo
zsh — jugeo encode
$ jugeo encode mycode.py --encoding bitvec --smtlib
Z3/SMT Encoding — mycode.py [bitvec]
Sections encoded: 3 | Variables: 14 | Constraints: 27

; === J0: process() pre/post ===
(declare-const x (_ BitVec 64))
(declare-const result (_ BitVec 64))
(assert (bvsge x (_ bv0 64)))       ; pre: x >= 0
(assert (bvsge result (_ bv0 64)))  ; post: result >= 0
...

SMT-LIB2 written to mycode.smt2 (3.1 KB)
Elapsed: 0.21 s
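The J0 excerpt in the demo can be assembled into a complete, standalone SMT-LIB2 file; the set-logic, check-sat, and get-model lines below are added here for completeness and are not part of the excerpt:

```
; J0: process() pre/post, completed into a runnable SMT-LIB2 file
(set-logic QF_BV)
(declare-const x (_ BitVec 64))
(declare-const result (_ BitVec 64))
(assert (bvsge x (_ bv0 64)))       ; pre:  x >= 0
(assert (bvsge result (_ bv0 64)))  ; post: result >= 0
(check-sat)   ; sat — the judgment's constraints are jointly satisfiable
(get-model)
```

Any SMT-LIB2-compliant solver (e.g. z3 mycode.smt2) can check the emitted file directly.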

🗂️ jugeo classify

Classify a problem or code file against the JuGeo problem atlas. Returns the nearest problem families, recommended proof strategies, and analogous theorems from the theorem ecology.

Flag Type Default Description
<input> path|string required Python file, spec file, or free-text problem description (quoted).
--top-k int 5 Return the top k nearest problem families.
Usage
bash
jugeo classify spec.py --top-k 3
Terminal demo
zsh — jugeo classify
$ jugeo classify spec.py --top-k 3
Problem Classification — spec.py

Rank  Family                    Sim   Strategy
------------------------------------------------------
1     Array bound verification  0.91  SMT_ONLY
2     Loop termination          0.84  DESCENT
3     Resource management       0.79  EXHAUSTIVE

Analogous theorems from ecology:
  - BoundedArrayAccess (Corollary 3.2)
  - LoopRankDecrement (Theorem 4.1)
Elapsed: 0.67 s

📐 jugeo alignment

Check documentation (docstrings, README, type annotations) for honest projection against the actual code semantics. Detects outdated docs, misleading claims, and missing coverage.

Flag Type Default Description
<file> path required Python source file with docstrings to check.
--readme path External README/markdown file to check in addition.
--strict bool false Fail on any misalignment, not just high-severity ones.
Usage
bash
jugeo alignment mycode.py --readme README.md
Terminal demo
zsh — jugeo alignment
$ jugeo alignment mycode.py --readme README.md
Documentation Alignment Report — mycode.py
  process()   docstring matches behaviour (score 0.94)
  validate()  docstring mentions O(n) but impl is O(n log n)  [MED]
  Pipeline    README claims thread-safe; no locking detected  [HIGH]
  Type stubs fully aligned with runtime types
Overall alignment: 0.71 | 2 findings (1 HIGH, 1 MED)
Elapsed: 1.38 s

🌀 jugeo mixed

Run bugs, spec compliance, and equivalence checks simultaneously in a single pass. More efficient than calling each command separately; shares the loaded semantic site and SMT encoding across all three analyses.

Flag Type Default Description
--impl path required Implementation file.
--spec path Spec file for compliance check (optional).
--reference path Reference implementation for equivalence check (optional).
Usage
bash
jugeo mixed --impl impl.py --spec spec.py --reference ref.py
Terminal demo
zsh — jugeo mixed
$ jugeo mixed --impl impl.py --spec spec.py --reference ref.py
Mixed Analysis — impl.py
Shared site built in 0.11 s | Shared encoding in 0.29 s
  [Bugs]             0 bugs found across 6 classes
  [Spec]             4/4 judgments satisfied (Trust: Solver)
  [Equiv vs ref.py]  Semantically equivalent (Trust: Solver)
ALL CHECKS PASSED | Elapsed: 2.87 s

ℹ️ jugeo info

Show package version, installed packs, solver versions, trust tier summary, and environment diagnostics. Useful for debugging and support reports.

Usage
bash
jugeo info
Terminal demo
zsh — jugeo info
$ jugeo info
JuGeo v0.9.1
Python: 3.12.2
Platform: darwin 24.6.0 arm64

Solvers
  Z3 4.13.0   (available)
  cvc5 1.1.2  (available)
  Lean 4      not found (optional — Lean proofs unavailable)

LLM
  ANTHROPIC_API_KEY set
  Default model: claude-sonnet-4.6

Installed Packs
  jugeo.packs.core   v0.9.1  (14 judgments)
  jugeo.packs.async  v0.9.1  (8 judgments)
  jugeo.packs.heap   v0.8.3  (6 judgments)

Trust Tiers Available
  T1 Verified  T2 Solver  T3 Runtime  T4 Oracle  T5 Copilot  T6 Unverified

🧪 jugeo test

Run the JuGeo test and benchmark suite. Can run the full internal test suite, a specific experiment, or a user-supplied benchmark configuration.

Flag Type Default Description
--suite string all Test suite name or all.
--benchmark path User benchmark config file.
--fast bool false Skip slow integration tests; run only unit tests.
Usage
bash
jugeo test --suite bugs --fast
Terminal demo
zsh — jugeo test
$ jugeo test --suite bugs --fast
JuGeo Test Suite — bugs [fast]
Collecting tests .............. 42 tests
Running ...
  test_bare_except           (0.03 s)
  test_identity_literal      (0.02 s)
  test_late_binding_closure  (0.04 s)
  test_mutable_default       (0.02 s)
  test_open_without_close    (0.03 s)
  test_shadow_builtin        (0.02 s)
  ... 36 more tests ...
42/42 passed  0 failed  0 skipped | Elapsed: 1.87 s

📡 jugeo descend

Run descent and gluing on pre-computed judgment data. Takes a set of local judgment sections (from jugeo load or jugeo encode) and attempts to glue them into a global section via the Cech descent algorithm.

Flag Type Default Description
<data> path required JSON file of judgment sections (output of jugeo --format json load).
--obstruction-report bool true Print the full Cech cohomology obstruction report.
Usage
bash
jugeo descend sections.json
Terminal demo
zsh — jugeo descend
$ jugeo descend sections.json
Descent / Gluing — sections.json
Sections: 5 | Cover relations: 8 | Overlaps: 12

Cech Complex
  H^0 dim = 0 (global section exists)
  H^1 dim = 0 (no gluing obstruction)

Gluing Result
  Local sections glue uniquely to global judgment
  Global section written to glued_section.json
Elapsed: 0.44 s

🧠 jugeo ideate

Mathematical ideation and theorem discovery. Given seed theorems or problem families, generates new conjectures, finds analogies via analogy transport, and ranks candidates by plausibility score.

Flag Type Default Description
--seeds path File of seed theorem names or IDs (one per line).
--domain string Mathematical domain to ideate within, e.g. sheaf-theory, type-theory.
--budget int 20 Maximum number of conjectures to generate.
Usage
bash
jugeo ideate --domain sheaf-theory --budget 10
Terminal demo
zsh — jugeo ideate
$ jugeo ideate --domain sheaf-theory --budget 10
Theorem Ideation — domain: sheaf-theory
Seed ecology: 47 theorems | Budget: 10 conjectures
Generating conjectures ...
  C01  Plausibility 0.89 — "Every flasque sheaf on a Noetherian site admits
       a canonical resolution via judgment descent."
  C02  Plausibility 0.84 — "Trust transport is functorial under cover refinement."
  C03  Plausibility 0.81 — "Coherent judgment sections satisfy the
       Mittag-Leffler condition under inverse limits."
  ... 7 more conjectures ...
Top conjecture written to ideation_results.json
Elapsed: 8.41 s (LLM: 7.92 s, analogy transport: 0.49 s)

🎼 jugeo orchestrate

Synthesize a complete program from a natural language idea. Uses sheaf-theoretic elaboration to plan, generate, test, and refine software.

Flag Type Default Description
idea string required The software idea to implement (quoted string).
--max-iterations int 5 Maximum refinement iterations.
--output, -o path Output directory for the generated project.
Usage
bash
jugeo orchestrate "implement a stack data structure"
Terminal demo
zsh — jugeo orchestrate
$ jugeo orchestrate --task task.toml --agents 4
Orchestration Pipeline — task.toml
Agents: 4 | Consensus: majority
Phase 1  Classification
  Agent-0 classify ......... Array bound verification (0.68 s)
Phase 2  Evidence routing
  Agent-1 Z3 bitvec ........ dispatched
  Agent-2 Z3 array ......... dispatched
  Agent-3 LLM oracle ....... dispatched
Phase 3  Consensus (majority)
  Agent-1 PASS
  Agent-2 PASS
  Agent-3 PASS (LLM agrees)
Consensus: VERIFIED (3/3 agents)
Trust tier: Verified (T1) | Elapsed: 6.22 s

🏛️ jugeo foundation

Full synthesis pipeline: mathematical fields → tournament → code generation → textbook generation. Produces a self-contained research artefact including theorems, implementations, and a generated textbook chapter.

Flag Type Default Description
--fields path|string required Comma-separated field names or path to fields config.
--rounds int 3 Number of tournament rounds for idea selection.
--textbook bool true Generate a textbook chapter from the synthesis output.
--latex bool false Emit LaTeX source for the textbook chapter.
Usage
bash
jugeo foundation --fields sheaf-theory,type-theory --rounds 3 --latex
Terminal demo
zsh — jugeo foundation
$ jugeo foundation --fields sheaf-theory,type-theory --rounds 3 --latex
Foundation Synthesis Pipeline
Fields: sheaf-theory, type-theory | Tournament rounds: 3
Stage 1  Ideation
  Generated 24 candidate ideas across 2 fields (4.1 s)
Stage 2  Tournament (3 rounds)
  Round 1: 24 ideas → 12 finalists
  Round 2: 12 ideas → 6 finalists
  Round 3: 6 ideas → 3 winners
  Winner: "Judgment transport along sheaf morphisms" (score 0.93)
Stage 3  Code generation
  Generating implementation .... verified in 2 rounds (5.8 s)
Stage 4  Textbook generation
  Chapter draft ............... 1,842 words (3.2 s)
  LaTeX source ................ chapter_sheaf_type.tex
Foundation synthesis complete | Total: 31.4 s
Artefacts in ./foundation_output/

📚 jugeo catalog

Browse and search the JuGeo problem catalog. Lists all registered problem families, their recommended strategies, associated theorems, and example code.

Flag Type Default Description
--search string Filter catalog by keyword (searches name, description, tags).
--tag string Filter by tag, e.g. termination, resource, concurrency.
--detail string Show full detail for a named problem family.
Usage
bash
jugeo catalog --tag termination
Terminal demo
zsh — jugeo catalog
$ jugeo catalog --tag termination
Problem Catalog — tag: termination (4 families)

ID                      Strategy    Papers  Tags
------------------------------------------------------------------------
loop-rank-decrement     DESCENT     —       termination, loop
well-founded-recursion  SMT_ONLY    —       termination, recursion
async-task-completion   EXHAUSTIVE  —       termination, async
generator-exhaustion    AUTO        —       termination, generators

Use `jugeo catalog --detail loop-rank-decrement` for full description.

$ jugeo catalog --detail loop-rank-decrement
loop-rank-decrement
  Prove that a loop terminates by exhibiting a ranking function that
  strictly decreases on each iteration and is bounded below.
  Strategy: DESCENT
  Reference: Theorem 4.1
  Tags: termination, loop, ranking-function
  Pack: jugeo.packs.core
  Example: jugeo classify "while loop with counter" → loop-rank-decrement (0.88)

🌐 jugeo webapp

Generate a complete Flask web application from a natural-language prompt. Produces models, routes, templates, static assets, and runs cross-layer descent verification to catch mismatches before deployment.

Flag Type Default Description
--outdir, -o path required Output directory for the generated application.
prompt string Natural-language description of the app (optional positional).
--type, -t enum crud Application type: crud, api, dashboard, form_workflow, custom.
--template enum standard Template complexity: minimal, standard, full, custom.
--name string app Application name.
--port, -p int 5000 Port number for the Flask dev server.
--no-verify flag Skip cross-layer descent verification after generation.
--html-only flag Generate an HTML-only static app (no Flask).
--include-tests flag Generate test scaffolding alongside the app.
--include-docker flag Generate a Dockerfile for containerised deployment.
Usage
bash
jugeo webapp "a recipe sharing app" --outdir ./my-app --type crud
Terminal demo
zsh — jugeo webapp
$ jugeo webapp "a recipe sharing app" -o ./recipes-app
Pipeline started: outdir=./recipes-app app=app template=standard
Agent available: True, Generators available: True
Phase 1  Ideation
  Domain nouns: recipe, ingredient, user
  Routes: 14 (CRUD × 3 nouns + auth + explore)
Phase 2  Generation
  models.py .............. 3 models, 12 fields
  app.py ................. 14 routes
  templates/ ............. 11 Jinja2 templates
  static/style.css ....... 420 lines
  static/app.js .......... 180 lines
Phase 3  Cross-layer descent verification
  HTML↔CSS: 0 obstructions
  HTML↔Flask: 0 obstructions
  Navigation reachability: all routes reachable
  Block name consistency: ok
✔ Generated 18 files in ./recipes-app
Run: cd recipes-app && pip install -r requirements.txt && python app.py

📈 jugeo improve

Improve an existing codebase via an agent-driven descent loop. Scans source files, identifies structural gaps, dispatches an AI agent to make targeted improvements, and verifies each iteration satisfies the stated obligation.

Flag Type Default Description
directory path required Root directory of the codebase to improve.
improvement string required Natural-language description of the improvement to make.
--max-iterations int 5 Maximum improvement iterations before stopping.
Usage
bash
jugeo improve ./my-project "add comprehensive type hints"
Terminal demo
zsh — jugeo improve
$ jugeo improve ./my-project "add comprehensive type hints"
Improvement target: ./my-project
Obligation: add comprehensive type hints
Max iterations: 5

Iteration 1/5
  Scanned 42 source files
  Structural gaps: 18 total (0 errors, 18 warnings)
  Dispatching agent...
  Modified 12 files, keyword_coverage: 0.67

Iteration 2/5
  Structural gaps: 6 total (0 errors, 6 warnings)
  Modified 4 files, keyword_coverage: 0.89

✔ Obligation satisfied after 2 iterations

🔬 jugeo research

Directed research pipeline: ideate a novel approach, generate an implementation, benchmark against baselines, refine or pivot, and produce a research paper. Runs until descent succeeds (H¹ = 0) on the research site.

Flag Type Default Description
prompt string required Natural-language research prompt describing the problem.
--max-iterations int 30 Maximum refinement iterations.
--max-pivots int 3 Maximum theory pivots before giving up.
--seed int Random seed for reproducibility.
Usage
bash
jugeo research "optimal sorting with gradient information"
Terminal demo
zsh — jugeo research
$ jugeo research "optimal sorting with gradient information"
Directed Research — optimal sorting with gradient information
Output: outputs/research_20260330_122105

═══ PHASE 1: IDEATION ═══
Running cross-domain ideation...
Theory: gradient-informed merge sort via sheaf descent

═══ PHASE 2: IMPLEMENTATION ═══
Generated 4 source files (1.2 kLoC)

═══ PHASE 3: BENCHMARKING ═══
Baseline: stdlib sort — 1.00×
Ours: gradient sort — 1.12× speedup

═══ PHASE 4: PAPER ═══
paper.tex (12 pages, 3 figures)
README.md

🎯 jugeo research-focused

Evolutionary tournament to find the best implementation for a canonical problem. Tries multiple competing mathematical theories, keeps copies of what works, and evolves the champion to beat itself on the specified metrics.

Flag Type Default Description
field string required Research domain, e.g. “computational finance”.
--metrics string required Comma-separated evaluation metrics, e.g. "Sharpe Ratio,MAPE,speed".
--dataset string required Canonical dataset to benchmark against.
--primary-metric string first metric The single metric used to crown the champion.
-g, --generations int 5 Number of evolutionary generations.
-t, --theories-per-gen int 4 Competing theories per generation.
Usage
bash
jugeo research-focused "sorting" --metrics "time,memory" --dataset "standard"
Terminal demo
zsh — jugeo research-focused
$ jugeo research-focused "sorting" --metrics "time,memory" --dataset "standard"
FOCUSED RESEARCH: sorting
Dataset: standard
Metrics: time, memory
Primary metric: time
Generations: 5 | Theories/gen: 4

Generation 1
  Theory A: radix-sheaf sort ......... time: 0.42s  memory: 12MB
  Theory B: merge-descent sort ....... time: 0.38s  memory: 18MB
  ✔ Champion: Theory B (time: 0.38s)

Generation 2
  Theory C: hybrid-cover sort ........ time: 0.35s  memory: 14MB
  ✔ New champion: Theory C (time: 0.35s)

🚀 jugeo research-and-implement

Iterative descent on a 7-object delivery site: ideates a novel approach, generates a large-scale implementation, benchmarks against competitive baselines on real data, and refines or pivots until all hard obligations are met (H¹ = 0 on the delivery site).

Flag Type Default Description
prompt string required Natural-language description of the delivery goal.
--max-outer int 10 Maximum outer descent iterations.
--max-inner int 30 Maximum inner refinement iterations per outer loop.
--max-pivots int 3 Maximum theory pivots.
--min-kloc float 1.0 Minimum implementation size in thousands of lines of code.
--output path Output directory for the generated project.
Usage
bash
jugeo research-and-implement "optimal portfolio allocation" --output ./portfolio
Terminal demo
zsh — jugeo research-and-implement
$ jugeo research-and-implement "optimal portfolio allocation" --output ./portfolio
Research & Implement — 7-object delivery site
Output: ./portfolio

═══ PHASE 1: IDEATION ═══
Cross-domain ideation (3 idea sources)...
Theory: sheaf-theoretic risk decomposition

═══ PHASE 2: IMPLEMENTATION ═══
Generated 8 source files (2.4 kLoC)
README.md, paper.tex, benchmarks/

═══ PHASE 3: BENCHMARKING ═══
Baseline: mean-variance — Sharpe 1.24
Ours: sheaf-risk — Sharpe 1.41 (+13.7%)

═══ DELIVERY SITE DESCENT ═══
README: present
Paper: 14 pages, 4 figures
Code: 2.4 kLoC
Benchmarks: beat baseline
H¹ = 0 — all obligations met

Quick Reference

All commands at a glance.

Command One-liner Min. args LLM?
jugeo prove Full sheaf-theoretic verification --spec --impl Optional
jugeo bugs Detect 6 bug classes <file> Optional
jugeo spec Spec compliance check --spec --impl No
jugeo equiv Semantic equivalence <prog1> <prog2> No
jugeo repair Suggest / apply code repairs <file> Yes
jugeo evaluate Quality / maturity scorecard <file> Optional
jugeo generate Generate code from spec --spec Yes
jugeo run Run pipeline from config <config> Depends
jugeo server HTTP API server Depends
jugeo load Load & analyse program <file> No
jugeo encode Encode to Z3/SMT <file> No
jugeo classify Problem atlas classification <input> Optional
jugeo alignment Doc / code alignment check <file> Yes
jugeo mixed Bugs + spec + equiv in one pass --impl Optional
jugeo info Package / solver / tier info No
jugeo test Run test / benchmark suite Optional
jugeo descend Descent & gluing on judgment data <data> No
jugeo ideate Theorem discovery Yes
jugeo orchestrate Synthesize a program from an idea <idea> Yes
jugeo foundation Fields → tournament → code → textbook Yes
jugeo catalog Browse problem catalog No
jugeo webapp Generate a Flask web application --outdir Yes
jugeo improve Agent-driven codebase improvement <dir> <goal> Yes
jugeo research Directed research: ideate → implement → paper <prompt> Yes
jugeo research-focused Evolutionary tournament for best implementation <field> --metrics --dataset Yes
jugeo research-and-implement Full delivery site descent with hard obligations <prompt> Yes