# jugeo-agents: The Verification Language for Multi-Agent AI
Sheaf-theoretic verification for multi-agent LLM systems. Automatic contradiction detection, trust algebra, provenance tracing, knowledge fusion, and convergence guarantees — all without requiring an LLM.
## ⚡ Quick Install

```bash
pip install -e .
python jugeo-agents/examples/coding_agents_demo.py
```
> jugeo-agents lives inside the main JuGeo repository and is available through the same editable install as `jugeo` and `jugeo.webapp`; it is not a separate pip package on PyPI.
## What Makes This Different
Existing multi-agent frameworks — CrewAI, LangGraph, AutoGen — provide orchestration for LLM agents: routing messages, managing state, retrying on failure. But none of them offer formal verification of agent outputs. If Agent A says the company was founded in 2018 and Agent B says 2020, those frameworks will happily merge both answers, pick one at random, or majority-vote their way to a wrong consensus.
> **The majority-vote trap** — If three agents agree on a hallucinated "fact" and one agent has the correct answer, majority vote enshrines the hallucination. This is *phantom consensus*, and it is undetectable without formal methods.
jugeo-agents provides the mathematical guarantee layer. It treats each agent’s output as a local section of a presheaf over the task-decomposition site and uses sheaf cohomology to detect inconsistencies, hallucination cascades, and phantom consensus. The result is not another orchestrator — it is a verification compiler that sits alongside any framework and tells you exactly what is wrong, where the evidence chain breaks, and which claims can be trusted.
> **No LLM required** — jugeo-agents performs all verification using classical mathematics (lattice theory, sheaf cohomology, graph algorithms). NLP claim extraction uses spaCy, not a language model. Verification is deterministic, reproducible, and auditable.
## How It Interfaces with Claude Code, Copilot CLI, and Codex
For software engineering, jugeo-agents does not treat Claude Code, Copilot CLI, and Codex as opaque chatbots. It treats them as structured evidence-producing agents whose outputs can be normalized, compared, and fused.
Each coding-agent run becomes a local section over a coding site: repository + task + files touched + tool context + claimed properties. JuGeo records not just the patch text, but also explanations, file edits, tool usage, citations, and test results.
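The normalized record can be pictured as a small data structure. A minimal sketch, assuming a hypothetical `CodingSection` shape; the real adapter output format may differ:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a normalized coding-agent run; the actual adapter
# classes (ClaudeCodeAdapter, CopilotCLIAdapter, CodexAdapter) may differ.
@dataclass
class CodingSection:
    agent_id: str                                   # which coding agent ran
    repo: str                                       # repository the task targets
    task: str                                       # task description
    files_touched: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)  # bash/tool invocations
    claims: dict = field(default_factory=dict)      # e.g. {"tests_pass": True}

run = CodingSection(
    agent_id="claude-code",
    repo="acme/api",
    task="fix off-by-one in pagination",
    files_touched=["api/paginate.py"],
    tool_calls=["pytest tests/test_paginate.py"],
    claims={"tests_pass": True},
)
print(run.files_touched)
```

Because every adapter emits the same shape, runs from different CLIs become directly comparable.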
| External system | Adapter | What JuGeo captures |
|---|---|---|
| Claude Code | `ClaudeCodeAdapter` | code edits, file operations, bash/tool calls, explanations, test results |
| Copilot CLI | `CopilotCLIAdapter` | suggested edits, shell commands, repository inspection, explanations |
| Codex | `CodexAdapter` | candidate implementations, reasoning text, optional tool/test metadata |
Once normalized, JuGeo can ask questions that no single coding CLI can answer by itself: Did two agents make contradictory claims about the same function? Did one agent claim tests passed without tool evidence? Did three agents converge on the same patch for genuinely different reasons and trust levels?
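These questions reduce to simple queries over normalized runs. A minimal sketch with hypothetical field names (`claims`, `tool_evidence`); this is illustrative, not the jugeo-agents API:

```python
# Illustrative checks over normalized coding-agent runs.
runs = [
    {"agent": "claude-code", "claims": {"paginate": "O(1) per page"},
     "tool_evidence": ["pytest"]},
    {"agent": "codex", "claims": {"paginate": "O(n) per page"},
     "tool_evidence": []},
]

def contradictions(runs):
    """Did two agents make conflicting claims about the same function?"""
    seen, conflicts = {}, []
    for r in runs:
        for fn, claim in r["claims"].items():
            if fn in seen and seen[fn][1] != claim:
                conflicts.append((fn, seen[fn], (r["agent"], claim)))
            else:
                seen[fn] = (r["agent"], claim)
    return conflicts

# Did an agent make claims without any backing tool evidence?
unbacked = [r["agent"] for r in runs if r["claims"] and not r["tool_evidence"]]

print(contradictions(runs))
print(unbacked)  # ['codex']
```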
This is why jugeo-agents is not just an orchestrator wrapper. It is a
verification layer over coding-agent evidence. Claude Code, Copilot CLI,
and Codex become comparable local sections inside one mathematical object.
Want the full story? Read Coding-Agent CLIs for a dedicated explanation of the adapter layer, trust transport, and multi-agent coding verification workflow.
## Core Capabilities
Six interlocking subsystems, each grounded in a specific mathematical structure.
### Contradiction Detection
NLP-powered claim extraction finds conflicting facts across agents — founding years, revenue numbers, entity claims. Uses spaCy dependency parsing, not just regex.
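As an illustration of what claim extraction produces, here is a deliberately simplified stand-in that pulls founding-year facts with a regex. The actual pipeline uses spaCy dependency parsing and covers far more claim types; this only shows the input/output shape:

```python
import re

# Simplified stand-in for claim extraction. The real pipeline uses spaCy
# dependency parsing, not regex; this only demonstrates the idea of turning
# free text into comparable structured facts.
def extract_founding_year(text):
    m = re.search(r"founded in (\d{4})", text)
    return int(m.group(1)) if m else None

a = extract_founding_year("Acme was founded in 2018 with 450 employees.")
b = extract_founding_year("Acme, founded in 2020, has 500 staff.")
print(a, b, "contradiction" if a != b else "consistent")
```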
### Trust Algebra

Ordered trust lattice:

```
FORMALLY_PROVEN
  > TOOL_VERIFIED
  > RAG_GROUNDED
  > CROSS_AGENT_CONFIRMED
  > WEAK_MODEL_GENERATED
  > UNGROUNDED_CLAIM
  > SELF_CONTRADICTED
```

No silent promotion.
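The lattice and its no-promotion rule can be modeled with a plain `IntEnum`: fusing evidence takes the meet (minimum) on the chain, so a claim is never more trusted than its weakest support. The member names follow the README; the class itself is an illustrative model, not the library's actual API:

```python
from enum import IntEnum

# Illustrative model of the ordered trust lattice (not the jugeo-agents API).
class Trust(IntEnum):
    SELF_CONTRADICTED = 0
    UNGROUNDED_CLAIM = 1
    WEAK_MODEL_GENERATED = 2
    CROSS_AGENT_CONFIRMED = 3
    RAG_GROUNDED = 4
    TOOL_VERIFIED = 5
    FORMALLY_PROVEN = 6

def fuse(*levels):
    """Meet on the chain: a fused claim is only as trusted as its
    weakest supporting evidence, i.e. no silent promotion."""
    return min(levels)

print(fuse(Trust.TOOL_VERIFIED, Trust.UNGROUNDED_CLAIM).name)  # UNGROUNDED_CLAIM
```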
### Provenance Tracing
Full evidence-flow graph. Detect trust laundering (low-trust claim re-labeled as high-trust) and fabrication cascades across the agent network.
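Trust laundering reduces to a reachability check over the evidence graph: a claim is suspect when anything upstream of it carries strictly lower trust than its own label. A minimal sketch with hypothetical claim IDs and trust labels:

```python
# Sketch: trust laundering = a claim whose labeled trust exceeds the trust
# of something upstream in its evidence graph. Names are illustrative.
RANK = {"UNGROUNDED": 0, "CROSS_AGENT": 1, "TOOL_VERIFIED": 2}

# edges: claim -> supporting claims; labels: claim -> declared trust label
edges = {"c3": ["c2"], "c2": ["c1"], "c1": []}
labels = {"c1": "UNGROUNDED", "c2": "TOOL_VERIFIED", "c3": "TOOL_VERIFIED"}

def laundered(claim):
    """True if any ancestor in the evidence graph has strictly lower trust."""
    stack, seen = list(edges[claim]), set()
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        if RANK[labels[c]] < RANK[labels[claim]]:
            return True
        stack.extend(edges[c])
    return False

print([c for c in labels if laundered(c)])  # ['c2', 'c3']
```

Here `c2` re-labels an ungrounded claim as tool-verified, and the taint propagates to `c3`: a fabrication cascade in miniature.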
### Knowledge Fusion
The crown jewel: compute Čech cohomology over the agent-task site and assemble a verified global section — a consistent knowledge base with trust certificates. What majority-vote gets wrong, JuGeo gets right.
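The assembly step can be sketched as gluing per-agent claim dictionaries: keys on which all covering agents agree enter the global section, and conflicting keys are reported as obstructions instead of being merged. Illustrative only; the real assembler works over extracted claims with trust certificates:

```python
# Sketch of global-section assembly over per-agent claim sets.
sections = {
    "agent-a": {"founded": 2018, "hq": "Berlin"},
    "agent-b": {"founded": 2020, "hq": "Berlin"},
}

def assemble(sections):
    global_section, obstructions = {}, []
    for agent, claims in sections.items():
        for key, value in claims.items():
            if key in global_section and global_section[key] != value:
                # Conflicting values cannot glue: record the obstruction.
                obstructions.append((key, global_section[key], value))
                global_section.pop(key)
            elif not any(key == o[0] for o in obstructions):
                global_section[key] = value
    return global_section, obstructions

glued, h1 = assemble(sections)
print(glued)  # {'hq': 'Berlin'}
print(h1)     # [('founded', 2018, 2020)]
```

Note the contrast with majority vote: the conflicting `founded` key is surfaced as an obstruction, never silently resolved.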
### Convergence Monitoring
Lyapunov-function-based convergence tracking. Detects stalls, divergence, and estimates rounds to completion with formal bounds.
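A minimal model of the idea: treat inter-agent disagreement as a scalar "energy" per round and require it to be non-increasing, Lyapunov-style. The function name and thresholds below are assumptions, not the library's API:

```python
# Sketch of Lyapunov-style convergence monitoring: the disagreement energy
# must be non-increasing; a vanishing decrease at nonzero energy is a stall.
def monitor(energies, tol=1e-3):
    for prev, cur in zip(energies, energies[1:]):
        if cur > prev:
            return "diverging"  # Lyapunov condition violated
    if len(energies) >= 2 and energies[-2] - energies[-1] < tol and energies[-1] > tol:
        return "stalled"        # no longer decreasing, but not done
    return "converging"

print(monitor([9.0, 4.1, 1.7, 0.6]))  # converging
print(monitor([9.0, 4.1, 4.1, 4.1]))  # stalled
print(monitor([2.0, 3.5]))            # diverging
```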
### Treaty Negotiation
When agents contradict, JuGeo negotiates resolutions using trust-weighted arbitration — not just random selection or majority vote.
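The contrast with majority vote can be sketched directly: score each candidate resolution by the trust of its best supporting evidence rather than by vote count. Weights and names are illustrative assumptions:

```python
# Sketch of trust-weighted arbitration: one tool-verified claim outranks
# three ungrounded ones, which is exactly where majority vote fails.
WEIGHT = {"TOOL_VERIFIED": 5, "RAG_GROUNDED": 4, "UNGROUNDED_CLAIM": 1}

candidates = [
    ("founded=2020", "UNGROUNDED_CLAIM"),  # three agents, all ungrounded
    ("founded=2020", "UNGROUNDED_CLAIM"),
    ("founded=2020", "UNGROUNDED_CLAIM"),
    ("founded=2018", "TOOL_VERIFIED"),     # one agent, tool-backed
]

def arbitrate(candidates):
    scores = {}
    for value, trust in candidates:
        # Score by strongest evidence, not by number of votes.
        scores[value] = max(scores.get(value, 0), WEIGHT[trust])
    return max(scores, key=scores.get)

print(arbitrate(candidates))  # founded=2018, where majority vote would say 2020
```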
## The Mathematical Insight
Every multi-agent system implicitly defines a topological covering. Each agent’s subtask is an open set; the agent’s output is a local section of the knowledge presheaf over that open set. The fundamental question is: do these local sections glue into a consistent global section?
For every pair of agents \((i, j)\) covering overlapping subtasks:
\[
\sigma_i\big|_{U_i \cap U_j} \;=\; \sigma_j\big|_{U_i \cap U_j}
\]

When this fails, the obstruction lives in \(H^1(\mathcal{U}, \mathcal{F})\) — the first Čech cohomology group.
JuGeo classifies obstructions by cohomological degree:
- **\(H^0\) gaps** — missing coverage: no agent addressed a required subtask.
- **\(H^1\) contradictions** — pairwise disagreements between agents on overlapping claims.
- **\(H^2\) cascades** — multi-hop fabrication chains: errors that propagate through three or more agents.
- **Phantom sections** — locally consistent but globally ungrounded consensus (the majority-vote failure mode).
> **Why cohomology?** — Classical consistency checks (pairwise diff, majority vote) can only detect \(H^1\) contradictions. Phantom sections and \(H^2\) cascades are invisible to those methods. Sheaf cohomology is the minimal algebraic structure that captures all three failure modes simultaneously.
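The pairwise gluing condition is easy to model: represent each local section as a dict keyed by subtask, take the overlap \(U_i \cap U_j\) as the shared key set, and flag any pair whose restrictions disagree there. A sketch, not the library's implementation:

```python
# Sketch of the gluing check: sigma_a|_{U_a ∩ U_b} must equal sigma_b|_{U_a ∩ U_b}
# on every pairwise overlap; disagreements are H^1 obstructions.
U = {
    "agent-a": {"founding": 2018, "ceo": "Kim"},
    "agent-b": {"founding": 2020, "hq": "Berlin"},
    "agent-c": {"hq": "Berlin", "ceo": "Kim"},
}

def h1_obstructions(U):
    agents, bad = sorted(U), []
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            overlap = U[a].keys() & U[b].keys()  # U_a ∩ U_b
            for key in overlap:
                if U[a][key] != U[b][key]:       # restrictions disagree
                    bad.append((a, b, key))
    return bad

print(h1_obstructions(U))  # [('agent-a', 'agent-b', 'founding')]
```

A full Čech computation also covers triple overlaps (for \(H^2\) cascades) and groundedness (for phantom sections); this sketch shows only the \(H^1\) layer.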
## 30-Second Example
Two agents research the same company and return conflicting facts. JuGeo detects the contradictions automatically:
```python
from jugeo_agents import GlobalSectionAssembler, AgentOutput

assembler = GlobalSectionAssembler()
assembler.ingest(AgentOutput(
    agent_id="agent-a",
    output_text="Acme was founded in 2018 with 450 employees."
))
assembler.ingest(AgentOutput(
    agent_id="agent-b",
    output_text="Acme, founded in 2020, has 500 staff."
))

section = assembler.assemble()
print(section.summary_text())
```

Output:

```
H¹ contradiction detected: founding year 2018 vs 2020, employee count 450 vs 500
```
> **No LLM was invoked** — Claim extraction uses spaCy dependency parsing. Contradiction detection uses lattice comparison. The entire pipeline runs in milliseconds on CPU.
## How It Fits Together
jugeo-agents is not a replacement for your agent framework. It is a verification sidecar that works with any orchestrator:
```
┌──────────────────────────────────────────────────────────┐
│ Your Orchestrator (CrewAI / LangGraph / AutoGen / …)     │
│                                                          │
│  Agent A ─────┐                                          │
│  Agent B ─────┼──► Raw outputs                           │
│  Agent C ─────┘         │                                │
└─────────────────────────┼────────────────────────────────┘
                          │
                          ▼
┌──────────────────────────────────────────────────────────┐
│ jugeo-agents (verification layer)                        │
│                                                          │
│  1. Claim extraction        (spaCy NLP)                  │
│  2. Trust assignment        (lattice algebra)            │
│  3. Sheaf assembly          (Čech cohomology)            │
│  4. Obstruction report      (H⁰, H¹, H², phantom)        │
│  5. Treaty negotiation      (trust-weighted arbitration) │
│  6. Verified global section                              │
└──────────────────────────────────────────────────────────┘
```
## Explore the Documentation
Dive deeper into any area of jugeo-agents: