jugeo-agents lets you catch contradictions, trace provenance, and detect hallucination cascades across multiple LLM agents — all backed by the mathematics of sheaf theory, but wrapped in an API so simple it feels like cheating.

By the end of this tutorial you will have installed jugeo-agents, caught contradictions between agents, seen evidence beat consensus in the trust lattice, unmasked a hallucination cascade, and traced a claim's full provenance chain.

📘

Every code snippet on this page is copy-paste ready. If you want the theory behind the code, see Core Concepts.

1. Install

Install JuGeo once at the repository root, then use the bundled jugeo_agents module:

git clone https://github.com/thehalleyyoung/jugeo.git
cd jugeo
pip install -e .

# Then run the bundled jugeo-agents examples
python jugeo-agents/examples/coding_agents_demo.py
💡

jugeo-agents is part of the main JuGeo codebase, not a standalone PyPI package. If you want stronger NLP extraction, add the relevant extras or local dependencies inside the same JuGeo environment rather than installing a separate jugeo-agents distribution.

Verify your install:

python -c "import jugeo_agents; print(jugeo_agents.__version__)"
1.5 Understand the Coding-Agent Interface

When you use Claude Code, Copilot CLI, or Codex with jugeo-agents, JuGeo does not just read the final patch. It normalizes the entire run into a structured record containing the code, the explanation, the files touched, the tools used, the test results, and any metadata.

from jugeo_agents.adapters.coding_agents import (
    ClaudeCodeAdapter,
    CopilotCLIAdapter,
    CodexAdapter,
)

claude = ClaudeCodeAdapter.from_response(
    code="def parse(items): return [x.strip() for x in items]",
    explanation="Normalizes whitespace and preserves order.",
    tools_used=["bash:pytest"],
    test_results={"passed": True, "exit_code": 0},
)

copilot = CopilotCLIAdapter.from_response(
    code="def parse(items): return list(map(str.strip, items))",
    explanation="Equivalent map-based implementation.",
    tools_used=["grep", "read_file"],
)

codex = CodexAdapter.from_response(
    code="def parse(items): return [item.strip() for item in items if item]",
    explanation="Filters falsy inputs before normalization.",
)
🧩

These adapters are the key interface layer. They turn Claude Code, Copilot CLI, and Codex into comparable local sections so JuGeo can check contradictions, compare trust levels, and fuse only the claims that genuinely glue.
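As a rough mental model of that normalized record, here is an illustrative plain-Python sketch. The class and field names are hypothetical, chosen to mirror the fields listed above; they are not the actual jugeo_agents schema.

```python
from dataclasses import dataclass, field

@dataclass
class NormalizedRun:
    """Hypothetical shape of a normalized coding-agent run.

    Fields mirror the record described in this tutorial (code,
    explanation, files touched, tools used, test results, metadata);
    they are illustrative, not the library's schema.
    """
    agent_id: str
    code: str
    explanation: str
    files_touched: list = field(default_factory=list)
    tools_used: list = field(default_factory=list)
    test_results: dict = field(default_factory=dict)
    metadata: dict = field(default_factory=dict)

run = NormalizedRun(
    agent_id="claude-code",
    code="def parse(items): return [x.strip() for x in items]",
    explanation="Normalizes whitespace and preserves order.",
    tools_used=["bash:pytest"],
    test_results={"passed": True, "exit_code": 0},
)
print(run.agent_id, run.test_results["passed"])  # → claude-code True
```

Because every adapter produces the same shape, downstream checks can compare agents field by field instead of parsing free-form transcripts.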

2. Your First Contradiction Detection (30 seconds)

Two agents report facts about Tesla. One says it was founded in 2003 with 130,000 employees; the other says 2004 with 128,000. Let's see what JuGeo does:

from jugeo_agents import GlobalSectionAssembler, AgentOutput

assembler = GlobalSectionAssembler()

# Agent A says one thing...
assembler.ingest(AgentOutput(
    agent_id="agent-a",
    output_text="Tesla was founded in 2003 and has 130,000 employees."
))

# Agent B says another...
assembler.ingest(AgentOutput(
    agent_id="agent-b",
    output_text="Tesla, founded in 2004, employs 128,000 people."
))

# Assemble the verified global section
section = assembler.assemble()
print(section.summary_text())

Expected output:

═══ Verified Global Section ═══
Claims verified: 3
Claims quarantined: 2
Stalks merged: 2 agents → 1 global section
H¹ (contradictions): 2 (2 resolved, 0 open)
  ⚠ "founded in 2003" vs "founded in 2004"
     → resolved: agent-a (alphabetical tiebreaker)
  ⚠ "130,000 employees" vs "128,000 people"
     → resolved: agent-a (alphabetical tiebreaker)

JuGeo detected two contradictions automatically: the founding-year discrepancy (2003 vs 2004) and the employee-count mismatch (130K vs 128K). Since neither agent provided tool evidence, both sit at the same trust level (UNGROUNDED_CLAIM), so the assembler falls back to an alphabetical tiebreaker — agent-a wins.
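The fallback rule itself is easy to picture. Here is a toy sketch of an alphabetical tiebreaker (purely illustrative, not the library's implementation), where each claim is an (agent_id, text) pair:

```python
def resolve_tie(claim_a, claim_b):
    """Break a tie between two equally trusted, conflicting claims.

    Toy version of the alphabetical tiebreaker described above:
    the lexicographically smaller agent_id wins.
    """
    return min(claim_a, claim_b, key=lambda claim: claim[0])

winner = resolve_tie(
    ("agent-a", "founded in 2003"),
    ("agent-b", "founded in 2004"),
)
print(winner)  # → ('agent-a', 'founded in 2003')
```

It is deterministic but arbitrary, which is exactly why the next section replaces it with evidence.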

🧮

The "H¹" in the output refers to the first cohomology group of the presheaf of agent claims. In plain English: it counts the places where local agent information fails to glue into a consistent global picture. More in Core Concepts.

3. Trust Makes All the Difference

Alphabetical tiebreakers aren't great. Let's give one agent real evidence and see what happens:

from jugeo_agents import GlobalSectionAssembler, AgentOutput

assembler = GlobalSectionAssembler()

# Researcher — backed by tools and citations
assembler.ingest(AgentOutput(
    agent_id="researcher",
    output_text="Tesla was founded in 2003 and has 130,000 employees.",
    tools_used=["wikipedia_lookup", "sec_filings"],
    citations=["sec:TSLA-10K-2024"],
))

# Chatbot — no evidence at all
assembler.ingest(AgentOutput(
    agent_id="chatbot",
    output_text="Tesla was founded in 2004 and employs 128,000 people.",
))

section = assembler.assemble()
print(section.summary_text())
═══ Verified Global Section ═══
Claims verified: 3
Claims quarantined: 2
Stalks merged: 2 agents → 1 global section
H¹ (contradictions): 2 (2 resolved, 0 open)
  ✅ "founded in 2003" vs "founded in 2004"
     → resolved: researcher (TOOL_EXECUTED > UNGROUNDED_CLAIM)
  ✅ "130,000 employees" vs "128,000 people"
     → resolved: researcher (TOOL_EXECUTED > UNGROUNDED_CLAIM)
Quarantined claims:
  ✗ chatbot: "founded in 2004"
    reason: H1_LOST_TRUST_CONTEST
  ✗ chatbot: "128,000 people"
    reason: H1_LOST_TRUST_CONTEST

Now the researcher wins every contradiction, because TOOL_EXECUTED outranks UNGROUNDED_CLAIM in the trust lattice. The chatbot's claims are quarantined with reason H1_LOST_TRUST_CONTEST.

💡

The trust lattice (from lowest to highest) is: UNGROUNDED_CLAIM < LLM_AGREEMENT < TOOL_EXECUTED < FORMALLY_VERIFIED. You never set trust manually — JuGeo infers it from tools_used, citations, and cross-agent agreement.

This is the key insight: evidence beats consensus. One agent with a citation outweighs ten agents without one.
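That ordering is easy to sketch with a plain IntEnum. The Trust class below is an illustrative stand-in for the library's trust levels, and resolve is a toy resolver, not the real assembler:

```python
from enum import IntEnum

class Trust(IntEnum):
    # Illustrative stand-in for the trust lattice, lowest to highest.
    UNGROUNDED_CLAIM = 0
    LLM_AGREEMENT = 1
    TOOL_EXECUTED = 2
    FORMALLY_VERIFIED = 3

def resolve(claims):
    """Pick the best-evidenced claim, ignoring how many agents
    repeat the losing value: evidence beats consensus."""
    return max(claims, key=lambda claim: claim["trust"])

claims = [
    {"agent": "chatbot-1", "text": "founded 2004", "trust": Trust.UNGROUNDED_CLAIM},
    {"agent": "chatbot-2", "text": "founded 2004", "trust": Trust.UNGROUNDED_CLAIM},
    {"agent": "researcher", "text": "founded 2003", "trust": Trust.TOOL_EXECUTED},
]
print(resolve(claims)["agent"])  # → researcher
```

Two ungrounded votes for 2004 lose to one tool-backed claim for 2003, because the comparison is over trust levels, not vote counts.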

4. Detecting Hallucination Cascades

This is where JuGeo really shines. Consider a four-agent pipeline where a fabricated company propagates through the system:

from jugeo_agents import GlobalSectionAssembler, AgentOutput

assembler = GlobalSectionAssembler()

# Agent A fabricates a company (no tools)
assembler.ingest(AgentOutput(
    agent_id="agent-a",
    output_text="QuantumLeap Inc is a leading quantum computing startup valued at $2B.",
))

# Agent B echoes Agent A's claim (no tools)
assembler.ingest(AgentOutput(
    agent_id="agent-b",
    output_text="QuantumLeap Inc, the $2B quantum startup, just raised a Series C.",
))

# Agent C synthesizes A + B, creating "phantom consensus"
assembler.ingest(AgentOutput(
    agent_id="agent-c",
    output_text="Multiple sources confirm QuantumLeap Inc is valued at $2B after Series C.",
))

# Agent D actually checks — with tools
assembler.ingest(AgentOutput(
    agent_id="agent-d",
    output_text="There is no company called QuantumLeap Inc in any business registry.",
    tools_used=["crunchbase_search", "sec_edgar_lookup"],
    citations=["crunchbase:no-results", "sec:no-results"],
))

section = assembler.assemble()
print(section.summary_text())
═══ Verified Global Section ═══
Claims verified: 1
Claims quarantined: 4
Stalks merged: 4 agents → 1 global section
H¹ (contradictions): 1 (1 resolved, 0 open)
  ✅ "QuantumLeap Inc [...] $2B" vs "no company called QuantumLeap Inc"
     → resolved: agent-d (TOOL_EXECUTED > UNGROUNDED_CLAIM)
🔗 Cascade detected: agent-a → agent-b → agent-c
   Origin: agent-a (ungrounded fabrication)
   Propagation: 3 agents, 4 claims
   ⚠ Naive majority vote would ACCEPT the hallucination (3 vs 1)
Quarantined claims:
  ✗ agent-a: "QuantumLeap Inc [...] valued at $2B"
    reason: H1_CASCADE_ORIGIN
  ✗ agent-b: "QuantumLeap Inc [...] Series C"
    reason: H1_CASCADE_PROPAGATION
  ✗ agent-c: "Multiple sources confirm [...]"
    reason: H1_CASCADE_PROPAGATION
  ✗ agent-a: "$2B" (echoed by agent-b, agent-c)
    reason: H1_PHANTOM_CONSENSUS

A naive majority vote would accept "QuantumLeap Inc exists" because 3 agents say yes and only 1 says no. But JuGeo traces the provenance chain and detects that B and C are merely echoing A — they add no independent evidence. The entire A → B → C chain is quarantined as a hallucination cascade.

⚠️

Hallucination cascades are the #1 failure mode of multi-agent systems. When agents read each other's outputs, a single fabrication can snowball into "phantom consensus" that looks authoritative. JuGeo's sheaf-theoretic approach catches this because it tracks where information came from, not just how many agents agree.
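A toy model makes the difference between vote counting and provenance counting concrete. Assume, purely for illustration, that each claim records which agent it derives from and whether it carries tool evidence; nothing below is the library's API:

```python
def independent_support(claims):
    """Count claims that constitute independent evidence.

    claims: dicts with 'agent', 'derived_from' (upstream agent or
    None), and 'has_evidence'. An echo of another agent's output
    adds zero independent support, however confident it sounds.
    """
    return sum(
        1 for claim in claims
        if claim["derived_from"] is None and claim["has_evidence"]
    )

claims = [
    {"agent": "agent-a", "derived_from": None,      "has_evidence": False},
    {"agent": "agent-b", "derived_from": "agent-a", "has_evidence": False},
    {"agent": "agent-c", "derived_from": "agent-b", "has_evidence": False},
    {"agent": "agent-d", "derived_from": None,      "has_evidence": True},
]
print(len(claims), "votes,", independent_support(claims), "independent source")
# → 4 votes, 1 independent source
```

Agents b and c inherit their claim from agent a, so the cascade collapses to a single ungrounded origin against one tool-backed refutation.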

5. Using JuGeoAgentWrapper (The Full Pipeline API)

GlobalSectionAssembler is great for one-shot verification. When you need the complete pipeline — task decomposition checking, convergence monitoring, and a final report — use JuGeoAgentWrapper:

from jugeo_agents import JuGeoAgentWrapper

jugeo = JuGeoAgentWrapper()

# ── 1. Check task decomposition completeness ──────────────
coverage = jugeo.verify_task_decomposition(
    task="Write a comprehensive market analysis",
    subtasks=[
        "research companies",
        "analyze financials",
        "write report",
    ]
)
print(f"Coverage: {coverage.coverage_score:.0%}")
# → Coverage: 78%  (missing: competitive landscape, risk factors)
💡

verify_task_decomposition uses the Čech nerve of the subtask covering to detect gaps. In plain terms: it checks whether every aspect of the parent task is handled by at least one subtask.
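As a crude illustration of that idea, here is a keyword-overlap version of a coverage check. The aspect list is hypothetical, and the real check is structural rather than string matching:

```python
def coverage_score(required_aspects, subtasks):
    """Return (fraction covered, missing aspects).

    An aspect counts as covered when at least one subtask mentions
    it; a naive textual proxy for nerve-based coverage checking.
    """
    covered = [
        aspect for aspect in required_aspects
        if any(aspect in subtask for subtask in subtasks)
    ]
    missing = [a for a in required_aspects if a not in covered]
    return len(covered) / len(required_aspects), missing

# Hypothetical aspects of "write a comprehensive market analysis"
aspects = ["research", "financials", "report",
           "competitive landscape", "risk factors"]
score, missing = coverage_score(aspects, [
    "research companies",
    "analyze financials",
    "write report",
])
print(f"{score:.0%} covered, missing: {missing}")
# → 60% covered, missing: ['competitive landscape', 'risk factors']
```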

from jugeo_agents import JuGeoAgentWrapper

jugeo = JuGeoAgentWrapper()

# ── 2. Feed agent outputs as they arrive ──────────────────
jugeo.on_agent_output(
    "researcher",
    "Tesla was founded in 2003. Revenue in 2024 was $96.8B...",
    metadata={"model": "claude-sonnet-4", "tools_used": ["sec_filings"]},
)

jugeo.on_agent_output(
    "analyst",
    "Based on SEC filings, Tesla's 2024 revenue was $96.8B...",
    metadata={"tools_used": ["bloomberg"]},
)

# ── 3. Finalize and get the full report ───────────────────
report = jugeo.on_pipeline_complete()
print(report.summary_text())
═══ Pipeline Verification Report ═══
Task coverage: 78% (2 gaps identified)
Claims verified: 12
Claims quarantined: 1
Contradictions: 1 resolved, 0 open
Convergence: ✅ Achieved in 2 rounds
Trust distribution: TOOL_EXECUTED: 9 | LLM_AGREEMENT: 3 | UNGROUNDED: 1

The wrapper orchestrates the full lifecycle: decomposition → ingestion → assembly → convergence check → report.
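The call order in that lifecycle can be pinned down as a small orchestration sketch. run_pipeline is a hypothetical helper; it only fixes the sequence of the wrapper methods shown in this section, with assembly and the convergence check assumed to happen inside on_pipeline_complete:

```python
def run_pipeline(wrapper, task, subtasks, agent_runs):
    """Drive the lifecycle: decomposition -> ingestion -> report.

    wrapper: any object exposing the three methods used in this
    section. agent_runs: iterable of (agent_id, text, metadata).
    """
    # 1. Check the task decomposition before any agent runs.
    coverage = wrapper.verify_task_decomposition(task=task, subtasks=subtasks)
    # 2. Feed agent outputs as they arrive.
    for agent_id, text, metadata in agent_runs:
        wrapper.on_agent_output(agent_id, text, metadata=metadata)
    # 3. Finalize: assemble, check convergence, and report.
    report = wrapper.on_pipeline_complete()
    return coverage, report
```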

6. Provenance Tracing

Ever wonder where a specific claim came from? JuGeo tracks the full trust chain from raw agent output to final verified claim:

from jugeo_agents import JuGeoAgentWrapper

jugeo = JuGeoAgentWrapper()
jugeo.on_agent_output(
    "researcher",
    "Tesla was founded in 2003.",
    metadata={"tools_used": ["sec_filings"]},
)
jugeo.on_agent_output(
    "analyst",
    "Tesla founded in 2003, confirmed.",
    metadata={"tools_used": ["bloomberg"]},
)

chain = jugeo.provenance_for("Tesla was founded in 2003")

print(f"Claim: {chain.claim.text}")
print(f"Trust chain: {' → '.join(link.agent_id for link in chain.links)}")
print(f"Overall trust: {chain.overall_trust.name}")
for link in chain.links:
    print(f"  {link.agent_id}: trust={link.trust.name}")

Claim: Tesla was founded in 2003
Trust chain: researcher → analyst
Overall trust: TOOL_EXECUTED
  researcher: trust=TOOL_EXECUTED
  analyst: trust=TOOL_EXECUTED

Each link in the chain records the agent, its trust level, the tools used, and the citations provided. If a claim was quarantined, the chain also includes the reason and the winning claim that replaced it.

🔍

Provenance tracing is essential for auditing and compliance. When a downstream decision depends on a specific fact, you can trace it back to the exact agent, tool call, and citation that produced it.
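One plausible way a chain folds per-link trust into an overall level is a weakest-link rule. This is an assumption for illustration (the example above cannot distinguish it from a strongest-link rule, since both links share a level), and the Trust enum is a stand-in for the library's levels:

```python
from enum import IntEnum

class Trust(IntEnum):
    UNGROUNDED_CLAIM = 0
    LLM_AGREEMENT = 1
    TOOL_EXECUTED = 2
    FORMALLY_VERIFIED = 3

def overall_trust(links):
    """Weakest-link fold: a chain is only as trustworthy as its
    least-trusted hop. Illustrative, not the library's rule."""
    return min(link["trust"] for link in links)

chain = [
    {"agent": "researcher", "trust": Trust.TOOL_EXECUTED},
    {"agent": "analyst",    "trust": Trust.TOOL_EXECUTED},
]
print(overall_trust(chain).name)  # → TOOL_EXECUTED
```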

7. Convergence Monitoring

In iterative pipelines, agents may run multiple rounds of refinement. JuGeo monitors whether the system is converging toward a stable global section:

from jugeo_agents import JuGeoAgentWrapper

jugeo = JuGeoAgentWrapper()
# (after one or more pipeline rounds have run)
status = jugeo.convergence_status()

print(status.name)            # the bare status, e.g. CONVERGED
print(status.summary_text())  # the full status block:

Phase: ASSEMBLY
Status: CONVERGED
Rounds: 2
Open conflicts: 0
Δ (last round): 0.0000

A delta of zero means no claims changed in the last round — the system has stabilized. If the delta never settles to zero, you may have an oscillation (two agents repeatedly overriding each other), which JuGeo also detects:

from jugeo_agents import JuGeoAgentWrapper

jugeo = JuGeoAgentWrapper()
action = jugeo.suggest_next_action()

print(action.action_type.name)
print(action.description)
🔄

Oscillation detection prevents infinite loops in multi-round pipelines. If two agents keep overriding each other, JuGeo freezes the contested claim and flags it for human review.
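A minimal oscillation detector over a claim's round-by-round values might look like this (illustrative only; the real detector presumably operates on the assembled section, not raw strings):

```python
def is_oscillating(history, period=2, repeats=2):
    """Detect a short cycle in a contested claim's per-round values.

    Returns True when the last period * repeats entries repeat with
    the given period and are not simply constant, e.g. two agents
    overriding each other every round.
    """
    window = history[-period * repeats:]
    if len(window) < period * repeats:
        return False
    if window[0] == window[1]:   # a constant run is convergence, not a cycle
        return False
    return all(
        window[i] == window[i + period]
        for i in range(len(window) - period)
    )

print(is_oscillating(["2003", "2004", "2003", "2004"]))  # → True
print(is_oscillating(["2003", "2003", "2003", "2003"]))  # → False
```

Once a cycle is detected, the sensible move is the one described above: freeze the contested value and escalate to a human rather than loop forever.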

Putting It All Together

Here's a complete, self-contained script that runs through all the features in one go. Copy-paste it and run:

#!/usr/bin/env python3
"""jugeo-agents: complete tutorial example."""

from jugeo_agents import (
    GlobalSectionAssembler,
    AgentOutput,
    JuGeoAgentWrapper,
)

# ── Contradiction detection ───────────────────────────────
assembler = GlobalSectionAssembler()

assembler.ingest(AgentOutput(
    agent_id="researcher",
    output_text="Tesla was founded in 2003 and has 130,000 employees.",
    tools_used=["wikipedia_lookup", "sec_filings"],
    citations=["sec:TSLA-10K-2024"],
))

assembler.ingest(AgentOutput(
    agent_id="chatbot",
    output_text="Tesla was founded in 2004 and employs 128,000 people.",
))

section = assembler.assemble()
print(section.summary_text())

# ── Full pipeline with provenance ─────────────────────────
jugeo = JuGeoAgentWrapper()

coverage = jugeo.verify_task_decomposition(
    task="Market analysis",
    subtasks=["research", "financials", "report"],
)
print(f"\nCoverage: {coverage.coverage_score:.0%}")

jugeo.on_agent_output(
    "researcher", "Tesla was founded in 2003.",
    metadata={"tools_used": ["sec_filings"]},
)
jugeo.on_agent_output(
    "analyst", "Tesla founded in 2003, confirmed.",
    metadata={"tools_used": ["bloomberg"]},
)

report = jugeo.on_pipeline_complete()
print(report.summary_text())

# ── Provenance ────────────────────────────────────────────
chain = jugeo.provenance_for("Tesla was founded in 2003")
print(f"\nProvenance: {' → '.join(l.agent_id for l in chain.links)}")
print(f"Trust: {chain.overall_trust.name}")

# ── Convergence ───────────────────────────────────────────
status = jugeo.convergence_status()
print(f"\nConvergence: {status.name}")

What's Next?

You've built your first verified multi-agent pipeline. Here's where to go from here:

🧠 Core Concepts

Understand the sheaf theory, trust lattice, and cohomology behind the API.

📖 API Reference

Full reference for every class, method, and enum in jugeo-agents.

🧪 More Examples

Real-world patterns: RAG verification, tool-augmented agents, and more.

🔗 Knowledge Fusion

Deep dive on how JuGeo merges outputs from heterogeneous agents.

🚀

Found a bug? Have a question? Open an issue on GitHub or check the FAQ.