💡

Before you start: from the repository root, run pip install -e . to install JuGeo. The bundled jugeo-agents examples below are self-contained — paste one into a .py file and run it with python.

30-Second Contradiction Detection

Beginner

Two agents produce overlapping facts about SpaceX but disagree on the founding year and employee count. The assembler automatically detects these as H¹ contradictions and resolves them via trust ranking — the loser is quarantined.

"""30-second contradiction detection — the simplest possible example."""

from jugeo_agents import GlobalSectionAssembler, AgentOutput

assembler = GlobalSectionAssembler()

# Agent A says "2002" and "13,000 employees"
assembler.ingest(AgentOutput(
    agent_id="a",
    output_text="SpaceX was founded in 2002 by Elon Musk. It has 13,000 employees.",
))

# Agent B says "2003" and "12,500 employees" — both disagree with A
assembler.ingest(AgentOutput(
    agent_id="b",
    output_text="SpaceX, established in 2003, employs 12,500 people.",
))

section = assembler.assemble()

print(f"Contradictions detected: {section.cohomology.h1_contradictions}")
print(f"Verified claims:         {section.claim_count}")
print(f"Quarantined:             {section.quarantined_count}")
print(f"Consistent section:      {section.is_consistent}")

Expected output:

Contradictions detected: 2
Verified claims:         2
Quarantined:             2
Consistent section:      True

What happened? Each pair of contradicting claims (founding year and headcount) was classified as an H¹ cocycle on the double intersection of agents A and B. Since neither agent had tool-backing or citations, the assembler resolved each by quarantining the lower-confidence variant. Two claims survived as the verified global section; two were quarantined.
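The detection step itself can be pictured without any JuGeo machinery. The sketch below uses illustrative names only (not the library's internals): each agent's claims become a dictionary, and we report the subjects where the two agents overlap but disagree — the discrete analogue of a nonzero 1-cochain on the double intersection A ∩ B.

```python
# Standalone sketch — illustrative only, not JuGeo's internal representation.

def h1_contradictions(claims_a: dict, claims_b: dict) -> list:
    """Subjects where both agents claim a value but the values differ."""
    overlap = claims_a.keys() & claims_b.keys()
    return sorted(s for s in overlap if claims_a[s] != claims_b[s])

a = {"founding_year": "2002", "employees": "13,000", "founder": "Elon Musk"}
b = {"founding_year": "2003", "employees": "12,500"}

print(h1_contradictions(a, b))  # → ['employees', 'founding_year']
```

The "founder" claim lives only in A's chart, so it never reaches the overlap and cannot contradict anything — exactly why JuGeo only tests claims on intersections.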

🛠️

Claude Code / Copilot CLI / Codex as One Verification Site

Intermediate

The coding-agent adapters let you ingest outputs from Claude Code, Copilot CLI, and Codex as comparable local sections. JuGeo can then ask which claims are tool-backed, which are merely proposed, and where the three agents genuinely disagree.

from jugeo_agents.adapters.coding_agents import (
    ClaudeCodeAdapter,
    CopilotCLIAdapter,
    CodexAdapter,
    CodingAgentOrchestrator,
)

orch = CodingAgentOrchestrator()

orch.add_output(ClaudeCodeAdapter.from_response(
    code="def fib(n): return _iter_fib(n)",
    explanation="Iterative Fibonacci with O(n) time.",
    tools_used=["bash:pytest"],
    test_results={"passed": True, "exit_code": 0},
))

orch.add_output(CopilotCLIAdapter.from_response(
    code="def fib(n): return _memo_fib(n)",
    explanation="Memoized recursive Fibonacci; behavior preserved.",
    tools_used=["grep", "read_file"],
))

orch.add_output(CodexAdapter.from_response(
    code="def fib(n): return _matrix_fib(n)",
    explanation="Matrix-powered version with logarithmic recursion depth.",
))

section = orch.verify()
print(section.summary_text())

What this shows: Claude Code enters with stronger trust because it reports tool use and passing tests; Copilot CLI enters with repository-grounded evidence; Codex enters as a proposal unless paired with explicit tool/test metadata. JuGeo fuses only the claims that survive comparison under descent and trust algebra.
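The entry-trust assignment described above can be approximated in a few lines of standalone Python. The Trust enum and entry_trust function below are illustrative guesses at the policy, not the adapters' real API:

```python
# Hypothetical sketch of evidence → entry trust; names are assumptions,
# not jugeo_agents identifiers.
from enum import IntEnum

class Trust(IntEnum):
    UNGROUNDED = 0
    PROPOSED = 1
    REPO_GROUNDED = 2
    TOOL_VERIFIED = 3

def entry_trust(tools_used=(), test_results=None) -> Trust:
    if test_results and test_results.get("passed"):
        return Trust.TOOL_VERIFIED      # ran tests and they passed
    if any(t in ("grep", "read_file") for t in tools_used):
        return Trust.REPO_GROUNDED      # grounded in the actual repository
    if tools_used:
        return Trust.PROPOSED           # used tools, but no test evidence
    return Trust.UNGROUNDED             # bare proposal

print(entry_trust(["bash:pytest"], {"passed": True}).name)  # → TOOL_VERIFIED
print(entry_trust(["grep", "read_file"]).name)              # → REPO_GROUNDED
print(entry_trust().name)                                   # → UNGROUNDED
```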

⚖️

Trust Algebra in Action

Beginner

A single tool-backed agent claims France's 2023 GDP was $3.05 trillion. Three ungrounded agents all say $2.8 trillion. In a naive majority-vote system the wrong answer wins 3-to-1. JuGeo's trust algebra ensures evidence always beats headcount.

"""Trust algebra — 1 tool-backed agent vs 3 ungrounded agents."""

from jugeo_agents import GlobalSectionAssembler, AgentOutput

assembler = GlobalSectionAssembler()

# The grounded agent: has tool access and a citation
assembler.ingest(AgentOutput(
    agent_id="tool-agent",
    output_text="The GDP of France in 2023 was $3.05 trillion.",
    tools_used=["world_bank_api"],
    citations=["worldbank:FRA-GDP-2023"],
))

# Three ungrounded agents that all agree on a different (wrong) number
assembler.ingest(AgentOutput(
    agent_id="chat-1",
    output_text="France's GDP was roughly $2.8 trillion in 2023.",
))
assembler.ingest(AgentOutput(
    agent_id="chat-2",
    output_text="The French economy was about $2.8 trillion.",
))
assembler.ingest(AgentOutput(
    agent_id="chat-3",
    output_text="France GDP: approximately $2.8 trillion (2023).",
))

section = assembler.assemble()

print("=== Verified Claims (the global section) ===")
for vc in section.verified_claims:
    print(f"  [{vc.trust.name}] {vc.claim.subject}: {vc.claim.value}")
    print(f"    Supporting agents: {', '.join(vc.supporting_agents)}")

print()
print("=== Quarantined Claims ===")
for qc in section.quarantined:
    print(f"  [{qc.reason.name}] {qc.claim.subject}: {qc.claim.value}")
    print(f"    Agent: {qc.claim.source_agent}")

print()
print("Tool agent wins despite being outnumbered 1 vs 3!")
print(f"Betti numbers: β₀={section.cohomology.betti_numbers[0]}, "
      f"β₁={section.cohomology.betti_numbers[1]}")

Expected output:

=== Verified Claims (the global section) ===
  [TOOL_VERIFIED] France GDP 2023: $3.05 trillion
    Supporting agents: tool-agent

=== Quarantined Claims ===
  [H1_LOST_TRUST_CONTEST] France GDP 2023: $2.8 trillion
    Agent: chat-1
  [H1_LOST_TRUST_CONTEST] France GDP 2023: $2.8 trillion
    Agent: chat-2
  [H1_LOST_TRUST_CONTEST] France GDP 2023: $2.8 trillion
    Agent: chat-3

Tool agent wins despite being outnumbered 1 vs 3!
Betti numbers: β₀=1, β₁=3
🔬

Why this matters: The trust lattice has a strict partial order: TOOL_VERIFIED > RAG_GROUNDED > CROSS_AGENT_CONFIRMED > WEAK_MODEL_GENERATED > UNGROUNDED_CLAIM. No amount of ungrounded agreement can override a single tool-verified fact. This is the trust monotonicity law.
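A minimal sketch of the monotonicity law, modeling the lattice as an IntEnum (the resolve function is illustrative, not JuGeo's implementation):

```python
# Illustrative sketch: resolution compares the *maximum* trust on each side,
# never the number of supporters.
from enum import IntEnum

class TrustLevel(IntEnum):
    UNGROUNDED_CLAIM = 0
    WEAK_MODEL_GENERATED = 1
    CROSS_AGENT_CONFIRMED = 2
    RAG_GROUNDED = 3
    TOOL_VERIFIED = 4

def resolve(side_a: list, side_b: list) -> str:
    """Winner is the side with higher maximum trust, regardless of headcount."""
    if max(side_a) == max(side_b):
        return "tie"
    return "a" if max(side_a) > max(side_b) else "b"

tool_backed = [TrustLevel.TOOL_VERIFIED]
crowd = [TrustLevel.UNGROUNDED_CLAIM] * 3
print(resolve(tool_backed, crowd))  # → a  (evidence beats a 3-agent majority)
```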

👻

Detecting Phantom Consensus

Intermediate

Three agents unanimously agree that a fictitious company "NeuralForge Inc" raised $500M — but none has tool access, citations, or RAG sources. A majority-vote system would accept this with 100% confidence. JuGeo's phantom detector flags it as PHANTOM_UNGROUNDED — consistent but evidentially vacuous.

"""Phantom consensus — 3 agents agree on a fabricated fact."""

from jugeo_agents import GlobalSectionAssembler, AgentOutput

assembler = GlobalSectionAssembler(phantom_detection=True)

# Three agents all claim the same fabricated fact
assembler.ingest(AgentOutput(
    agent_id="a",
    output_text="NeuralForge Inc raised $500M in 2024.",
))
assembler.ingest(AgentOutput(
    agent_id="b",
    output_text="NeuralForge raised $500M last year.",
))
assembler.ingest(AgentOutput(
    agent_id="c",
    output_text="NeuralForge Inc secured $500 million in funding.",
))

section = assembler.assemble()

print(f"Phantom sections detected: {section.cohomology.phantom_sections}")
print(f"Verified claims:           {section.claim_count}")
print(f"Quarantined:               {section.quarantined_count}")
print()
print("=== Quarantined Claims ===")
for qc in section.quarantined:
    print(f"  👻 [{qc.reason.name}]")
    print(f"     {qc.claim.subject}: {qc.claim.value}")
    print(f"     Agent: {qc.claim.source_agent}")
    print(f"     Why:   {qc.explanation}")

Expected output:

Phantom sections detected: 1
Verified claims:           0
Quarantined:               3

=== Quarantined Claims ===
  👻 [PHANTOM_UNGROUNDED]
     NeuralForge funding: $500M in 2024
     Agent: a
     Why:   Consistent claim cluster with zero grounding evidence
  👻 [PHANTOM_UNGROUNDED]
     NeuralForge funding: $500M
     Agent: b
     Why:   Consistent claim cluster with zero grounding evidence
  👻 [PHANTOM_UNGROUNDED]
     NeuralForge funding: $500 million
     Agent: c
     Why:   Consistent claim cluster with zero grounding evidence
⚠️

Phantom consensus is the most dangerous failure mode in multi-agent systems. LLMs trained on overlapping data naturally produce correlated hallucinations. JuGeo detects phantom sections by checking whether any claim cluster has zero total grounding evidence across all supporting agents — regardless of how many agents agree.
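The check described above is easy to sketch without the library: a cluster of agreeing claims counts as phantom when it has multiple supporters but zero evidence items in total. Data shapes here are illustrative, not JuGeo's internal types.

```python
# Illustrative phantom check — unanimity with no grounding evidence anywhere.

def is_phantom(cluster: list) -> bool:
    """True when no supporting agent contributes tools, citations, or RAG."""
    evidence = sum(
        len(c.get("tools_used", [])) + len(c.get("citations", []))
        + len(c.get("rag_sources", []))
        for c in cluster
    )
    return len(cluster) >= 2 and evidence == 0

cluster = [
    {"agent": "a", "value": "$500M"},
    {"agent": "b", "value": "$500M"},
    {"agent": "c", "value": "$500M"},
]
print(is_phantom(cluster))  # → True: unanimous, but evidentially vacuous
```

A single citation anywhere in the cluster is enough to clear the phantom flag — the test is on total evidence, not per-agent evidence.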

🔗

Full Research Pipeline with JuGeoAgentWrapper

Intermediate

A complete 4-agent research pipeline — researcher, analyst, writer, reviewer — verified end-to-end by JuGeoAgentWrapper. The wrapper checks task decomposition before running, verifies each agent's output in real time, and produces a final pipeline report with trust scores and convergence status.

"""Full research pipeline — 4 agents verified by JuGeoAgentWrapper."""

from jugeo_agents import JuGeoAgentWrapper

jugeo = JuGeoAgentWrapper(auto_negotiate=True, auto_challenge=True)

# ── Step 1: Verify task decomposition before spending tokens ──
coverage = jugeo.verify_task_decomposition(
    task="Write a comprehensive report on renewable energy trends in 2024",
    subtasks=[
        {"name": "research",  "scope": "Gather data on solar, wind, and battery tech"},
        {"name": "analysis",  "scope": "Analyze cost trends and adoption rates"},
        {"name": "writing",   "scope": "Draft the report with key findings"},
        {"name": "review",    "scope": "Fact-check claims and verify citations"},
    ],
)
print(f"Coverage: {coverage.coverage_score:.0%} | Gaps: {len(coverage.gaps)}")
for gap in coverage.gaps:
    print(f"  ⚠ {gap}")  # list each uncovered area of the task

# ── Step 2: Feed each agent's output through the verifier ──
r1 = jugeo.on_agent_output(
    agent_id="researcher",
    output="Solar installations grew 87% YoY in 2024, reaching 420 GW globally. "
           "Battery storage costs fell to $139/kWh. Wind capacity additions hit 115 GW.",
    metadata={"role": "researcher", "subtask": "research",
              "tools_used": ["iea_api", "bnef_database"],
              "citations": ["iea:solar-2024", "bnef:storage-costs"]},
)
print(f"[researcher]  {r1.status}  claims={r1.claims_extracted}  {r1.trust_level.name}")

r2 = jugeo.on_agent_output(
    agent_id="analyst",
    output="Solar growth of 87% aligns with IEA projections. Battery costs at "
           "$139/kWh crossed the critical $150 threshold. However, wind capacity "
           "was closer to 105 GW, not 115 GW.",
    metadata={"role": "analyst", "subtask": "analysis",
              "rag_sources": ["iea-report-2024.pdf", "gwec-wind-2024.pdf"]},
)
print(f"[analyst]     {r2.status}  claims={r2.claims_extracted}  "
      f"{r2.trust_level.name}  conflicts={r2.has_conflicts}")

r3 = jugeo.on_agent_output(
    agent_id="writer",
    output="Renewable energy saw record growth in 2024. Solar installations "
           "surged 87% to 420 GW. Battery storage became viable at $139/kWh. "
           "Wind power added approximately 110 GW of new capacity.",
    metadata={"role": "writer", "subtask": "writing"},
)
print(f"[writer]      {r3.status}  claims={r3.claims_extracted}  {r3.trust_level.name}")

r4 = jugeo.on_agent_output(
    agent_id="reviewer",
    output="Verified: solar at 420 GW (+87%) correct per IEA. Battery costs "
           "$139/kWh confirmed by BNEF. Wind capacity: IEA says 115 GW, GWEC "
           "says 106 GW — using IEA figure (115 GW).",
    metadata={"role": "reviewer", "subtask": "review",
              "citations": ["iea:renewables-2024", "bnef:battery-tracker"]},
)
print(f"[reviewer]    {r4.status}  claims={r4.claims_extracted}  {r4.trust_level.name}")

# ── Step 3: Final pipeline report ──
report = jugeo.on_pipeline_complete()
print(f"\nConsistency: {report.descent_result.consistency_score:.0%} | Claims: {report.total_claims}")
print(f"Obstructions: {len(report.descent_result.obstructions)} | Treaties: {len(report.treaties)}")
print(f"Convergence: {report.final_phase.name}")
print(f"Trust: {report.trust_summary}")

# ── Step 4: What should we do next? ──
action = jugeo.suggest_next_action()
print(f"Suggested: {action}")

Expected output:

Coverage: 85% | Gaps: 1
  ⚠ Policy/regulatory landscape not covered
[researcher]  consistent  claims=3  TOOL_VERIFIED
[analyst]     conflict  claims=3  RAG_GROUNDED  conflicts=True
[writer]      consistent  claims=3  UNGROUNDED_CLAIM
[reviewer]    resolved  claims=3  RAG_GROUNDED

Consistency: 88% | Claims: 12
Obstructions: 2 | Treaties: 2
Convergence: CONVERGING
Trust: {TOOL_VERIFIED: 3, RAG_GROUNDED: 5, UNGROUNDED_CLAIM: 4}
Suggested: AgentAction(type='verify', target='writer', reason='3 ungrounded claims')

Key insight: The wrapper detected a conflict on wind capacity (115 GW vs 105 GW vs 110 GW) and auto-negotiated a treaty. The researcher's tool-backed figure won. The writer's ungrounded claims were flagged for verification. All of this happened automatically.
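The negotiation outcome can be sketched in a few lines: adopt the value reported at the highest trust rank, regardless of how many agents back the alternatives. The ranking table and tuples below are illustrative, not the wrapper's internals.

```python
# Illustrative treaty resolution: highest-trust reporter wins the subject.
TRUST_RANK = {"TOOL_VERIFIED": 3, "RAG_GROUNDED": 2, "UNGROUNDED": 1}

reports = [
    ("researcher", "TOOL_VERIFIED", "115 GW"),   # tool-backed figure
    ("analyst",    "RAG_GROUNDED",  "105 GW"),   # document-grounded figure
    ("writer",     "UNGROUNDED",    "110 GW"),   # unsupported compromise
]

winner = max(reports, key=lambda r: TRUST_RANK[r[1]])
print(f"Treaty: wind capacity = {winner[2]} (per {winner[0]})")
# → Treaty: wind capacity = 115 GW (per researcher)
```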

🧬

Knowledge Fusion — The Full Monty

Advanced

The flagship demo: 6 agents research "State of Quantum Computing 2024". One has tool access. One fabricates a company. Another echoes the fabrication. A third synthesises the echo into a phantom consensus. JuGeo detects the H² cascade, quarantines the entire chain, and computes the full Čech cohomology — something no other multi-agent framework can do.

"""Knowledge fusion demo — Čech cohomology of a 6-agent research team.

Scenario: 6 agents research "State of Quantum Computing 2024"
  researcher (tools) ✓ | analyst (RAG) ✓ | hallucinator ✗
  echo (copies hallucinator) ✗ | synthesizer (phantom) ✗ | contrarian ✗
"""

from jugeo_agents import (
    AgentOutput, GlobalSectionAssembler, compare_to_naive_vote,
)
from jugeo_agents.core.fusion import FusionReport

assembler = GlobalSectionAssembler()

assembler.ingest(AgentOutput(
    agent_id="researcher",
    output_text=(
        "IBM unveiled the Condor processor with 1,121 qubits in December 2023, "
        "making it the first quantum processor to exceed 1,000 qubits. "
        "Google's Willow chip achieved 105 qubits with dramatically reduced "
        "error rates. The global quantum computing market was valued at "
        "$1.3 billion in 2024. IBM has invested over $2 billion in quantum "
        "research since 2016."
    ),
    tools_used=["search_arxiv", "query_market_data"],
    citations=["arxiv:2309.xxxxx", "statista:quantum-market-2024"],
))

assembler.ingest(AgentOutput(
    agent_id="analyst",
    output_text=(
        "IBM's Condor processor reached 1,121 qubits, a major milestone. "
        "The quantum computing market is projected to reach $5.3 billion "
        "by 2029, growing at 32% CAGR. Error correction remains the primary "
        "challenge, with Google's Willow showing promise at 105 qubits."
    ),
    citations=["mckinsey:quantum-report-2024"],
))

assembler.ingest(AgentOutput(
    agent_id="hallucinator",
    output_text=(
        "QuantumCore Labs achieved a breakthrough 2,048-qubit processor "
        "called Nova in March 2024. QuantumCore raised $800M in Series C "
        "funding from Andreessen Horowitz."
    ),  # No tools, no citations — pure hallucination
))

assembler.ingest(AgentOutput(
    agent_id="echo",
    output_text=(
        "QuantumCore Labs unveiled their 2,048-qubit Nova processor. With "
        "$800M in funding, QuantumCore is a formidable competitor to IBM."
    ),  # Echoes hallucinator without independent verification
))

assembler.ingest(AgentOutput(
    agent_id="synthesizer",
    output_text=(
        "The quantum race in 2024: IBM with 1,121 qubits (Condor), Google "
        "with 105 qubits (Willow), and QuantumCore Labs leading with 2,048 "
        "qubits (Nova). QuantumCore's $800M funding positions them as "
        "frontrunner. Market valued at $1.3 billion in 2024."
    ),
))

assembler.ingest(AgentOutput(
    agent_id="contrarian",
    output_text=(
        "The quantum computing market was valued at $850 million in 2024. "
        "IBM's Condor achieved 1,121 qubits but practical quantum advantage "
        "remains years away."
    ),
    citations=["gartner:quantum-market-2024"],
))

section = assembler.assemble()
print(section.summary_text())

# ── Verified claims ──
print("━" * 60, "\n  VERIFIED CLAIMS")
for i, vc in enumerate(section.verified_claims, 1):
    bar = {"TOOL_VERIFIED": "███░", "RAG_GROUNDED": "██░░"}.get(vc.trust.name, "█░░░")
    print(f"  {i}. [{bar}] {vc.claim.subject}: {vc.claim.value}")
    print(f"     {vc.trust.name} | {', '.join(vc.supporting_agents)}")

# ── Quarantined claims ──
print(f"\n{'━'*60}\n  QUARANTINED")
for i, qc in enumerate(section.quarantined, 1):
    sym = {"H1_LOST_TRUST_CONTEST": "⚔️", "H2_CASCADING_HALLUCINATION": "🔗",
           "PHANTOM_UNGROUNDED": "👻"}.get(qc.reason.name, "✗")
    print(f"  {i}. {sym} [{qc.reason.name}] {qc.claim.subject}: {qc.claim.value}")

# ── Naive comparison ──
naive = compare_to_naive_vote(section, assembler._all_claims)
report = FusionReport(global_section=section, naive_comparison=naive)
print(f"\n{report.advantage_text()}")

# ── Trust leaderboard ──
print(f"\n{'━'*60}\n  AGENT TRUST LEADERBOARD")
for rank, (agent, score) in enumerate(sorted(
    section.agent_trust_scores.items(), key=lambda x: x[1], reverse=True
), 1):
    bar = "█" * int(score * 20) + "░" * (20 - int(score * 20))
    print(f"  {rank}. {agent:15s} [{bar}] {score:.0%}")

# ── Cohomology ──
coh = section.cohomology
print(f"\nČech Cohomology: β₀={coh.betti_numbers[0]} β₁={coh.betti_numbers[1]} "
      f"β₂={coh.betti_numbers[2]} | χ={coh.euler_characteristic:.0f} | "
      f"ρ={coh.obstruction_density:.0%} | consistent={section.is_consistent}")

Expected output (abridged):

VERIFIED CLAIMS (The Global Section)
  1. [███░] IBM Condor qubits: 1,121 — TOOL_VERIFIED
  2. [███░] Google Willow qubits: 105 — TOOL_VERIFIED
  3. [███░] Quantum market 2024: $1.3 billion — TOOL_VERIFIED
  4. [██░░] Quantum market 2029: $5.3B (32%) — RAG_GROUNDED
  5. [███░] IBM investment: $2B since 2016 — TOOL_VERIFIED

QUARANTINED CLAIMS
  1. 🔗 [H2_CASCADING_HALLUCINATION] QuantumCore Nova: 2,048 qubits (hallucinator)
  2. 🔗 [H2_CASCADING_HALLUCINATION] QuantumCore funding: $800M (hallucinator)
  3. 👻 [PHANTOM_UNGROUNDED] QuantumCore Nova: 2,048 qubits (echo)
  4. ⚔️ [H1_LOST_TRUST_CONTEST] Quantum market: $850M (contrarian)

AGENT TRUST LEADERBOARD
  1. researcher      [████████████████████] 100%
  2. analyst         [████████████████░░░░] 80%
  3. contrarian      [██████████░░░░░░░░░░] 50%
  4. synthesizer     [██████░░░░░░░░░░░░░░] 30%
  5. echo            [██░░░░░░░░░░░░░░░░░░] 10%
  6. hallucinator    [░░░░░░░░░░░░░░░░░░░░] 0%

Čech Cohomology: β₀=5 β₁=4 β₂=2 | χ=3 | ρ=67% | consistent=True
📊

Reading the Betti numbers: β₀ = 5 verified claims form the global section. β₁ = 4 pairwise contradictions were detected and resolved. β₂ = 2 multi-hop cascades were traced and quarantined. A naive system would have accepted the QuantumCore fabrication — JuGeo caught it.
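As a quick sanity check, the Euler characteristic printed by the demo is just the alternating sum of the Betti numbers:

```python
# χ = β₀ − β₁ + β₂, using the Betti numbers from the abridged output above.
betti = (5, 4, 2)
chi = betti[0] - betti[1] + betti[2]
print(chi)  # → 3, matching χ=3 on the cohomology line
```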

🔍

Provenance Tracing & Trust Laundering Detection

Advanced

Use the ProvenanceGraph directly to trace where a claim originated, how it flowed through agents, and whether any low-trust claim was silently "laundered" to appear high-trust. Trust laundering happens when an ungrounded fabrication gets passed through a trusted agent without independent verification, inheriting the trusted agent's credibility.

"""Provenance tracing and trust laundering detection."""

from jugeo_agents import AgentOutput, TrustLevel
from jugeo_agents.types import FactualClaim
from jugeo_agents.core.provenance import ProvenanceGraph

graph = ProvenanceGraph()

# ── Build a 4-agent pipeline with a trust laundering chain ──

# Agent 1: Ungrounded fabricator
fabricator_output = AgentOutput(
    agent_id="fabricator",
    output_text="HelioTech Corp achieved 95% solar cell efficiency in 2024.",
    claims=[FactualClaim(
        text="HelioTech Corp achieved 95% solar cell efficiency in 2024.",
        claim_id="c1", subject="HelioTech solar efficiency",
        value="95% efficiency in 2024", source_agent="fabricator",
        trust=TrustLevel.UNGROUNDED_CLAIM,
    )],
    trust=TrustLevel.UNGROUNDED_CLAIM,
)

# Agent 2: Trusted summariser — includes the fabrication without checking
summariser_output = AgentOutput(
    agent_id="summariser",
    output_text="Key breakthroughs: HelioTech achieved 95% solar efficiency.",
    tools_used=["web_search"],
    claims=[FactualClaim(
        text="Key breakthroughs: HelioTech achieved 95% solar efficiency.",
        claim_id="c1-laundered", subject="HelioTech solar efficiency",
        value="95% efficiency", source_agent="summariser",
        trust=TrustLevel.TOOL_VERIFIED,
    )],
    trust=TrustLevel.TOOL_VERIFIED,
)

# Agent 3: Independent verifier (checks real facts)
verifier_output = AgentOutput(
    agent_id="verifier",
    output_text="NREL confirmed: best lab solar cell efficiency is 47.6%.",
    tools_used=["nrel_api"], citations=["nrel:best-cells-2024"],
    claims=[FactualClaim(
        text="NREL confirmed: best lab solar cell efficiency is 47.6%.",
        claim_id="c2", subject="Best solar cell efficiency",
        value="47.6% (NREL lab record)", source_agent="verifier",
        trust=TrustLevel.TOOL_VERIFIED,
    )],
    trust=TrustLevel.TOOL_VERIFIED,
)

# Agent 4: Final reporter — cites the summariser
reporter_output = AgentOutput(
    agent_id="reporter",
    output_text="Solar tech update: HelioTech hit 95% efficiency per our analysis.",
    claims=[FactualClaim(
        text="Solar tech update: HelioTech hit 95% efficiency per our analysis.",
        claim_id="c1-final", subject="HelioTech solar efficiency",
        value="95% efficiency", source_agent="reporter",
        trust=TrustLevel.RAG_GROUNDED,
    )],
    trust=TrustLevel.RAG_GROUNDED,
)

# ── Register outputs and derivation edges ──
graph.add_agent_output(fabricator_output)
graph.add_agent_output(summariser_output, derived_from=["fabricator"])
graph.add_agent_output(verifier_output)
graph.add_agent_output(reporter_output, derived_from=["summariser"])

# ── Trace all claims ──
print("=== Provenance Chains ===")
for chain in graph.trace_all_claims():
    print(f"\n  {chain.claim.subject} = {chain.claim.value} [{chain.claim.trust.name}]")
    for link in chain.links:
        print(f"    → {link.agent_id} ({link.action}) [{link.trust.name}]")

# ── Detect trust laundering ──
print("\n=== Trust Laundering Detection ===")
laundering = graph.find_trust_laundering()
if laundering:
    for chain in laundering:
        print(f"  🚨 LAUNDERING: {chain.claim.subject}")
        print(f"     Origin: {chain.links[-1].trust.name} → Current: {chain.links[0].trust.name}")
        print(f"     Hops: {' → '.join(l.agent_id for l in reversed(chain.links))}")
else:
    print("  ✅ No trust laundering detected.")

# ── Weakest links and distribution ──
print("\n=== Weakest Links ===")
for link in graph.weakest_links():
    print(f"  🔗 {link.agent_id} [{link.trust.name}] — {link.action}")

print(f"\n{graph.summary()}")

Expected output:

=== Provenance Chains ===

  HelioTech solar efficiency = 95% efficiency in 2024 [UNGROUNDED_CLAIM]
    → fabricator (originated) [UNGROUNDED_CLAIM]

  HelioTech solar efficiency = 95% efficiency [TOOL_VERIFIED]
    → summariser (derived_from) [TOOL_VERIFIED]
    → fabricator (originated) [UNGROUNDED_CLAIM]

  Best solar cell efficiency = 47.6% (NREL lab record) [TOOL_VERIFIED]
    → verifier (originated) [TOOL_VERIFIED]

  HelioTech solar efficiency = 95% efficiency [RAG_GROUNDED]
    → reporter (derived_from) [RAG_GROUNDED]
    → summariser (derived_from) [TOOL_VERIFIED]
    → fabricator (originated) [UNGROUNDED_CLAIM]

=== Trust Laundering Detection ===
  🚨 LAUNDERING: HelioTech solar efficiency
     Origin: UNGROUNDED_CLAIM → Current: TOOL_VERIFIED
     Hops: fabricator → summariser
  🚨 LAUNDERING: HelioTech solar efficiency
     Origin: UNGROUNDED_CLAIM → Current: RAG_GROUNDED
     Hops: fabricator → summariser → reporter

=== Weakest Links ===
  🔗 fabricator [UNGROUNDED_CLAIM] — originated

=== Trust Distribution ===
  fabricator: {UNGROUNDED_CLAIM: 1}
  summariser: {TOOL_VERIFIED: 1}
  verifier:   {TOOL_VERIFIED: 1}
  reporter:   {RAG_GROUNDED: 1}

Provenance graph: 4 agents, 3 edges, 4 claims tracked
🚨

Trust laundering is subtle and dangerous. The fabricator's ungrounded claim ("95% efficiency") was passed to a tool-backed summariser, which included it alongside real results. The summariser's high trust level then "laundered" the fabrication — making it appear tool-verified when its true origin was ungrounded. find_trust_laundering() traces every claim back to its origin and flags any chain where trust increased without independent verification.
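The core of that check can be sketched independently of the library: walk a provenance chain from origin to current claim, and flag it when trust rose without any intermediate link performing independent verification. The data shapes and RANK table are illustrative, not JuGeo's.

```python
# Illustrative laundering test on a provenance chain (origin first).
RANK = {"UNGROUNDED_CLAIM": 0, "RAG_GROUNDED": 2, "TOOL_VERIFIED": 3}

def is_laundered(chain: list) -> bool:
    """Flag chains where apparent trust rose with no independent re-check."""
    origin, current = chain[0], chain[-1]
    rose = RANK[current["trust"]] > RANK[origin["trust"]]
    independently_verified = any(link.get("verified") for link in chain[1:])
    return rose and not independently_verified

chain = [
    {"agent": "fabricator", "trust": "UNGROUNDED_CLAIM"},
    {"agent": "summariser", "trust": "TOOL_VERIFIED"},  # passed along, never re-checked
]
print(is_laundered(chain))  # → True
```

Had the summariser actually re-verified the figure (verified=True on its link), the trust increase would be legitimate and the chain would pass.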

🚀

Want more? Check the Knowledge Fusion deep-dive for the mathematical details, or the Tutorial to build your first verified pipeline step by step. The full knowledge_fusion_demo.py script is in the jugeo-agents/examples/ directory.