Judgment Fiber Bundles: Trust as a Geometric Connection
The central construction of Judgment Geometry applied to multi-agent systems. Trust is not a label — it's a connection on a fiber bundle, with computable curvature that detects structural team unreliability.
Judgment: Not just a claim, but claim + evidence + trust + channel. The fundamental object of JG.
Fiber Bundle: The judgment space \(E \to B\), with fibers of judgments over the task space.
Connection: Trust transport between agents. How trust transforms across boundaries.
Curvature: Path-dependence of trust transport. Detects structural unreliability.
Characteristic Classes: Global invariants (\(c_1\)) measuring team-wide trust consistency.
§1 — Why Trust Is Not a Label
In standard multi-agent frameworks, trust is a flat tag: "this claim has trust level 5," or "this agent has reliability score 0.93." The number is assigned once, used everywhere, and silently assumed to be context-independent. This assumption is wrong — and the error is not merely philosophical; it has measurable, computable consequences.
In Judgment Geometry, trust is a connection. When you transport a judgment from Agent A's context to Agent B's context, the trust level transforms. The transformation depends on the path you take through the agent network: going from \(A \to B\) directly may yield a different trust than going \(A \to C \to B\). This path-dependence is exactly what makes trust a connection on a fiber bundle, not a function on a set.
The key insight: trust is not a property of a judgment — it is a relationship between a judgment and a context. Just as the electromagnetic potential is not a property of a point in space but a relationship between a charged particle and the field, trust lives in the connection, not in the fiber.
Gauge theory analogy. Trust is the analogue of the electromagnetic potential \(A_\mu\). The curvature \(F = dA + A \wedge A\) is the analogue of the field strength tensor. Holonomy around a closed loop is the analogue of the Aharonov–Bohm effect: even when local trust looks consistent, a global loop can reveal hidden inconsistency. You cannot "gauge away" non-zero curvature — it is a physical (structural) invariant of the agent team.
Why does this matter in practice? Consider three agents: a Researcher who has tool access, an Analyst who has citations, and a Chatbot that has neither. The Researcher trusts the Analyst (they share verifiable evidence). The Analyst trusts the Chatbot (based on conversational coherence). But the Chatbot's claim, relayed through the Analyst to the Researcher, arrives with inflated trust — the Researcher treats it as if it were citation-backed, when it never was. This is trust inflation, and it shows up as positive curvature in the bundle.
Flat trust models cannot detect this. Bundle geometry can — and does so automatically, from the structure of the agent outputs alone.
§2 — The Bundle
We now make the construction precise. A judgment fiber bundle is a tuple \((E, B, \pi, F, G)\) where each component has a concrete interpretation in the multi-agent setting.
The base space \(B\) is the set of agent-task assignments. Each point \(p \in B\) represents a specific agent assigned to a specific task (or sub-task). If there are \(n\) agents and \(m\) tasks, then \(|B| \leq n \times m\), with equality when every agent may address every task.
The fiber at a point \(p \in B\) is the space of all judgments that the agent at \(p\) can make about the task at \(p\):

\[
F_p = \{\, (\sigma, e, \tau, \chi) \;:\; \sigma \text{ is a claim about the task at } p \,\}.
\]

Here \(\sigma\) is the claim (a semantic content element), \(e\) is the evidence supporting the claim, \(\tau \in \mathcal{T}\) is the trust level, and \(\chi\) is the evidence channel through which the judgment was formed. The fiber is not a bare set: it carries the partial order inherited from the trust algebra \((\mathcal{T}, \leq, \min)\).
The total space is the disjoint union of all fibers: \(E = \coprod_{p \in B} F_p\). The projection is the canonical map \(\pi : E \to B\) sending each judgment to the agent-task pair that produced it. By construction, \(\pi^{-1}(p) = F_p\).
The structure group is \(G = (\mathcal{T},\, \min)\), the trust semilattice. It acts on fibers by adjusting trust levels: for \(g \in G\) and \((\sigma, e, \tau, \chi) \in F_p\), the action is \(g \cdot (\sigma, e, \tau, \chi) = (\sigma, e, \min(\tau, g), \chi)\). This captures the principle that trust can only be reduced by transport, never inflated — a fundamental asymmetry that distinguishes judgment bundles from generic vector bundles.
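The action is small enough to state directly. A minimal sketch in plain Python, where the tuple encoding and integer trust levels 0–4 are illustrative assumptions, not the jugeo-agents data model:

```python
# Sketch of the structure group (T, min) acting on a fiber.
# Judgments are (sigma, evidence, tau, chi) tuples; trust is an int 0..4.
def act(g, judgment):
    """Apply g in G = (T, min): trust can be reduced, never inflated."""
    sigma, evidence, tau, chi = judgment
    return (sigma, evidence, min(tau, g), chi)

j = ("2.5°C warming by 2100", frozenset({"tool:search"}), 3, "tool")
assert act(2, j)[2] == 2   # demoted: min(3, 2) = 2
assert act(4, j)[2] == 3   # unchanged: min(3, 4) = 3, no inflation possible
```

Because \(\min\) is idempotent and has no inverses, \(G\) acts only by truncation; this is the algebraic face of the asymmetry noted above.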
The bundle is locally trivial: over any single agent, the fibers look like a standard trust-graded judgment space. The non-trivial geometry emerges when we try to compare fibers at different points — i.e., when we try to transport judgments between agents. This is where the connection enters.
§3 — The Trust Connection
A connection on the judgment bundle is a rule for transporting judgments between fibers. Concretely, for agents \(A\) and \(B\) sharing an edge in the task graph, the connection is a map:

\[
\nabla_{A \to B} : F_A \to F_B.
\]

This map takes a judgment in Agent A's fiber and produces the "same" judgment as seen from Agent B's context. The trust level generally changes during transport. The transport rule is determined by evidence overlap:
Let \(j = (\sigma, e_A, \tau_A, \chi_A) \in F_A\) be a judgment in Agent A's fiber, and let \(e_B\) be Agent B's available evidence for the same claim \(\sigma\). The transported judgment \(\nabla_{A \to B}(j) = (\sigma, e_B, \tau_B, \chi_B)\) has trust level determined by:
- Same evidence (\(e_A \simeq e_B\)): Identity transport, \(\tau_B = \tau_A\). The judgment is equally supported in both contexts.
- Higher evidence at target (\(e_B \succ e_A\)): Trust promotes, \(\tau_B \geq \tau_A\). Agent B has more evidence, so the claim can be trusted at least as much.
- Lower evidence at target (\(e_B \prec e_A\)): Trust demotes, \(\tau_B \leq \tau_A\). Agent B has less evidence, so trust must decrease.
- No evidence overlap (\(e_A \perp e_B\)): Flat (identity) transport, \(\tau_B = \tau_A\). With no basis for comparison, we preserve the incoming trust level.
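The four cases above collapse into one transport function. This is a sketch under stated assumptions: evidence is modeled as a set of items, trust as an integer level, and promotion/demotion moves exactly one level (the rule itself only requires \(\tau_B \geq \tau_A\) or \(\tau_B \leq \tau_A\)):

```python
# Evidence-overlap transport: compare source evidence e_a with target
# evidence e_b and adjust the incoming trust level tau_a accordingly.
def transport(tau_a, e_a, e_b, t_min=0, t_max=4):
    if e_a == e_b:
        return tau_a                      # same evidence: identity transport
    if e_a < e_b:                         # strict subset: more evidence at target
        return min(tau_a + 1, t_max)      # promote (by one level, an assumption)
    if e_b < e_a:                         # less evidence at target
        return max(tau_a - 1, t_min)      # demote
    return tau_a                          # incomparable/disjoint: flat transport

assert transport(2, {"cite:AR6"}, {"cite:AR6", "tool:search"}) == 3  # promote
assert transport(2, {"cite:AR6", "tool:search"}, {"cite:AR6"}) == 1  # demote
assert transport(2, {"cite:AR6"}, {"tool:calc"}) == 2                # flat
```

Python's `<` on sets is strict subset, so disjoint (and overlapping-but-incomparable) evidence falls through to the flat case, matching \(e_A \perp e_B\).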
Formally, the transport is encoded as a connection matrix. For a team of \(n\) agents, the connection is an \(n \times n\) matrix \(\Gamma\) where \(\Gamma_{AB}\) is the trust adjustment applied when transporting from \(A\) to \(B\).
Example: Three-Agent Connection Matrix
Consider a team of three agents: a Researcher (R) with tool access, an Analyst (A) with citations, and a Chatbot (C) with no external evidence. With rows indexing the source agent and columns the target (ordered \(R, A, C\)), one consistent choice of connection matrix is:

\[
\Gamma =
\begin{pmatrix}
0 & +1 & -1 \\
+1 & 0 & -1 \\
0 & 0 & 0
\end{pmatrix}
\]

Reading the matrix: \(\Gamma_{RC} = -1\) means that transporting a judgment from the Researcher to the Chatbot demotes trust by 1 level (the Chatbot cannot verify tool-based evidence). \(\Gamma_{RA} = +1\) means Researcher-to-Analyst transport promotes trust (the Analyst can cross-reference with citations). The diagonal is zero: self-transport is always identity.
Key property: The connection is determined entirely by the agent outputs — specifically, by the evidence channels and tool usage declared in each AgentOutput. No human labeling is required. The JudgmentBundle class computes \(\Gamma\) automatically from the evidence overlap structure.
§4 — Curvature
The curvature of the trust connection measures the path-dependence of trust transport. If transporting a judgment around a closed loop returns it to a different trust level, the connection has non-zero curvature — and the team has a structural trust inconsistency.
For three agents \(A, B, C\) forming a triangle in the task graph, the curvature is:

\[
F(A, B, C) = \Gamma_{AB} + \Gamma_{BC} + \Gamma_{CA}.
\]

The curvature is the total trust adjustment accumulated by transporting a judgment around the triangle \(A \to B \to C \to A\). It is the discrete analogue of the curvature 2-form \(F = d\omega + \omega \wedge \omega\) in differential geometry.
Interpreting the Curvature
The sign of the curvature has a precise operational meaning:
- \(F = 0\): Flat, consistent trust. Transporting a judgment around any loop returns it to its original trust level. The team's trust assignments are globally coherent. Agents' evidence channels are mutually compatible.
- \(F > 0\): Trust inflation (echo chamber). The loop promotes trust beyond what any individual agent's evidence supports. Agents are mutually inflating each other's credibility. This is the geometric signature of an echo chamber.
- \(F < 0\): Trust deflation (adversarial). The loop demotes trust. Agents systematically undermine each other's credibility. This may indicate adversarial dynamics or fundamental evidence incompatibility.
Example: Computing Curvature
Using the connection matrix from §3:

\[
F(R, A, C) = \Gamma_{RA} + \Gamma_{AC} + \Gamma_{CR} = (+1) + (-1) + 0 = 0.
\]

In this specific example, the curvature vanishes — the trust structure is flat. But modify the scenario slightly: suppose the Chatbot claims to have citations (without actually having them). Then \(\Gamma_{AC}\) becomes \(0\) instead of \(-1\), and:

\[
F(R, A, C) = (+1) + 0 + 0 = +1 > 0.
\]
The positive curvature detects the trust inflation: the Chatbot's false claim of citations creates a loop where trust is systematically promoted beyond what evidence supports.
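Both computations can be reproduced in a few lines. The dict-of-adjustments encoding of \(\Gamma\), and the value \(\Gamma_{CR} = 0\) for the edge not spelled out in §3, are assumptions for illustration:

```python
# Discrete triangle curvature F(A,B,C) = Γ_AB + Γ_BC + Γ_CA.
def curvature(gamma, a, b, c):
    return gamma[(a, b)] + gamma[(b, c)] + gamma[(c, a)]

# Flat scenario: Researcher -> Analyst -> Chatbot -> Researcher.
gamma = {("R", "A"): +1, ("A", "C"): -1, ("C", "R"): 0}
assert curvature(gamma, "R", "A", "C") == 0        # flat: consistent trust

# Chatbot falsely claims citations: the A -> C demotion disappears.
gamma_inflated = dict(gamma)
gamma_inflated[("A", "C")] = 0
assert curvature(gamma_inflated, "R", "A", "C") == +1   # echo-chamber signature
```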
Non-zero curvature cannot be fixed by adjusting individual agents. It is a topological property of the agent team configuration. You cannot "re-calibrate" one agent to eliminate curvature — you must change the team structure itself (add new evidence channels, remove unreliable links, or restructure the task graph). This is the deep content of the Chern class obstruction (§5).
§5 — Holonomy and Characteristic Classes
Curvature at individual triangles gives local information. To understand the global trust structure, we integrate curvature over the entire agent network. This yields two closely related invariants: the holonomy and the first Chern class.
The holonomy around a closed loop \(\gamma = (A_1 \to A_2 \to \cdots \to A_k \to A_1)\) in the agent graph is:

\[
\operatorname{Hol}(\gamma) = \sum_{i=1}^{k} \Gamma_{A_i A_{i+1}},
\]

where indices are taken mod \(k\) (so \(A_{k+1} = A_1\)). Holonomy measures the total trust shift when a judgment is transported around the entire team. It is the finite-sum analogue of parallel transport in Riemannian geometry.
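Under the same illustrative dict encoding of \(\Gamma\) (an assumption, not the library API), holonomy is a modular sum around the loop:

```python
# Holonomy: total trust shift around a closed loop A1 -> A2 -> ... -> Ak -> A1.
def holonomy(gamma, loop):
    k = len(loop)
    return sum(gamma[(loop[i], loop[(i + 1) % k])] for i in range(k))

gamma = {("R", "A"): +1, ("A", "C"): -1, ("C", "R"): 0}
assert holonomy(gamma, ["R", "A", "C"]) == 0   # trivial holonomy: flat loop
```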
The first Chern class \(c_1\) of the judgment bundle is the average curvature over all triangles in the agent graph:

\[
c_1(E) = \frac{1}{|\Delta|} \sum_{T \in \Delta} F(T),
\]

where \(\Delta\) is the set of all 2-simplices (triangles) in the agent task graph. The first Chern class is a topological invariant: it does not change under continuous deformations of the connection (i.e., small perturbations of trust levels). It captures the deep, structural (in)consistency of the team.
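With the triangle set enumerated via itertools, \(c_1\) becomes an average over 2-simplices. This sketch again assumes the dict encoding of \(\Gamma\) and fixes one traversal orientation per unordered triple:

```python
from itertools import combinations

# First Chern class as average triangle curvature (illustrative sketch).
# Each unordered triple {a, b, c} is traversed once, as a -> b -> c -> a.
def chern_class(gamma, agents):
    triangles = list(combinations(agents, 3))
    total = sum(gamma[(a, b)] + gamma[(b, c)] + gamma[(c, a)]
                for a, b, c in triangles)
    return total / len(triangles)

gamma = {("R", "A"): +1, ("A", "C"): -1, ("C", "R"): 0}
assert chern_class(gamma, ["R", "A", "C"]) == 0.0   # flat bundle: c1 = 0
```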
For a judgment fiber bundle \((E, B, \pi, F, G)\) over a finite agent graph, the following are equivalent:
- \(c_1(E) = 0\).
- The bundle admits a flat connection (a connection with vanishing curvature on every triangle).
- There exists a globally consistent trust assignment — a section \(s: B \to E\) such that parallel transport along any path preserves the trust level of \(s\).
In other words, the first Chern class is the complete obstruction to globally consistent trust.
If \(c_1(E) \neq 0\), then no global trust assignment exists. The agent team is provably unreliable in a structural way: no recalibration of individual agents can eliminate the trust inconsistency. The team topology itself must change.
This is the key result of the fiber bundle framework. It transforms the informal question "is this agent team trustworthy?" into a computable topological invariant. The answer is not a probability or a heuristic — it is a mathematical certificate of (in)consistency.
§6 — Live Code Example
The jugeo-agents library computes the full bundle geometry from raw agent outputs. Here is a complete example with three agents:
from jugeo_agents import JudgmentBundle, AgentOutput

# 1. Create the bundle
bundle = JudgmentBundle()

# 2. Add agent outputs (evidence is inferred from tools/citations)
bundle.add_agent_output(AgentOutput(
    agent_id="researcher",
    output_text="The climate model predicts 2.5°C warming by 2100.",
    tools_used=["search", "calculator"],
))
bundle.add_agent_output(AgentOutput(
    agent_id="analyst",
    output_text="Cross-referencing IPCC AR6, the 2.5°C figure is within the likely range.",
    citations=["IPCC AR6 WG1 Ch4", "Nature Climate Change 2023"],
))
bundle.add_agent_output(AgentOutput(
    agent_id="chatbot",
    output_text="Based on the analysis, warming of 2.5°C is expected by 2100.",
))

# 3. Compute and display the full bundle geometry
print(bundle.summary_text())
For this team the geometry is flat: every triangle has zero curvature and \(c_1 = 0\) (cf. §4).
Non-Flat Example
Now suppose the chatbot falsely claims citation support:
from jugeo_agents import AgentOutput, JudgmentBundle

bundle_inflated = JudgmentBundle()
bundle_inflated.add_agent_output(AgentOutput(
    agent_id="researcher",
    output_text="The climate model predicts 2.5°C warming by 2100.",
    tools_used=["search", "calculator"],
))
bundle_inflated.add_agent_output(AgentOutput(
    agent_id="analyst",
    output_text="Cross-referencing IPCC AR6, the figure is within the likely range.",
    citations=["IPCC AR6 WG1 Ch4"],
))
# Chatbot claims citations it does not actually have
bundle_inflated.add_agent_output(AgentOutput(
    agent_id="chatbot",
    output_text="Per my sources, 2.5°C is the consensus.",
    citations=["fabricated_source"],  # ← falsely claimed
))

print(bundle_inflated.summary_text())
§7 — Trust Stratification
The trust algebra \((\mathcal{T}, \leq, \min)\) induces a natural stratification of the total space \(E\). Judgments at different trust levels live in different strata, and contradictions between strata have different severity.
For each trust level \(\tau \in \mathcal{T}\), the \(\tau\)-stratum is:

\[
E_\tau = \{\, (\sigma, e, \tau', \chi) \in E \;:\; \tau' = \tau \,\}.
\]

The standard trust hierarchy in jugeo-agents has five levels, ordered from UNVERIFIED at the bottom to TOOL_VERIFIED at the top.
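Extracting a stratum is a filter over the total space. A sketch reusing the tuple encoding of judgments from §2 (an illustrative assumption, not the library's data model):

```python
# The tau-stratum E_tau: all judgments whose trust level equals tau.
def stratum(E, tau):
    return [j for j in E if j[2] == tau]

E = [
    ("2.5°C by 2100", frozenset({"tool:search"}), 4, "tool"),
    ("2.5°C is consensus", frozenset(), 0, "chat"),
    ("within AR6 likely range", frozenset({"cite:AR6"}), 4, "citation"),
]
assert len(stratum(E, 4)) == 2   # two judgments in the top stratum
assert len(stratum(E, 0)) == 1
```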
Intra-Stratum vs. Cross-Stratum Contradictions
Not all contradictions are equal. Two agents at the same trust level disagreeing is much more severe than agents at different levels disagreeing:
- Intra-stratum contradiction (e.g., two TOOL_VERIFIED agents producing contradictory claims): this indicates a genuine inconsistency in the evidence. The tools themselves give different answers. Resolution requires investigating the tools, not adjusting trust.
- Cross-stratum contradiction (e.g., a TOOL_VERIFIED agent vs. an UNVERIFIED chatbot): this is the expected case — the lower-trust agent is simply wrong, and the trust ordering resolves the conflict automatically.
Stratification and curvature interact. Positive curvature concentrated at cross-stratum boundaries indicates trust inflation at that boundary. Positive curvature within a stratum indicates that agents with the same evidence quality are still inconsistent — a deeper structural problem.
The stratification is not just a diagnostic tool — it feeds directly into the knowledge fusion algorithm. When fusing judgments from multiple agents, the fusion operator respects the stratification: higher-stratum judgments dominate lower-stratum ones, and intra-stratum conflicts trigger explicit contradiction flags in the cohomology.
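A minimal sketch of stratification-respecting fusion, under the assumption that judgments reduce to (claim, trust-level) pairs; the names here are hypothetical, not the jugeo-agents fusion operator:

```python
# Fuse (claim, trust) pairs: the highest occupied stratum dominates;
# disagreement inside that stratum is flagged as a genuine contradiction.
def fuse(judgments):
    top = max(tau for _, tau in judgments)
    top_claims = {claim for claim, tau in judgments if tau == top}
    if len(top_claims) > 1:
        return None, "intra-stratum contradiction"   # evidence-level conflict
    return top_claims.pop(), "ok"

assert fuse([("2.5°C", 4), ("3.0°C", 0)]) == ("2.5°C", "ok")   # trust order resolves
assert fuse([("2.5°C", 4), ("3.0°C", 4)])[1] == "intra-stratum contradiction"
```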
§8 — Relationship to Sheaf Cohomology
The fiber bundle framework does not replace the sheaf cohomology approach — it strictly generalizes it. Every sheaf-cohomological invariant (\(H^0\), \(H^1\), \(H^2\), phantom classes) embeds naturally into the bundle, where it gains a geometric interpretation.
The following correspondences hold between the sheaf framework and the bundle framework:
\(H^1\) Contradictions as Local Curvature
An \(H^1\) element in the sheaf framework represents a pairwise contradiction between two agents. In the bundle, this is precisely non-zero curvature at a specific edge (or, more precisely, at a triangle containing that edge). The bundle makes the geometric content of the contradiction explicit: it tells you not just that agents disagree, but how much trust is inflated or deflated, and in which direction.
\(H^2\) Cascades as Holonomy
An \(H^2\) element represents a cascade contradiction: a collection of pairwise agreements that are globally inconsistent (the "Penrose triangle" of multi-agent trust). In the bundle, this is precisely non-trivial holonomy around a loop. Each edge in the loop looks locally consistent, but transporting trust around the entire loop reveals the global inconsistency.
Phantom Classes as Topological Obstructions
Phantom classes — invisible inconsistencies that cannot be detected by any finite collection of local checks — correspond to bundles where the curvature vanishes on every individual triangle but the first Chern class is still non-zero. These are the most subtle and dangerous failure modes: the team looks consistent at every local check, but is globally inconsistent. Only the integrated invariant \(c_1\) can detect them.
Let \(\mathcal{F}\) be the judgment sheaf on the agent task graph, and let \((E, B, \pi)\) be the corresponding judgment fiber bundle. Then:
- Every cohomology class \([c] \in H^k(\mathcal{F})\) determines a curvature/holonomy invariant of \(E\) via the dictionary above.
- The bundle detects strictly more structure than the sheaf: the quantitative curvature values, the trust stratification, and the characteristic classes are not visible to \(H^*(\mathcal{F})\) alone.
- When the bundle is flat (\(c_1 = 0\)), the sheaf and bundle frameworks agree completely: \(H^0\) classifies global sections, and \(H^1\), \(H^2\) classify obstructions to extending local sections.
When to use which? Use the sheaf framework when you need qualitative obstruction detection (yes/no: are agents consistent?). Use the bundle framework when you need quantitative diagnostics (how much inconsistency? where? what kind?). In practice, jugeo-agents computes both simultaneously — the JudgmentBundle class inherits from the sheaf infrastructure and adds the geometric layer on top.