Every agent trust proposal is building the wrong thing
I've spent weeks reading through GitHub issues across A2A, MCP, OWASP, CrewAI, LangChain, AutoGen, W3C, AWS, and about a dozen other repos. The pattern is the same everywhere: someone opens a thread about agent trust, and within 50 comments there are 5 separate proposals for 5 separate systems that don't compose.
Identity registry over here. Trust scoring API over there. Audit trail database in the corner. Delegation protocol on top. Sybil detection as a roadmap item for later.
None of these projects are wrong about the problem. They're all building the wrong solution.
The pattern
Pick any thread. Someone proposes DID-based identity. Someone else points out that identity doesn't equal trust. A third person proposes a trust scoring service. A fourth asks where the trust data comes from. The conversation loops for 200 comments and nothing ships.
The discussions are smart. The people in them are building real things. But they all start from the same assumption: that trust is a feature you bolt onto an existing protocol.
The result is a stack of independent systems, each solving one piece, each requiring its own infrastructure, none sharing data.
The alternative
What if trust isn't a feature you add? What if it's a data structure you start with?
A bilateral signed interaction record is one JSON object where both parties sign what happened between them. One record, two Ed25519 signatures. That's the primitive.
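The primitive can be sketched in a few lines. This is illustrative only: the field names and the canonicalization scheme below are hypothetical, not TrustChain's actual wire format, and it assumes the `cryptography` package for Ed25519.

```python
# Sketch of a bilateral signed interaction record. Field names and
# canonicalization are illustrative, not TrustChain's wire format.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def raw_hex(pub):
    # Raw 32-byte Ed25519 public key, hex-encoded for the JSON record
    return pub.public_bytes(Encoding.Raw, PublicFormat.Raw).hex()

alice = Ed25519PrivateKey.generate()
bob = Ed25519PrivateKey.generate()

# Both parties sign the same canonical payload.
payload = {
    "initiator": raw_hex(alice.public_key()),
    "counterparty": raw_hex(bob.public_key()),
    "action": "task:summarize",
    "outcome": "success",
    "timestamp": 1712345678,
}
canonical = json.dumps(payload, sort_keys=True).encode()

record = {
    **payload,
    "sig_initiator": alice.sign(canonical).hex(),
    "sig_counterparty": bob.sign(canonical).hex(),
}

# Anyone holding the record can verify both signatures independently;
# verify() raises InvalidSignature if either signature doesn't match.
alice.public_key().verify(bytes.fromhex(record["sig_initiator"]), canonical)
bob.public_key().verify(bytes.fromhex(record["sig_counterparty"]), canonical)
print("record verified")
```

Because both signatures cover the same canonical bytes, neither party can later alter the payload without invalidating at least one signature.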
Identity becomes the public key that keeps signing records. You don't need a registry because the key proves itself through its history.
Trust scores get computed from the graph of interactions. An agent with 50 cosigned interactions across diverse counterparties has a verifiable track record. No scoring API needed.
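One way to make "volume plus diversity" concrete is a toy score like the one below. This is a sketch of the idea, not TrustChain's actual formula; the record shape (`a`/`b` keys) is hypothetical.

```python
# Toy trust score over a bilateral interaction graph: reward both
# interaction volume and counterparty diversity. Illustrative only.
from collections import defaultdict
from math import log1p

def trust_score(agent, records):
    counterparties = defaultdict(int)
    for r in records:
        if agent in (r["a"], r["b"]):
            other = r["b"] if r["a"] == agent else r["a"]
            counterparties[other] += 1
    volume = sum(counterparties.values())
    diversity = len(counterparties)
    # log damping keeps one chatty counterparty from dominating the score
    return diversity * log1p(volume)

# 15 cosigned records across 3 distinct counterparties
records = [{"a": "alice", "b": x} for x in ("bob", "carol", "dave")] * 5
print(round(trust_score("alice", records), 2))
```

The key design point is that the score is derived entirely from records anyone can verify, so there is no scoring service to trust.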
Sybil resistance comes from graph structure. Fake identities that only interact with each other form clusters with high internal density but few outward connections. You don't need a separate detection system.
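The cluster-density argument can be checked mechanically: measure what fraction of a group's edges leave the group. A minimal sketch, with made-up edges and no claim about TrustChain's real detector:

```python
# Toy Sybil check: a cluster whose edges stay mostly inside itself is
# suspicious. Low external-edge ratio ~ likely Sybil ring.
def external_edge_ratio(cluster, edges):
    internal = external = 0
    for a, b in edges:
        if a in cluster and b in cluster:
            internal += 1
        elif a in cluster or b in cluster:
            external += 1
    total = internal + external
    return external / total if total else 0.0

edges = [
    ("s1", "s2"), ("s2", "s3"), ("s3", "s1"),          # ring signs only itself
    ("alice", "bob"), ("alice", "carol"), ("bob", "s1"), # one link outward
]
ratio = external_edge_ratio({"s1", "s2", "s3"}, edges)
print(ratio)  # 1 external edge vs 3 internal -> 0.25
```

Honest agents naturally accumulate outward edges; a fake ring has to either stay isolated (and score low) or buy real cosignatures (which is exactly the cost Sybil resistance is supposed to impose).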
Audit trails are the records themselves. Both parties hold matching copies. Delegation is a scoped record with TTL bounds. Discovery is trust-weighted search over the graph. One data structure replaces what the ecosystem is trying to build as six separate services.
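The delegation piece, in particular, reduces to a record plus two checks. The field names below are hypothetical, and a real delegation record would be cosigned like any other; this only shows the scope and TTL bounds:

```python
# Sketch of delegation as a scoped, TTL-bounded record. Field names are
# hypothetical; in practice the record would be cosigned by both parties.
import time

def make_delegation(delegator, delegate, scope, ttl_seconds, now=None):
    now = time.time() if now is None else now
    return {"from": delegator, "to": delegate,
            "scope": scope, "expires_at": now + ttl_seconds}

def is_valid(delegation, action, now=None):
    now = time.time() if now is None else now
    return action in delegation["scope"] and now < delegation["expires_at"]

d = make_delegation("alice", "bob", {"read:calendar"}, ttl_seconds=3600, now=0)
print(is_valid(d, "read:calendar", now=100))   # in scope, not expired -> True
print(is_valid(d, "write:calendar", now=100))  # out of scope -> False
print(is_valid(d, "read:calendar", now=7200))  # past the TTL -> False
```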
Why bilateral matters
Most proposals rely on single-party attestation: one entity records what happened and signs it. The problem is that this entity can lie, be compromised, or report selectively.
When both parties sign the same record, neither can fabricate or deny what happened. If one party is compromised, the other holds matching proof. Regulators, mediators, and other agents can verify the records without trusting either party.
This is the difference between "I claim this happened" and "we both agree this happened."
The thing everyone keeps missing
Every thread treats trust as a problem to solve at the protocol level. Add a field to the Agent Card. Add a signal type. Add an annotation.
But trust isn't a protocol field. It's an emergent property of a history of interactions. You can't declare trust. You earn it through a track record that both parties can verify.
The bilateral interaction graph is to the agent economy what the link graph was to the web. Google's insight was that hyperlink structure contains authority signals. The same applies here: the structure of agent interactions contains trust signals. The graph itself is the infrastructure.
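The link-graph analogy can be made literal: run a PageRank-style power iteration over the interaction graph, so trust flows along cosigned edges. This is a textbook sketch of the analogy, not TrustChain's actual computation:

```python
# PageRank-style power iteration over an interaction graph. Bilateral
# records become edges in both directions; unilateral claims do not.
def trust_rank(edges, damping=0.85, iters=50):
    nodes = sorted({n for e in edges for n in e})
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes  # dangling nodes spread evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                nxt[t] += share
        rank = nxt
    return rank

edges = [("alice", "bob"), ("bob", "alice"),
         ("alice", "carol"), ("carol", "alice"),
         ("bob", "carol"), ("carol", "bob"),
         ("mallory", "alice")]  # mallory claims, nobody cosigns back
r = trust_rank(edges)
print(r["alice"] > r["mallory"])
```

The asymmetry is the point: mallory's one-way claim gives rank away without receiving any, which is exactly why unilateral attestation carries no weight in the graph.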
I've been working on this with Prof. Pouwelse at TU Delft, whose research group has been publishing on decentralized trust for over a decade. The academic literature established long ago that single-party attestation can't solve the Sybil problem. The tooling for agent systems hasn't caught up yet.
What I built
The implementation is called TrustChain. Rust sidecar with a trust engine, Python and TypeScript SDKs, 12 framework adapters. Works offline. No blockchain. No tokens.
To prove it works, I built a simulation with 21 LLM agents running a full economy on real bilateral records. Honest agents build trust. Sybil rings get isolated. Free riders get deprioritized. Selective scammers get flagged.
Live demo: http://5.161.255.238:8888 -- 21 LLM agents, real bilateral records, real trust computation. Watch it happen.