AI agents are starting to act autonomously.
They browse the web. They execute code. They manage files. They make API calls. They interact with databases. Some are beginning to hold wallets and transact.
But today, we cannot reliably answer basic questions:
- What did the agent actually do?
- Did it follow policy?
- Can we verify it independently?
Most systems rely on logs. Logs can be modified. Logs can disappear. Logs are not proofs.
This is a trust gap. And it’s growing.
## The Davos Insight
At Davos 2026, MIT professor Romesh Shar described what’s missing:
“The green padlock moment for AI agents will be critical. Hopefully five years from now we will have this green padlock and that will allow us to imagine these AI agents being therapists or doctors or advisers.”
The green padlock. The simple, visible signal that transformed the early web from “scary place to enter credit cards” to “global commerce infrastructure.”
HTTPS didn’t just encrypt traffic. It created a trust signal that everyone — users, businesses, regulators — could understand and verify.
AI agents need the same thing.
Shar outlined four pillars required for an “internet of AI agents”:
| Pillar | What It Means |
|---|---|
| Registry | Agent identity — who is this agent? |
| Certifying Authorities | Capability control — what is it allowed to do? |
| Interoperability | Framework compatibility — can it work across systems? |
| Attestation | Verifiable execution — can we prove what happened? |
These are the building blocks of agent trust.
## Architecture Alignment
When we heard this framing, something clicked.
Our architecture maps closely to those pillars:
| Davos Concept | Substr8 Implementation |
|---|---|
| Registry | FDAA — File-Driven Agent Architecture. Agent identity as versioned, hash-verified artifacts. |
| Certifying Authorities | ACC — Agent Capability Control. Policy enforcement for what agents can and cannot do. |
| Interoperability | MCP — Model Context Protocol. 12 governance tools that work with any framework. |
| Attestation | RunProof — Portable, cryptographically verifiable artifact proving what happened. |
We didn’t set out to build “what Davos asked for.” We set out to solve a practical problem: how do you trust an AI agent that acts autonomously?
The architectures converged because the problem is real.
## What RunProof Does
RunProof is the key piece.
Every governed agent run produces a RunProof — a portable artifact containing:
- DCT ledger — Hash-chained audit trail of every action
- Policy checks — Record of ACC enforcement decisions
- Tool invocations — What tools were called, with what parameters
- Memory operations — What was written or retrieved (with provenance)
- Root hash — Cryptographic seal binding everything together
Modify any entry and the chain breaks. The tampering is detectable.
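To make the tamper-evidence property concrete, here is a minimal sketch of a hash-chained ledger in Python. The entry format (`prev`, `payload`, `hash` fields) and the SHA-256 linking scheme are illustrative assumptions, not Substr8's actual DCT ledger format:

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    # Link each entry to its predecessor by hashing the previous
    # hash together with the canonical JSON of the payload.
    blob = prev_hash.encode() + json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    # Hypothetical entry shape: {"prev": ..., "payload": {...}, "hash": ...}
    prev = "0" * 64  # genesis link
    for e in entries:
        if e["prev"] != prev or entry_hash(prev, e["payload"]) != e["hash"]:
            return False
        prev = e["hash"]
    return True

# Build a small valid chain, then tamper with one entry.
chain, prev = [], "0" * 64
for action in ["tool_call", "memory_write", "policy_check"]:
    payload = {"action": action}
    h = entry_hash(prev, payload)
    chain.append({"prev": prev, "payload": payload, "hash": h})
    prev = h

assert verify_chain(chain)           # untampered chain verifies
chain[1]["payload"]["action"] = "x"  # modify any entry...
assert not verify_chain(chain)       # ...and the chain breaks
```

Because each hash covers the previous hash, changing one entry invalidates every link after it; an attacker would have to rewrite the whole chain, which the sealed root hash then exposes.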
This is the same principle behind software supply-chain security tools like Sigstore and SLSA. Those tools verify how software was built. RunProof verifies how an agent ran.
| Software Supply Chain | Agent Execution |
|---|---|
| Source code | Agent configuration |
| Build pipeline | Agent run |
| Build logs | DCT ledger |
| SBOM | Memory + tool trace |
| Attestation | RunProof |
The pattern is the same. The domain is different.
## Why This Matters
Agents are going to:
- Write and deploy code
- Execute financial transactions
- Control infrastructure
- Interact with sensitive systems
- Operate with increasing autonomy
When that happens, organizations will need to answer:
“Can we verify what the agent actually did?”
Not “what did the logs say.” Not “what did the platform claim.”
Can we independently verify it?
That’s what RunProof provides.
Use cases:
- Enterprise automation — Audit trails for compliance
- Agent marketplaces — Trust signals for third-party agents
- Autonomous agents — Verification when humans aren’t in the loop
- Incident response — Forensic analysis of what went wrong
## Try It
```shell
pip install substr8
substr8 init my-agent
cd my-agent
substr8 run examples/langgraph/agent.py
```
Every run produces a `.runproof.tgz` file.
Verify locally:
```shell
substr8 verify runproofs/run-xxxx.runproof.tgz
```
Or verify online at verify.substr8labs.com
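Independent verification is the point: anyone holding the artifact can recheck the seal without trusting the platform. As a sketch, here is what an offline check might look like in Python. The archive layout (`ledger.json` plus a `root_hash.txt` seal) and the seal computation are assumptions for illustration, not Substr8's actual format:

```python
import hashlib
import tarfile

def verify_runproof(path: str) -> bool:
    # Hypothetical layout: the archive contains the ledger bytes
    # and a sealed root hash committed at run time.
    with tarfile.open(path, "r:gz") as tar:
        ledger = tar.extractfile("ledger.json").read()
        sealed = tar.extractfile("root_hash.txt").read().decode().strip()
    # Recompute the seal over the ledger bytes and compare.
    return hashlib.sha256(ledger).hexdigest() == sealed
```

Any post-hoc edit to the ledger changes its hash, so the recomputed value no longer matches the sealed root and verification fails.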
## The Green Padlock Moment
The green padlock didn’t arrive all at once. It took years of infrastructure work — certificate authorities, browser integration, protocol standardization — before HTTPS became the default.
The green padlock for AI agents will follow the same path.
We don’t claim to have built the entire future. But we’ve built a working implementation of the trust layer that Davos is asking for.
Frameworks build agents. Substr8 proves what they did.