You gave your AI agent access to:
- Your email (to send messages for you)
- Your calendar (to schedule meetings)
- Your code repos (to write software)
- Your databases (to query data)
- Your social media (to post content)
Now ask yourself: what stops it from using all of that at once?
Nothing.
Most AI agent frameworks have no permission model. Zero. Nada. Your agent has ambient authority — it can do whatever it has credentials for.
## Why Traditional Access Control Fails
Traditional access control (RBAC, OAuth) assumes:
- Human users who act intermittently
- Predictable patterns (email in morning, code in afternoon)
- Judgment about what’s appropriate
AI agents break all of these:
| Human User | AI Agent |
|---|---|
| 50-200 actions/day | 1,000+ actions/hour |
| Follows patterns | Emergent tool chains |
| “Should I do this?” | No inherent judgment |
| One session | Spawns sub-agents dynamically |
And then there’s delegation explosion.
Your main agent spawns a “research agent.” That spawns a “web crawler.” That spawns a “summarizer.” Each inherits the parent’s permissions.
Four levels deep, you have an agent with access to… everything you started with.
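The failure mode is easy to sketch. Assuming a naive framework where spawning simply copies the parent's permission set (the `spawn_naive` helper below is illustrative, not any real framework's API):

```python
# Naive delegation: each sub-agent receives a full copy of its parent's permissions.
def spawn_naive(parent_caps: set[str]) -> set[str]:
    return set(parent_caps)  # full inheritance, no attenuation

caps = {"email:send", "calendar:write", "repo:push", "db:query", "social:post"}
for _ in ("research agent", "web crawler", "summarizer"):
    caps = spawn_naive(caps)

print(len(caps))  # still 5: three spawns deep, nothing was ever dropped
```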
## Toxic Tool Combinations
Individual permissions seem safe. Combinations are dangerous:
| Tool A | + Tool B | = Risk |
|---|---|---|
| `read_file` | `send_email` | Data exfiltration |
| `web_search` | `execute_code` | RCE via prompt injection |
| `read_calendar` | `post_slack` | Information disclosure |
| `git_push` | `ssh_connect` | Supply chain attack |
Most frameworks evaluate tools individually. They can’t reason about compositions.
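One way to catch these at grant time is a deny-list over *pairs* rather than individual tools. A minimal sketch (the pair list mirrors the table above but is illustrative, not a complete policy):

```python
# Pairs of tools that are individually harmless but dangerous in combination.
TOXIC_PAIRS = {
    frozenset({"read_file", "send_email"}):     "data exfiltration",
    frozenset({"web_search", "execute_code"}):  "RCE via prompt injection",
    frozenset({"read_calendar", "post_slack"}): "information disclosure",
    frozenset({"git_push", "ssh_connect"}):     "supply chain attack",
}

def toxic_combinations(granted: set[str]) -> dict[frozenset[str], str]:
    """Return every deny-listed pair fully contained in the granted tool set."""
    return {pair: risk for pair, risk in TOXIC_PAIRS.items() if pair <= granted}

hits = toxic_combinations({"read_file", "send_email", "read_calendar"})
print(hits)  # only the read_file + send_email pair fires
```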
## ACC: Capability-Based Security for AI
We went back to fundamentals. Like, 1966 fundamentals.
Dennis and Van Horn invented capability-based security for the exact problem we face: how do you control what a system can do when you can’t trust it to ask permission?
A capability is:
- An unforgeable reference to a resource
- Combined with specific access rights
- Held by an actor who can exercise those rights
The capability IS the permission. No capability, no access.
## Three Layers of ACC
### 1. Policy (`RBAC.md`)
Your organization’s role hierarchy:
| Role | Can Spawn | Max Delegation |
|------|-----------|----------------|
| owner | any | admin |
| admin | agent, worker | agent |
| agent | worker, reader | worker |
| worker | reader | reader |
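The "Can Spawn" column translates directly into a lookup. A sketch, assuming the `CAN_SPAWN` encoding below (illustrative, not the actual `RBAC.md` parser; the `reader` row is an assumption since the table omits it):

```python
# Which child roles each role may spawn, per the table above ("any" expanded).
CAN_SPAWN = {
    "owner":  {"owner", "admin", "agent", "worker", "reader"},
    "admin":  {"agent", "worker"},
    "agent":  {"worker", "reader"},
    "worker": {"reader"},
    "reader": set(),  # assumed: readers spawn nothing
}

def may_spawn(parent_role: str, child_role: str) -> bool:
    return child_role in CAN_SPAWN.get(parent_role, set())

print(may_spawn("agent", "worker"))  # True
print(may_spawn("worker", "agent"))  # False: no escalation back up the hierarchy
```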
### 2. Agent Declaration (`SOUL.md`)

What your agent has:

```yaml
acc:
  role: agent
  capabilities:
    - data:*
    - social:write
  denied:
    - infra:*
    - social:dm
  constraints:
    max_spawn_depth: 3
    require_approval:
      - social:write
```
### 3. Skill Declaration (`SKILL.md`)

What the skill needs:

```yaml
acc:
  required:
    - social:write
    - external:post
  denied_roles:
    - guest
    - reader
```
At runtime: `agent.capabilities ⊇ skill.required` → ALLOW, else DENY.
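Because declarations like `data:*` are patterns, the superset check needs wildcard matching rather than plain set inclusion. A sketch of the runtime test using Ada's declarations above (`grants` and `check_skill` are illustrative names, not the substr8 API):

```python
from fnmatch import fnmatch

def grants(held: set[str], needed: str) -> bool:
    """True if any held pattern (e.g. 'data:*') covers the needed capability."""
    return any(fnmatch(needed, pattern) for pattern in held)

def check_skill(caps: set[str], denied: set[str], required: set[str]) -> bool:
    """ALLOW only if every required capability is granted and none is denied."""
    return all(grants(caps, r) and not grants(denied, r) for r in required)

caps = {"data:*", "social:write"}   # from SOUL.md above
denied = {"infra:*", "social:dm"}

print(check_skill(caps, denied, {"data:read"}))                      # True: data:* covers it
print(check_skill(caps, denied, {"social:write", "external:post"}))  # False: external:post not held
```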
## Monotonic Attenuation
The critical security property:
A sub-agent can never possess capabilities that its parent does not possess.
This is monotonic reduction. Capabilities only decrease as you go down the delegation chain:
Parent Caps ⊇ Child Caps ⊇ Grandchild Caps
No privilege escalation. Ever.
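Enforced mechanically, attenuation is just set intersection at spawn time. A minimal sketch, assuming a hypothetical `attenuate` step in the spawn path:

```python
def attenuate(parent: frozenset[str], requested: frozenset[str]) -> frozenset[str]:
    """A child gets at most what it asked for AND what its parent holds."""
    child = parent & requested
    assert child <= parent  # invariant: Parent Caps ⊇ Child Caps, by construction
    return child

root = frozenset({"data:read", "data:write", "social:write"})
research = attenuate(root, frozenset({"data:read", "infra:restart"}))  # infra:restart silently dropped
crawler = attenuate(research, frozenset({"data:read", "data:write"}))  # data:write already gone

print(sorted(research))  # ['data:read']
print(sorted(crawler))   # ['data:read']
```

Escalation requests are not errors here; they simply yield nothing, because the intersection can never produce a capability the parent lacked.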
## Human-in-the-Loop Gating
Some capabilities require human approval:
```yaml
constraints:
  require_approval:
    - social:write
    - external:post
    - infra:restart
```
Approvals are:
- Scoped: Only for the specific action
- Short-lived: 5-minute default expiration
- Signed: Human GPG signature
- Logged: Immutable audit record
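The scoped and short-lived properties are easy to model. A sketch with a plain dataclass (signature verification and audit logging omitted; the field names are assumptions, not the actual certificate format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    capability: str             # scoped: exactly one capability...
    action_id: str              # ...for one specific pending action
    issued_at: float
    ttl_seconds: float = 300.0  # short-lived: 5-minute default expiration

    def covers(self, capability: str, action_id: str, now: float) -> bool:
        return (self.capability == capability
                and self.action_id == action_id
                and now - self.issued_at <= self.ttl_seconds)

a = Approval("social:write", action_id="post-42", issued_at=1000.0)
print(a.covers("social:write", "post-42", now=1200.0))  # True: 200s old, in scope
print(a.covers("social:write", "post-42", now=1400.0))  # False: expired
print(a.covers("social:write", "post-43", now=1100.0))  # False: different action
```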
## Revocation
Four mechanisms:
- Expiration: All certificates expire (1-24 hours typical)
- Revocation Lists: Immediate invalidation
- Chain Revocation: Revoke parent → all children invalid
- Emergency Kill Switch: `substr8 acc revoke --all`
Revoking a parent automatically revokes all its sub-agents.
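Chain revocation falls out of recording who spawned whom. A sketch over a parent-pointer map (the tree mirrors the earlier delegation example; `revoke_chain` is illustrative):

```python
def revoke_chain(parent_of: dict[str, str], target: str) -> set[str]:
    """Revoke `target` plus every transitive descendant in the delegation tree."""
    revoked = {target}
    while True:  # iterate until no new descendants appear (handles arbitrary depth)
        new = {child for child, parent in parent_of.items()
               if parent in revoked and child not in revoked}
        if not new:
            return revoked
        revoked |= new

# main spawned research, which spawned crawler, which spawned summarizer
tree = {"research": "main", "crawler": "research", "summarizer": "crawler"}
print(sorted(revoke_chain(tree, "research")))  # main survives; everything below research dies
```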
## Try It
```shell
pip install substr8-cli

substr8 acc agent show ada                    # inspect an agent's declaration
substr8 acc skill check ada publish-twitter   # would this skill be allowed?
substr8 acc revoke did:key:z6Mk... --reason "compromised"
substr8 acc audit list --since 24h            # review the audit trail
```
## The Difference
| Without ACC | With ACC |
|---|---|
| Agents have ambient authority | Explicit capability grants |
| Sub-agents inherit everything | Monotonic attenuation |
| No audit trail | Complete audit logging |
| “Hope it doesn’t misbehave” | Provable authorization |
Your regulators will ask how your AI got permission to do what it did.
ACC gives you the receipts.