You gave your AI agent access to:

  • Your email (to send messages for you)
  • Your calendar (to schedule meetings)
  • Your code repos (to write software)
  • Your databases (to query data)
  • Your social media (to post content)

Now ask yourself: what stops it from using all of that at once?

Nothing.

Most AI agent frameworks have no permission model. Zero. Nada. Your agent has ambient authority — it can do whatever it has credentials for.


Why Traditional Access Control Fails

Traditional access control (RBAC, OAuth) assumes:

  • Human users who act intermittently
  • Predictable patterns (email in morning, code in afternoon)
  • Judgment about what’s appropriate

AI agents break all of these:

| Human User | AI Agent |
|------------|----------|
| 50-200 actions/day | 1,000+ actions/hour |
| Follows patterns | Emergent tool chains |
| "Should I do this?" | No inherent judgment |
| One session | Spawns sub-agents dynamically |

And then there’s delegation explosion.

Your main agent spawns a “research agent.” That spawns a “web crawler.” That spawns a “summarizer.” Each inherits the parent’s permissions.

Four levels deep, you have an agent with access to… everything you started with.
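The failure mode can be sketched in a few lines. This is an illustrative toy (the function and capability names are hypothetical, not substr8's API): naive spawning copies the parent's full capability set, so delegation depth never reduces authority.

```python
# Sketch of delegation explosion: each spawn copies the parent's
# full capability set, so nothing shrinks as the chain grows.
def naive_spawn(parent_caps):
    return set(parent_caps)  # child inherits everything

root = {"email:send", "calendar:write", "repo:push", "db:query"}
agent = root
for _ in range(4):           # four levels of delegation
    agent = naive_spawn(agent)

assert agent == root         # the fourth-level summarizer can still push code
```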


Toxic Tool Combinations

Individual permissions seem safe. Combinations are dangerous:

| Tool A | + Tool B | = Risk |
|--------|----------|--------|
| `read_file` | `send_email` | Data exfiltration |
| `web_search` | `execute_code` | RCE via prompt injection |
| `read_calendar` | `post_slack` | Information disclosure |
| `git_push` | `ssh_connect` | Supply chain attack |

Most frameworks evaluate tools individually. They can’t reason about compositions.
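Reasoning about compositions is not hard in principle. Here is a minimal sketch of a pairwise checker, using the combinations from the table above; the function name and the pair registry are illustrative, not part of any real framework.

```python
# Hypothetical sketch: flag tool combinations that are individually
# safe but dangerous together. Pairs mirror the table above.
RISKY_PAIRS = {
    frozenset({"read_file", "send_email"}): "data exfiltration",
    frozenset({"web_search", "execute_code"}): "RCE via prompt injection",
    frozenset({"read_calendar", "post_slack"}): "information disclosure",
    frozenset({"git_push", "ssh_connect"}): "supply chain attack",
}

def toxic_combinations(granted_tools):
    """Return every risky pair fully contained in the granted tool set."""
    granted = set(granted_tools)
    return {pair: risk for pair, risk in RISKY_PAIRS.items()
            if pair <= granted}

flagged = toxic_combinations(["read_file", "send_email", "web_search"])
assert frozenset({"read_file", "send_email"}) in flagged
```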


ACC: Capability-Based Security for AI

We went back to fundamentals. Like, 1966 fundamentals.

Dennis and Van Horn invented capability-based security for the exact problem we face: how do you control what a system can do when you can’t trust it to ask permission?

A capability is:

  1. An unforgeable reference to a resource
  2. Combined with specific access rights
  3. Held by an actor who can exercise those rights

The capability IS the permission. No capability, no access.
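The three properties above can be made concrete in a short sketch. This is a toy model, not substr8's actual representation: the token stands in for unforgeability, and access checks consult only the capability, with no ambient fallback.

```python
# Minimal capability sketch: an unforgeable reference to a resource,
# combined with specific rights, held by whoever exercises them.
import secrets
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Capability:
    resource: str              # e.g. "social"
    rights: frozenset          # e.g. frozenset({"write"})
    token: str = field(default_factory=lambda: secrets.token_hex(16))

def exercise(cap, resource, right):
    # The capability IS the permission: no ambient authority to fall
    # back on, just this reference and its rights.
    return cap.resource == resource and right in cap.rights

cap = Capability("social", frozenset({"write"}))
assert exercise(cap, "social", "write")
assert not exercise(cap, "social", "dm")   # no capability, no access
```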


Three Layers of ACC

1. Policy (RBAC.md)

Your organization’s role hierarchy:

| Role | Can Spawn | Max Delegation |
|------|-----------|----------------|
| owner | any | admin |
| admin | agent, worker | agent |
| agent | worker, reader | worker |
| worker | reader | reader |

2. Agent Declaration (SOUL.md)

What your agent has:

```yaml
acc:
  role: agent
  capabilities:
    - data:*
    - social:write
  denied:
    - infra:*
    - social:dm
  constraints:
    max_spawn_depth: 3
    require_approval:
      - social:write
```

3. Skill Declaration (SKILL.md)

What the skill needs:

```yaml
acc:
  required:
    - social:write
    - external:post
  denied_roles:
    - guest
    - reader
```

At runtime: `agent.capabilities ⊇ skill.required` → ALLOW, else DENY.
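The runtime gate can be sketched as a subset check with wildcard expansion, matching the `ns:*` style used in the SOUL.md example above. Function names here are illustrative, not substr8's API.

```python
# Sketch of the runtime check: a skill is allowed only if every
# required capability is covered by some grant, where "ns:*"
# covers every action in that namespace.
def holds(agent_caps, required):
    """True if some granted capability covers the required one."""
    ns, _, action = required.partition(":")
    return any(cap == required or cap == f"{ns}:*" or cap == "*"
               for cap in agent_caps)

def authorize(agent_caps, skill_required):
    return all(holds(agent_caps, req) for req in skill_required)

agent = ["data:*", "social:write"]
assert authorize(agent, ["data:read"])                          # ALLOW
assert not authorize(agent, ["social:write", "external:post"])  # DENY
```

With the agent declared above, a skill requiring `external:post` is denied even though `social:write` is held: every required capability must be covered, not just some.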


Monotonic Attenuation

The critical security property:

A sub-agent can never possess capabilities that its parent does not possess.

This is monotonic attenuation. Capabilities only decrease as you go down the delegation chain:

Parent Caps ⊇ Child Caps ⊇ Grandchild Caps

No privilege escalation. Ever.
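A sketch of spawn-with-attenuation shows why escalation is impossible by construction: a child receives the intersection of what it requests and what its parent holds. The function and capability names are illustrative.

```python
# Sketch of monotonic attenuation: a child gets at most what its
# parent has, so capability sets only shrink down the chain.
def spawn(parent_caps, requested_caps):
    return set(parent_caps) & set(requested_caps)

root = {"data:read", "social:write", "external:post"}
researcher = spawn(root, {"data:read", "social:write"})
summarizer = spawn(researcher, {"data:read", "infra:restart"})

assert summarizer <= researcher <= root   # Parent ⊇ Child ⊇ Grandchild
assert "infra:restart" not in summarizer  # requesting more grants nothing
```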


Human-in-the-Loop Gating

Some capabilities require human approval:

```yaml
constraints:
  require_approval:
    - social:write
    - external:post
    - infra:restart
```

Approvals are:

  • Scoped: Only for the specific action
  • Short-lived: 5-minute default expiration
  • Signed: Human GPG signature
  • Logged: Immutable audit record
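The scoping and expiration properties can be sketched as follows. This is a toy: signature verification is stubbed out (a real deployment would verify the human's GPG signature), and the field names are illustrative.

```python
# Sketch of a scoped, short-lived approval record.
import time

def make_approval(action, signer, ttl=300):  # 5-minute default expiration
    return {"action": action, "signer": signer,
            "expires": time.time() + ttl}

def approval_valid(approval, action):
    return (approval["action"] == action            # scoped: one action only
            and time.time() < approval["expires"])  # short-lived

grant = make_approval("social:write", "alice")
assert approval_valid(grant, "social:write")
assert not approval_valid(grant, "infra:restart")   # scope does not transfer
```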

Revocation

Four mechanisms:

  1. Expiration: All certificates expire (1-24 hours typical)
  2. Revocation Lists: Immediate invalidation
  3. Chain Revocation: Revoke parent → all children invalid
  4. Emergency Kill Switch: `substr8 acc revoke --all`

Revoking a parent automatically revokes all its sub-agents.
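Chain revocation is a walk over the delegation tree. A minimal sketch, with an illustrative tree shaped like the earlier main → research → crawler/summarizer example:

```python
# Sketch of chain revocation: revoking an agent invalidates
# its entire delegation subtree.
children = {"main": ["research"], "research": ["crawler", "summarizer"]}
revoked = set()

def revoke(agent_id):
    revoked.add(agent_id)
    for child in children.get(agent_id, []):
        revoke(child)                 # recurse into sub-agents

revoke("research")
assert revoked == {"research", "crawler", "summarizer"}
assert "main" not in revoked          # parents are unaffected
```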


Try It

```shell
pip install substr8-cli
substr8 acc agent show ada
substr8 acc skill check ada publish-twitter
substr8 acc revoke did:key:z6Mk... --reason "compromised"
substr8 acc audit list --since 24h
```

The Difference

| Without ACC | With ACC |
|-------------|----------|
| Agents have ambient authority | Explicit capability grants |
| Sub-agents inherit everything | Monotonic attenuation |
| No audit trail | Complete audit logging |
| "Hope it doesn't misbehave" | Provable authorization |

Your regulators will ask how your AI got permission to do what it did.

ACC gives you the receipts.