What Is Authorization, Really?
Let’s start with the fundamentals. Authorization is the process of determining what you’re allowed to do after the system knows who you are. Think of it this way: authentication is showing your ID at the door, proving you are who you say you are. Authorization is what happens next: the bouncer checking the list to see if you can access the VIP section, the main floor, or just the coat check.
At its core, authorization protects objects (files, databases, APIs, services) from unauthorized operations (reading, writing, deleting, executing). It’s the invisible security perimeter around every digital interaction you have. When you share a photo album with family, edit a collaborative document, or access your bank account, authorization is working behind the scenes, making split-second decisions about whether to grant or deny your request.
And here’s the thing: if authorization fails, everything fails. Get it wrong, and you either lock out legitimate users (denial of service) or, far worse, grant access to people who shouldn’t have it (security breach). It’s the foundation of digital trust.
The Traditional Playbook: How We’ve Been Doing Authorization
For decades, organizations relied on a few tried-and-true approaches to access control. Let’s walk through them.
Identity-Based Access Control (IBAC) — The Guest List Approach
The simplest model: Access Control Lists (ACLs). You literally maintain a list of who can access what. Alice can read Document A. Bob can edit Document B. Carol can delete Document C. Does your username match an entry on the list? You’re in. Doesn’t match? Denied.
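In code, the whole model fits in a dictionary lookup. Here's a minimal sketch (the users, files, and permissions are made up for illustration):

```python
# Minimal ACL sketch: each resource maps to a list of (user, permission) entries.
ACL = {
    "document_a": [("alice", "read")],
    "document_b": [("bob", "edit")],
    "document_c": [("carol", "delete")],
}

def is_allowed(user: str, action: str, resource: str) -> bool:
    """Permit only if this exact (user, action) pair appears on the resource's list."""
    return (user, action) in ACL.get(resource, [])

print(is_allowed("alice", "read", "document_a"))    # True: she's on the list
print(is_allowed("alice", "delete", "document_c"))  # False: not on the list
```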
This works beautifully for small systems, maybe 10 users, 10 files. But scale it to an enterprise with thousands of employees and millions of resources? It becomes an administrative nightmare. You’re forced to manage privileges individually for every single person accessing every single object. And here’s the killer: it’s completely static. When something changes (someone gets promoted, leaves the company, finishes a project), you have to manually update every single ACL they appear on.
The result? What security folks call “privilege creep.” Users accumulate more and more access over time because it’s just too cumbersome to revoke everything when they change roles. They end up with way more authority than their current job requires, creating a massive attack surface just waiting to be exploited.
Role-Based Access Control (RBAC) — The Org Chart Solution
RBAC was the industry’s answer to ACL chaos, and it was rightly hailed as a major leap forward. Instead of managing individual permissions, you define roles that mirror your organizational structure (“Marketing Manager,” “Senior Engineer,” “Financial Analyst”) and assign bundles of permissions to those roles.
When someone joins as a Senior Engineer, they automatically inherit all the permissions that role carries. When they leave or transfer, you update their role assignment, and the system handles the complex recalculation of their effective permissions. You’re managing 10 roles instead of 10,000 individual user permissions. It’s elegant, it scales, and it mirrors how organizations actually work.
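A minimal sketch of the idea, with invented role and permission names, shows why the only thing you ever touch is the user-to-role assignment:

```python
# RBAC sketch: permissions hang off roles; users only carry role assignments.
ROLE_PERMISSIONS = {
    "senior_engineer": {"repo:read", "repo:write", "ci:trigger"},
    "marketing_manager": {"campaigns:edit", "analytics:read"},
}

USER_ROLES = {"dana": {"senior_engineer"}}

def effective_permissions(user: str) -> set:
    """Union of the permissions granted by every role the user holds."""
    perms = set()
    for role in USER_ROLES.get(user, set()):
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

# Promoting or transferring Dana means editing USER_ROLES["dana"], nothing else.
print("repo:write" in effective_permissions("dana"))  # True
```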
But here’s where RBAC starts to crack: it struggles with nuance. Roles are typically static, mirroring the org chart. What happens when you need a policy that says, “Only supervisors who’ve completed mandatory ethics training, logging in from a corporate VPN connection after 6 PM, can approve payments over $50,000”?
With pure RBAC, you’d have to create a new role: “VPN-Ethics-Trained-After-Hours-Supervisor-Over-50K-Approver.” This is called “role explosion,” and it’s absurd. You end up creating so many granular, one-off roles that you’ve just replaced one administrative nightmare with another.
Policy-Based Access Control (PBAC) — Codifying the Rules
Before we dive into more sophisticated models, we should talk about Policy-Based Access Control, which takes a different approach: expressing authorization decisions as explicit, formal policies that can be centrally managed and evaluated.
Think of PBAC as taking all those implicit rules scattered across your organization (“contractors can’t access financial data,” “documents can only be downloaded during business hours,” “approval requires manager sign-off”) and codifying them into a machine-readable policy language.
The most prominent example is XACML (eXtensible Access Control Markup Language), which provides a standardized way to write these policies. A policy might look like: “IF subject.role = ‘physician’ AND subject.department = resource.owning_department AND action = ‘write’ THEN permit WITH obligation(send_email_to_patient).”
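Real XACML is verbose XML, but the shape of that rule is easy to sketch. Here's a rough, illustrative rendering as a policy function; the attribute names and the obligation are placeholders, not actual XACML:

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict
    resource: dict
    action: str

def evaluate(req: Request):
    """Return (decision, obligations) that the enforcement point must carry out."""
    if (req.subject.get("role") == "physician"
            and req.subject.get("department") == req.resource.get("owning_department")
            and req.action == "write"):
        return "Permit", ["send_email_to_patient"]
    return "Deny", []

decision, obligations = evaluate(Request(
    subject={"role": "physician", "department": "cardiology"},
    resource={"owning_department": "cardiology"},
    action="write",
))
print(decision, obligations)  # Permit ['send_email_to_patient']
```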
Where PBAC Shines:
- Centralized policy management: All your rules live in one place, not scattered across application code
- Separation of concerns: Policy writers (security team) don’t need to understand application internals; developers don’t need to be security experts
- Auditability: Policies are explicit, documented, and can be reviewed independently of the code
- Flexibility: You can express complex boolean logic: AND, OR, NOT conditions across multiple attributes
Why PBAC Alone Isn’t Enough for AI Agents:
The challenge with pure PBAC for AI agents is that it’s fundamentally about evaluating decisions, not about structuring the authorization model itself. PBAC is brilliant at saying “here’s how to decide,” but it doesn’t tell you what data model to use (roles? attributes? relationships?) or how to handle the dynamic, compositional nature of agent requests.
In practice, PBAC works best as a layer on top of another model:
- PBAC + RBAC: Policies that reference roles and hierarchy
- PBAC + ABAC: Policies that evaluate dynamic attributes
- PBAC + ReBAC: Policies that traverse relationship graphs
For AI agents specifically, PBAC’s rigid policy evaluation can create bottlenecks. If every agent action requires complex policy evaluation with network round-trips to attribute sources, you’re introducing latency that destroys the agent’s responsiveness. Additionally, policies need to be written ahead of time — but agents often need to perform novel combinations of actions that policy writers never anticipated.
That said, PBAC’s strength is in governance. For high-stakes agent decisions: “can this agent approve a $100K purchase?” or “can this agent access customer PII?” having explicit, auditable policies is non-negotiable. The key is combining PBAC’s governance strengths with more flexible models for the agent’s routine operations.
The OAuth 2.0 Revolution — Delegation Without Passwords
Then came OAuth 2.0, which solved a different but equally critical problem: the password anti-pattern. Before OAuth, if a third-party photo printing service needed to access your photos stored on another platform, you had to give them your username and password — your master key to everything.
The problems were brutal:
- Third parties had to store your credentials (often insecurely)
- They gained access to everything your password unlocked, not just photos
- You couldn’t revoke access to just one app without changing your password everywhere
- If that app got hacked, attackers got your credentials to every service using that password
OAuth introduced the concept of delegation through limited, temporary access tokens. Instead of handing over your password, you authenticate directly with the trusted service, which then issues the third-party app a token that says: “This app can read only the photos tagged ‘print’ for the next hour.” The token is scoped (limited permissions), time-bound (expires), and revocable (you can cancel it anytime).
This separation, with the authorization layer sitting between the client app and your resources, was revolutionary. It’s why you can safely click “Sign in with Google” without worrying about giving every website your Google password.
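For the mechanics-minded, here's roughly what exchanging an authorization code for one of those scoped, short-lived tokens looks like against a standard OAuth 2.0 token endpoint. The URL, client details, and scope below are placeholders; every real provider documents its own:

```python
import requests

# Hypothetical endpoint and client details; substitute your provider's values.
TOKEN_URL = "https://auth.example.com/oauth/token"

response = requests.post(TOKEN_URL, data={
    "grant_type": "authorization_code",   # the user already consented in a browser
    "code": "AUTH_CODE_FROM_REDIRECT",    # one-time code returned after consent
    "client_id": "photo-print-app",
    "client_secret": "CLIENT_SECRET",     # keep out of source control in practice
    "redirect_uri": "https://printapp.example.com/callback",
})
token = response.json()
# A scoped, time-bound token instead of the user's password, e.g.:
# {"access_token": "...", "scope": "photos.read", "expires_in": 3600, ...}
```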
Enter AI Agents: Why Everything Just Got Complicated
Now we’re in a new world. AI agents aren’t just reading files or executing predetermined functions: they’re making decisions, taking actions, and operating with a level of autonomy we’ve never dealt with before.
Traditional authorization models are fundamentally static. Even OAuth, brilliant as it is, assumes you can define the scope of access upfront. You know what the photo printing service needs (read photos), so you grant exactly that permission.
But what happens when an AI agent needs to:
- Analyze your email to find a meeting invite
- Check your calendar for conflicts
- Book a conference room
- Draft a summary email to attendees
- Pull relevant documents from your drive
- Maybe even make a purchase if it needs to order lunch for the meeting
Each of those actions touches different resources (email, calendar, facilities system, drive, payment system). The agent doesn’t know ahead of time exactly which resources it’ll need; that depends on the context of your request and what it discovers along the way.
The All-or-Nothing Trap
Right now, most AI agent authorization falls into what I call the “all-or-nothing trap.” Either:
**Option 1: The Agent Gets Everything.** You grant the agent broad, sweeping permissions, essentially your own authority level. It can read your email, access your files, make API calls on your behalf, execute commands. This is terrifyingly insecure. If the agent has a bug, gets prompt-injected, or its credentials leak, an attacker now has full run of your digital kingdom. Remember that Solitaire example? The simple card game running with the authority to delete your entire hard drive? Same problem, but now the “card game” is a semi-autonomous AI that might hallucinate its instructions.
**Option 2: The Agent Gets (Almost) Nothing.** You lock it down with minimal permissions, forcing it to request approval for every single action. This destroys the entire value proposition. Why have an autonomous agent if you have to babysit every API call? The friction is so high that users either abandon the agent or, worse, override the restrictions to “get things done,” creating shadow security risks.
Neither option is tenable for the long term.
What Authorization Models Actually Fit AI Agents?
This is where things get interesting. We need authorization systems that are as dynamic and context-aware as the agents themselves. Let’s look at what’s emerging.
Attribute-Based Access Control (ABAC) — The Dynamic Decision Engine
ABAC is where modern authorization starts to match the sophistication of AI agents. Instead of asking “Is this user a manager?” (RBAC), ABAC asks: “Does this request satisfy a complex, multi-factor policy based on attributes of the subject, the resource, the action, and the environment?”
Here’s how it works for our earlier example: nurse Nancy trying to access medical records:
- Subject attributes: Nancy is a nurse practitioner in the cardiology department
- Resource attributes: This is the medical record of a cardiology patient, owned by the cardiology department
- Action: Read (view-only)
- Environment conditions: Request is coming from the hospital network, during business hours
The policy says: “IF subject.role = ‘nurse practitioner’ AND subject.department = resource.owning_department THEN permit view.”
The magic? When Nancy transfers to oncology next month, you don’t touch the access rule. You don’t touch the patient records. You just update her employee file, her subject attributes. The next time she tries accessing cardiology records, the policy evaluation fails automatically. She can now access oncology records instead, because that’s where her department attribute now points.
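A small sketch of that evaluation, with illustrative attribute names, makes the point concrete: change Nancy's department attribute and nothing else needs to move.

```python
def abac_permit(subject: dict, resource: dict, action: str, env: dict) -> bool:
    """Illustrative ABAC rule for the nurse-practitioner example."""
    return (
        subject.get("role") == "nurse_practitioner"
        and subject.get("department") == resource.get("owning_department")
        and action == "read"
        and env.get("network") == "hospital"
        and 8 <= env.get("hour", 0) < 18          # roughly "business hours"
    )

nancy = {"role": "nurse_practitioner", "department": "cardiology"}
record = {"owning_department": "cardiology"}
env = {"network": "hospital", "hour": 10}

print(abac_permit(nancy, record, "read", env))   # True today
nancy["department"] = "oncology"                 # next month's transfer: one attribute
print(abac_permit(nancy, record, "read", env))   # False, with no policy or record edits
```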
Pros of ABAC for AI Agents:
- Dynamic by nature: Decisions happen in real-time based on current attribute values, perfect for agents whose needs change request-by-request
- Fine-grained control: You can write incredibly specific policies like “permit only if the agent is operating on behalf of a user with active NDA status, requesting data classified below ‘confidential,’ during business hours, from a corporate IP”
- Scales elegantly: New agents or users just need the right attributes; no need to update every policy
- External-user friendly: You can grant access based on “what” rather than “who”; the agent doesn’t need to be pre-registered if it arrives with validated attributes
Cons of ABAC for AI Agents:
- Complexity explosion: You’re managing attributes across multiple systems (HR, security, asset management). Those attributes need to be authoritative, current, and trustworthy
- Before-the-fact auditing becomes nearly impossible: With traditional ACLs, you can list “who has access to File X.” With ABAC, access depends on dynamic attribute evaluation; you’d have to simulate every possible request to know
- Distributed trust chain: The root of trust is no longer the resource owner, but scattered across multiple attribute authorities. If the HR system that certifies someone’s clearance level is stale or compromised, your entire security posture crumbles
- Performance overhead: Every decision requires gathering and evaluating attributes from multiple sources in real-time
Relationship-Based Access Control (ReBAC) — Google’s Graph Approach
This is where things get really interesting for complex, interconnected systems. ReBAC, pioneered by Google in their Zanzibar system, takes a fundamentally different approach: it models authorization as a graph of relationships between users, resources, and groups.
Instead of asking “Does this user have permission?” or “Does this user have the right attributes?”, ReBAC asks: “Is there a path in the relationship graph that connects this user to this resource?”
Here’s how it works. Instead of storing giant ACLs or evaluating complex attribute policies, ReBAC models access as simple relationship tuples:
- doc:proposal#viewer@user:alice (Alice can view the proposal)
- doc:proposal#viewer@group:eng#member (anyone who’s a member of the eng group can view the proposal)
- doc:proposal#viewer@folder:secret#viewer (the proposal inherits its viewers from its parent folder, folder:secret)
The beauty is in how these relationships compose. You can express: “The viewers of this document are anyone who is an editor of this document OR anyone who is a member of the parent folder’s viewer group OR anyone in a group that has been granted viewer access.”
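A toy version of that traversal fits in a few lines. Real systems like Zanzibar add namespace configuration, userset rewrites, and aggressive caching; this sketch (reusing the tuples above, plus a couple of invented users for illustration) just shows the recursive “is there a path?” question:

```python
# Each tuple is (object, relation, user); "user" may itself be a userset like
# ("group:eng", "member"), which has to be expanded recursively.
TUPLES = {
    ("doc:proposal", "viewer", "user:alice"),
    ("doc:proposal", "viewer", ("group:eng", "member")),
    ("group:eng", "member", "user:bob"),
    ("doc:proposal", "viewer", ("folder:secret", "viewer")),
    ("folder:secret", "viewer", "user:carol"),
}

def check(obj: str, relation: str, user: str) -> bool:
    """Is there a path through the relationship graph from (obj, relation) to user?"""
    for o, r, u in TUPLES:
        if o != obj or r != relation:
            continue
        if u == user:                          # direct relationship
            return True
        if isinstance(u, tuple) and check(u[0], u[1], user):
            return True                        # indirect, via a userset
    return False

print(check("doc:proposal", "viewer", "user:bob"))    # True via group:eng#member
print(check("doc:proposal", "viewer", "user:carol"))  # True via folder:secret#viewer
print(check("doc:proposal", "viewer", "user:dave"))   # False: no path exists
```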
For AI agents operating in complex organizational structures, this is incredibly powerful. The agent needs to:
- Check permissions on a document? Traverse the relationship graph
- Understand who can see meeting notes? Follow the parent folder relationships
- Determine if it can act on behalf of a team? Check group membership chains
Google’s Zanzibar implementation handles trillions of these relationship tuples and processes millions of permission checks per second with sub-10-millisecond latency. That’s the scale needed when your AI agent might need to check permissions across Drive, Gmail, Calendar, and YouTube simultaneously.
The key innovation for agents is what Zanzibar calls consistency tokens (Zookies). Before an agent modifies content — say, editing a document — it requests a Zookie from the authorization system. That Zookie encodes a timestamp. When the agent later accesses that content, it sends the Zookie back, essentially saying: “Evaluate my permissions using a snapshot at least as fresh as this timestamp.”
This guarantees the agent never sees stale permissions relative to the content it’s accessing. If you remove Bob’s access at time T1, then share new confidential content at time T2, the system ensures the T1 action is processed before T2 — even across different continents and services. This prevents the “new enemy problem” where authorization lags dangerously behind reality.
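Here's a deliberately tiny model of that idea. The class and method names are invented for illustration and bear no resemblance to the actual Zanzibar API; the point is only how the token pins permission checks to a snapshot no older than the content it protects:

```python
class ToyAuthz:
    """Toy versioned ACL store; names are illustrative, not the Zanzibar API."""

    def __init__(self):
        self._history = [(0, frozenset())]      # (timestamp, acl) snapshots

    def write_acl(self, acl) -> int:
        ts = self._history[-1][0] + 1
        self._history.append((ts, frozenset(acl)))
        return ts

    def zookie(self) -> int:
        """Consistency token: the latest snapshot timestamp, stored with the content."""
        return self._history[-1][0]

    def check(self, user: str, zookie: int) -> bool:
        """Evaluate against a snapshot at least as fresh as the zookie."""
        fresh = [acl for ts, acl in self._history if ts >= zookie]
        return user in (fresh[0] if fresh else self._history[-1][1])

authz = ToyAuthz()
authz.write_acl({"alice", "bob"})    # initially, both Alice and Bob can view
authz.write_acl({"alice"})           # time T1: Bob's access is revoked
z = authz.zookie()                   # time T2: agent shares new confidential content
                                     #          and stores this zookie alongside it
print(authz.check("bob", z))         # False: no snapshot older than T1 can be
                                     #        consulted, so Bob never sees the new content
```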
Pros of ReBAC for AI Agents:
- Naturally models organizational reality: Companies aren’t flat, they’re webs of teams, projects, departments, and reporting structures. ReBAC captures this directly
- Inheritance is built-in: When you add someone to a group, they automatically get access to everything that group can access. Perfect for agents that need to “act as” a team
- Compositional: You can build complex policies by composing simple relationships. “Editors inherit from owners” + “Viewers inherit from editors” = automatic role hierarchy
- Massive scale: Google proved you can do this at planet-scale with strong consistency guarantees
- Interoperability: Agents working across multiple services (email, calendar, files) can use one unified relationship graph
Cons of ReBAC for AI Agents:
- Infrastructure complexity: Running a globally distributed relationship graph with strong consistency isn’t trivial. You need sophisticated database systems (like Spanner), specialized indexing (like Leopard), and complex consistency protocols
- The “before-the-fact audit” problem: As with ABAC, asking “who can access X?” requires traversing potentially millions of relationship paths. You can’t just look at a list
- Debugging is hard: When access is denied, figuring out why requires understanding which relationship in the chain failed. “Why can’t the agent access this file?” might require traversing folder hierarchies, group memberships, and delegation chains
- Relationship modeling is an art: Getting the relationship types right (owner, editor, viewer, member, parent) requires careful design. Bad models lead to permission leaks or denial-of-service
Capability-Based Security — The Key-Centric Approach
This is where things get really elegant for AI agents. Instead of asking “Does this agent have permission?”, you make the permission itself the unforgeable token the agent must possess.
Think of it this way: Traditional systems are name-centric. You say “access file foo.txt,” and the system looks you up in some global directory to see if you’re allowed. Capability systems are key-centric. You present a cryptographic token (the capability) that is both the designation (what you’re accessing) and the authority (the right to access it) bundled together.
When you want to give an AI agent access to a resource (call it Carol), you don’t update some central ACL. You create a small security-enforcing program, a caretaker, that sits in front of Carol. The caretaker gives you two things back:
- A reference to itself (Carol2), which you hand to the agent
- A secret “revocation gate” that only you keep
When the agent sends messages to Carol2, the caretaker checks an internal switch. If it’s on, messages get forwarded to Carol. If it’s off, they’re dropped. The beautiful part? The agent’s permission to talk to Carol2 never changed; it still has that reference. But your ability to cut off its authority is instant. You call the revocation gate’s disable method, and the agent’s access to Carol evaporates, even though it still “has the key.”
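The caretaker pattern is easy to sketch in ordinary code, though a real capability system needs the runtime itself to enforce that references are the only way to reach Carol. A minimal, illustrative version:

```python
class Carol:
    """The sensitive resource the agent ultimately wants to reach."""
    def handle(self, message: str) -> str:
        return f"Carol processed: {message}"

class Caretaker:
    """Forwards messages to its target only while the internal switch is on."""
    def __init__(self, target):
        self._target = target
        self._enabled = True

    def handle(self, message: str) -> str:
        if not self._enabled:
            raise PermissionError("capability revoked")
        return self._target.handle(message)

def grant(target):
    """Return the forwarder to hand to the agent plus the revocation gate you keep."""
    caretaker = Caretaker(target)
    def revoke():
        caretaker._enabled = False
    return caretaker, revoke

carol = Carol()
carol2, revoke = grant(carol)

print(carol2.handle("book the meeting room"))   # works while the switch is on

revoke()                                        # you flip the switch
try:
    carol2.handle("order lunch")
except PermissionError as err:
    print("agent still holds the reference, but:", err)
```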
Pros for AI Agents:
- Principle of least authority by default: The agent is born with zero capabilities. It only gets the specific references you explicitly hand it, no ambient authority to stumble into
- Revocation is trivial: No central registry to update, no complex ACL modifications. Just flip the switch on the caretaker
- Confinement guarantees: Combined with loader isolation (the agent can’t secretly build backdoor channels), you can prove information only flows one way, perfect for handling sensitive data
- Scales to zero-trust environments: Every reference is independently managed; compromise of one doesn’t cascade
Cons for AI Agents:
- Requires ground-up runtime support: You can’t bolt this onto traditional systems. The entire environment — language runtime, operating system — must enforce that references are the only way to cause effects
- Complexity in delegation: If the agent needs to delegate its authority to a sub-process or another agent, you need careful capability hygiene to avoid accidentally amplifying authority
- Legacy system integration is brutal: Most of your infrastructure uses name-centric access (file paths, URLs). Bridging capability security with legacy systems requires careful proxy design
The Emerging Pattern: Hybrid, Context-Aware Authorization
The future for AI agent authorization isn’t picking one model, it’s thoughtfully combining them.
Imagine an architecture where:
- Capability-based security governs the agent’s initial access: it gets specific, revocable references to resources, nothing more
- ABAC policies evaluate fine-grained, contextual rules at runtime, checking that the agent is operating during business hours, on behalf of a user with proper clearance, accessing data below a certain sensitivity threshold
- ReBAC relationship graphs manage complex inheritance and organizational hierarchies, determining which groups the agent can operate on behalf of, which folders it can access through parent relationships
- OAuth 2.0 patterns handle delegation to external services: the agent gets time-limited tokens to access third-party APIs
- PBAC policies provide the governance layer: explicit, auditable rules for high-stakes decisions that require formal approval
The policy might look like:
IF agent.purpose = "meeting_scheduler"
AND user.has_attribute("calendar_delegation_approved")
AND relationship_exists(user, "member", "scheduling_team")
AND resource.classification <= user.clearance_level
AND environment.time IN business_hours
AND environment.network = "corporate_vpn"
THEN grant_capability(calendar.read, calendar.write)
WITH obligation(audit_log.record(agent_id, action, timestamp))
The agent gets a capability (unforgeable reference) but only after the ABAC policy is satisfied, the ReBAC relationship is verified, the OAuth-style time constraints are met, and only with mandatory audit obligations enforced.
What This Means for You
If you’re building with AI agents today:
Don’t default to giving the agent your full authority. That’s the Solitaire-running-as-admin antipattern all over again. Instead:
- Start with zero permissions; grant incrementally
- Use short-lived, scoped tokens (OAuth patterns)
- Build in revocation mechanisms from day one (caretaker patterns)
- Require explicit user approval for sensitive operations
- Log everything; attribution matters
Think beyond roles; think relationships and attributes. Agent needs change request-by-request. Static roles won’t cut it. Design your policies around:
- What the agent is doing (purpose/task)
- Who it’s acting on behalf of (user attributes)
- What relationships exist (group membership, organizational hierarchy)
- What it’s trying to access (resource sensitivity)
- When and where (environmental context)
Embrace ReBAC for organizational complexity. If your agents need to navigate team structures, folder hierarchies, or delegation chains, relationship graphs are your friend. They naturally model how organizations actually work.
Plan for dynamic trust chains. Your authorization system isn’t just checking one thing anymore. It’s coordinating attribute authorities (HR, security, compliance), relationship graphs, resource metadata, and environmental sensors. Document your trust assumptions. Know where your data comes from and how fresh it is.
Use PBAC for governance, not routine operations. Explicit policies are essential for auditing high-stakes decisions. But don’t make every agent action go through heavyweight policy evaluation — you’ll kill performance. Reserve PBAC for the decisions that matter most.
Embrace the “before-the-fact audit” challenge. Yes, it’s hard to answer “who can access X?” in an ABAC or ReBAC world. But that’s the price of dynamic, context-aware security. Invest in simulation tools that can answer “who could access X under what conditions?” Build audit trails that capture not just “what happened” but “why was it allowed?”
The Bottom Line
Authorization for AI agents isn’t a solved problem, it’s an evolving one. The traditional models gave us important pieces: OAuth taught us delegation and scoping, RBAC taught us organizational structure, PBAC taught us explicit governance, ABAC taught us contextual decisions, ReBAC taught us relationship composition, and capabilities taught us unforgeable authority.
The challenge now is composing these pieces thoughtfully. AI agents are too powerful and too autonomous to rely on all-or-nothing access. We need authorization systems that match their dynamic nature, granting just enough authority, just in time, with continuous verification and effortless revocation.
The good news? The building blocks exist. Google has proven ReBAC works at planetary scale. ABAC systems handle millions of contextual decisions daily. Capability-based systems provide mathematical guarantees about confinement. The hard part is architecting them together in ways that serve both security and usability.
Because at the end of the day, the goal isn’t to lock everything down or throw the doors wide open.
It’s to enable the right agent to do the right thing at the right time and nothing more.