As AI agents become more autonomous and capable, their role is shifting from passive assistants to proactive actors. Today’s large language models (LLMs) don’t just generate text—they execute tasks, access APIs, modify databases, and even control infrastructure.

AI agents are taking actions that were once reserved strictly for human users, whether it’s scheduling a meeting, deploying a service, or accessing a sensitive document.

When agents operate without guardrails, they can inadvertently make harmful or unauthorized decisions. A single hallucinated command, misunderstood prompt, or overly broad permission can result in data leaks, compliance violations, or broken systems.

That’s why integrating human-in-the-loop (HITL) workflows is essential for agent safety and accountability.

Permit.io’s Access Request MCP is a framework that lets AI agents request sensitive actions while humans remain the final decision-makers.

Built on Permit.io and integrated into popular agent frameworks like LangChain and LangGraph, this system lets you insert approval workflows directly into your LLM-powered applications.

In this tutorial, you’ll learn:

- Why delegating AI permissions to humans is critical
- What Permit.io’s Access Request MCP is and how it works
- How to model access and approval policies in the Permit.io dashboard
- How to run the Permit MCP server and connect it to a LangGraph + LangChain agent
- How to add human-in-the-loop approvals with LangGraph’s interrupt()

Before we dive into our demo application and implementation steps, let’s briefly discuss the importance of delegating AI permissions to humans.

Why Delegating AI Permissions to Humans Is Critical

AI agents are powerful, but, as we all know, they’re not infallible.

They follow instructions, but they don’t understand context like humans do. They generate responses, but they can’t judge consequences. And when those agents are integrated into real systems—banking tools, internal dashboards, infrastructure controls—that’s a dangerous gap.

In this context, what can go wrong is clear: a hallucinated command, a misread prompt, or an overly broad permission can trigger actions no human ever intended.

Delegation is the solution.

Instead of giving agents unchecked power, we give them a protocol: “You may ask, but a human decides.”

By introducing human-in-the-loop (HITL) approval at key decision points, you get oversight over sensitive actions, a clear audit trail of who approved what, and agents that stay within the boundaries humans set for them.

It’s the difference between an agent doing something and an agent requesting to do something.

And it’s exactly what Permit.io’s Access Request MCP enables.

Permit.io’s Access Request MCP

The Access Request MCP is a core part of Permit.io’s MCP server—an implementation of the Model Context Protocol (MCP) that gives AI agents safe, policy-aware access to tools and resources.

Think of it as a bridge between LLMs that want to act and humans who need control.

What it does

Permit’s Access Request MCP enables AI agents to request access to restricted resources, submit approval requests for sensitive operations, and defer the final decision to a human reviewer.

Behind the scenes, it uses Permit.io’s authorization capabilities—ReBAC policies, Permit Elements for access requests and operation approvals, and real-time policy evaluation through a PDP.

Plug-and-play with LangChain and LangGraph

Permit’s MCP server integrates directly with the LangChain MCP Adapters and the LangGraph ecosystem, so its access request and approval flows can be loaded into your agent as ordinary tools.

It’s the easiest way to inject human judgment into AI behavior—no custom backend needed.

Now that we understand what the Access Request MCP does and why it matters, let’s get into our demo application.

What We’ll Build - Demo Application Overview

In this tutorial, we’ll build a real-time approval workflow in which an AI agent can request access or perform sensitive actions, but only a human can approve them.

Scenario: Family Food Ordering System

To see how Permit’s MCP can help enable an HITL workflow in a user application, we’ll model a food ordering system for a family: children can ask an AI agent to order dishes or request access to restaurants, while parents review and approve (or deny) those requests.

This use case reflects a common pattern: “Agents can help, but humans decide.”

Tech Stack

We’ll build this HITL-enabled agent using:

- Permit.io for policy modeling, access requests, and approval flows
- The Permit MCP server, which exposes those flows as tools
- LangGraph and the LangChain MCP Adapters to build the agent
- Google Gemini (gemini-2.0-flash) as the LLM
- Python with uv for project setup

You’ll end up with a working system where agents can collaborate with humans to ensure safe, intentional behavior—using real policies, real tools, and real-time approvals.

A repository with the full code for this application is available here.

Step-by-Step Tutorial

In this section, we’ll walk through how to implement a fully functional human-in-the-loop agent system using Permit.io and LangGraph.

We’ll cover:

- Modeling permissions and approval flows in the Permit.io dashboard
- Setting up and running the Permit MCP server
- Building a LangGraph + LangChain MCP client powered by Gemini
- Adding human-in-the-loop approvals with interrupt()

Let’s get into it -

Modeling Permissions with Permit

We’ll start by defining your system’s access rules inside the Permit.io dashboard. This lets you model which users can do what, and what actions should trigger an approval flow.

Create a ReBAC Resource

Navigate to the Policy page from the sidebar and create a new ReBAC resource for restaurants (key: restaurants), along with the instance roles your policy needs—for this tutorial, a role such as child-can-order.

Now, go to the Policy Editor tab and assign each role the permissions it should have on the restaurants resource.

Set Up Permit Elements

Go to the Elements tab from the sidebar. In the User Management section, click Create Element and create an access request element for restaurants; its config ID (restaurant-requests) is referenced later in the MCP server’s .env file.

Add Operation Approval Elements

Next, create an operation approval element for dish requests; its config ID (dish-requests) is also referenced in the .env file.

Add Test Users & Resource Instances

Finally, add a couple of test users (the demo queries later in this tutorial use henry and joe) and a restaurant resource instance such as Pizza Palace to test against.

Once Permit is configured, we’re ready to clone the MCP server and connect your policies to a working agent.

Setting Up the Permit MCP Server

With your policies modeled in the Permit dashboard, it’s time to bring them to life by setting up the Permit MCP server—a local service that exposes your access request and approval flows as tools that an AI agent can use.

Clone and Install the MCP Server

Start by cloning the MCP server repository and setting up a virtual environment.

git clone https://github.com/permitio/permit-mcp
cd permit-mcp

# Create virtual environment, activate it and install dependencies
uv venv
source .venv/bin/activate  # For Windows: .venv\Scripts\activate
uv pip install -e .

Add Environment Configuration

Create a .env file at the root of the project based on the provided .env.example, and populate it with the correct values from your Permit setup:

RESOURCE_KEY=restaurants
ACCESS_ELEMENTS_CONFIG_ID=restaurant-requests
OPERATION_ELEMENTS_CONFIG_ID=dish-requests
TENANT= # e.g. default
LOCAL_PDP_URL=
PERMIT_API_KEY=
PROJECT_ID=
ENV_ID=

You can retrieve these values from your Permit.io dashboard and from the elements you configured earlier.

⚠️ Note: We are using Permit’s Local PDP (Policy Decision Point) for this tutorial to support ReBAC evaluation and low-latency, offline testing.

Start the Server

With everything in place, you can now run the MCP server locally:

uv run -m src.permit_mcp

Once the server is running, it will expose your configured Permit Elements (access request, approval management, etc.) as tools the agent can call through the MCP protocol.

Creating a LangGraph + LangChain MCP Client

Now that the Permit MCP server is up and running, we’ll build an AI agent client that can interact with it. This client will connect to the MCP server, load its tools, and let a Gemini-powered LangGraph agent call them on a user’s behalf.

Let’s connect the dots.

Install Required Dependencies

Inside your MCP project directory, install the necessary packages:

uv add langchain-mcp-adapters langgraph langchain-google-genai

This gives you the LangChain MCP Adapters (to load MCP tools as LangChain tools), LangGraph (to build the agent workflow), and the Google Generative AI integration (to use Gemini as the LLM).

Add Google API Key

You’ll need an API key from Google AI Studio to use Gemini.

Add the key to your .env file:

GOOGLE_API_KEY=your-key-here

Build the MCP Client

Create a file named client.py in your project root.

We’ll break this file down into logical blocks, starting with the imports and LLM setup:

import os
from typing import Annotated, TypedDict

from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph.message import add_messages

load_dotenv()

global_llm_with_tools = None

llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    google_api_key=os.getenv('GOOGLE_API_KEY')
)

Define the shared agent state:

class State(TypedDict):
    messages: Annotated[list, add_messages]
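
Next, define the LLM node, its conditional edge, and the graph setup. The following is a minimal sketch based on how these pieces are used later in the tutorial (the repository’s version may differ in detail):

from typing import Literal

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import ToolNode


async def call_llm(state: State):
    """Call the tool-bound LLM with the conversation so far."""
    response = await global_llm_with_tools.ainvoke(state["messages"])
    return {"messages": [response]}


def route_after_llm(state) -> Literal[END, "run_tool"]:
    """Route to the tool node if the LLM made a tool call, otherwise end the graph."""
    return END if len(state["messages"][-1].tool_calls) == 0 else "run_tool"


async def setup_graph(tools):
    """Build and compile the graph with an in-memory checkpointer."""
    builder = StateGraph(State)
    builder.add_node(call_llm)
    builder.add_node("run_tool", ToolNode(tools))

    builder.add_edge(START, "call_llm")
    builder.add_conditional_edges("call_llm", route_after_llm)
    builder.add_edge("run_tool", "call_llm")

    return builder.compile(checkpointer=MemorySaver())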

In the above code, we define an LLM node and its conditional edge, which routes to the run_tool node if the state’s last message contains a tool call, or ends the graph otherwise. We also define a function that sets up and compiles the graph with an in-memory checkpointer.

Next, add code to stream responses from the graph and run an interactive chat loop, which keeps going until it’s explicitly exited.
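
A minimal sketch of that streaming helper and chat loop, including the MCP connection, is shown below. The exact langchain-mcp-adapters client API and the command used to spawn the Permit MCP server are assumptions—adjust them to your installed versions and setup:

import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient


async def stream_responses(graph, config, inputs):
    """Stream graph output and print each new message as it arrives."""
    async for event in graph.astream(inputs, config, stream_mode="values"):
        if "messages" in event:
            event["messages"][-1].pretty_print()


async def main():
    global global_llm_with_tools

    # Spawn the Permit MCP server over stdio and load its tools.
    client = MultiServerMCPClient({
        "permit": {
            "command": "python",
            "args": ["-m", "src.permit_mcp"],
            "transport": "stdio",
        }
    })
    tools = await client.get_tools()

    global_llm_with_tools = llm.bind_tools(tools)
    graph = await setup_graph(tools)
    config = {"configurable": {"thread_id": "1"}}

    # Interactive chat loop: runs until the user explicitly exits.
    while True:
        user_input = input("You: ")
        if user_input.lower() in ("quit", "exit"):
            break
        await stream_responses(
            graph, config, {"messages": [{"role": "user", "content": user_input}]}
        )


if __name__ == "__main__":
    asyncio.run(main())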

Once you’ve saved everything, start the client:

uv run client.py

After running, a new image file called workflow_graph.png will be created, which shows the graph.
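
If your client doesn’t already write this file, one way to generate it (assuming graph is the compiled LangGraph object) is:

with open("workflow_graph.png", "wb") as f:
    f.write(graph.get_graph().draw_mermaid_png())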

With everything set up, we can now specify queries like this:

Query: My user id is henry, request access to pizza palace with the reason: I am now 18, and the role child-can-order
Query: My user id is joe, list all access requests

Your agent is now able to call MCP tools dynamically!

Adding Human-in-the-Loop with interrupt()

With your LangGraph-powered MCP client up and running, Permit tools can now be invoked automatically. But what happens when the action is sensitive, like granting access to a restricted resource or approving a high-risk operation?

That’s where LangGraph’s interrupt() becomes useful.

We’ll now add a human approval node to intercept and pause the workflow whenever the agent tries to invoke critical tools like approve_access_request and approve_operation_approval.

A human will be asked to manually approve or deny the tool call before the agent proceeds.

Define the Human Review Node

At the top of your client.py file (before setup_graph), add the following function:

# Add these imports to the top of client.py if they aren't there already:
from typing import Literal

from langgraph.types import Command, interrupt


async def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]:
    """Handle human review process."""
    last_message = state["messages"][-1]
    tool_call = last_message.tool_calls[-1]

    high_risk_tools = ['approve_access_request', 'approve_operation_approval']
    if tool_call["name"] not in high_risk_tools:
        return Command(goto="run_tool")

    human_review = interrupt({
        "question": "Do you approve this tool call? (yes/no)",
        "tool_call": tool_call,
    })

    review_action = human_review["action"]

    if review_action == "yes":
        return Command(goto="run_tool")

    return Command(goto="call_llm", update={"messages": [{
        "role": "tool",
        "content": f"The user declined your request to execute the {tool_call.get('name', 'Unknown')} tool, with arguments {tool_call.get('args', 'N/A')}",
        "name": tool_call["name"],
        "tool_call_id": tool_call["id"],
    }]})

This node checks whether the tool being called is considered “high risk.” If it is, the graph is interrupted with a prompt asking for human confirmation.

Update Graph Routing

Modify the route_after_llm function so that tool calls are routed to the human review node instead of being executed immediately:

def route_after_llm(state) -> Literal[END, "human_review_node"]:
    """Route logic after LLM processing."""
    return END if len(state["messages"][-1].tool_calls) == 0 else "human_review_node"

Wire in the HITL Node

Update the setup_graph function to add the human_review_node as a node in the graph:

async def setup_graph(tools):
    builder = StateGraph(State)
    run_tool = ToolNode(tools)
    builder.add_node(call_llm)
    builder.add_node('run_tool', run_tool)
    builder.add_node(human_review_node)  # Add the interrupt node here

    builder.add_edge(START, "call_llm")
    builder.add_conditional_edges("call_llm", route_after_llm)
    builder.add_edge("run_tool", "call_llm")

    memory = MemorySaver()
    return builder.compile(checkpointer=memory)

Handle Human Input During Runtime

Finally, enhance your stream_responses function to detect when the graph is interrupted, prompt for a decision, and resume execution with the human’s input using Command(resume={"action": user_input}).
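
A sketch of what the updated function might look like is below; exactly how you read the interrupt payload from the graph state can vary between LangGraph versions:

async def stream_responses(graph, config, inputs):
    """Stream graph output, pausing for human approval whenever the graph is interrupted."""
    async for event in graph.astream(inputs, config, stream_mode="values"):
        if "messages" in event:
            event["messages"][-1].pretty_print()

    # If the graph paused at human_review_node, ask the human and resume.
    state = await graph.aget_state(config)
    while state.next:
        interrupt_value = state.tasks[0].interrupts[0].value
        print(interrupt_value["question"])
        print(interrupt_value["tool_call"])
        decision = input("Approve? (yes/no): ")

        async for event in graph.astream(
            Command(resume={"action": decision}), config, stream_mode="values"
        ):
            if "messages" in event:
                event["messages"][-1].pretty_print()
        state = await graph.aget_state(config)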

After running the client, your graph diagram (workflow_graph.png) will now include a human review node between the LLM and tool execution stages:

This ensures that you remain in control whenever the agent tries to make a decision that could alter permissions or bypass restrictions.

With this, you've successfully added human oversight to your AI agent, without rewriting your tools or backend logic.

Conclusion

In this tutorial, we built a secure, human-aware AI agent using Permit.io’s Access Request MCP, LangGraph, and LangChain MCP Adapters.

Instead of letting the agent operate unchecked, we gave it the power to request access and defer critical decisions to human users, just like a responsible team member would.

We covered:

- Modeling ReBAC policies and approval elements in the Permit.io dashboard
- Running the Permit MCP server and exposing its flows as agent tools
- Building a LangGraph + LangChain MCP client powered by Gemini
- Pausing the graph for human approval with interrupt() before high-risk tool calls

Want to see the full demo in action? Check out the GitHub Repo.

Further Reading -