Why HITL Is a First-Class LangGraph Feature
Most agent frameworks treat human review as an afterthought — a wrapper you bolt on. LangGraph builds it into the graph execution model. You can pause at any edge, inspect the full agent state, modify it, and resume — across sessions, across servers, across days if needed.
This works because LangGraph's checkpointing system persists graph state to a database. A paused graph is just a row in Postgres — it will be there when the human reviews it an hour later.
The Three HITL Approaches
| Approach | How it works | Best for |
|---|---|---|
| interrupt_before | Pause before a named node executes | Review what the agent is about to do |
| interrupt_after | Pause after a named node completes | Review what the agent just did before continuing |
| interrupt() inside a node | Pause mid-node and wait for human input | Ask the human a specific question during execution |
interrupt_before: Approve Before Action
The most common pattern: pause before a tool-calling node so a human can approve the action before it executes.
```python
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    # In production: actually send the email
    return f"Email sent to {to}"

llm = ChatAnthropic(model="claude-haiku-4-5-20251001")
checkpointer = MemorySaver()  # Use PostgresSaver in production

agent = create_react_agent(
    llm,
    tools=[send_email],
    checkpointer=checkpointer,
    interrupt_before=["tools"],  # pause before ANY tool call
)

config = {"configurable": {"thread_id": "thread-001"}}
input_msg = {"messages": [HumanMessage(content="Send a welcome email to alice@example.com")]}

# Run until the interrupt
result = agent.invoke(input_msg, config=config)
print("Agent paused. Pending tool call:")
print(result["messages"][-1])  # The AI message with tool_calls
```
The agent has stopped just before executing send_email. The state is checkpointed. Now resume:
```python
# Human reviews and approves — resume by passing None
final_result = agent.invoke(None, config=config)
print(final_result["messages"][-1].content)

# Or reject by updating the state before resuming
# (see the state editing section below)
```
Passing None as the input to invoke() resumes from the last checkpoint. The graph continues exactly where it paused.

interrupt() Inside a Node: Ask a Question
The newer interrupt() function (LangGraph 0.2+) lets you pause mid-node and surface a specific question to the human:
```python
from typing import TypedDict, Annotated
import operator

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command

class State(TypedDict):
    messages: Annotated[list, operator.add]
    approved: bool

def review_node(state: State) -> dict:
    """Ask the human to approve or reject before continuing."""
    last_msg = state["messages"][-1]
    # Pause and surface this question to whoever is calling the graph.
    # Note: on resume, this node re-runs from the top; interrupt() then
    # returns the resume value instead of pausing again.
    human_decision = interrupt({
        "question": "Do you approve this action?",
        "context": last_msg["content"],
        "options": ["approve", "reject"],
    })
    return {"approved": human_decision == "approve"}

def action_node(state: State) -> dict:
    if not state["approved"]:
        return {"messages": [{"role": "assistant", "content": "Action cancelled by reviewer."}]}
    # ... perform the action ...
    return {"messages": [{"role": "assistant", "content": "Action completed."}]}

checkpointer = MemorySaver()
graph = StateGraph(State)
graph.add_node("review", review_node)
graph.add_node("action", action_node)
graph.set_entry_point("review")
graph.add_edge("review", "action")
graph.add_edge("action", END)
app = graph.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "hitl-001"}}

# First invocation — runs until interrupt()
result = app.invoke(
    {"messages": [{"role": "user", "content": "Delete all draft emails"}], "approved": False},
    config=config,
)

# The result carries the interrupt payload under "__interrupt__" — show it to the human
print(result["__interrupt__"])

# Human responds — resume with Command
final = app.invoke(
    Command(resume="approve"),  # or "reject"
    config=config,
)
print(final["messages"][-1]["content"])
```
Editing State Before Resuming
A powerful HITL pattern: the human not only approves/rejects but edits the agent's plan before it continues. Use update_state():
```python
from langchain_core.messages import AIMessage

# Agent paused before the tool call. Inspect what it wants to do:
state_snapshot = agent.get_state(config)
pending_tool_call = state_snapshot.values["messages"][-1]  # AI message with tool_calls
print("Pending:", pending_tool_call.tool_calls)
# e.g. [{'name': 'send_email', 'args': {'to': 'wrong@example.com', ...}}]

# Edit: fix the recipient before resuming. Reusing the original message id
# makes the messages reducer REPLACE the old message instead of appending.
corrected_message = AIMessage(
    content=pending_tool_call.content,
    id=pending_tool_call.id,
    tool_calls=[{
        "id": pending_tool_call.tool_calls[0]["id"],
        "name": "send_email",
        "args": {"to": "correct@example.com", "subject": "Welcome", "body": "Hello!"},
    }],
)

# Update the checkpointed state
agent.update_state(
    config,
    {"messages": [corrected_message]},
    as_node="agent",  # pretend this came from the agent node
)

# Now resume — it will execute the corrected tool call
final = agent.invoke(None, config=config)
```
Production Checkpointing with PostgreSQL
MemorySaver is for development only — it is in-memory and disappears on restart. Production HITL requires a persistent checkpointer:
```python
import psycopg
from psycopg.rows import dict_row
from langgraph.checkpoint.postgres import PostgresSaver
from langgraph.prebuilt import create_react_agent

DB_URL = "postgresql://user:password@localhost:5432/langgraph"

# The connection must stay open for as long as the agent uses the checkpointer
with psycopg.connect(DB_URL, autocommit=True, row_factory=dict_row) as conn:
    checkpointer = PostgresSaver(conn)
    checkpointer.setup()  # creates required tables on first run

    agent = create_react_agent(
        llm,
        tools=[send_email],
        checkpointer=checkpointer,
        interrupt_before=["tools"],
    )
```
PostgresSaver requires psycopg 3 (`pip install "psycopg[binary]"`). It is not compatible with psycopg2. If you see import errors, check which version is installed.

Building a Review Queue
A practical HITL system needs a queue so reviewers can see pending tasks and act on them. A simple pattern uses the checkpointer's list() method to enumerate checkpoints across threads, then the graph's get_state() to find the ones paused at an interrupt:
```python
from fastapi import FastAPI

# Assumes `agent` and `checkpointer` from the previous section
api = FastAPI()

# List all threads currently paused at an interrupt
@api.get("/review-queue")
async def get_review_queue():
    pending = []
    seen = set()
    # checkpointer.list(None) yields checkpoint tuples across all threads
    for tup in checkpointer.list(None):
        thread_id = tup.config["configurable"]["thread_id"]
        if thread_id in seen:
            continue
        seen.add(thread_id)
        state = agent.get_state({"configurable": {"thread_id": thread_id}})
        if not state.next:  # thread is not paused — nothing to review
            continue
        pending.append({
            "thread_id": thread_id,
            "pending_action": state.values["messages"][-1].tool_calls,
            "created_at": tup.checkpoint["ts"],
        })
    return pending

# Approve a pending thread
@api.post("/review/{thread_id}/approve")
async def approve(thread_id: str):
    config = {"configurable": {"thread_id": thread_id}}
    result = agent.invoke(None, config=config)  # resume from the checkpoint
    return {"status": "resumed", "result": str(result["messages"][-1].content)}
```
Summary
LangGraph's HITL system is production-grade because it is built on checkpointing, not on fragile in-process callbacks. The key decisions are: which interrupt strategy to use (interrupt_before for pre-approval, interrupt() for mid-node questions), which checkpointer to use in production (PostgresSaver), and how to surface the queue to human reviewers (a simple API endpoint is sufficient to start).