How to build orchestrator/subagent architectures without a separate framework
PydanticAI's approach to multi-agent systems
PydanticAI does not have a built-in multi-agent framework like LangGraph or CrewAI. Instead, it uses a simple and powerful pattern: one agent can call another agent as if it were a tool. This is called the agent-as-tool pattern, and it is surprisingly capable for most use cases.
The advantage: no framework overhead, full type safety, and easy testing with dependency injection. The disadvantage: you have to wire the orchestration yourself.
The agent-as-tool pattern
```python
import asyncio

from pydantic_ai import Agent

# --- Specialist agents ---
research_agent = Agent(
    'openai:gpt-4o',
    system_prompt='You are a research specialist. Return factual summaries.',
)

writer_agent = Agent(
    'openai:gpt-4o',
    system_prompt='You are a professional writer. Turn research into engaging copy.',
)

# --- Orchestrator agent ---
orchestrator = Agent(
    'openai:gpt-4o',
    system_prompt=(
        'You coordinate research and writing tasks. '
        'Use the research tool first, then the write tool.'
    ),
)

@orchestrator.tool_plain
async def research(topic: str) -> str:
    """Research a topic and return a factual summary."""
    result = await research_agent.run(f'Research this topic: {topic}')
    return result.data

@orchestrator.tool_plain
async def write_article(research_summary: str, tone: str = 'professional') -> str:
    """Turn a research summary into a well-written article."""
    result = await writer_agent.run(
        f'Write a {tone} article based on this research:\n{research_summary}'
    )
    return result.data

async def main():
    result = await orchestrator.run(
        'Create an article about the history of the internet.'
    )
    print(result.data)

asyncio.run(main())
```
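To see why no extra framework is needed, it helps to strip the pattern to its skeleton. The sketch below is not PydanticAI internals, just the shape of the loop: tools are a name-to-coroutine mapping, and in a real run the model decides which one to await next (hard-coded here; the stub functions are illustrative stand-ins for the subagent calls).

```python
import asyncio
from typing import Awaitable, Callable, Dict

# Hypothetical stand-ins for the subagent calls; a real run hits the model API.
async def research_stub(topic: str) -> str:
    return f'facts about {topic}'

async def write_stub(summary: str) -> str:
    return f'ARTICLE: {summary}'

# Loosely analogous to the registry PydanticAI builds from decorated tools.
TOOLS: Dict[str, Callable[[str], Awaitable[str]]] = {
    'research': research_stub,
    'write_article': write_stub,
}

async def orchestrate(task: str) -> str:
    # In a real run the orchestrating model chooses the sequence of tool
    # calls; here it is hard-coded to research -> write.
    summary = await TOOLS['research'](task)
    return await TOOLS['write_article'](summary)

print(asyncio.run(orchestrate('the history of the internet')))
```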
Passing dependencies across agents
Use deps_type to share context (database connections, config, user info) with subagents. Define a shared dependency type and pass it when calling each agent.
```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext

@dataclass
class SharedDeps:
    user_id: str
    db_conn: object
    api_key: str

summariser = Agent('openai:gpt-4o-mini', deps_type=SharedDeps)

@summariser.tool
async def lookup_user_history(ctx: RunContext[SharedDeps]) -> str:
    user_id = ctx.deps.user_id
    # Use ctx.deps.db_conn to query
    return f'History for user {user_id}: ...'

orchestrator_with_deps = Agent('openai:gpt-4o', deps_type=SharedDeps)

@orchestrator_with_deps.tool
async def run_summariser(ctx: RunContext[SharedDeps], topic: str) -> str:
    """Run the summariser subagent with shared deps."""
    result = await summariser.run(
        f'Summarise the user history related to: {topic}',
        deps=ctx.deps,  # pass deps through
    )
    return result.data
```
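Reduced to plain asyncio, the pass-through looks like this: one deps object is constructed at the entry point and threads through every layer unchanged. The `Deps` class and stub functions here are illustrative, not PydanticAI API.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Deps:
    user_id: str
    api_key: str

# Hypothetical stand-ins for the two agent runs.
async def summariser_stub(prompt: str, deps: Deps) -> str:
    return f'[{deps.user_id}] summary of: {prompt}'

async def orchestrator_stub(prompt: str, deps: Deps) -> str:
    # Mirrors run_summariser above: forward the same deps to the subagent.
    return await summariser_stub(prompt, deps)

deps = Deps(user_id='u123', api_key='sk-test')
print(asyncio.run(orchestrator_stub('recent orders', deps)))
```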
Parallel subagent execution
When subagents do not depend on each other's output, run them concurrently with asyncio.gather(): total latency is roughly that of the slowest call rather than the sum of all of them.
```python
import asyncio

from pydantic_ai import Agent

sentiment_agent = Agent(
    'openai:gpt-4o-mini',
    system_prompt='Analyse sentiment: positive/negative/neutral.',
)
summary_agent = Agent(
    'openai:gpt-4o-mini',
    system_prompt='Summarise text in one sentence.',
)
keyword_agent = Agent(
    'openai:gpt-4o-mini',
    system_prompt='Extract top 5 keywords as a comma-separated list.',
)

async def analyse_in_parallel(text: str) -> dict:
    sentiment_task = sentiment_agent.run(text)
    summary_task = summary_agent.run(text)
    keyword_task = keyword_agent.run(text)
    sentiment, summary, keywords = await asyncio.gather(
        sentiment_task, summary_task, keyword_task
    )
    return {
        'sentiment': sentiment.data,
        'summary': summary.data,
        'keywords': keywords.data,
    }

result = asyncio.run(analyse_in_parallel('The product launch was a huge success...'))
print(result)
```
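One caveat: by default, the first exception raised inside asyncio.gather() propagates immediately and the other results are lost. If partial results are acceptable, pass return_exceptions=True and exceptions come back as values you can handle per subagent. A sketch with stub coroutines standing in for the agent runs:

```python
import asyncio

async def agent_stub(name: str, fail: bool = False) -> str:
    # Hypothetical stand-in for an Agent.run call.
    if fail:
        raise RuntimeError(f'{name} failed')
    return f'{name} ok'

async def analyse_with_errors() -> dict:
    results = await asyncio.gather(
        agent_stub('sentiment'),
        agent_stub('summary', fail=True),
        agent_stub('keywords'),
        return_exceptions=True,  # exceptions are returned as values
    )
    keys = ['sentiment', 'summary', 'keywords']
    return {
        key: (f'error: {res}' if isinstance(res, BaseException) else res)
        for key, res in zip(keys, results)
    }

print(asyncio.run(analyse_with_errors()))
```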
Structured outputs across agents
Use result_type on subagents to get typed, validated outputs that the orchestrator can rely on, with no string parsing.
```python
from typing import List

from pydantic import BaseModel
from pydantic_ai import Agent

class ResearchResult(BaseModel):
    key_facts: List[str]
    confidence: float
    sources_needed: List[str]

typed_research_agent = Agent(
    'openai:gpt-4o',
    result_type=ResearchResult,
    system_prompt='Return structured research with facts and confidence.',
)

@orchestrator.tool_plain
async def structured_research(topic: str) -> str:
    result = await typed_research_agent.run(f'Research: {topic}')
    # result.data is a validated ResearchResult
    facts = '\n'.join(f'- {f}' for f in result.data.key_facts)
    return f'Facts (confidence {result.data.confidence:.0%}):\n{facts}'
```
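The payoff of result_type is that validation happens at the agent boundary: the same Pydantic model that types the result also rejects malformed model output before the orchestrator ever sees it. This can be demonstrated directly against the model class using pydantic v2's model_validate (the example payloads are illustrative):

```python
from typing import List

from pydantic import BaseModel, ValidationError

class ResearchResult(BaseModel):
    key_facts: List[str]
    confidence: float
    sources_needed: List[str]

# Well-formed model output parses into a typed object:
ok = ResearchResult.model_validate({
    'key_facts': ['ARPANET carried its first message in 1969'],
    'confidence': 0.9,
    'sources_needed': [],
})
print(ok.confidence)

# Malformed output raises instead of flowing downstream as a bad string:
try:
    ResearchResult.model_validate({
        'key_facts': 'not a list',
        'confidence': 'high',
        'sources_needed': [],
    })
except ValidationError as exc:
    print(f'rejected with {len(exc.errors())} errors')
```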
Testing multi-agent systems
Dependency injection makes testing straightforward: replace real subagent calls with mock implementations. Note that in the example below only the research subagent is mocked; the orchestrator and writer still call a real model.
```python
from unittest.mock import AsyncMock, patch

# Run with an async-aware test runner such as pytest-asyncio.
async def test_orchestrator():
    # Mock the subagent's run method
    mock_result = AsyncMock()
    mock_result.data = 'Mocked research summary'
    with patch.object(research_agent, 'run', return_value=mock_result):
        result = await orchestrator.run('Write about quantum computing.')
        # orchestrator calls research tool -> mocked -> writer tool -> real LLM
        assert len(result.data) > 0
```
When to use PydanticAI multi-agent vs LangGraph or CrewAI
| Need | PydanticAI agent-as-tool | LangGraph | CrewAI |
|---|---|---|---|
| Simple orchestrator + 2-3 subagents | Best fit | Overkill | Overkill |
| Complex conditional routing | Manual but possible | Best fit | Possible |
| Human-in-the-loop checkpoints | Manual | Built-in | Limited |
| Role-based agent personas | Manual | Manual | Built-in |
| Type-safe outputs at every step | Best fit | Possible | Limited |
| Minimal framework overhead | Best fit | Medium | High |
Use PydanticAI multi-agent when you want full control and type safety without framework magic. Upgrade to LangGraph when your orchestration logic becomes complex enough to warrant a graph abstraction.