Downstream agents ignoring upstream results is CrewAI's most-reported production bug. Here's what's happening and how to fix it.

The Problem

You build a CrewAI pipeline: Agent 1 researches a topic, Agent 2 writes a report based on that research, Agent 3 reviews the report. Straightforward. But in production, Agent 2 ignores what Agent 1 produced and invents its own research. Agent 3 reviews a report that bears no relation to Agent 1's findings.

This is the most commonly reported issue in the CrewAI community. The cause is almost always one of two things: context not being passed explicitly, or the wrong output mechanism being used. The fix is simple once you understand how CrewAI actually passes data between agents.

How CrewAI Passes Data Between Tasks

CrewAI has two mechanisms for sharing data between tasks -- and mixing them up is the root of most output-passing bugs.

  • context parameter -- explicitly passes the output of Task A into Task B's prompt. Use it when Task B needs to directly reference Task A's output.
  • output_file / output_pydantic -- saves Task A's output to a file or validates it against a Pydantic model. Use it when the output must be structured or shared with external systems.

The most important thing to understand: CrewAI does NOT automatically pass all previous task outputs to every subsequent task. You must explicitly declare which tasks a task depends on using the context parameter.

The Fix: Use the context Parameter

Without context (broken)

from crewai import Agent, Task, Crew
 
researcher = Agent(role="Researcher", goal="Research the topic", ...)
writer = Agent(role="Writer", goal="Write a report", ...)
 
research_task = Task(
    description="Research recent developments in {topic}",
    expected_output="A summary of key findings",  # required on every Task
    agent=researcher,
)
 
# BROKEN: writer has no access to research_task's output
write_task = Task(
    description="Write a 500-word report on {topic}",
    expected_output="A 500-word report",
    agent=writer,
)
 
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])

With context (working)

research_task = Task(
    description="Research recent developments in {topic}",
    agent=researcher,
    expected_output="A structured summary with 5 key findings and sources",
)
 
# FIXED: explicitly pass research_task as context
write_task = Task(
    description="Write a 500-word report based on the research provided.",
    agent=writer,
    context=[research_task],   # <-- this is the key line
    expected_output="A 500-word report with an introduction, 3 body paragraphs, and a conclusion",
)

The context parameter takes a list of Task objects. The full output of each listed task is injected into the current task's prompt. Be selective -- passing too many tasks as context inflates the prompt and can confuse the agent.
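As a rough mental model -- a deliberate simplification, not CrewAI's actual implementation -- context injection amounts to appending each upstream task's output to the downstream task's prompt:

```python
def build_prompt(description, context_outputs):
    """Hypothetical simplification of context injection: each upstream
    task's full output is appended after the task description."""
    if not context_outputs:
        return description
    ctx = "\n\n".join(
        f"Context from an upstream task:\n{out}" for out in context_outputs
    )
    return f"{description}\n\n{ctx}"

prompt = build_prompt(
    "Write a 500-word report based on the research provided.",
    ["Finding 1: agent frameworks are converging on explicit data passing."],
)
print(prompt)
```

This is why a missing context entry fails silently: the downstream prompt is still perfectly valid text, it just never mentions the research.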

The expected_output Field

A second common cause of broken output passing is a vague expected_output. The field is required on every Task (CrewAI raises a validation error if it's missing), and CrewAI uses it to shape what the agent produces -- a vague definition lets the agent pick whatever format it likes, which may not be one the next agent can parse.

# Vague -- agent returns an unpredictable format
research_task = Task(
    description="Research {topic}",
    agent=researcher,
    expected_output="A summary of the research",  # too vague to consume reliably
)
 
# Better -- specific format the next agent can reliably consume
research_task = Task(
    description="Research {topic}",
    agent=researcher,
    expected_output=(
        "A JSON object with keys:\n"
        "- findings: list of 5 key findings, each as a string\n"
        "- sources: list of URLs\n"
        "- summary: 2-sentence executive summary"
    ),
)

Structured Output with Pydantic

For the most reliable output passing -- especially in programmatic pipelines -- use Pydantic models to define exactly what a task must return. CrewAI will validate and enforce the schema.

from pydantic import BaseModel
from typing import List
 
class ResearchOutput(BaseModel):
    findings: List[str]
    sources: List[str]
    summary: str
 
research_task = Task(
    description="Research {topic}",
    agent=researcher,
    output_pydantic=ResearchOutput,  # enforces the return schema
)
 
# After the crew runs, access the validated output
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff(inputs={"topic": "AI agents"})
 
structured = research_task.output.pydantic
print(structured.findings)   # guaranteed to be a List[str]
print(structured.summary)    # guaranteed to be a str
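To see what the schema buys you, here is a hand-rolled sketch -- not CrewAI's code -- of the kind of validation that output_pydantic performs on the agent's raw JSON before any downstream task sees it:

```python
import json

def validate_research_output(raw: str) -> dict:
    """Hand-rolled sketch of what a Pydantic schema does automatically:
    parse the raw string, then check each field's type."""
    data = json.loads(raw)
    if not isinstance(data.get("findings"), list):
        raise ValueError("findings must be a list of strings")
    if not isinstance(data.get("sources"), list):
        raise ValueError("sources must be a list of URLs")
    if not isinstance(data.get("summary"), str):
        raise ValueError("summary must be a string")
    return data

ok = validate_research_output(
    '{"findings": ["f1"], "sources": ["https://example.com"], "summary": "done"}'
)
print(ok["summary"])  # -> done
```

The point is that malformed output fails loudly at the boundary between tasks, instead of silently confusing the next agent.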

Debugging Output Passing

When you're not sure why a downstream agent is ignoring upstream output, add verbose=True to your agents and crew (Task objects don't take a verbose flag; Agent and Crew do):

researcher = Agent(
    role="Researcher",
    verbose=True,   # logs the task input and the agent's reasoning as it runs
    ...
)

With verbose mode on, you can see what each agent actually receives. If the writer's logged input never mentions the research findings, you've confirmed the context parameter is missing or misconfigured.
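A cheap pre-flight check can catch mis-wired pipelines before you pay for a single LLM call. The helper below is hypothetical -- not part of CrewAI -- and only duck-types on the description and context attributes, so it works on real Task objects or stand-ins:

```python
from types import SimpleNamespace

def check_context_wiring(tasks):
    """Hypothetical pre-flight check: flag every task after the first
    that declares no explicit context dependencies."""
    warnings = []
    for i, task in enumerate(tasks):
        if i > 0 and not getattr(task, "context", None):
            warnings.append(
                f"Task {i} ({task.description[:40]!r}) has no context -- "
                "it cannot see earlier task outputs"
            )
    return warnings

# Stand-ins for Task objects, for illustration
tasks = [
    SimpleNamespace(description="Research recent developments", context=None),
    SimpleNamespace(description="Write a 500-word report", context=None),
]
print(check_context_wiring(tasks))  # flags the writer task
```

Not every downstream task needs context (some are genuinely independent), so treat the output as a prompt for review rather than a hard failure.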

Quick Reference

  • CrewAI does NOT auto-pass all previous outputs -- you must use context=[task] explicitly
  • Set expected_output with specific format requirements for every task
  • Use output_pydantic for structured, validated outputs in programmatic pipelines
  • Add verbose=True to agents to inspect the full prompt they receive
  • Keep context lists small -- too much context confuses agents and inflates cost