Stop guessing which mode to use — here is a clear decision guide
The question every Dify builder asks twice
You open the Dify app editor and hit the first fork: Chatbot, Agent, Workflow, or Chatflow. Most builders pick one by instinct, hit a wall after an hour, and wonder if they chose wrong. This guide gives you a framework for making that call in under two minutes.
We will focus on the two modes that cause the most confusion: Workflow and Agent. Chatbot is a thin wrapper around a single LLM call, and Chatflow is Workflow with a chat interface bolted on — once you understand Workflow and Agent, those two are obvious.
What Workflow actually is
Workflow mode is a directed acyclic graph (DAG) of nodes you define at build time. The path through the graph is deterministic — you control every branch, every condition, every output format. The LLM is just one node in a larger pipeline, not the orchestrator.
Typical node types: Start, LLM, Code, Knowledge Retrieval, HTTP Request, Template, Conditional Branch, Iteration, Variable Aggregator, End.
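The fixed-graph idea is easier to see as plain code. Here is a minimal sketch of a Workflow as a hand-wired pipeline — the step names are invented for illustration and are not Dify node types, but the shape is the point: every step and every ordering is decided at build time.

```python
# Illustrative only: a "Workflow" as a hand-drawn pipeline of steps.
# extract / validate / format_output are invented stand-ins for nodes.

def extract(text: str) -> dict:
    # Stand-in for an LLM node doing structured extraction.
    return {"word_count": len(text.split())}

def validate(fields: dict) -> dict:
    # Stand-in for a Code or Conditional Branch node.
    fields["valid"] = fields["word_count"] > 0
    return fields

def format_output(fields: dict) -> str:
    # Stand-in for a Template node.
    return f"words={fields['word_count']} valid={fields['valid']}"

def run_workflow(text: str) -> str:
    # The path is fixed before any input arrives: extract -> validate -> format.
    return format_output(validate(extract(text)))
```

The LLM can sit inside `extract`, but it never chooses whether `validate` runs next — you did, when you drew the graph.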
Think of Workflow like a flowchart where you drew every box yourself. The LLM fills in the boxes you assign it, but it does not decide which boxes exist.
Workflow strengths
- Predictable execution path — every run follows the same logic
- Easy to debug — you can inspect every node's input and output
- Cheap to run — no agent reasoning overhead, no wasted LLM calls
- Audit-friendly — compliance teams can review the graph
- Iteration nodes let you process arrays (e.g. summarise 20 documents)
Workflow weaknesses
- Cannot handle open-ended tasks where the next step is unknown
- Adding a new capability requires editing the graph
- No built-in "loop until done" — you must design termination logic manually
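That last weakness deserves a concrete shape. Because the graph will not loop until the output is good, you bound the loop yourself: a hard iteration cap plus an explicit stop condition. The sketch below uses invented stand-ins (`refine`, `good_enough`) for an LLM node and a quality-check node.

```python
# Manual termination logic for a refine loop inside a deterministic graph.
# refine() and good_enough() are invented stand-ins, not Dify APIs.

MAX_ROUNDS = 5  # hard cap guarantees the loop always ends

def refine(draft: str) -> str:
    return draft + "!"            # stand-in for an LLM improvement pass

def good_enough(draft: str) -> bool:
    return draft.endswith("!!!")  # stand-in for a quality check

def run_with_termination(draft: str) -> str:
    for _ in range(MAX_ROUNDS):
        if good_enough(draft):
            break                 # explicit stop condition
        draft = refine(draft)
    return draft
```

The cap matters as much as the condition: without it, a quality check that never passes turns a cheap pipeline into an infinite bill.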
What Agent mode actually is
Agent mode hands the LLM a list of tools and a goal, then lets it decide which tools to call, in what order, and when it is done. The execution path is not fixed — it emerges at runtime based on the model's reasoning. Dify implements this as a ReAct loop internally.
You define: which model to run, which tools are available (built-in or custom API tools), the system prompt, and optionally memory settings. The model does the rest.
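For contrast with the fixed pipeline, here is a generic ReAct-style loop — an illustration of the pattern, not Dify's implementation. The `choose_action` stub stands in for the model's reasoning step; in a real agent it is an LLM call that picks the next tool (or decides to finish) based on what it has observed so far.

```python
# Generic ReAct-style agent loop (illustration, not Dify's code).

def search(query: str) -> str:
    return f"results for {query}"      # toy tool

def calculator(expr: str) -> str:
    return str(eval(expr))             # toy tool; never eval untrusted input

TOOLS = {"search": search, "calculator": calculator}

def choose_action(goal: str, observations: list) -> tuple:
    # Stub for the model's reasoning: pick a tool, or finish.
    if not observations:
        return ("search", goal)              # first: gather information
    return ("finish", observations[-1])      # then: answer from what it saw

def agent_loop(goal: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):               # cap to avoid runaway loops
        action, arg = choose_action(goal, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))
    return observations[-1]
```

Note that the path is not in the code: the sequence of tool calls only exists at runtime, which is exactly why Agent mode is flexible and exactly why it is harder to audit.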
Think of Agent mode like hiring a contractor. You describe the outcome you want and give them a toolbox. They decide how to use the tools.
Agent strengths
- Handles tasks where the number of steps is not known in advance
- Adding a new capability = adding a tool, not redesigning a graph
- Naturally handles follow-up questions and course corrections
- Better at multi-step research, comparison, and synthesis tasks
Agent weaknesses
- Non-deterministic — same input may follow a different path each time
- More expensive — extra LLM calls for reasoning steps
- Harder to audit — you see the tool calls but not the internal reasoning trace unless you add an observability tool such as Langfuse or LangSmith
- Can loop or hallucinate tool calls with weaker models
The decision table
| Situation | Use Workflow | Use Agent |
|---|---|---|
| Steps are known in advance | Yes | No |
| Output format must be exact | Yes | No |
| Task involves open-ended research | No | Yes |
| You need to process a list of items | Yes (Iteration node) | No |
| Tool selection is dynamic | No | Yes |
| Budget is tight / high volume | Yes | Avoid for simple tasks |
| Compliance audit required | Yes | Difficult |
| Conversation with back-and-forth clarification | Use Chatflow | Yes |
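If you want the table as a first-pass default rather than a judgment call, it collapses into a few ordered rules. This helper is my own encoding of the table above — the flag names and precedence are this sketch's, not a Dify API — and it deliberately falls back to Agent when the path is unclear, matching the recap's advice.

```python
# The decision table encoded as ordered rules (sketch, not a Dify API).

def pick_mode(steps_known: bool, open_ended: bool,
              dynamic_tools: bool, needs_chat_ui: bool) -> str:
    if open_ended or dynamic_tools:
        return "Agent"        # the path must emerge at runtime
    if needs_chat_ui:
        return "Chatflow"     # deterministic graph + chat interface
    if steps_known:
        return "Workflow"     # fixed, auditable, cheap
    return "Agent"            # default when the path is unclear
```

Invoice processing maps to `pick_mode(True, False, False, False)`; the support bot to `pick_mode(True, False, False, True)`.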
Real examples
Use Workflow for: invoice processing
Extract fields from a PDF → validate against rules → write to database. Every step is defined. The LLM's job is structured extraction, not deciding what to do next.
Use Agent for: competitive research assistant
The user asks: 'Compare Notion and Linear for a 50-person engineering team.' The agent needs to decide how many searches to run, which aspects to compare, and when it has enough information. You cannot pre-draw that graph.
Use Chatflow for: customer support bot
You want a conversational interface but still need deterministic routing — tier-1 FAQ → escalation check → CRM lookup. Chatflow gives you the chat UI with Workflow's graph underneath.
The hybrid pattern: Agent node inside a Workflow
Dify lets you embed an Agent node inside a Workflow graph. This is the best of both worlds for pipelines where most steps are deterministic but one step requires open-ended reasoning.
Example: a content pipeline that deterministically fetches data, then hands it to an Agent node to generate a personalised report, then deterministically formats and emails the result.
# Dify Workflow DSL excerpt (simplified)
nodes:
  - id: fetch_data
    type: http_request
    url: https://api.example.com/metrics
  - id: agent_analysis
    type: agent            # <-- Agent node embedded in Workflow
    model: gpt-4o
    tools: [web_search, calculator]
    system_prompt: Analyse the metrics and identify anomalies.
  - id: format_email
    type: template
    input: '{{agent_analysis.output}}'
Migration: switching from Agent to Workflow
If your Agent works but you need predictability or lower cost, here is the extraction process:
- Run your Agent on 20 representative inputs and log all tool call sequences
- Look for the most common paths — these become your Workflow branches
- Hardcode those paths as Workflow nodes with Conditional Branch for routing
- Keep a fallback Agent node for the long-tail cases that don't fit the pattern
- Compare costs and accuracy between the two implementations before switching fully
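Step 2 — finding the most common paths — is a one-liner with `collections.Counter` once you log each run's tool calls as a tuple. The log format below is invented for the sketch; the threshold is a suggestion, not a rule.

```python
# Count logged tool-call sequences to find the paths worth hardcoding.
# The log format and the 70% threshold are this sketch's assumptions.
from collections import Counter

logged_runs = [
    ("retrieve", "extract", "write"),
    ("retrieve", "extract", "write"),
    ("retrieve", "search", "extract", "write"),
    ("retrieve", "extract", "write"),
]

path_counts = Counter(logged_runs)
most_common_path, count = path_counts.most_common(1)[0]
coverage = count / len(logged_runs)

# If one path covers, say, >70% of runs, hardcode it as the main
# Workflow branch and route the long tail to a fallback Agent node.
worth_migrating = coverage > 0.7
```

If no single path clears the bar, that is the signal from the warning below: your sequences vary too much, and Workflow branches will cost more to maintain than Agent runs cost to execute.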
Do not migrate to Workflow if your tool call sequences vary significantly across inputs. You will spend more time maintaining branches than the cost savings are worth.
Quick recap
- Workflow = deterministic graph you design; LLM is one node
- Agent = LLM-driven tool selection; you provide the toolbox
- Chatflow = Workflow with a chat UI (most common for support bots)
- Agent node inside Workflow = hybrid, best of both worlds
- When in doubt: start with Agent, extract patterns into Workflow once you have data