The Confusion Is Understandable
Flowise has two canvas modes and the docs do not always make the distinction clear. Many builders start with Chatflow, hit a wall trying to add agent behaviour, then discover Agentflow — or vice versa. Others build complex Chatflows when a simple Agentflow would have been cleaner.
This guide gives you a clear mental model for both, then maps common use cases to the right choice.
One-Line Definitions
- Chatflow: A fixed pipeline. You define every step — the LLM, the memory, the retriever — and they execute in sequence. The LLM cannot decide to call tools dynamically.
- Agentflow: A reasoning loop. The agent (LLM) decides which tools to call and when, based on the user's message. You define the available tools; the agent decides the execution path.
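The two control-flow shapes can be sketched in a few lines of Python. Everything here is a stand-in — `retrieve`, `llm`, and the `decide` callback are hypothetical placeholders, not Flowise APIs — the point is only the shape: a Chatflow runs every step unconditionally, while an Agentflow loops until the LLM decides to stop.

```python
# Hypothetical stand-ins for the real components.
def retrieve(query):
    # retriever step: in a Chatflow this ALWAYS runs
    return f"context for: {query}"

def llm(prompt):
    # the model call
    return f"answer based on ({prompt})"

# Chatflow: a fixed pipeline — every step runs, in order, every time.
def chatflow(query):
    context = retrieve(query)                 # unconditional retrieval
    return llm(f"{context}\n\nQ: {query}")

# Agentflow: a loop — the LLM picks the next action each turn.
def agentflow(query, tools, decide):
    history = [query]
    while True:
        action = decide(history)              # LLM chooses a tool or 'finish'
        if action == "finish":
            return llm("\n".join(history))
        history.append(tools[action](query))  # run the chosen tool
```

Note that `chatflow` has one possible execution path, while `agentflow`'s path depends entirely on what `decide` returns at runtime — which is exactly the predictability trade-off discussed below.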
Architecture Comparison
| | Chatflow | Agentflow |
|---|---|---|
| Execution model | Fixed pipeline — nodes run in defined order | Dynamic — agent decides tool use at runtime |
| LLM role | One step in the pipeline | Orchestrator that controls the whole flow |
| Tool calling | Limited — plain chains cannot call tools; a few agent-style nodes add basic tool use (see Hybrid section) | Core feature — agent selects and calls tools |
| Memory | Managed by you via memory nodes | Managed by the agent node with memory config |
| Multi-agent | Not supported | Supported — agents can hand off to other agents |
| Predictability | High — same input = same execution path | Lower — agent decisions can vary |
| Debugging | Easier — fixed paths are traceable | Harder — you must inspect agent reasoning |
| Performance | Faster — no reasoning overhead | Slower — agent reasoning adds latency |
When to Use Chatflow
Chatflow is the right choice when:
- You are building a document Q&A chatbot — user asks a question, you retrieve context, the LLM answers. The flow is always the same.
- You need predictable, auditable outputs. A pipeline with fixed steps is easier to test and explain to stakeholders.
- You are building a simple chatbot with history. Chatflow handles conversational memory well without the overhead of an agent loop.
- Performance is critical. Chatflows are faster because there is no agent reasoning step — the LLM just generates a response.
- You need custom pre/post processing steps — Chatflows let you add function nodes at any point in the pipeline.
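The pre/post-processing point in the last bullet is just function composition around the model call. A minimal sketch, with a hypothetical `llm` callable standing in for the LLM node:

```python
def preprocess(text):
    # e.g. trim whitespace and normalise case before the LLM sees the input
    return text.strip().lower()

def postprocess(answer):
    # e.g. append a fixed footer after the LLM responds
    return answer + "\n\n(sources attached)"

def pipeline(query, llm):
    # Fixed order: pre -> LLM -> post, on every single message
    return postprocess(llm(preprocess(query)))
```

In Flowise these would be function nodes wired before and after the LLM node; the key property is that both steps run on every message, with no runtime decision involved.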
Classic Chatflow: RAG Q&A Bot
User message → retriever (vector store) → LLM with context → response. This is a Chatflow. The LLM does not decide to retrieve — the pipeline does it unconditionally.
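That unconditional pipeline can be sketched end to end. The embedder below is a deliberately toy character-frequency vector — a real Chatflow would use an embedding model and a vector store node — so only the shape of the flow matters here: embed, rank, stuff context, generate.

```python
import math

# Toy corpus; a real flow would use a vector store node.
DOCS = ["Flowise supports Chatflow and Agentflow.",
        "Chatflows run a fixed pipeline.",
        "Agentflows let the LLM pick tools."]

def embed(text):
    # Hypothetical embedding: character-frequency vector (illustration only).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rag_answer(question, llm, k=2):
    # The retrieval step is unconditional — it runs on every message.
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return llm(f"Context:\n{context}\n\nQuestion: {question}")
```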
When to Use Agentflow
Agentflow is the right choice when:
- You need tool use — the agent must decide whether to search the web, query a database, or call an API based on the user's intent.
- You are building a multi-step assistant that may take different paths for different questions: a travel assistant, for example, sometimes needs to check flights and sometimes hotel availability, but not always both.
- You need multiple specialised agents working together. Agentflow supports agent nodes that can hand off to other agent nodes.
- Your flow logic is too complex to express as a fixed pipeline. If you find yourself building many conditional branches in Chatflow, Agentflow is likely a better fit.
Classic Agentflow: Research Assistant
User asks a question. The agent decides: search the web? Query the database? Use the calculator? Generate a plan and execute it in steps? Each run may take a different path — this is Agentflow territory.
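A minimal version of that loop, with a scripted `decide` standing in for the LLM's reasoning (in a real Agentflow the agent node makes this choice at runtime from the tool descriptions):

```python
def run_agent(question, tools, decide, max_steps=5):
    """Generic agent loop: decide() returns either ("answer", text)
    to stop, or (tool_name, tool_input) to act and observe."""
    observations = []
    for _ in range(max_steps):
        choice = decide(question, observations)
        if choice[0] == "answer":
            return choice[1]                  # agent chose to finish
        name, tool_input = choice
        observations.append((name, tools[name](tool_input)))  # dynamic dispatch
    return "step budget exhausted"
```

Each run through this loop can take a different path — a different tool sequence, a different number of steps — which is exactly why Agentflows are harder to test than Chatflows.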
Decision Framework
| Scenario | Use |
|---|---|
| Document Q&A / RAG chatbot | Chatflow |
| Simple customer support with a knowledge base | Chatflow |
| Any flow where tools are conditionally needed | Agentflow |
| Research assistant, travel planner, scheduling assistant | Agentflow |
| Multi-agent system with specialised roles | Agentflow |
| Chatbot with just history + LLM (no retrieval) | Chatflow |
| Automated report generation from fixed data sources | Chatflow |
| Complex workflows that branch on user intent | Agentflow |
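The table collapses to a short predicate. This is only the article's rule of thumb encoded as code — the flag names are made up for illustration:

```python
def choose_canvas(needs_dynamic_tools=False, multi_agent=False,
                  branches_on_intent=False):
    """Rule of thumb from the decision table: any dynamic behaviour
    -> Agentflow; otherwise a fixed pipeline is faster and more
    predictable."""
    if needs_dynamic_tools or multi_agent or branches_on_intent:
        return "Agentflow"
    return "Chatflow"
```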
Hybrid: Tools in Chatflow
Chatflow does offer limited tool calling. The plain LLM Chain node cannot call tools, but the OpenAI Functions chain node and the Conversational Agent node (both available in Chatflow's palette) add basic tool use. These are single-agent and non-hierarchical, however, and cannot hand off to other agents.
If you need tool use in a Chatflow and only have one or two simple tools, this works. As soon as you need dynamic multi-tool selection, multiple agents, or agent-to-agent handoffs, switch to Agentflow.
Migrating from Chatflow to Agentflow
If you have a working Chatflow and want to add agent capabilities:
- Create a new Agentflow canvas.
- Add an Agent node and configure it with the same LLM and model settings as your Chatflow's LLM node.
- Convert your Chatflow's retriever setup to a Tool node in Agentflow — Flowise has a 'Retriever Tool' node specifically for this.
- Add any other tools the agent needs.
- Transfer your memory configuration — Agentflow's Agent node has built-in memory options.
- Test with the same queries you used in your Chatflow and compare outputs.
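The last step — side-by-side testing — is easy to script against Flowise's REST prediction endpoint. The base URL and flow IDs below are assumptions about your deployment, and the `send` parameter is injectable so the harness can be dry-run without a live instance:

```python
import json
from urllib import request

BASE = "http://localhost:3000"  # assumed local Flowise instance

def predict(flow_id, question, send=None):
    """POST a question to Flowise's prediction endpoint for a flow.
    `send(url, payload)` is injectable for testing; by default a real
    HTTP POST is made."""
    url = f"{BASE}/api/v1/prediction/{flow_id}"
    payload = {"question": question}
    if send is None:
        req = request.Request(url, data=json.dumps(payload).encode(),
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            return json.loads(resp.read())
    return send(url, payload)

def compare(chatflow_id, agentflow_id, queries, send=None):
    """Run the same queries through both flows, side by side."""
    return {q: {"chatflow": predict(chatflow_id, q, send),
                "agentflow": predict(agentflow_id, q, send)}
            for q in queries}
```

Run your Chatflow's known-good test queries through `compare` and diff the outputs before switching traffic over.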
Do not try to convert a Chatflow in place. Build the Agentflow fresh in a new canvas and test it in parallel before replacing the original.
Common Misconceptions
- 'Agentflow is always better' — Not true. For fixed pipelines, Agentflow adds reasoning overhead for no benefit. Chatflows are faster and more predictable for straightforward Q&A.
- 'I need Agentflow for memory' — Not true. Chatflows have excellent memory support via Buffer Memory, Summary Memory, and vector store-backed memory nodes.
- 'Chatflows cannot call external APIs' — Not true. You can use HTTP Request nodes and Function nodes to call any API in a Chatflow. The difference is that the LLM cannot decide to call them — they always execute.