What MCP Adds to n8n

n8n already has 400+ native integrations. MCP (Model Context Protocol, introduced by Anthropic in late 2024) adds a different kind of integration: a standardised protocol that lets AI agents discover and call tools dynamically, without pre-built nodes.

The practical impact: any tool that exposes an MCP server interface — a database, a local file system, a browser, a custom API — can be used by n8n's AI Agent node without waiting for n8n to build a dedicated node for it. The agent calls the MCP server, gets a list of available tools, and uses them in its reasoning loop.

How the n8n MCP Integration Works

n8n's AI Agent node can be configured with MCP tool connections. The agent sends requests to the MCP server using the MCP protocol (stdio or HTTP/SSE transport), retrieves available tools, and includes them alongside any other tools (native n8n tool nodes) in its tool loop.

  Component                      Role
  AI Agent node                  The LLM reasoning loop — decides which tools to call
  MCP Client (built into n8n)    Communicates with external MCP servers using the MCP protocol
  MCP Server                     Exposes tools: filesystem, database, browser, custom API, etc.
  Native n8n tool nodes          HTTP Request, Code, etc. — work alongside MCP tools
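The division of labour above can be sketched as a minimal tool loop. This is an illustrative sketch, not n8n's actual implementation — the llm callable, the action format, and max_steps are hypothetical stand-ins:

```python
def agent_loop(llm, tools, task, max_steps=10):
    """Minimal ReAct-style tool loop.

    tools is one merged dict: native n8n tool nodes and MCP-discovered
    tools are dispatched the same way once registered.
    """
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # The model either produces a final answer or names a tool + arguments
        action = llm(messages, tool_names=list(tools))
        if action["type"] == "final":
            return action["content"]
        # Dispatch to the chosen tool and feed the result back to the model
        result = tools[action["tool"]](**action["arguments"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent exceeded max_steps without finishing")
```

The key point for n8n is the merged tools dict: from the loop's perspective there is no difference between a native tool node and a tool discovered from an MCP server.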

Setting Up an MCP Connection in n8n

In the AI Agent node, open the Tools section and add an MCP tool source. You will need to provide:

  • Transport type: stdio (local process) or SSE (HTTP server)
  • For stdio: the command to start the MCP server (e.g. npx -y @modelcontextprotocol/server-filesystem /path/to/dir)
  • For SSE: the server URL (e.g. http://localhost:3001/sse)
  • Any environment variables the MCP server needs

Once configured, n8n automatically calls the MCP server's tools/list method and makes every discovered tool available to the agent.
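Under the hood this discovery step is a JSON-RPC 2.0 exchange. A sketch of the request the MCP client sends and an abbreviated response (the read_file entry here is illustrative, not an exhaustive listing):

```python
import json

# Discovery request the MCP client sends after connecting
list_tools_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Abbreviated response from a filesystem MCP server
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "read_file",
                "description": "Read the complete contents of a file",
                "inputSchema": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        ]
    },
}

# n8n turns each entry into a callable tool for the agent
tool_names = [t["name"] for t in list_tools_response["result"]["tools"]]
print(json.dumps(tool_names))  # → ["read_file"]
```

Each tool carries a JSON Schema for its arguments, which is what lets the LLM construct valid calls without a pre-built n8n node.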

Example: Filesystem Access via MCP

The official MCP filesystem server gives your n8n agent the ability to read, write, list, and search files in a directory you specify. This is useful for workflows that process local documents or outputs from other tools.

# npx fetches and runs the filesystem MCP server on demand (Node.js required);
# n8n launches it for you via the stdio command configured below
npx -y @modelcontextprotocol/server-filesystem /path/to/allowed/dir
 

In the AI Agent node MCP configuration:

  • Transport: stdio
  • Command: npx -y @modelcontextprotocol/server-filesystem /Users/yourname/Documents/workflows
  • The agent now has access to: read_file, write_file, list_directory, search_files, create_directory

Example workflow: a trigger fires when a new file appears in a folder, the AI Agent reads it using the MCP filesystem tool, processes the content, and writes a summary back to the same folder.
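When the agent decides to read the file, the MCP client issues a tools/call request. A sketch of that exchange — the file name report.txt and its contents are hypothetical:

```python
# Request sent when the agent picks the read_file tool
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        # Hypothetical file inside the directory allowed at startup
        "arguments": {"path": "/Users/yourname/Documents/workflows/report.txt"},
    },
}

# Abbreviated response: tool results come back as a list of content parts
call_tool_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "Q3 revenue grew 12% ..."}],
    },
}

# The text part is what gets injected back into the agent's reasoning loop
text = call_tool_response["result"]["content"][0]["text"]
```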

Example: Database Queries via MCP

The MCP SQLite and PostgreSQL servers let your agent run database queries from natural-language instructions — the agent decides which SQL to generate and execute.

# SQLite MCP server (official reference implementation, Python-based)
uvx mcp-server-sqlite --db-path /path/to/database.db
 
# PostgreSQL MCP server (community)
npx -y @modelcontextprotocol/server-postgres postgresql://user:pass@localhost/mydb
 

Available tools typically include: read_query (SELECT), write_query (INSERT/UPDATE/DELETE), create_table, and list_tables.

Give the MCP server the minimum database permissions it needs. If the agent only needs to read data, connect with a read-only database user. A misconfigured agent calling write_query with broad permissions can modify or delete data.
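The same least-privilege idea can also be enforced in code. A minimal sketch (not part of any official MCP server) of a read-only query tool that applies two independent guards:

```python
import sqlite3

def read_query(db_path: str, sql: str) -> list:
    """Run a SELECT against an SQLite database, read-only at two layers."""
    # Layer 1: reject anything that is not a plain SELECT statement
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("read_query only accepts SELECT statements")
    # Layer 2: open the database file itself in read-only mode,
    # so even a bypassed check cannot modify data
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

Defence in depth matters here because LLM-generated SQL is untrusted input: the string check catches obvious writes, and the read-only connection catches anything the check misses.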

Example: Web Browsing via MCP

Browser MCP servers (Playwright-based, Browserbase, Puppeteer) give agents the ability to navigate URLs, click elements, fill forms, and extract content — all orchestrated by the agent's reasoning loop.

# Playwright MCP server
npx -y @playwright/mcp@latest
 

Typical tools exposed include browser_navigate, browser_click, browser_type, browser_take_screenshot, and a page snapshot/text-extraction tool (exact names vary by server and version). The agent can use these to fill in a web form, extract a table from a page, or check the status of a dashboard — all within a regular n8n workflow.

Combining MCP Tools with Native n8n Nodes

MCP tools and native n8n tool nodes (HTTP Request, Code node, Google Sheets, etc.) work side by side in the same AI Agent node. The agent decides at runtime which tool is best for each step:

  • Use native n8n nodes for integrations that already have well-built nodes (Gmail, Slack, Airtable, etc.)
  • Use MCP for capabilities that do not have native nodes: local filesystem, custom databases, browser automation
  • The agent's system prompt can guide it on when to prefer each — e.g. 'Use the Slack tool for notifications, use the filesystem tool to read input files'
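In practice this guidance is just text in the agent's system prompt. A sketch of such a prompt — the tool names are illustrative and must match the names actually registered in your workflow:

```python
# Hypothetical routing rules for an agent with both native and MCP tools
system_prompt = """You are an n8n workflow assistant.

Tool usage rules:
- Use the Slack tool for all notifications.
- Use the filesystem MCP tools only to READ input files; never write or delete.
- Use read_query for database lookups; never call write_query.
- If no available tool fits the task, say so instead of guessing."""
```

Explicit routing rules like these reduce cases where the agent picks a powerful generic tool (e.g. write_query) when a narrower one would do.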

Building Your Own MCP Server

If you have an internal API or custom data source you want to expose to n8n agents, you can build an MCP server with the official SDK:

from mcp.server.fastmcp import FastMCP
 
mcp = FastMCP("Internal CRM Tools")
 
@mcp.tool()
def get_customer(customer_id: str) -> dict:
    """Retrieve customer details by ID from the internal CRM."""
    # Your actual CRM query here
    return {"id": customer_id, "name": "Alice", "plan": "pro"}
 
@mcp.tool()
def list_open_tickets(customer_id: str) -> list:
    """List all open support tickets for a customer."""
    return [{"ticket_id": "T-123", "status": "open", "subject": "Login issue"}]
 
if __name__ == "__main__":
    mcp.run(transport="stdio")  # or transport="sse" for HTTP
 

Once running, point your n8n AI Agent node at this server and it automatically discovers get_customer and list_open_tickets as callable tools.

Security Considerations for MCP in n8n

  • Never expose stdio MCP servers over the network — stdio transport runs as a local process and is not designed to be network-accessible.
  • For SSE transport (HTTP), add authentication to the MCP server endpoint — n8n will pass headers you configure.
  • Be explicit in agent system prompts about which operations are allowed: 'You may read files but must not write or delete them.'
  • Review MCP tool calls in n8n's execution history — each agent run records the tools it invoked. Audit these regularly when the agent has broad permissions.