What MCP Support in Activepieces Means

Model Context Protocol (MCP) is an open standard for connecting LLMs and AI agents to external tools and data sources. Activepieces supports running and calling MCP servers natively inside workflows, which means your automation workflows can act as the coordination layer between AI agents and 280+ MCP-compatible servers.

The practical upside: instead of writing custom API integrations for every tool your AI agent needs, you configure an MCP server once and any workflow can use it.

The MCP Piece

Activepieces ships an MCP piece that can connect to any MCP server via SSE (server-sent events) or stdio transport. You configure the server URL or command, and the piece exposes all the server's tools as callable actions.
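
Under the hood, MCP messages are JSON-RPC 2.0; over the stdio transport each message is sent as one newline-delimited JSON object. A minimal sketch of the request shape a tool call produces (the `frameToolCall` helper is our own illustration, not an Activepieces API):

```typescript
// Sketch: framing an MCP tools/call request for the stdio transport.
// MCP messages are JSON-RPC 2.0; stdio sends one JSON object per line.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function frameToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): string {
  const msg: ToolCallRequest = {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
  return JSON.stringify(msg) + "\n"; // newline-delimited framing
}

console.log(frameToolCall(1, "read_file", { path: "/data/documents/report.pdf" }).trim());
```

The piece handles this framing for you; the sketch only shows why any tool on any conforming server can be called the same way.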

To add an MCP server to your workflow:

  1. Add the MCP piece to your flow
  2. Configure the transport: HTTP (SSE) for remote servers, stdio for local processes
  3. Select which tool to call from the server's exposed tool list
  4. Map your workflow data to the tool's inputs
  5. The tool result is available as a step output for downstream steps

Example: Using a Local Filesystem MCP Server

The MCP filesystem server lets AI agents read and write local files. This is useful for document processing workflows.

```json
{
  "mcpServer": {
    "transport": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data/documents"]
  },
  "toolName": "read_file",
  "toolInput": {
    "path": "/data/documents/{{trigger.filename}}"
  }
}
```
 
Activepieces evaluates {{trigger.filename}} and other expressions before calling the MCP tool. You can use any upstream step output as a dynamic tool input, making MCP calls part of a larger data pipeline.
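
The substitution model can be approximated with a small interpolator. This is a sketch for illustration, not Activepieces' actual expression engine: it resolves dotted paths like `trigger.filename` against a context of step outputs.

```typescript
// Sketch of {{ ... }} expression resolution against upstream step outputs.
// Not the real Activepieces engine; it only illustrates the substitution model.
type Context = Record<string, unknown>;

function resolvePath(ctx: Context, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (obj, key) => (obj as Record<string, unknown> | undefined)?.[key],
    ctx
  );
}

function interpolate(template: string, ctx: Context): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path) =>
    String(resolvePath(ctx, path) ?? "")
  );
}

const ctx = { trigger: { filename: "invoice-042.pdf" } };
console.log(interpolate("/data/documents/{{trigger.filename}}", ctx));
// → /data/documents/invoice-042.pdf
```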

Example: AI Agent Workflow with MCP Database Access

This pattern uses an LLM step to generate a database query, then an MCP step to execute it — without exposing the database directly to the internet.

  1. Trigger: HTTP webhook receives a user question
  2. LLM step (OpenAI/Claude piece): generates a SQL SELECT query based on the user's question and your schema description
  3. MCP step: calls a local MCP database server with the generated query
  4. LLM step: formats the query result into a natural language answer
  5. Response step: returns the answer to the webhook caller

The MCP database server runs locally or in your private network. Your database credentials never leave your infrastructure. The LLM only sees the query and its result — not your credentials.
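
Since the query text comes from an LLM, it is worth placing a guard step between the LLM and the MCP call. The check below is our own illustrative hardening, not a built-in Activepieces feature: it accepts a single read-only SELECT and rejects writes, DDL, and stacked statements.

```typescript
// Illustrative guard for LLM-generated SQL (not built into Activepieces):
// allow a single SELECT statement; reject writes, DDL, and stacked queries.
const FORBIDDEN = /\b(insert|update|delete|drop|alter|create|truncate|grant)\b/i;

function isReadOnlySelect(sql: string): boolean {
  const trimmed = sql.trim().replace(/;\s*$/, ""); // tolerate one trailing ;
  if (!/^select\b/i.test(trimmed)) return false;   // must start with SELECT
  if (trimmed.includes(";")) return false;         // no stacked statements
  return !FORBIDDEN.test(trimmed);                 // no write/DDL keywords
}

console.log(isReadOnlySelect("SELECT name FROM users WHERE id = 7")); // true
console.log(isReadOnlySelect("DROP TABLE users"));                    // false
console.log(isReadOnlySelect("SELECT 1; DELETE FROM users"));         // false
```

A keyword blocklist is not a real SQL parser; in production you would also run the MCP server's database connection under a read-only role so the database itself enforces the constraint.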

Setting Up the MCP Piece in Self-Hosted Activepieces

For stdio-based servers (local processes), the MCP server command runs as a subprocess of the Activepieces worker. In Docker Compose, make sure the worker container can reach the files and runtimes the server needs:

```yaml
# In your docker-compose.yml for the Activepieces worker
services:
  activepieces-worker:
    image: activepieces/activepieces:latest
    volumes:
      # Mount the directory your MCP server needs to access
      - /data/documents:/data/documents
    environment:
      - AP_ENGINE_EXECUTABLE_PATH=/usr/local/bin/node
```
 
When running stdio MCP servers inside a Docker container, the MCP server process inherits the container's environment. Ensure the required runtime (Node.js, Python) is available in the Activepieces container image or use SSE-based remote servers instead.
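
One way to sanity-check this from inside the container is to probe for the runtime before configuring a stdio server that depends on it. The helper below is our own sketch, not part of Activepieces:

```typescript
// Sketch: verify a runtime (node, python3, npx, ...) is on PATH inside
// the container before configuring a stdio MCP server that depends on it.
import { spawnSync } from "node:child_process";

function hasRuntime(command: string): boolean {
  const result = spawnSync(command, ["--version"], { stdio: "ignore" });
  // spawnSync sets .error (e.g. ENOENT) when the binary does not exist
  return result.error === undefined && result.status === 0;
}

console.log(hasRuntime("node"));                // should be true in the image
console.log(hasRuntime("no-such-runtime-xyz")); // false
```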

Available MCP Servers Worth Knowing

Server                                       What it provides          Use case
@modelcontextprotocol/server-filesystem      File read/write/list      Document processing, log analysis
@modelcontextprotocol/server-postgres        PostgreSQL queries        AI-generated SQL on your database
@modelcontextprotocol/server-github          GitHub repos/issues/PRs   Developer workflow automation
@modelcontextprotocol/server-slack           Slack messages/channels   Team notification agents
@modelcontextprotocol/server-brave-search    Web search via Brave API  Research agents without scraping
mcp-server-sqlite                            SQLite queries            Local data access for lightweight setups
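
For example, the reference Postgres server takes its connection URL as the final command argument and exposes a read-only query tool with a sql input. A configuration sketch in the same shape as the filesystem example above; the connection URL and the llm_step.generated_query reference are placeholders for your own values:

```json
{
  "mcpServer": {
    "transport": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://readonly_user@db.internal:5432/appdb"]
  },
  "toolName": "query",
  "toolInput": {
    "sql": "{{llm_step.generated_query}}"
  }
}
```

If the server version you run names its tools differently, check the tool list the piece shows after connecting.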

When to Use MCP vs Native Pieces

Use MCP when:

  • The tool you need already has an MCP server but not an Activepieces piece
  • You want to share the same server configuration across multiple AI agent frameworks (Claude Desktop, Cursor, Activepieces)
  • You need access to local resources that should not be exposed via public API

Use native pieces when:

  • A well-maintained Activepieces piece already exists for the integration
  • You need Activepieces-managed OAuth connections, which native pieces handle out of the box
  • You want the visual input mapping that native pieces provide in the Activepieces UI