What Make.com AI Modules Enable

Make.com's AI modules let you add LLM-powered steps to any automation scenario: classify an incoming email, summarise a document, extract structured data from free text, or route a ticket to the right team based on content. No code required.

Available AI modules as of March 2026:

  • OpenAI (ChatGPT) -- chat completions, image generation, transcription (Whisper), embeddings
  • Anthropic (Claude) -- messages API (Claude Sonnet, Haiku, Opus)
  • Google AI (Gemini) -- text generation, vision
  • Hugging Face -- text generation, classification, image classification

Pattern 1: Email Classification and Routing

This is the most common AI automation pattern in Make. Incoming emails are classified by an LLM, then routed to different downstream steps.

  1. Gmail: Watch Emails (trigger -- new email arrives)
  2. OpenAI: Create a Completion -- classify the email
  3. Router module -- branch based on the classification result
  4. Branch A: Slack -- notify support team (if classification = 'support')
  5. Branch B: HubSpot -- create deal (if classification = 'sales')
  6. Branch C: Ignore (if classification = 'newsletter')

The prompt for the OpenAI module:

Classify this email into exactly one of these categories:
- support (customer needs help with a product or service)
- sales (potential customer expressing buying interest)
- newsletter (automated marketing or subscription email)
- other (does not fit above categories)
 
Respond with ONLY the category name, nothing else.
 
Email subject: {{1.subject}}
Email body: {{1.snippet}}
 
For routing in Make, your LLM must return a consistent, predictable value. Ask for exactly one word from a fixed list. Do not ask for a sentence -- the Router module matches on exact values. If the response still needs cleaning (whitespace, punctuation, casing), add a Text Parser module between the AI module and the Router.
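The normalisation that a Text Parser step performs can be sketched in a few lines. This is an illustrative sketch, not Make code -- the function names and the fallback-to-'other' behaviour are assumptions, but the logic mirrors what you would configure: trim, lowercase, strip punctuation, and map anything unexpected to a safe default so the Router always matches a branch.

```python
import re

# Allowed labels the Router branches match on (from the prompt above).
CATEGORIES = {"support", "sales", "newsletter", "other"}

def normalise_label(raw: str) -> str:
    """Reduce a raw LLM response to one of the fixed category names.

    Trim whitespace, lowercase, drop punctuation, and fall back to
    'other' when the model returns anything outside the allowed set.
    """
    cleaned = re.sub(r"[^a-z]", "", raw.strip().lower())
    return cleaned if cleaned in CATEGORIES else "other"

print(normalise_label("  Support.\n"))          # → support
print(normalise_label("I think this is sales")) # → other (not an exact label)
```

The fallback matters: without it, a single off-script response ("This looks like a sales enquiry") silently falls through every Router branch.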

Pattern 2: Structured Data Extraction

Extract specific fields from unstructured text and map them to a CRM, spreadsheet, or database. The key is asking the LLM to return JSON, then using Make's JSON Parse module.

Scenario flow:

  1. Webhook: receives a customer message (free-form text)
  2. Anthropic: Create a Message -- extract structured data as JSON
  3. JSON: Parse JSON -- converts the string to a Make data object
  4. Google Sheets: Add a Row -- write the extracted fields to a sheet

Prompt template for the Claude module:

Extract the following information from this customer message.
Return a JSON object with exactly these keys:
- name (string or null if not mentioned)
- company (string or null)
- issue_type (one of: billing, technical, account, other)
- urgency (one of: high, medium, low)
- summary (1 sentence summary of the issue)
 
Return ONLY the JSON object, no other text.
 
Customer message:
{{1.message}}
 
LLMs occasionally return JSON with extra text before or after the object (e.g. 'Here is the JSON: {...}'). Add a Text Parser module after the AI module to extract the JSON using a regex like \{[\s\S]*\} before parsing. This prevents the scenario from erroring when the LLM adds commentary.
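The extract-then-parse step can be sketched as follows. This is a hedged illustration of the Text Parser + JSON Parse combination described above, not Make configuration; the function name is an assumption, but the regex is the one given in the text.

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Pull the first {...} block out of an LLM response and parse it.

    The regex \\{[\\s\\S]*\\} grabs everything from the first '{' to the
    last '}', discarding any commentary the model wrapped around it.
    """
    match = re.search(r"\{[\s\S]*\}", raw)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))

reply = 'Here is the JSON: {"name": "Ada", "urgency": "high"}'
print(extract_json(reply)["urgency"])  # → high
```

Raising an error when no object is found is deliberate: in Make, the equivalent is letting the scenario error on that record rather than writing garbage to the sheet.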

Pattern 3: Document Summarisation Pipeline

  1. Google Drive: Watch Files in Folder (trigger)
  2. Google Drive: Download a File
  3. OpenAI: Create a Completion -- summarise the document content
  4. Notion: Create Page -- save the summary to a Notion database
  5. Slack: Send Message -- notify the team that a new summary is ready

For long documents that exceed the LLM's context window, use Make's Text Parser module to split the document into chunks, then an Iterator to process each chunk, and an Aggregator to combine the results before the final summarisation step.
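The chunk/Iterator/Aggregator pattern amounts to a map-reduce over the document. A minimal sketch, assuming a placeholder `summarise` function standing in for the AI module call (the chunk size and function names are illustrative assumptions):

```python
def chunk_text(text: str, chunk_size: int = 8000) -> list[str]:
    """Split text into fixed-size pieces -- the role the Text Parser
    plays before the Iterator."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def summarise(text: str) -> str:
    # Placeholder for the AI module call (hypothetical stand-in).
    return text[:200]

def map_reduce_summary(document: str) -> str:
    """Iterator + Aggregator pattern: summarise each chunk, combine
    the partial summaries, then summarise the combination once."""
    partials = [summarise(c) for c in chunk_text(document)]  # Iterator
    combined = "\n".join(partials)                           # Aggregator
    return summarise(combined)                               # final AI step
```

Each chunk summary is far shorter than its chunk, so the combined text of the final pass fits comfortably in the context window even when the original document did not.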

Managing Token Costs in Make

Each OpenAI or Claude module call costs tokens. In high-volume scenarios this adds up. Three practical controls:

  • Truncate input text: use the substring function in Make to limit document length before the AI module. Most summaries only need the first 2,000-4,000 words.
  • Choose the right model: Claude Haiku and GPT-4o-mini are 10-20x cheaper per token than their full-size counterparts. Use smaller models for classification and routing; save larger models for complex analysis.
  • Cache repeated queries: if the same document or content is processed repeatedly, use a Make data store to cache the AI result and skip the module on repeat runs.
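The caching control above can be sketched outside Make to show the shape of the logic. This is an assumption-laden illustration: a Python dict stands in for the Make data store, and `ai_call` stands in for the AI module, with truncation applied before the call as suggested in the first bullet.

```python
import hashlib

# A dict stands in for the Make data store in this sketch.
cache: dict[str, str] = {}

def cached_ai_call(text: str, ai_call) -> str:
    """Skip the AI module when the same input was already processed.

    Keyed on a SHA-256 hash of the input, mirroring a data store
    'get record' / 'add record' pair wrapped around the AI module.
    """
    key = hashlib.sha256(text.encode()).hexdigest()
    if key not in cache:
        # Truncate the input before the (costly) call, then store it.
        cache[key] = ai_call(text[:16000])
    return cache[key]
```

On a repeat run the hash hits the store and the AI module never fires, so the token cost for duplicated content drops to zero.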

Limitations of Make.com AI Modules

  • No streaming -- Make modules wait for the full AI response before passing it to the next step. For long outputs this can cause module timeouts on slow models.
  • No agent loop -- Make is a linear pipeline tool. If you need an agent that calls tools, reasons, and loops, n8n's AI Agent node or a dedicated agent framework is the right choice.
  • Context window management is manual -- splitting long documents requires explicit Iterator/Aggregator patterns.
  • No built-in embeddings pipeline -- RAG workflows require external vector databases (Pinecone, Qdrant) connected via HTTP modules.