Horizontal scaling with Flowise is possible but barely documented. Here is the full setup: Redis, load balancing, and shared storage.

Why Flowise Scaling Is Tricky

Running a single Flowise instance is easy. Scaling to multiple instances is possible -- but Flowise's official docs on the topic amount to a few paragraphs with no concrete configuration examples. The community has worked it out through trial and error. This article consolidates what actually works.

The core challenge: Flowise stores flow configurations, credentials, and chat history in a local SQLite database by default. Multiple instances cannot share a SQLite file. You need to migrate to a shared database and a shared cache before load balancing makes sense.
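Before touching the load balancer, it is worth confirming which instances are still on the default backend. A minimal Python sketch (the path is an assumption based on Flowise's default `~/.flowise` data directory, which `DATABASE_PATH` can override):

```python
from pathlib import Path

def using_default_sqlite(home: Path) -> bool:
    # Flowise defaults to ~/.flowise/database.sqlite unless DATABASE_PATH
    # points elsewhere; existence of the file suggests local SQLite is in use.
    return (home / ".flowise" / "database.sqlite").exists()
```

Run it against `Path.home()` on each host before cutting over; any instance still writing to SQLite will silently diverge from the shared PostgreSQL state.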

Step 1: Switch from SQLite to PostgreSQL

PostgreSQL is the recommended shared database backend for multi-instance Flowise. All instances read and write to the same database, so flows created on one instance are immediately available on all others.

# Environment variables for PostgreSQL
DATABASE_TYPE=postgres
DATABASE_HOST=your-postgres-host
DATABASE_PORT=5432
DATABASE_NAME=flowise
DATABASE_USER=flowise_user
DATABASE_PASSWORD=your-password
 
# Flowise will auto-run migrations on first start

Use a managed PostgreSQL service (Supabase, Neon, Railway Postgres, RDS) rather than self-hosting PostgreSQL unless your team already manages Postgres infrastructure. The operational overhead of running your own Postgres rarely makes sense for a Flowise deployment.
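Flowise assembles its own connection from these variables, but the same values are handy for ad-hoc checks with psql or a script. A small helper (illustrative, not part of Flowise) that builds a libpq-style DSN from them:

```python
import os
from urllib.parse import quote

def postgres_dsn(env=os.environ) -> str:
    """Assemble a postgresql:// DSN from the Flowise DATABASE_* variables.
    Credentials are percent-encoded so passwords with '@' or spaces work."""
    user = quote(env["DATABASE_USER"])
    pwd = quote(env["DATABASE_PASSWORD"])
    host = env["DATABASE_HOST"]
    port = env.get("DATABASE_PORT", "5432")  # Flowise's default Postgres port
    name = env["DATABASE_NAME"]
    return f"postgresql://{user}:{pwd}@{host}:{port}/{name}"
```

Feed the result to `psql "$(python dsn.py)"` from any instance's host to verify that all of them reach the same database before you put a load balancer in front.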

Step 2: Add Redis for Shared Caching and Queues

With multiple instances, the in-memory flow cache on each instance is isolated -- instance A caches a flow that instance B has not cached. Redis provides the shared layer: in Flowise's queue mode, all instances coordinate through Redis, and prediction executions are distributed via the queue rather than handled only by whichever instance received the request.

# Redis connection
REDIS_URL=redis://your-redis-host:6379
 
# If using Redis with auth:
REDIS_URL=redis://:your-password@your-redis-host:6379
 
# Enable queue mode (Flowise's documented multi-instance mode)
MODE=queue
WORKER_CONCURRENCY=5

Step 3: Shared File Storage for Uploads

When users upload files (documents for RAG, images), Flowise stores them on the local filesystem by default. In a multi-instance setup, an upload received by instance A will not be visible to instance B. Use S3-compatible storage to share uploads across instances.

# S3 file storage
STORAGE_TYPE=s3
S3_STORAGE_BUCKET_NAME=your-flowise-bucket
S3_STORAGE_ACCESS_KEY_ID=your-access-key
S3_STORAGE_SECRET_ACCESS_KEY=your-secret-key
S3_STORAGE_REGION=us-east-1
 
# For S3-compatible stores (R2, MinIO, Backblaze):
S3_ENDPOINT_URL=https://your-endpoint-url
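Flowise wires these variables into its S3 client internally; the mapping below is an illustrative sketch showing how they correspond to boto3-style client arguments (the helper itself is an assumption, not Flowise code -- only the kwarg names are real boto3 parameters):

```python
import os

def s3_client_kwargs(env=os.environ) -> dict:
    """Translate Flowise's S3_* variables into boto3-style client kwargs."""
    kwargs = {
        "aws_access_key_id": env["S3_STORAGE_ACCESS_KEY_ID"],
        "aws_secret_access_key": env["S3_STORAGE_SECRET_ACCESS_KEY"],
        "region_name": env.get("S3_STORAGE_REGION", "us-east-1"),
    }
    # S3-compatible stores (R2, MinIO, Backblaze) need an explicit endpoint;
    # plain AWS S3 resolves the endpoint from the region.
    if "S3_ENDPOINT_URL" in env:
        kwargs["endpoint_url"] = env["S3_ENDPOINT_URL"]
    return kwargs
```

A quick `boto3.client("s3", **s3_client_kwargs()).list_objects_v2(Bucket=...)` from each instance's host confirms every instance can reach the same bucket.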

Step 4: Docker Compose Multi-Instance Setup

Here is a complete Docker Compose configuration for a two-instance Flowise setup with Nginx load balancing:

# docker-compose.yml
version: '3.8'  # optional; modern Docker Compose ignores this key
 
services:
  flowise-1:
    image: flowiseai/flowise:latest
    environment:
      - DATABASE_TYPE=postgres
      - DATABASE_HOST=postgres
      - DATABASE_NAME=flowise
      - DATABASE_USER=flowise
      - DATABASE_PASSWORD=${DB_PASSWORD}
      - MODE=queue
      - REDIS_URL=redis://redis:6379
      - STORAGE_TYPE=s3
      - S3_STORAGE_BUCKET_NAME=${S3_BUCKET}
      - S3_STORAGE_ACCESS_KEY_ID=${S3_KEY}
      - S3_STORAGE_SECRET_ACCESS_KEY=${S3_SECRET}
    depends_on: [postgres, redis]
 
  flowise-2:
    image: flowiseai/flowise:latest
    environment:
      # Same environment as flowise-1
      - DATABASE_TYPE=postgres
      - DATABASE_HOST=postgres
      - DATABASE_NAME=flowise
      - DATABASE_USER=flowise
      - DATABASE_PASSWORD=${DB_PASSWORD}
      - MODE=queue
      - REDIS_URL=redis://redis:6379
      - STORAGE_TYPE=s3
      - S3_STORAGE_BUCKET_NAME=${S3_BUCKET}
      - S3_STORAGE_ACCESS_KEY_ID=${S3_KEY}
      - S3_STORAGE_SECRET_ACCESS_KEY=${S3_SECRET}
    depends_on: [postgres, redis]
 
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on: [flowise-1, flowise-2]
 
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: flowise
      POSTGRES_USER: flowise
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
 
  redis:
    image: redis:7-alpine
 
volumes:
  postgres_data:

# nginx.conf
upstream flowise {
    least_conn;
    server flowise-1:3000;
    server flowise-2:3000;
}
 
server {
    listen 80;
    location / {
        proxy_pass http://flowise;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_read_timeout 300s;  # allow long-running flows to complete
        proxy_buffering off;      # needed for streamed (SSE) chat responses
    }
}

Scaling Checklist

  • Migrate from SQLite to PostgreSQL (DATABASE_TYPE=postgres)
  • Configure Redis and enable queue mode (MODE=queue, REDIS_URL)
  • Configure S3-compatible storage for file uploads (STORAGE_TYPE=s3)
  • Use least_conn load balancing in Nginx (distributes to least-busy instance)
  • Set proxy_read_timeout high enough for your longest-running flow
  • Start one instance first so it runs the database migrations alone -- they are idempotent, but concurrent runs are noisy
  • Test failover: kill one instance and verify the load balancer routes correctly to the other