PlanetScale works with any MySQL-compatible client, but there are specific patterns that matter for AI applications — especially around connection pooling in serverless environments and integrating with frameworks like LangChain, whose built-in integrations often assume a Postgres-style connection.

Connection Pooling in Serverless Environments

PlanetScale includes a built-in connection pooler, but the maximum number of connections per branch depends on your plan. In serverless deployments on Vercel, each function invocation may open a new connection, which can quickly exhaust the pool.

// Use PlanetScale's HTTP driver for serverless (no TCP connection needed)
import { connect } from '@planetscale/database';
 
// This driver works in Vercel Edge Functions and Cloudflare Workers
const conn = connect({
  host:     process.env.DATABASE_HOST,
  username: process.env.DATABASE_USERNAME,
  password: process.env.DATABASE_PASSWORD,
});
 
// Execute a query
const results = await conn.execute(
  'SELECT * FROM documents WHERE user_id = ?',
  [userId]
);
console.log(results.rows);

Prisma + PlanetScale Setup

npm install @prisma/client
npm install -D prisma
// schema.prisma
generator client {
  provider = "prisma-client-js"
}
 
datasource db {
  provider     = "mysql"
  url          = env("DATABASE_URL")
  relationMode = "prisma"
}
 
model Conversation {
  id        String    @id @default(cuid())
  userId    String
  messages  Message[]
  createdAt DateTime  @default(now())
  @@index([userId])
}
 
model Message {
  id             String       @id @default(cuid())
  conversationId String
  conversation   Conversation @relation(fields: [conversationId], references: [id])
  role           String       // 'user' | 'assistant' | 'system'
  content        String       @db.LongText
  createdAt      DateTime     @default(now())
  @@index([conversationId])
}
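
With the schema in place, PlanetScale's recommended Prisma workflow is to push changes to a development branch with prisma db push rather than prisma migrate — the deploy request, not Prisma, handles the production migration. A sketch of the commands, assuming DATABASE_URL points at your branch:

```shell
# DATABASE_URL comes from the PlanetScale dashboard ("Connect" → Prisma);
# it looks like mysql://<user>:<password>@<host>/<database>?sslaccept=strict
npx prisma db push    # sync schema.prisma to your PlanetScale branch
npx prisma generate   # regenerate the typed Prisma client
```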

Recipe: LangChain Conversation History in PlanetScale

# Store LangChain conversation history in PlanetScale
import pymysql, os
from langchain.schema import HumanMessage, AIMessage
 
def get_db():
    return pymysql.connect(
        host=os.environ['DB_HOST'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD'],
        database=os.environ['DB_NAME'],
        ssl={'ca': '/etc/ssl/certs/ca-certificates.crt'}
    )
 
def load_history(conversation_id: str) -> list:
    db = get_db()
    with db.cursor() as cur:
        cur.execute(
            'SELECT role, content FROM messages '
            'WHERE conversation_id = %s ORDER BY created_at',
            (conversation_id,)
        )
        rows = cur.fetchall()
    db.close()
 
    messages = []
    for role, content in rows:
        if role == 'user':
            messages.append(HumanMessage(content=content))
        else:
            messages.append(AIMessage(content=content))
    return messages
 
def save_message(conversation_id: str, role: str, content: str):
    db = get_db()
    with db.cursor() as cur:
        cur.execute(
            'INSERT INTO messages (conversation_id, role, content) VALUES (%s, %s, %s)',
            (conversation_id, role, content)
        )
    db.commit()
    db.close()
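
Chat history grows without bound, and every model has a context limit, so you'll usually want to cap how much of the stored history you replay. Here's a minimal sketch of a trimming helper — trim_history and its character budget are illustrative additions, not part of the recipe above, and a production version would count tokens rather than characters:

```python
def trim_history(rows, max_chars=8000):
    """Keep the most recent (role, content) rows whose combined length fits max_chars.

    Always keeps at least the latest message, even if it alone exceeds the budget.
    """
    kept, total = [], 0
    for role, content in reversed(rows):  # walk newest -> oldest
        if kept and total + len(content) > max_chars:
            break  # budget exhausted; drop everything older
        kept.append((role, content))
        total += len(content)
    return list(reversed(kept))  # restore chronological order
```

A natural place to call it is inside load_history, between the SQL fetch and the HumanMessage/AIMessage construction.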

Schema Migration Best Practices with PlanetScale

  • Always create a branch before making schema changes — never run DDL directly against main
  • Test your migration on the branch with production-scale data volumes before creating a deploy request
  • Add indexes as a separate deploy request from column additions — index creation is the slow part
  • Use additive changes first (add a nullable column, populate it, then make it NOT NULL) for zero-downtime migrations
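
The additive pattern in that last bullet works out to three separate deploy requests. The table and column names below are illustrative only (a hypothetical token_count column on the messages table from the recipe above):

```sql
-- Deploy request 1: add the column as nullable
ALTER TABLE messages ADD COLUMN token_count INT NULL;

-- Backfill from the application in batches, between deploy requests:
-- UPDATE messages SET token_count = ... WHERE token_count IS NULL LIMIT 1000;

-- Deploy request 2: once fully backfilled, enforce the constraint
ALTER TABLE messages MODIFY COLUMN token_count INT NOT NULL;

-- Deploy request 3: add any index separately (index builds are the slow part)
ALTER TABLE messages ADD INDEX idx_token_count (token_count);
```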
Metadata

Title: Connecting PlanetScale to Your AI Stack: Prisma, LangChain, and Connection Pooling
Tool: PlanetScale
Primary SEO keyword: planetscale prisma langchain
Secondary keywords: planetscale connection pooling, planetscale serverless, planetscale mysql driver, planetscale python
Estimated read time: 8 minutes
Research date: 2026-04-14