Most real-world AI workflows aren't a single background task — they're pipelines. You embed a document, store the vectors, notify a webhook, then send a summary email. Each step can fail independently. Trigger.dev's task API is designed exactly for this: named steps that checkpoint progress, automatic retries on failure, and built-in delay primitives that don't consume worker threads.
This article covers three practical patterns: chaining jobs, implementing delays, and configuring retry logic — illustrated with an AI document processing pipeline.
## Pattern 1: Chaining Jobs with `sendEvent`
Rather than building one monolithic job, break workflows into independent jobs triggered by events. This keeps each job small, testable, and independently retryable.
```typescript
// trigger/jobs/process-document.ts
import { eventTrigger } from "@trigger.dev/sdk";
import { z } from "zod";
import OpenAI from "openai";
import { client } from "@/trigger";

// `fetchAndParseDocument`, `chunkText`, and `supabase` are app-level helpers (not shown).

// Job 1: Parse and chunk the uploaded document
client.defineJob({
  id: "process-document",
  name: "Process Document",
  version: "1.0.0",
  trigger: eventTrigger({
    name: "document.uploaded",
    schema: z.object({ documentId: z.string(), url: z.string() }),
  }),
  run: async (payload, io) => {
    const chunks = await io.runTask("parse-document", async () => {
      const text = await fetchAndParseDocument(payload.url);
      return chunkText(text, { size: 512, overlap: 50 });
    });

    await io.logger.info(`Parsed ${chunks.length} chunks`);

    // Trigger the next job in the pipeline
    await io.sendEvent("trigger-embedding", {
      name: "document.chunked",
      payload: {
        documentId: payload.documentId,
        chunks,
      },
    });

    return { chunksCount: chunks.length };
  },
});

// Job 2: Embed and store vectors
client.defineJob({
  id: "embed-document",
  name: "Embed Document Chunks",
  version: "1.0.0",
  trigger: eventTrigger({
    name: "document.chunked",
    schema: z.object({
      documentId: z.string(),
      chunks: z.array(z.string()),
    }),
  }),
  run: async (payload, io) => {
    const embeddings = await io.runTask("generate-embeddings", async () => {
      const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
      const response = await openai.embeddings.create({
        model: "text-embedding-3-small",
        input: payload.chunks,
      });
      return response.data.map((d) => d.embedding);
    });

    await io.runTask("store-vectors", async () => {
      await supabase.from("document_chunks").insert(
        payload.chunks.map((chunk, i) => ({
          document_id: payload.documentId,
          content: chunk,
          embedding: embeddings[i],
        }))
      );
    });

    // Chain to notification job
    await io.sendEvent("notify-complete", {
      name: "document.embedded",
      payload: { documentId: payload.documentId },
    });
  },
});
```
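From application code (an upload route, for instance), the pipeline is started by sending the initial `document.uploaded` event with `client.sendEvent`. A minimal sketch: the `documentUploadedEvent` helper is hypothetical, added only to keep the event envelope's shape in one place.

```typescript
// Hypothetical helper: builds the event envelope that client.sendEvent expects.
type DocumentUploadedEvent = {
  name: "document.uploaded";
  payload: { documentId: string; url: string };
};

function documentUploadedEvent(documentId: string, url: string): DocumentUploadedEvent {
  return { name: "document.uploaded", payload: { documentId, url } };
}

// In an upload handler (sketch):
// await client.sendEvent(documentUploadedEvent(doc.id, doc.url));
```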
## Pattern 2: Built-in Delays
`io.wait()` pauses execution for a specified number of seconds without blocking a worker thread. Trigger serializes the job state and resumes it later — similar to how durable execution works in Temporal.
```typescript
client.defineJob({
  id: "onboarding-sequence",
  name: "Onboarding Email Sequence",
  version: "1.0.0",
  trigger: eventTrigger({
    name: "user.created",
    schema: z.object({ userId: z.string(), email: z.string() }),
  }),
  run: async (payload, io) => {
    // Day 0: Welcome email (immediate)
    await io.runTask("welcome-email", async () => {
      await sendEmail(payload.email, "welcome");
    });

    // Day 3: Tips email (wait 3 days)
    await io.wait("wait-3-days", 3 * 24 * 60 * 60); // seconds

    await io.runTask("tips-email", async () => {
      await sendEmail(payload.email, "tips");
    });

    // Day 7: Check-in email (wait 4 more days)
    await io.wait("wait-4-more-days", 4 * 24 * 60 * 60);

    await io.runTask("checkin-email", async () => {
      await sendEmail(payload.email, "checkin");
    });
  },
});
```
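The literal string keys above work because each step runs exactly once per job run. When steps are generated in a loop, the key must incorporate the loop index so every checkpoint stays distinct. A sketch, with a hypothetical `stepKey` helper:

```typescript
// Hypothetical helper: derive a unique, stable key for each loop iteration.
function stepKey(base: string, index: number): string {
  return `${base}-${index}`;
}

// Inside a job's run function (sketch):
// for (let i = 0; i < drip.length; i++) {
//   await io.wait(stepKey("wait", i), drip[i].delaySeconds);
//   await io.runTask(stepKey("send", i), () => sendEmail(payload.email, drip[i].template));
// }
```

Stable keys matter on replay too: the same iteration must produce the same key, so avoid anything non-deterministic (timestamps, random IDs) in key construction.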
Each `io.wait()` key must be unique within a job run. Trigger uses these keys as idempotency checkpoints — if the job resumes after the wait, it skips already-completed tasks.

## Pattern 3: Retry Configuration
Tasks retry automatically on failure. Configure retry behavior per task or globally on the job.
```typescript
await io.runTask(
  "call-external-api",
  async () => {
    const response = await fetch("https://slow-api.example.com/data");
    if (!response.ok) throw new Error(`API error: ${response.status}`);
    return response.json();
  },
  {
    name: "Call External API",
    retry: {
      limit: 5,              // max attempts
      factor: 2,             // exponential backoff multiplier
      minTimeoutInMs: 1000,  // 1s initial delay
      maxTimeoutInMs: 30000, // 30s max delay
      randomize: true,       // add jitter to avoid thundering herd
    },
  }
);
```
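To build intuition for these options, here is the delay schedule they imply, assuming the conventional exponential-backoff formula (delay grows by `factor` per attempt, starting at `minTimeoutInMs` and capped at `maxTimeoutInMs`). This is a sketch of the usual semantics, not Trigger's internal code, and `randomize` jitter is omitted:

```typescript
// Sketch of a standard exponential backoff schedule (not Trigger's internals).
function backoffDelayMs(
  attempt: number, // 1-based: delay before retry attempt N
  opts: { factor: number; minTimeoutInMs: number; maxTimeoutInMs: number }
): number {
  const raw = opts.minTimeoutInMs * Math.pow(opts.factor, attempt - 1);
  return Math.min(raw, opts.maxTimeoutInMs);
}

const opts = { factor: 2, minTimeoutInMs: 1000, maxTimeoutInMs: 30000 };
// Schedule: 1s, 2s, 4s, 8s, 16s, then capped at 30s for later attempts.
```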
## Combining All Three Patterns: Document Pipeline
```typescript
client.defineJob({
  id: "full-document-pipeline",
  name: "Full Document Pipeline",
  version: "1.0.0",
  trigger: eventTrigger({ name: "document.uploaded" }),
  run: async (payload, io) => {
    // Step 1: Parse (retry up to 3x)
    const chunks = await io.runTask("parse", () => parseDocument(payload.url), {
      retry: { limit: 3, minTimeoutInMs: 2000, factor: 2 },
    });

    // Step 2: Embed (retry up to 5x — OpenAI can rate-limit)
    const embeddings = await io.runTask("embed", () => embedChunks(chunks), {
      retry: { limit: 5, minTimeoutInMs: 1000, factor: 2, randomize: true },
    });

    // Step 3: Store
    await io.runTask("store", () => storeVectors(payload.documentId, embeddings));

    // Step 4: Wait 1 hour, then notify (good for async review flows)
    await io.wait("processing-buffer", 3600);

    // Step 5: Send completion notification
    await io.runTask("notify", () =>
      notifyUser(payload.userId, "Your document is ready for search.")
    );
  },
});
```
## Monitoring and Replaying
Every task appears in the Trigger dashboard with its status, duration, input, and output. If a job fails mid-pipeline (say, the embed step fails permanently after retries), you can fix the bug, redeploy, and replay the run — Trigger replays from the last successful checkpoint, skipping the parse step and re-attempting only the embed step.
Replay works correctly only if your tasks are idempotent. Storing vectors twice for the same document ID should either upsert or check for existence before inserting.
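One way to make the store step idempotent is to derive a deterministic row key from the document ID and chunk index, then upsert on it. A sketch: the `toChunkRows` helper and the `id` column are assumptions about your schema, not part of the earlier code.

```typescript
// Hypothetical: rows keyed deterministically, so a replayed store step upserts
// the same rows instead of inserting duplicates.
type ChunkRow = {
  id: string; // deterministic: same value on every replay
  document_id: string;
  chunk_index: number;
  content: string;
};

function toChunkRows(documentId: string, chunks: string[]): ChunkRow[] {
  return chunks.map((content, chunk_index) => ({
    id: `${documentId}:${chunk_index}`,
    document_id: documentId,
    chunk_index,
    content,
  }));
}

// In the store task (sketch):
// await supabase.from("document_chunks").upsert(toChunkRows(payload.documentId, chunks));
```

Because the `id` depends only on the document and chunk position, running the store step twice writes the same rows twice, which the upsert resolves to a single copy.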