---
title: Resumable Streams
description: Handle network interruptions, page refreshes, and timeouts without losing agent progress.
type: guide
summary: Reconnect to interrupted agent streams using `WorkflowChatTransport` without losing progress.
prerequisites:
  - /docs/ai
  - /docs/foundations/streaming
related:
  - /docs/ai/chat-session-modeling
  - /docs/api-reference/workflow-ai/workflow-chat-transport
  - /docs/api-reference/workflow-api/get-run
---

When building chat interfaces, it's common to run into network interruptions, page refreshes, or serverless function timeouts, any of which can break the connection to an in-progress agent. Where a standard chat implementation would require the user to resend their message and wait for the entire response again, workflow runs are durable, and so are the streams attached to them. This means a stream can be resumed at any point, optionally syncing only the data that was missed since the last connection.

Resumable streams come out of the box with Workflow DevKit. However, the client needs to recognize that a stream exists, know which stream to reconnect to, and know where to start from. For this, Workflow DevKit provides the [`WorkflowChatTransport`](/docs/api-reference/workflow-ai/workflow-chat-transport) helper, a drop-in transport for the AI SDK that handles client-side resumption logic for you.

## Implementing stream resumption

Let's add stream resumption to the Flight Booking Agent that we built in the [Building Durable AI Agents](/docs/ai) guide.

<Steps>
<Step>

### Return the Run ID from Your API

Modify your chat endpoint to include the workflow run ID in a response header. The run ID uniquely identifies the run's stream, so it lets the client know which stream to reconnect to.

{/* @skip-typecheck: incomplete code sample */}

```typescript title="app/api/chat/route.ts" lineNumbers
// ... imports ...

export async function POST(req: Request) {
  // ... existing logic to create the workflow ...
  const run = await start(chatWorkflow, [modelMessages]);

  return createUIMessageStreamResponse({
    stream: run.readable,
    headers: { // [!code highlight]
      "x-workflow-run-id": run.runId, // [!code highlight]
    }, // [!code highlight]
  });
}
```

</Step>
<Step>

### Add a Stream Reconnection Endpoint

Currently we only have one API endpoint, which always creates a new run, so we need a new API route that returns the stream for an existing run:

```typescript title="app/api/chat/[id]/stream/route.ts" lineNumbers
import { createUIMessageStreamResponse } from "ai";
import { getRun } from "workflow/api"; // [!code highlight]

export async function GET(
  request: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  const { searchParams } = new URL(request.url);

  // Client provides the last chunk index they received
  const startIndexParam = searchParams.get("startIndex"); // [!code highlight]
  const startIndex = startIndexParam
    ? parseInt(startIndexParam, 10)
    : undefined;

  // Instead of starting a new run, we fetch an existing run.
  const run = getRun(id); // [!code highlight]
  const stream = run.getReadable({ startIndex }); // [!code highlight]

  return createUIMessageStreamResponse({ stream }); // [!code highlight]
}
```

The `startIndex` parameter lets the client choose where to resume the stream from. For instance, if the function times out during streaming, the chat transport will use `startIndex` to resume the stream exactly from the last token it received.

</Step>
<Step>

### Use `WorkflowChatTransport` in the Client

Replace the default transport in the AI SDK's `useChat` with [`WorkflowChatTransport`](/docs/api-reference/workflow-ai/workflow-chat-transport), and update the callbacks to store and use the latest run ID. For now, we'll store the run ID in localStorage. In your own app, this would live wherever you store session information.
```typescript title="app/page.tsx" lineNumbers
"use client";

import { useChat } from "@ai-sdk/react";
import { WorkflowChatTransport } from "@workflow/ai"; // [!code highlight]
import { useMemo, useState } from "react";

export default function ChatPage() {
  // Check for an active workflow run on mount
  const activeRunId = useMemo(() => { // [!code highlight]
    if (typeof window === "undefined") return; // [!code highlight]
    return localStorage.getItem("active-workflow-run-id") ?? undefined; // [!code highlight]
  }, []); // [!code highlight]

  const { messages, sendMessage, status } = useChat({
    resume: Boolean(activeRunId), // [!code highlight]
    transport: new WorkflowChatTransport({ // [!code highlight]
      api: "/api/chat",

      // Store the run ID when a new chat starts
      onChatSendMessage: (response) => { // [!code highlight]
        const workflowRunId = response.headers.get("x-workflow-run-id"); // [!code highlight]
        if (workflowRunId) { // [!code highlight]
          localStorage.setItem("active-workflow-run-id", workflowRunId); // [!code highlight]
        } // [!code highlight]
      }, // [!code highlight]

      // Clear the run ID when the chat completes
      onChatEnd: () => { // [!code highlight]
        localStorage.removeItem("active-workflow-run-id"); // [!code highlight]
      }, // [!code highlight]

      // Use the stored run ID for reconnection
      prepareReconnectToStreamRequest: ({ api, ...rest }) => { // [!code highlight]
        const runId = localStorage.getItem("active-workflow-run-id"); // [!code highlight]
        if (!runId) throw new Error("No active workflow run ID found"); // [!code highlight]
        return { // [!code highlight]
          ...rest, // [!code highlight]
          api: `/api/chat/${encodeURIComponent(runId)}/stream`, // [!code highlight]
        }; // [!code highlight]
      }, // [!code highlight]
    }), // [!code highlight]
  });

  // ... render your chat UI
}
```

</Step>
</Steps>

Now try the flight booking example again. Open it in a separate tab, or spam the refresh button, and watch the client connect to the same chat stream every time.

## How It Works

1. When the user sends a message, `WorkflowChatTransport` makes a POST request to `/api/chat`.
2. The API starts a workflow and returns the run ID in the `x-workflow-run-id` header.
3. `onChatSendMessage` stores this run ID in localStorage.
4. If the stream is interrupted before a "finish" chunk is received, the transport automatically reconnects.
5. `prepareReconnectToStreamRequest` builds the reconnection URL from the stored run ID, pointing at the new `/api/chat/{runId}/stream` endpoint.
6. The reconnection endpoint returns the stream from where the client left off.
7. When the stream completes, `onChatEnd` clears the stored run ID.

This approach also handles page refreshes: when the UI loads with a stored run ID, the client automatically reconnects to the stream from the last known position, following the behavior of [AI SDK's stream resumption](https://ai-sdk.dev/docs/ai-sdk-ui/chatbot-resume-streams#chatbot-resume-streams).

## Related Documentation

- [`WorkflowChatTransport` API Reference](/docs/api-reference/workflow-ai/workflow-chat-transport) - Full configuration options
- [Streaming](/docs/foundations/streaming) - Understanding workflow streams
- [`getRun()` API Reference](/docs/api-reference/workflow-api/get-run) - Retrieving existing runs
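As an aside, the reconnection URL built in step 5 of the flow above is easy to get subtly wrong: forgetting to URL-encode the run ID, or dropping the `startIndex` query parameter that the reconnection endpoint reads. Here is a minimal sketch of that URL contract as a standalone helper, assuming the route shape from the reconnection endpoint above; `buildReconnectUrl` is a hypothetical name for illustration, not part of `@workflow/ai`:

```typescript
// Hypothetical helper (not part of @workflow/ai): builds the reconnection
// URL for an /api/chat/[id]/stream route as set up in this guide.
function buildReconnectUrl(runId: string, startIndex?: number): string {
  // Run IDs are opaque strings, so always URL-encode them.
  const base = `/api/chat/${encodeURIComponent(runId)}/stream`;
  // Only append startIndex when the client knows its last chunk index;
  // omitting it asks the server to replay the stream from the beginning.
  return startIndex === undefined ? base : `${base}?startIndex=${startIndex}`;
}

console.log(buildReconnectUrl("run_abc123", 42));
// "/api/chat/run_abc123/stream?startIndex=42"
```

In practice `WorkflowChatTransport` handles this for you via `prepareReconnectToStreamRequest`; the sketch only makes the URL shape explicit so you can match it when writing the server route.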