---
title: "Database persistence for chat"
sidebarTitle: "Database persistence"
description: "Split conversation state and live session metadata across hooks — preload, turn start, turn complete — without tying the pattern to a specific ORM or schema."
---

Durable chat runs can span **hours** and **many turns**. You usually want:

1. **Conversation state** — full **`UIMessage[]`** (or equivalent) keyed by **`chatId`**, so reloads and history views work.
2. **Live session state** — the **current Trigger `runId`**, a **scoped access token** for realtime + input streams, and optionally **`lastEventId`** for stream resume.

This page describes a **hook mapping** that works with any database. The [ai-chat reference app](https://github.com/triggerdotdev/trigger.dev/tree/main/references/ai-chat) implements the same idea with a SQL database and an ORM; adapt table and column names to your stack.

## Conceptual data model

You can use one table or two; the important split is **semantic**:

| Concept | Purpose | Typical fields |
| ------- | ------- | -------------- |
| **Conversation** | Durable transcript + display metadata | Stable id (same as **`chatId`**), serialized **`uiMessages`**, title, model choice, owner/user id, timestamps |
| **Active session** | Reconnect + resume the **same** run | Same **`chatId`** as key (or FK), **current `runId`**, **`publicAccessToken`** (or your stored PAT), optional **`lastEventId`** |

The **conversation** row is what your UI lists as “chats.” The **session** row is what the **transport** needs after a refresh or token expiry: *which run is live* and *how to authenticate* to it.

<Note>
  Store **`UIMessage[]`** in a JSON-compatible column, or normalize to a messages table — the pattern is *when* you read/write, not *how* you encode rows.
</Note>
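As a sketch, the two row shapes might look like this in TypeScript. The field names here are illustrative assumptions, not a required schema; adapt them to your ORM.

```typescript
// Illustrative row shapes — field names are assumptions, adapt to your ORM.

type ConversationRow = {
  id: string;            // same value as chatId
  userId: string;        // owner / tenant
  title: string | null;  // display metadata
  uiMessages: unknown[]; // serialized UIMessage[] (JSON column or normalized)
  updatedAt: Date;
};

type SessionRow = {
  chatId: string;            // key (or FK to ConversationRow.id)
  runId: string;             // the *current* Trigger run
  publicAccessToken: string; // scoped token for realtime + input streams
  lastEventId?: string;      // optional cursor for stream resume
};
```

Whether these live in one table or two, the session fields are the only ones the transport needs after a refresh.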

## Where each hook writes

### `onPreload` (optional)

When the user triggers [preload](/ai-chat/features#preload), the run starts **before** the first user message.

- Ensure the **conversation** row exists (create or no-op).
- **Upsert session**: **`runId`**, **`chatAccessToken`** from the event (this is the turn-scoped token for that run).
- Load any **user / tenant context** you need for prompts (`clientData`).

If you skip preload, do the equivalent in **`onChatStart`** when **`preloaded`** is false.

### `onChatStart` (turn 0, non-preloaded path)

- If **`preloaded`** is true, return early — **`onPreload`** already ran.
- Otherwise mirror preload: user/context, conversation create, session upsert.
- If **`continuation`** is true, the conversation row usually **already exists** (previous run ended or timed out); only update **session** fields so the **new** run id and token are stored.

### `onTurnStart`

- Persist **`uiMessages`** (full accumulated history including the new user turn) **before** streaming starts — so a mid-stream refresh still shows the user’s message.
- Optionally use [`chat.defer()`](/ai-chat/features#chat-defer) so the write does not block the model if your driver is slow.

### `onTurnComplete`

- Persist **`uiMessages`** again with the **assistant** reply finalized.
- **Upsert session** with **`runId`**, fresh **`chatAccessToken`**, and **`lastEventId`** from the event.

**`lastEventId`** lets the frontend [resume](/ai-chat/frontend) without replaying SSE events it already applied. If you care about duplicate chunks after refresh, treat it as part of session state, not optional polish.
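To make the resume semantics concrete, here is a sketch of the dedup step a client performs with a stored cursor. The event shape and function are hypothetical (the real transport handles this for you); it only illustrates why `lastEventId` belongs in session state.

```typescript
// Hypothetical resume filter: given a replayed event list and the last event
// id the client already applied, return only the events still to apply.
function eventsToApply<T extends { id: string }>(
  events: T[],
  lastEventId: string | undefined
): T[] {
  if (!lastEventId) return events; // fresh client: apply everything
  const idx = events.findIndex((e) => e.id === lastEventId);
  // Unknown cursor (e.g. history trimmed): fall back to the full replay.
  return idx === -1 ? events : events.slice(idx + 1);
}
```

Without the persisted cursor, a refresh mid-stream replays chunks the UI already rendered.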

## Token renewal (app server)

Turn tokens expire (see **`chatAccessTokenTTL`** on **`chat.task`**). When the transport gets **401** on realtime or input streams, mint a **new** public access token with the **same** scopes the task uses — typically **read** for that **`runId`** and **write** for **input streams** on that run — then **persist** it on your **session** row.

Your **Next.js server action**, **Remix action**, or **API route** should:

1. Load **session** by **`chatId`** → **`runId`**.
2. Call **`auth.createPublicToken`** (or your platform’s equivalent) with those scopes.
3. Save the new token (and confirm **`runId`** is unchanged unless you started a new run).

No Trigger task code needs to run for renewal.
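The three steps above can be sketched as one small handler. `mintToken` stands in for whatever mints the scoped token (e.g. a wrapper around `auth.createPublicToken`), and a plain `Map` stands in for your session table — both are assumptions, not a prescribed API.

```typescript
// Sketch of a renewal handler: load session, mint a fresh scoped token,
// persist it. The token minter is injected so the DB and SDK stay abstract.
type Session = { chatId: string; runId: string; publicAccessToken: string };
type MintToken = (runId: string) => Promise<string>;

async function renewSessionToken(
  sessions: Map<string, Session>,
  mintToken: MintToken,
  chatId: string
): Promise<Session> {
  // 1. Load the session row to find which run is live.
  const session = sessions.get(chatId);
  if (!session) throw new Error(`No active session for chat ${chatId}`);

  // 2. Mint a fresh token scoped to that run (read + input-stream write).
  const publicAccessToken = await mintToken(session.runId);

  // 3. Persist the new token; runId is unchanged — same run, new credential.
  const renewed = { ...session, publicAccessToken };
  sessions.set(chatId, renewed);
  return renewed;
}
```

Call this from the route your transport hits on **401**; the run itself never notices the renewal.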

## Minimal pseudocode

```typescript
// Pseudocode — replace saveConversation / saveSession with your DB layer.
// Import paths are illustrative; `chat` comes from your chat SDK entry point.
import { z } from "zod";

chat.task({
  id: "my-chat",
  clientDataSchema: z.object({ userId: z.string() }),

  onPreload: async ({ chatId, runId, chatAccessToken, clientData }) => {
    if (!clientData) return;
    await ensureUser(clientData.userId);
    await upsertConversation({ id: chatId, userId: clientData.userId /* ... */ });
    await upsertSession({ chatId, runId, publicAccessToken: chatAccessToken });
  },

  onChatStart: async ({ chatId, runId, chatAccessToken, clientData, continuation, preloaded }) => {
    if (preloaded) return;
    if (!clientData) return;
    await ensureUser(clientData.userId);
    if (!continuation) {
      await upsertConversation({ id: chatId, userId: clientData.userId /* ... */ });
    }
    await upsertSession({ chatId, runId, publicAccessToken: chatAccessToken });
  },

  onTurnStart: async ({ chatId, uiMessages }) => {
    // Write the user turn before streaming; defer so a slow driver
    // does not block the model.
    chat.defer(saveConversationMessages(chatId, uiMessages));
  },

  onTurnComplete: async ({ chatId, uiMessages, runId, chatAccessToken, lastEventId }) => {
    await saveConversationMessages(chatId, uiMessages);
    await upsertSession({
      chatId,
      runId,
      publicAccessToken: chatAccessToken,
      lastEventId,
    });
  },

  run: async ({ messages, signal }) => {
    /* streamText, etc. */
  },
});
```

## Design notes

- **`chatId`** is stable for the life of a thread; **`runId`** changes when the user starts a **new** run (timeout, cancel, explicit new chat). Session rows must always reflect the **current** run.
- **`continuation: true`** means “same logical chat, new run” — update session, don’t assume an empty conversation.
- Keep **task modules** that perform writes **out of** browser bundles; the pattern assumes persistence runs **in the worker** (or your BFF that the task calls).
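One way to read the first two notes: session upserts are keyed by `chatId` and simply overwrite the run fields, so a continuation run replaces the stale `runId` and token rather than creating a second row. A minimal in-memory stand-in for the session store (names assumed):

```typescript
// In-memory stand-in for the session table — keyed by chatId, so there is
// only ever one live session per chat and an upsert overwrites run fields.
type LiveSession = { runId: string; publicAccessToken: string; lastEventId?: string };

const sessions = new Map<string, LiveSession>();

function upsertSession(chatId: string, next: LiveSession): void {
  // Continuation: same chatId, new runId/token — the old row is replaced.
  sessions.set(chatId, next);
}
```

In SQL terms this is an `INSERT ... ON CONFLICT (chat_id) DO UPDATE`; the invariant is one live session per chat.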

## See also

- [Backend — Lifecycle hooks](/ai-chat/backend#lifecycle-hooks)
- [Session management](/ai-chat/frontend#session-management) — `resume`, `lastEventId`, transport
- [`chat.defer()`](/ai-chat/features#chat-defer) — non-blocking writes during a turn
- [Code execution sandbox](/ai-chat/patterns/code-sandbox) — combines **`onWait`** / **`onComplete`** with this persistence model