docs/ai-chat/backend.mdx
The highest-level approach. Handles message accumulation, stop signals, turn lifecycle, and auto-piping automatically.
<Tip>
To fix a **custom** `UIMessage` subtype or typed client data schema, use the [ChatBuilder](/ai-chat/types#chatbuilder) via `chat.withUIMessage<...>()` and/or `chat.withClientData({ schema })`. Builder-level hooks can also be chained before `.task()`. See [Types](/ai-chat/types).
</Tip>
### Simple: return a StreamTextResult
Every chat lifecycle callback and the **`run`** payload include **`ctx`**: the same run context object as `task({ run: (payload, { ctx }) => ... })`. Import the type with **`import type { TaskRunContext } from "@trigger.dev/sdk"`** (the **`Context`** export is the same type). Use **`ctx`** for tags, metadata, or any API that needs the full run record. The string **`runId`** on chat events is always **`ctx.run.id`** (both are provided for convenience). See [Task context (`ctx`)](/ai-chat/reference#task-context-ctx) in the API reference.
Standard **[task lifecycle hooks](/tasks/overview)** — **`onWait`**, **`onResume`**, **`onComplete`**, **`onFailure`**, etc. — are also available on **`chat.task()`** with the same shapes as on a normal `task()`.
Chat tasks also have two dedicated suspension hooks — **`onChatSuspend`** and **`onChatResume`** — that fire at the idle-to-suspended transition with full chat context. Use them for resource cleanup (e.g. tearing down sandboxes) and re-initialization. See [onChatSuspend / onChatResume](#onchatsuspend--onchatresume) and the [Code execution sandbox](/ai-chat/patterns/code-sandbox) pattern.
<Tip>
For a full **conversation + session** persistence pattern (including preload, continuation, and token renewal), see [Database persistence](/ai-chat/patterns/database-persistence).
</Tip>
#### onChatSuspend / onChatResume
Chat-specific hooks that fire at the **idle-to-suspended** transition — the moment the run stops using compute and waits for the next message. These replace the need for the generic `onWait` / `onResume` task hooks for chat-specific work.
The `phase` discriminator tells you **when** the suspend/resume happened:
- `"preload"` — after `onPreload`, waiting for the first message
- `"turn"` — after `onTurnComplete`, waiting for the next message
```ts
export const myChat = chat.task({
  id: "my-chat",
  onChatSuspend: async (event) => {
    // Tear down expensive resources before suspending
    await disposeCodeSandbox(event.ctx.run.id);
    if (event.phase === "turn") {
      logger.info("Suspending after turn", { turn: event.turn });
    }
  },
  onChatResume: async (event) => {
    // Re-initialize after waking up
    logger.info("Resumed", { phase: event.phase });
  },
  run: async ({ messages, signal }) => {
    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
```
<Tip>
Unlike `onWait` (which fires for all wait types — duration, task, batch, token), `onChatSuspend` fires only at chat suspension points with full chat context. No need to filter on `wait.type`.
</Tip>
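For contrast, the `wait.type` filtering that the generic `onWait` hook requires can be sketched with plain objects. The `WaitInfo` shape and the helper below are illustrative assumptions, not the SDK's actual types:

```typescript
// Illustrative only: an assumed wait payload shape, showing the filtering
// that onChatSuspend makes unnecessary for chat-specific cleanup.
type WaitInfo = { type: "duration" | "task" | "batch" | "token" };

// With the generic onWait hook, only token waits correspond to chat
// suspension points, so cleanup logic must check wait.type first.
function isChatSuspensionPoint(wait: WaitInfo): boolean {
  return wait.type === "token";
}
```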
#### exitAfterPreloadIdle
When set to `true`, a preloaded run completes successfully after the idle timeout elapses instead of suspending. Use this for "fire and forget" preloads — if the user doesn't send a message during the idle window, the run ends cleanly.
```ts
export const myChat = chat.task({
  id: "my-chat",
  preloadIdleTimeoutInSeconds: 10,
  exitAfterPreloadIdle: true,
  onPreload: async ({ chatId, clientData }) => {
    // Eagerly set up state — if no message comes, the run just ends
    await initializeChat(chatId, clientData);
  },
  run: async ({ messages, signal }) => {
    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
```
### Using prompts
Use [AI Prompts](/ai/prompts) to manage your system prompt as versioned, overridable config. Store the resolved prompt in a lifecycle hook with `chat.prompt.set()`, then spread `chat.toStreamTextOptions()` into `streamText` — it includes the system prompt, model, config, and telemetry automatically.
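The store-then-spread flow that paragraph describes can be sketched generically. `promptStore`, `setPrompt`, and `toStreamTextOptions` below are stand-ins for `chat.prompt.set()` and `chat.toStreamTextOptions()`, not the SDK's implementation:

```typescript
// Hypothetical stand-in for the store behind chat.prompt.set(): a lifecycle
// hook resolves the prompt once and saves it for the turn.
const promptStore = { system: "" };

function setPrompt(system: string): void {
  promptStore.system = system;
}

// Stand-in for chat.toStreamTextOptions(): returns shared options that the
// caller spreads into streamText, so prompt and model travel together.
function toStreamTextOptions(): { system: string; model: string } {
  return { system: promptStore.system, model: "gpt-4o" };
}

setPrompt("You are a helpful assistant.");
// Spread the shared options, then add per-call ones alongside them.
const options = { ...toStreamTextOptions(), temperature: 0.2 };
```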
docs/ai-chat/reference.mdx
Options for `chat.task()`.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `preloadIdleTimeoutInSeconds` | `number` | Same as `idleTimeoutInSeconds` | Idle timeout after `onPreload` fires |
| `preloadTimeout` | `string` | Same as `turnTimeout` | Suspend timeout for preloaded runs |
| `uiMessageStreamOptions` | `ChatUIMessageStreamOptions` | — | Default options for `toUIMessageStream()`. Per-turn override via `chat.setUIMessageStreamOptions()` |
| `onChatSuspend` | `(event: ChatSuspendEvent) => Promise<void> \| void` | — | Fires right before the run suspends. See [onChatSuspend](/ai-chat/backend#onchatsuspend--onchatresume) |
| `onChatResume` | `(event: ChatResumeEvent) => Promise<void> \| void` | — | Fires right after the run resumes from suspension |
| `exitAfterPreloadIdle` | `boolean` | `false` | Exit run after preload idle timeout instead of suspending. See [exitAfterPreloadIdle](/ai-chat/backend#exitafterpreloadidle) |
Plus all standard [TaskOptions](/tasks/overview) — `retry`, `queue`, `machine`, `maxDuration`, **`onWait`**, **`onResume`**, **`onComplete`**, and other lifecycle hooks. Those hooks use the same parameter shapes as on a normal `task()` (including `ctx`).
Passed to the `onBeforeTurnComplete` callback. Same fields as [`TurnCompleteEvent`](#turncompleteevent).

| Field | Type | Description |
| --- | --- | --- |
| _(all TurnCompleteEvent fields)_ | | See [TurnCompleteEvent](#turncompleteevent) (includes `ctx`) |
| `writer` | [`ChatWriter`](#chatwriter) | Stream writer — the stream is still open so chunks appear in the current turn |
## ChatSuspendEvent
Passed to the `onChatSuspend` callback. A discriminated union on `phase`.
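A minimal sketch of how such a union narrows on `phase`. The field set below is assumed from the phase descriptions in the backend docs, not copied from the SDK's exact type:

```typescript
// Assumed shape: "preload" and "turn" variants sharing ctx, with a turn
// number only on the "turn" variant.
type ChatSuspendEventSketch =
  | { phase: "preload"; ctx: { run: { id: string } } }
  | { phase: "turn"; turn: number; ctx: { run: { id: string } } };

// Checking the phase discriminator narrows the event, so turn-only fields
// are available without casts.
function describeSuspend(event: ChatSuspendEventSketch): string {
  if (event.phase === "turn") {
    return `suspended after turn ${event.turn} (run ${event.ctx.run.id})`;
  }
  return `suspended after preload (run ${event.ctx.run.id})`;
}
```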
| Export | Description |
| --- | --- |
| `chat.MessageAccumulator` | Class that accumulates conversation messages across turns |
| `chat.withUIMessage(config?)` | Returns a [ChatBuilder](/ai-chat/types#chatbuilder) with a fixed `UIMessage` subtype. See [Types](/ai-chat/types) |
| `chat.withClientData({ schema })` | Returns a [ChatBuilder](/ai-chat/types#chatbuilder) with a fixed client data schema. See [Types](/ai-chat/types#typed-client-data-with-chatwithclientdata) |
## `chat.withUIMessage`
Returns a [`ChatBuilder`](/ai-chat/types#chatbuilder) with a fixed `UIMessage` subtype. Chain `.withClientData()`, hook methods, and `.task()`.
Use this when you need [`InferChatUIMessage`](#inferchatuimessage) / typed `data-*` parts / `InferUITools` to line up across backend hooks and `useChat`. Full guide: [Types](/ai-chat/types).
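The type-threading the builder performs can be sketched in miniature. `Builder` and `withUIMessage` below illustrate the pattern only; they are not the SDK's `ChatBuilder`:

```typescript
// Toy builder: each chained call fixes one type parameter, so the final
// task callback sees fully typed arguments with no extra annotations.
class Builder<TMessage, TClientData> {
  withClientData<T>(): Builder<TMessage, T> {
    return new Builder<TMessage, T>();
  }
  task(
    run: (msg: TMessage, data: TClientData) => string
  ): (msg: TMessage, data: TClientData) => string {
    return run;
  }
}

// Stand-in for chat.withUIMessage<TUIM>(): starts the chain with the
// message type fixed and client data still unknown.
function withUIMessage<TMessage>(): Builder<TMessage, unknown> {
  return new Builder<TMessage, unknown>();
}

const handler = withUIMessage<{ text: string }>()
  .withClientData<{ userId: string }>()
  .task((msg, data) => `${data.userId}: ${msg.text}`);
```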
## `chat.withClientData`
Returns a [`ChatBuilder`](/ai-chat/types#chatbuilder) with a fixed client data schema. All hooks and `run` get typed `clientData` without passing `clientDataSchema` in `.task()` options.