docs/ai-chat/backend.mdx — 28 additions, 28 deletions
@@ -1,15 +1,15 @@
 ---
 title: "Backend"
 sidebarTitle: "Backend"
-description: "Three approaches to building your chat backend — chat.task(), session iterator, or raw task primitives."
+description: "Three approaches to building your chat backend — chat.agent(), session iterator, or raw task primitives."
 ---

-## chat.task()
+## chat.agent()

 The highest-level approach. Handles message accumulation, stop signals, turn lifecycle, and auto-piping automatically.

 <Tip>
-  To fix a **custom** `UIMessage` subtype or typed client data schema, use the [ChatBuilder](/ai-chat/types#chatbuilder) via `chat.withUIMessage<...>()` and/or `chat.withClientData({ schema })`. Builder-level hooks can also be chained before `.task()`. See [Types](/ai-chat/types).
+  To fix a **custom** `UIMessage` subtype or typed client data schema, use the [ChatBuilder](/ai-chat/types#chatbuilder) via `chat.withUIMessage<...>()` and/or `chat.withClientData({ schema })`. Builder-level hooks can also be chained before `.agent()`. See [Types](/ai-chat/types).
 </Tip>

 ### Simple: return a StreamTextResult
@@ -21,7 +21,7 @@ import { chat } from "@trigger.dev/sdk/ai";
 import { streamText } from "ai";
 import { openai } from "@ai-sdk/openai";

-export const simpleChat = chat.task({
+export const simpleChat = chat.agent({
   id: "simple-chat",
   run: async ({ messages, signal }) => {
     return streamText({
@@ -44,7 +44,7 @@ import { streamText } from "ai";
 import { openai } from "@ai-sdk/openai";
 import type { ModelMessage } from "ai";

-export const agentChat = chat.task({
+export const agentChat = chat.agent({
   id: "agent-chat",
   run: async ({ messages }) => {
     // Don't return anything — chat.pipe is called inside
@@ -71,9 +71,9 @@ async function runAgentLoop(messages: ModelMessage[]) {
 Every chat lifecycle callback and the **`run`** payload include **`ctx`**: the same run context object as `task({ run: (payload, { ctx }) => ... })`. Import the type with **`import type { TaskRunContext } from "@trigger.dev/sdk"`** (the **`Context`** export is the same type). Use **`ctx`** for tags, metadata, or any API that needs the full run record. The string **`runId`** on chat events is always **`ctx.run.id`** (both are provided for convenience). See [Task context (`ctx`)](/ai-chat/reference#task-context-ctx) in the API reference.

-Standard **[task lifecycle hooks](/tasks/overview)** — **`onWait`**, **`onResume`**, **`onComplete`**, **`onFailure`**, etc. — are also available on **`chat.task()`** with the same shapes as on a normal `task()`.
+Standard **[task lifecycle hooks](/tasks/overview)** — **`onWait`**, **`onResume`**, **`onComplete`**, **`onFailure`**, etc. — are also available on **`chat.agent()`** with the same shapes as on a normal `task()`.

-Chat tasks also have two dedicated suspension hooks — **`onChatSuspend`** and **`onChatResume`** — that fire at the idle-to-suspended transition with full chat context. Use them for resource cleanup (e.g. tearing down sandboxes) and re-initialization. See [onChatSuspend / onChatResume](#onchatsuspend--onchatresume) and the [Code execution sandbox](/ai-chat/patterns/code-sandbox) pattern.
+Chat agents also have two dedicated suspension hooks — **`onChatSuspend`** and **`onChatResume`** — that fire at the idle-to-suspended transition with full chat context. Use them for resource cleanup (e.g. tearing down sandboxes) and re-initialization. See [onChatSuspend / onChatResume](#onchatsuspend--onchatresume) and the [Code execution sandbox](/ai-chat/patterns/code-sandbox) pattern.

 #### onPreload
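Neither side of this hunk shows the suspension hooks in use. As a rough sketch of the wiring described above (the hook payload shape is an assumption, and `createSandbox` / `teardownSandbox` are hypothetical helpers standing in for your own resource management):

```ts
import { chat } from "@trigger.dev/sdk/ai";

// Hypothetical helpers — not part of the SDK.
declare function createSandbox(chatId: string): Promise<void>;
declare function teardownSandbox(chatId: string): Promise<void>;

export const sandboxChat = chat.agent({
  id: "sandbox-chat",
  // Fires at the idle-to-suspended transition: release external resources.
  onChatSuspend: async ({ chatId }) => {
    await teardownSandbox(chatId);
  },
  // Fires when the suspended chat resumes: re-initialize before the next turn.
  onChatResume: async ({ chatId }) => {
    await createSandbox(chatId);
  },
  run: async ({ messages }) => {
    // ... normal turn handling ...
  },
});
```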
@@ -82,7 +82,7 @@ Fires when a preloaded run starts — before any messages arrive. Use it to eage
 Preloaded runs are triggered by calling `transport.preload(chatId)` on the frontend. See [Preload](/ai-chat/features#preload) for details.

@@ -125,7 +125,7 @@ Fires once on the first turn (turn 0) before `run()` executes. Use it to create
 The `continuation` field tells you whether this is a brand new chat or a continuation of an existing one (where the previous run timed out or was cancelled). The `preloaded` field tells you whether `onPreload` already ran.

 Fires after the response is captured but **before** the stream closes. The `writer` can send custom chunks that appear in the current turn — use this for post-processing indicators, compaction progress, or any data the user should see before the turn ends.

 When set to `true`, a preloaded run completes successfully after the idle timeout elapses instead of suspending. Use this for "fire and forget" preloads — if the user doesn't send a message during the idle window, the run ends cleanly.

 When stop happens mid-stream, the captured response message can contain parts in an incomplete state — tool calls stuck in `partial-call`, reasoning blocks still marked as `streaming`, etc. These can cause UI issues like permanent spinners.
-`chat.task` automatically cleans up the `responseMessage` when stop is detected before passing it to `onTurnComplete`. If you use `chat.pipe()` manually and capture response messages yourself, use `chat.cleanupAbortedParts()`:
+`chat.agent` automatically cleans up the `responseMessage` when stop is detected before passing it to `onTurnComplete`. If you use `chat.pipe()` manually and capture response messages yourself, use `chat.cleanupAbortedParts()`:
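To picture what that cleanup does, here is a minimal sketch of the normalization pass over an aborted response's parts. The part shape and the `"aborted"` state are illustrative assumptions, not the SDK's actual `UIMessage` types:

```typescript
// Illustrative part shape — a real UIMessage part carries more fields.
type Part = { type: string; state?: string };

// Sketch: parts still mid-stream when the stop signal arrived are marked
// terminal so the UI doesn't render permanent spinners.
function cleanupAbortedParts(parts: Part[]): Part[] {
  return parts.map((part) => {
    if (part.state === "partial-call" || part.state === "streaming") {
      return { ...part, state: "aborted" };
    }
    return part;
  });
}
```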
 Users can send messages while the agent is executing tool calls. With `pendingMessages`, these messages are injected between tool-call steps, steering the agent mid-execution:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   pendingMessages: {
     shouldInject: ({ steps }) => steps.length > 0,
@@ -690,7 +690,7 @@ On the frontend, the `usePendingMessages` hook handles sending, tracking, and re
 Inject context from background work into the conversation using `chat.inject()`. Combine with `chat.defer()` to run analysis between turns and inject results before the next response — self-review, RAG augmentation, safety checks, etc.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnComplete: async ({ messages }) => {
     chat.defer(
@@ -727,7 +727,7 @@ Transform model messages before they're used anywhere — in `run()`, in compact
 Use this for Anthropic cache breaks, injecting system context, stripping PII, etc.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   prepareMessages: ({ messages, reason }) => {
     // Add Anthropic cache breaks to the last message
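The transform in this hunk can be pictured as a pure function over the message array. A sketch under assumptions: the message shape is simplified, and the `providerOptions.anthropic.cacheControl` field follows the AI SDK's Anthropic provider convention (treat the exact nesting as an assumption for your SDK version):

```typescript
// Simplified stand-in for the AI SDK's ModelMessage type.
type Msg = { role: string; content: string; providerOptions?: Record<string, unknown> };

// Tag only the last message with an ephemeral cache-control hint, so the
// provider caches everything up to (and including) that point.
function addCacheBreak(messages: Msg[]): Msg[] {
  return messages.map((msg, i) =>
    i === messages.length - 1
      ? { ...msg, providerOptions: { anthropic: { cacheControl: { type: "ephemeral" } } } }
      : msg
  );
}
```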
@@ -798,7 +798,7 @@ When `streamText` encounters an error mid-stream (rate limits, API failures, net
 By default, the raw error message is sent to the frontend. Use `onError` to sanitize errors and avoid leaking internal details:

-`chat.setUIMessageStreamOptions()` works across all abstraction levels — `chat.task()`, `chat.createSession()` / `turn.complete()`, and `chat.pipeAndCapture()`.
+`chat.setUIMessageStreamOptions()` works across all abstraction levels — `chat.agent()`, `chat.createSession()` / `turn.complete()`, and `chat.pipeAndCapture()`.

 See [ChatUIMessageStreamOptions](/ai-chat/reference#chatuimessagestreamoptions) for the full reference.
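A sanitizing `onError` callback can be as simple as logging the real error server-side and returning a generic string. A sketch (the callback signature here is an assumption modeled on the AI SDK's UI message stream `onError`, which maps an error to the string sent to the client):

```typescript
// Keep the full error in the run logs; send only a generic message downstream.
function sanitizeError(error: unknown): string {
  console.error("chat stream error:", error); // internal detail stays server-side
  return "Something went wrong while generating a response. Please try again.";
}
```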
 Manual mode does not get automatic message accumulation or the `onTurnComplete`/`onChatStart`
 lifecycle hooks. The `responseMessage` field in `onTurnComplete` will be `undefined` when using
-`chat.pipe()` directly. Use `chat.task()` for the full multi-turn experience.
+`chat.pipe()` directly. Use `chat.agent()` for the full multi-turn experience.
 </Warning>
 ---

 ## chat.createSession()

-A middle ground between `chat.task()` and raw primitives. You get an async iterator that yields `ChatTurn` objects — each turn handles stop signals, message accumulation, and turn-complete signaling automatically. You control initialization, model/tool selection, persistence, and any custom per-turn logic.
+A middle ground between `chat.agent()` and raw primitives. You get an async iterator that yields `ChatTurn` objects — each turn handles stop signals, message accumulation, and turn-complete signaling automatically. You control initialization, model/tool selection, persistence, and any custom per-turn logic.

 Use `chat.createSession()` inside a standard `task()`:
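The snippet that followed this line is not part of the diff. As a hedged sketch of the session-iterator shape described above (the exact `createSession()` arguments and the `ChatTurn` members used here, `turn.messages` and `turn.complete()`, are assumptions pieced together from the surrounding prose):

```ts
import { task } from "@trigger.dev/sdk";
import { chat } from "@trigger.dev/sdk/ai";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export const sessionChat = task({
  id: "session-chat",
  run: async () => {
    // Hypothetical shape: each yielded turn has already handled stop
    // signals and message accumulation for you.
    const session = chat.createSession();
    for await (const turn of session) {
      const result = streamText({
        model: openai("gpt-4o"),
        messages: turn.messages,
      });
      await turn.complete(result);
    }
  },
});
```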
docs/ai-chat/background-injection.mdx — 2 additions, 2 deletions
@@ -31,7 +31,7 @@ Messages are appended to the model messages before the next LLM inference call.
 The most powerful pattern combines `chat.defer()` (background work) with `chat.inject()` (inject results). Background work runs in parallel with the idle wait between turns, and results are injected before the next response.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnComplete: async ({ messages }) => {
     // Kick off background analysis — doesn't block the turn
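The rest of that snippet is truncated out of the diff. Filling it in as a hedged sketch: `chat.defer()` is assumed to accept a promise of background work, `chat.inject()` a string to append, and `reviewTranscript` is a hypothetical analysis helper:

```ts
import { chat } from "@trigger.dev/sdk/ai";
import type { ModelMessage } from "ai";

// Hypothetical analysis helper — e.g. a self-review or safety check.
declare function reviewTranscript(messages: ModelMessage[]): Promise<string | null>;

export const myChat = chat.agent({
  id: "my-chat",
  onTurnComplete: async ({ messages }) => {
    // Kick off background analysis — doesn't block the turn
    chat.defer(
      (async () => {
        const notes = await reviewTranscript(messages);
        if (notes) {
          // Appended to the model messages before the next inference call.
          chat.inject(`Reviewer notes: ${notes}`);
        }
      })()
    );
  },
  run: async ({ messages }) => {
    // ... normal turn handling ...
  },
});
```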
docs/ai-chat/compaction.mdx — 5 additions, 5 deletions
@@ -8,7 +8,7 @@ description: "Automatic context compaction to keep long conversations within tok
 Long conversations accumulate tokens across turns. Eventually the context window fills up, causing errors or degraded responses. Compaction solves this by automatically summarizing the conversation when token usage exceeds a threshold, then using that summary as the context for future turns.

-The `compaction` option on `chat.task()` handles this in both paths:
+The `compaction` option on `chat.agent()` handles this in both paths:

 - **Between tool-call steps** (inner loop) — via the AI SDK's `prepareStep`, compaction runs between tool calls within a single turn
 - **Between turns** (outer loop) — for single-step responses with no tool calls, where `prepareStep` never fires
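In both paths, the trigger boils down to a threshold check on accumulated token usage. A sketch with illustrative names (this is not the actual `compaction` option shape; the 80% default is an arbitrary example):

```typescript
// Summarize once the conversation's token usage crosses a threshold
// fraction of the model's context window.
function shouldCompact(usedTokens: number, contextWindow: number, threshold = 0.8): boolean {
  return usedTokens >= contextWindow * threshold;
}
```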
@@ -22,7 +22,7 @@ import { chat } from "@trigger.dev/sdk/ai";
-The fully manual APIs only handle inner-loop compaction (between tool-call steps). For outer-loop coverage, use the `compaction` option on `chat.task()`, `chat.createSession()`, or `MessageAccumulator`.
+The fully manual APIs only handle inner-loop compaction (between tool-call steps). For outer-loop coverage, use the `compaction` option on `chat.agent()`, `chat.createSession()`, or `MessageAccumulator`.
@@ -320,7 +320,7 @@ On the frontend, render the custom data part:
 The `target` option accepts:

 - `"self"` — current run (default)
 - `"parent"` — parent task's run
-- `"root"` — root task's run (the chat task)
+- `"root"` — root task's run (the chat agent)
 - A specific run ID string

 ---
@@ -409,7 +409,7 @@ When the transport needs a trigger token for preload, your `accessToken` callbac
 On the backend, the `onPreload` hook fires immediately. The run then waits for the first message. When the user sends a message, `onChatStart` fires with `preloaded: true` — you can skip initialization that was already done in `onPreload`:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onPreload: async ({ chatId, clientData }) => {
     // Eagerly initialize — runs before the first message
docs/ai-chat/frontend.mdx — 3 additions, 3 deletions
@@ -34,7 +34,7 @@ The transport is created once on first render and reused across re-renders. Pass
 ## Typed messages (`chat.withUIMessage`)

-If your chat task is defined with [`chat.withUIMessage<YourUIMessage>()`](/ai-chat/types) (custom `data-*` parts, typed tools, etc.), pass the same message type through `useChat` so `messages` and `message.parts` are narrowed on the client:
+If your chat agent is defined with [`chat.withUIMessage<YourUIMessage>()`](/ai-chat/types) (custom `data-*` parts, typed tools, etc.), pass the same message type through `useChat` so `messages` and `message.parts` are narrowed on the client:

-Instead of manually parsing `clientData` with Zod in every hook, pass a `clientDataSchema` to `chat.task`. The schema validates the data once per turn, and `clientData` is typed in all hooks and `run`:
+Instead of manually parsing `clientData` with Zod in every hook, pass a `clientDataSchema` to `chat.agent`. The schema validates the data once per turn, and `clientData` is typed in all hooks and `run`: