Commit 63d4719

docs: rename chat.task to chat.agent across all AI docs

1 parent 0dd6b11

15 files changed: +111 −109 lines
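A rename of this shape is mechanical and scriptable. A minimal sketch of the substitution — the pattern list is an illustrative assumption, not the tooling actually used for this commit:

```python
import re

# Ordered patterns covering the call form (`chat.task(`), the bare
# backtick-quoted reference, and the builder-chained `.task()` mention.
# These patterns are assumptions, not the actual commit tooling.
PATTERNS = [
    (re.compile(r"chat\.task\("), "chat.agent("),
    (re.compile(r"`chat\.task`"), "`chat.agent`"),
    (re.compile(r"\.task\(\)"), ".agent()"),
]

def rename(text: str) -> str:
    """Apply every rename pattern to a chunk of doc text."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(rename("export const myChat = chat.task({"))
# → export const myChat = chat.agent({
```

Running the same function over each `.mdx` file would produce the per-file changes shown below.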

docs/ai-chat/backend.mdx

Lines changed: 28 additions & 28 deletions

@@ -1,15 +1,15 @@
 ---
 title: "Backend"
 sidebarTitle: "Backend"
-description: "Three approaches to building your chat backend — chat.task(), session iterator, or raw task primitives."
+description: "Three approaches to building your chat backend — chat.agent(), session iterator, or raw task primitives."
 ---

-## chat.task()
+## chat.agent()

 The highest-level approach. Handles message accumulation, stop signals, turn lifecycle, and auto-piping automatically.

 <Tip>
-  To fix a **custom** `UIMessage` subtype or typed client data schema, use the [ChatBuilder](/ai-chat/types#chatbuilder) via `chat.withUIMessage<...>()` and/or `chat.withClientData({ schema })`. Builder-level hooks can also be chained before `.task()`. See [Types](/ai-chat/types).
+  To fix a **custom** `UIMessage` subtype or typed client data schema, use the [ChatBuilder](/ai-chat/types#chatbuilder) via `chat.withUIMessage<...>()` and/or `chat.withClientData({ schema })`. Builder-level hooks can also be chained before `.agent()`. See [Types](/ai-chat/types).
 </Tip>

 ### Simple: return a StreamTextResult

@@ -21,7 +21,7 @@ import { chat } from "@trigger.dev/sdk/ai";
 import { streamText } from "ai";
 import { openai } from "@ai-sdk/openai";

-export const simpleChat = chat.task({
+export const simpleChat = chat.agent({
   id: "simple-chat",
   run: async ({ messages, signal }) => {
     return streamText({

@@ -44,7 +44,7 @@ import { streamText } from "ai";
 import { openai } from "@ai-sdk/openai";
 import type { ModelMessage } from "ai";

-export const agentChat = chat.task({
+export const agentChat = chat.agent({
   id: "agent-chat",
   run: async ({ messages }) => {
     // Don't return anything — chat.pipe is called inside

@@ -71,9 +71,9 @@ async function runAgentLoop(messages: ModelMessage[]) {

 Every chat lifecycle callback and the **`run`** payload include **`ctx`**: the same run context object as `task({ run: (payload, { ctx }) => ... })`. Import the type with **`import type { TaskRunContext } from "@trigger.dev/sdk"`** (the **`Context`** export is the same type). Use **`ctx`** for tags, metadata, or any API that needs the full run record. The string **`runId`** on chat events is always **`ctx.run.id`** (both are provided for convenience). See [Task context (`ctx`)](/ai-chat/reference#task-context-ctx) in the API reference.

-Standard **[task lifecycle hooks](/tasks/overview)** — **`onWait`**, **`onResume`**, **`onComplete`**, **`onFailure`**, etc. — are also available on **`chat.task()`** with the same shapes as on a normal `task()`.
+Standard **[task lifecycle hooks](/tasks/overview)** — **`onWait`**, **`onResume`**, **`onComplete`**, **`onFailure`**, etc. — are also available on **`chat.agent()`** with the same shapes as on a normal `task()`.

-Chat tasks also have two dedicated suspension hooks — **`onChatSuspend`** and **`onChatResume`** — that fire at the idle-to-suspended transition with full chat context. Use them for resource cleanup (e.g. tearing down sandboxes) and re-initialization. See [onChatSuspend / onChatResume](#onchatsuspend--onchatresume) and the [Code execution sandbox](/ai-chat/patterns/code-sandbox) pattern.
+Chat agents also have two dedicated suspension hooks — **`onChatSuspend`** and **`onChatResume`** — that fire at the idle-to-suspended transition with full chat context. Use them for resource cleanup (e.g. tearing down sandboxes) and re-initialization. See [onChatSuspend / onChatResume](#onchatsuspend--onchatresume) and the [Code execution sandbox](/ai-chat/patterns/code-sandbox) pattern.

 #### onPreload

@@ -82,7 +82,7 @@ Fires when a preloaded run starts — before any messages arrive. Use it to eage
 Preloaded runs are triggered by calling `transport.preload(chatId)` on the frontend. See [Preload](/ai-chat/features#preload) for details.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   clientDataSchema: z.object({ userId: z.string() }),
   onPreload: async ({ ctx, chatId, clientData, runId, chatAccessToken }) => {

@@ -125,7 +125,7 @@ Fires once on the first turn (turn 0) before `run()` executes. Use it to create
 The `continuation` field tells you whether this is a brand new chat or a continuation of an existing one (where the previous run timed out or was cancelled). The `preloaded` field tells you whether `onPreload` already ran.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onChatStart: async ({ chatId, clientData, continuation, preloaded }) => {
     if (preloaded) return; // Already set up in onPreload

@@ -167,7 +167,7 @@ Fires at the start of every turn, after message accumulation and `onChatStart` (
 | `writer` | [`ChatWriter`](/ai-chat/reference#chatwriter) | Stream writer for custom chunks |

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnStart: async ({ chatId, uiMessages, runId, chatAccessToken }) => {
     await db.chat.update({

@@ -196,7 +196,7 @@ export const myChat = chat.task({
 Fires after the response is captured but **before** the stream closes. The `writer` can send custom chunks that appear in the current turn — use this for post-processing indicators, compaction progress, or any data the user should see before the turn ends.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onBeforeTurnComplete: async ({ writer, usage, uiMessages }) => {
     // Write a custom data part while the stream is still open

@@ -245,7 +245,7 @@ Fires after each turn completes — after the response is captured and the strea
 | `rawResponseMessage` | `UIMessage \| undefined` | The raw assistant response before abort cleanup (same as `responseMessage` when not stopped) |

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnComplete: async ({ chatId, uiMessages, runId, chatAccessToken, lastEventId }) => {
     await db.chat.update({

@@ -288,7 +288,7 @@ The `phase` discriminator tells you **when** the suspend/resume happened:
 - `"turn"` — after `onTurnComplete`, waiting for the next message

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onChatSuspend: async (event) => {
     // Tear down expensive resources before suspending

@@ -327,7 +327,7 @@ export const myChat = chat.task({
 When set to `true`, a preloaded run completes successfully after the idle timeout elapses instead of suspending. Use this for "fire and forget" preloads — if the user doesn't send a message during the idle window, the run ends cleanly.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   preloadIdleTimeoutInSeconds: 10,
   exitAfterPreloadIdle: true,

@@ -362,7 +362,7 @@ const systemPrompt = prompts.define({
   content: `You are a helpful assistant for {{name}}.`,
 });

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   clientDataSchema: z.object({ userId: z.string() }),
   onChatStart: async ({ clientData }) => {

@@ -404,7 +404,7 @@ The `run` function receives three abort signals:
 | `cancelSignal` | Run cancel, expire, or maxDuration exceeded | Cleanup that should only happen on full cancellation |

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   run: async ({ messages, signal, stopSignal, cancelSignal }) => {
     return streamText({

@@ -426,7 +426,7 @@ export const myChat = chat.task({
 The `onTurnComplete` event includes a `stopped` boolean that indicates whether the user stopped generation during that turn:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnComplete: async ({ chatId, uiMessages, stopped }) => {
     await db.chat.update({

@@ -446,7 +446,7 @@ You can also check stop status from **anywhere** during a turn using `chat.isSto
 import { chat } from "@trigger.dev/sdk/ai";
 import { streamText } from "ai";

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   run: async ({ messages, signal }) => {
     return streamText({

@@ -469,7 +469,7 @@

 When stop happens mid-stream, the captured response message can contain parts in an incomplete state — tool calls stuck in `partial-call`, reasoning blocks still marked as `streaming`, etc. These can cause UI issues like permanent spinners.

-`chat.task` automatically cleans up the `responseMessage` when stop is detected before passing it to `onTurnComplete`. If you use `chat.pipe()` manually and capture response messages yourself, use `chat.cleanupAbortedParts()`:
+`chat.agent` automatically cleans up the `responseMessage` when stop is detected before passing it to `onTurnComplete`. If you use `chat.pipe()` manually and capture response messages yourself, use `chat.cleanupAbortedParts()`:

 ```ts
 const cleaned = chat.cleanupAbortedParts(rawResponseMessage);

@@ -508,7 +508,7 @@ import { openai } from "@ai-sdk/openai";
 import { z } from "zod";
 import { db } from "@/lib/db";

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   clientDataSchema: z.object({
     userId: z.string(),

@@ -660,7 +660,7 @@ export function Chat({ chatId, initialMessages, initialSessions }) {
 Users can send messages while the agent is executing tool calls. With `pendingMessages`, these messages are injected between tool-call steps, steering the agent mid-execution:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   pendingMessages: {
     shouldInject: ({ steps }) => steps.length > 0,

@@ -690,7 +690,7 @@ On the frontend, the `usePendingMessages` hook handles sending, tracking, and re
 Inject context from background work into the conversation using `chat.inject()`. Combine with `chat.defer()` to run analysis between turns and inject results before the next response — self-review, RAG augmentation, safety checks, etc.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnComplete: async ({ messages }) => {
     chat.defer(

@@ -727,7 +727,7 @@ Transform model messages before they're used anywhere — in `run()`, in compact
 Use this for Anthropic cache breaks, injecting system context, stripping PII, etc.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   prepareMessages: ({ messages, reason }) => {
     // Add Anthropic cache breaks to the last message

@@ -798,7 +798,7 @@ When `streamText` encounters an error mid-stream (rate limits, API failures, net
 By default, the raw error message is sent to the frontend. Use `onError` to sanitize errors and avoid leaking internal details:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   uiMessageStreamOptions: {
     onError: (error) => {

@@ -836,7 +836,7 @@ const { messages, sendMessage } = useChat({
 Control which AI SDK features are forwarded to the frontend:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   uiMessageStreamOptions: {
     sendReasoning: true, // Forward model reasoning (default: true)

@@ -862,7 +862,7 @@ run: async ({ messages, clientData, signal }) => {
 },
 ```

-`chat.setUIMessageStreamOptions()` works across all abstraction levels — `chat.task()`, `chat.createSession()` / `turn.complete()`, and `chat.pipeAndCapture()`.
+`chat.setUIMessageStreamOptions()` works across all abstraction levels — `chat.agent()`, `chat.createSession()` / `turn.complete()`, and `chat.pipeAndCapture()`.

 See [ChatUIMessageStreamOptions](/ai-chat/reference#chatuimessagestreamoptions) for the full reference.

@@ -900,14 +900,14 @@ export const manualChat = task({
 <Warning>
   Manual mode does not get automatic message accumulation or the `onTurnComplete`/`onChatStart`
   lifecycle hooks. The `responseMessage` field in `onTurnComplete` will be `undefined` when using
-  `chat.pipe()` directly. Use `chat.task()` for the full multi-turn experience.
+  `chat.pipe()` directly. Use `chat.agent()` for the full multi-turn experience.
 </Warning>

 ---

 ## chat.createSession()

-A middle ground between `chat.task()` and raw primitives. You get an async iterator that yields `ChatTurn` objects — each turn handles stop signals, message accumulation, and turn-complete signaling automatically. You control initialization, model/tool selection, persistence, and any custom per-turn logic.
+A middle ground between `chat.agent()` and raw primitives. You get an async iterator that yields `ChatTurn` objects — each turn handles stop signals, message accumulation, and turn-complete signaling automatically. You control initialization, model/tool selection, persistence, and any custom per-turn logic.

 Use `chat.createSession()` inside a standard `task()`:
docs/ai-chat/background-injection.mdx

Lines changed: 2 additions & 2 deletions

@@ -31,7 +31,7 @@ Messages are appended to the model messages before the next LLM inference call.
 The most powerful pattern combines `chat.defer()` (background work) with `chat.inject()` (inject results). Background work runs in parallel with the idle wait between turns, and results are injected before the next response.

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnComplete: async ({ messages }) => {
     // Kick off background analysis — doesn't block the turn

@@ -95,7 +95,7 @@ Focus on:
 Be concise. Only flag issues worth fixing.`,
 });

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnComplete: async ({ messages }) => {
     chat.defer(

docs/ai-chat/compaction.mdx

Lines changed: 5 additions & 5 deletions

@@ -8,7 +8,7 @@ description: "Automatic context compaction to keep long conversations within tok

 Long conversations accumulate tokens across turns. Eventually the context window fills up, causing errors or degraded responses. Compaction solves this by automatically summarizing the conversation when token usage exceeds a threshold, then using that summary as the context for future turns.

-The `compaction` option on `chat.task()` handles this in both paths:
+The `compaction` option on `chat.agent()` handles this in both paths:

 - **Between tool-call steps** (inner loop) — via the AI SDK's `prepareStep`, compaction runs between tool calls within a single turn
 - **Between turns** (outer loop) — for single-step responses with no tool calls, where `prepareStep` never fires

@@ -22,7 +22,7 @@ import { chat } from "@trigger.dev/sdk/ai";
 import { streamText, generateText } from "ai";
 import { openai } from "@ai-sdk/openai";

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   compaction: {
     shouldCompact: ({ totalTokens }) => (totalTokens ?? 0) > 80_000,

@@ -71,7 +71,7 @@ Replace older messages with a summary but keep the last few exchanges visible:
 ```ts
 import { generateId } from "ai";

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   compaction: {
     shouldCompact: ({ totalTokens }) => (totalTokens ?? 0) > 80_000,

@@ -175,7 +175,7 @@ The `summarize` callback receives similar context:
 Track compaction events for logging, billing, or analytics:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   compaction: { ... },
   onCompacted: async ({ summary, totalTokens, messageCount, chatId, turn }) => {

@@ -292,5 +292,5 @@ prepareStep: chat.compactionStep({
 ```

 <Note>
-  The fully manual APIs only handle inner-loop compaction (between tool-call steps). For outer-loop coverage, use the `compaction` option on `chat.task()`, `chat.createSession()`, or `MessageAccumulator`.
+  The fully manual APIs only handle inner-loop compaction (between tool-call steps). For outer-loop coverage, use the `compaction` option on `chat.agent()`, `chat.createSession()`, or `MessageAccumulator`.
 </Note>

docs/ai-chat/features.mdx

Lines changed: 7 additions & 7 deletions

@@ -30,7 +30,7 @@ const userContext = chat.local<{
   messageCount: number;
 }>({ id: "userContext" });

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   clientDataSchema: z.object({ userId: z.string() }),
   onChatStart: async ({ clientData }) => {

@@ -105,7 +105,7 @@ const analyzeData = tool({
   execute: ai.toolExecute(analyzeDataTask),
 });

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onChatStart: async ({ clientData }) => {
     userContext.init({ name: "Alice", plan: "pro" });

@@ -165,7 +165,7 @@ Use `chat.defer()` to run background work in parallel with streaming. The deferr
 This moves non-blocking work (DB writes, analytics, etc.) out of the critical path:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onTurnStart: async ({ chatId, uiMessages }) => {
     // Persist messages without blocking the LLM call

@@ -188,7 +188,7 @@ export const myChat = chat.task({
 ```ts
 import { chat } from "@trigger.dev/sdk/ai";

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   run: async ({ messages, signal }) => {
     // Write a custom data part to the chat stream.

@@ -286,7 +286,7 @@ const research = tool({
   execute: ai.toolExecute(researchTask),
 });

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   run: async ({ messages, signal }) => {
     return streamText({

@@ -320,7 +320,7 @@ On the frontend, render the custom data part:
 The `target` option accepts:
 - `"self"` — current run (default)
 - `"parent"` — parent task's run
-- `"root"` — root task's run (the chat task)
+- `"root"` — root task's run (the chat agent)
 - A specific run ID string

 ---

@@ -409,7 +409,7 @@ When the transport needs a trigger token for preload, your `accessToken` callbac
 On the backend, the `onPreload` hook fires immediately. The run then waits for the first message. When the user sends a message, `onChatStart` fires with `preloaded: true` — you can skip initialization that was already done in `onPreload`:

 ```ts
-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   onPreload: async ({ chatId, clientData }) => {
     // Eagerly initialize — runs before the first message

docs/ai-chat/frontend.mdx

Lines changed: 3 additions & 3 deletions

@@ -34,7 +34,7 @@ The transport is created once on first render and reused across re-renders. Pass

 ## Typed messages (`chat.withUIMessage`)

-If your chat task is defined with [`chat.withUIMessage<YourUIMessage>()`](/ai-chat/types) (custom `data-*` parts, typed tools, etc.), pass the same message type through `useChat` so `messages` and `message.parts` are narrowed on the client:
+If your chat agent is defined with [`chat.withUIMessage<YourUIMessage>()`](/ai-chat/types) (custom `data-*` parts, typed tools, etc.), pass the same message type through `useChat` so `messages` and `message.parts` are narrowed on the client:

 ```tsx
 import { useChat } from "@ai-sdk/react";

@@ -189,15 +189,15 @@ sendMessage({ text: "Hello" }, { metadata: { model: "gpt-4o", priority: "high" }

 ### Typed client data with clientDataSchema

-Instead of manually parsing `clientData` with Zod in every hook, pass a `clientDataSchema` to `chat.task`. The schema validates the data once per turn, and `clientData` is typed in all hooks and `run`:
+Instead of manually parsing `clientData` with Zod in every hook, pass a `clientDataSchema` to `chat.agent`. The schema validates the data once per turn, and `clientData` is typed in all hooks and `run`:

 ```ts
 import { chat } from "@trigger.dev/sdk/ai";
 import { streamText } from "ai";
 import { openai } from "@ai-sdk/openai";
 import { z } from "zod";

-export const myChat = chat.task({
+export const myChat = chat.agent({
   id: "my-chat",
   clientDataSchema: z.object({
     model: z.string().optional(),

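After a mechanical rename like this one, it is worth scanning the docs tree for stale references before committing. A self-contained sketch of that check — the file names and contents here are illustrative, not the repo's actual files:

```python
import pathlib
import re
import tempfile

# Build a tiny stand-in docs tree, then scan it for leftover `chat.task`
# references the way you might scan docs/ after applying this rename.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "backend.mdx").write_text("export const myChat = chat.agent({\n")
    (root / "frontend.mdx").write_text("pass a `clientDataSchema` to `chat.agent`\n")

    # Collect (filename, line number) pairs for every stale reference.
    stale = [
        (path.name, n)
        for path in sorted(root.rglob("*.mdx"))
        for n, line in enumerate(path.read_text().splitlines(), 1)
        if re.search(r"chat\.task", line)
    ]
    print("stale references:", stale)  # → stale references: []
```

An empty result confirms the rename covered every call site and prose mention in the scanned files.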