Upgrade AI SDK v5 → v6 with usage null safety fixes#1694
Conversation
🦋 Changeset detected. Latest commit: c410160. The changes in this PR will be included in the next version bump. This PR includes changesets to release 3 packages.
Greptile Summary

This PR migrates from AI SDK v5 to v6, upgrading all core dependencies and adapting to breaking API changes while maintaining backwards compatibility.
The migration is comprehensive and well-tested (326 e2e tests passing). The backwards-compatible shim layer ensures existing code that destructures `{ object, usage, finishReason }` continues to work.

Confidence Score: 5/5
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[AI SDK v5] --> B[AI SDK v6 Migration]
B --> C[Update Dependencies]
C --> D[ai package v5 to v6]
C --> E[provider packages v2 to v3]
C --> F[All optional providers upgraded]
B --> G[Type Migrations]
G --> H[LanguageModelV2 to V3]
G --> I[Core*Message to ModelMessage]
B --> J[API Changes]
J --> K[generateObject deprecated]
J --> L[streamObject deprecated]
J --> M[Tool callback signature change]
K --> N[generateText + Output.object]
L --> O[streamText + Output.object]
N --> P[objectShims.ts]
O --> P
P --> Q[Preserve backwards API]
B --> R[Usage Shape Handling]
R --> S[v2: flat token numbers]
R --> T[v3: nested token objects]
R --> U[Defensive guards added]
U --> V[flowLogger middleware]
U --> W[aisdk client]
U --> X[external client]
M --> Y[result param to output param]
Y --> Z[All 8 agent tools updated]
```
Last reviewed commit: ac2666f
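For readers following the flowchart, here is a minimal sketch of the shim idea it describes, assuming v6's `generateText` accepts an `output: Output.object(...)` option and exposes the parsed value as `result.output`, as the diffs in this thread suggest. The helper name and signature are illustrative, not the PR's actual `objectShims.ts`:

```ts
import { generateText, Output, type LanguageModel } from "ai";
import { z } from "zod";

// Illustrative shim: preserve the old generateObject-style call shape on top
// of v6's generateText + Output.object(). Not the PR's actual implementation.
async function generateObjectShim<T>(opts: {
  model: LanguageModel;
  schema: z.ZodType<T>;
  prompt: string;
}) {
  const result = await generateText({
    model: opts.model,
    prompt: opts.prompt,
    // v6 structured output in place of the deprecated generateObject
    output: Output.object({ schema: opts.schema }),
  });
  // Explicitly copy the fields callers destructure (see the getter commit below).
  return {
    object: result.output as T,
    usage: result.usage,
    finishReason: result.finishReason,
  };
}
```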
Force-pushed from 8bb0449 to 9dadf0d
Based on #1689 by @dylnslck. Adds optional chaining and deprecated property fallbacks to token detail access across both generateObject and generateText code paths, matching the safe pattern already used in the generateObject path of aisdk.ts. Renames `u` to `usage` for consistency with the rest of the codebase.

Co-Authored-By: Dylan Slack <dylnslck@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ge fields

Collapse identical prompt/messages branches in generateObjectShim and streamObjectShim into single conditions. Update v3AgentHandler to use the new v6 nested token detail fields (outputTokenDetails.reasoningTokens, inputTokenDetails.cacheReadTokens) with fallback to deprecated flat fields.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
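A small illustration of what collapsing identical branches means here; the option names and `route` helper are assumed for the example, not the shim's real code:

```ts
// Hypothetical input shape: callers pass either a prompt or a message list.
type ShimOptions = { prompt?: string; messages?: unknown[] };

function route(opts: ShimOptions): string {
  // Before: two branches with identical bodies, one per input kind.
  // After: a single condition covering both.
  if ("prompt" in opts || "messages" in opts) {
    return "forward to generateText";
  }
  throw new Error("either prompt or messages is required");
}
```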
Force-pushed from dcb29e8 to ddcea89
AI SDK v6's generateText/streamText return usage, finishReason, output,
text, etc. as prototype getters that are lost when spread into a new
object. Explicitly copy these properties so callers who destructure
{ object, usage, finishReason } from the shim get valid values.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
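A minimal illustration of the failure mode this commit describes; this is standard JavaScript spread semantics (spread copies only own enumerable properties, so prototype getters vanish), not the PR's actual code:

```ts
// Stand-in for a v6 result object whose fields are prototype getters.
class FakeResult {
  get usage() {
    return { inputTokens: 10, outputTokens: 5 };
  }
  get finishReason() {
    return "stop";
  }
}

const result = new FakeResult();

// Spread copies own enumerable properties only; getters live on the prototype.
const spread = { ...result };
console.log(spread.usage); // undefined

// Explicitly reading each property materializes the getter values.
const copied = { usage: result.usage, finishReason: result.finishReason };
console.log(copied.usage); // { inputTokens: 10, outputTokens: 5 }
```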
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/core/lib/v3/flowLogger.ts">
<violation number="1" location="packages/core/lib/v3/flowLogger.ts:1121">
P2: The fallback branch accessing `.total` lacks null safety. If `inputTokens` is `undefined` or `null` (e.g., a model that doesn't report usage), the `typeof` check is `false` and `.total` is accessed on `undefined`, throwing a `TypeError`. Add optional chaining and a fallback default to be consistent with the `?? 0` pattern used elsewhere in the codebase (e.g., `aisdk.ts`).</violation>
</file>
The wrapLanguageModel middleware doesn't auto-adapt v2 provider results
to v3 format, so if a user passes a custom llmClient with a v2-spec
model, result.usage.inputTokens is a flat number (not { total, ... }).
Add typeof guards so token logging works with both spec versions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
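A sketch of the kind of guard this commit describes; the `TokenCount` alias and `asTotal` helper are hypothetical, and note that the review thread below questions which shape v6 actually returns:

```ts
// Hypothetical union covering both shapes discussed in this PR:
// a flat number (v2-spec models) or a nested object with a `total` field.
type TokenCount = number | { total?: number } | null | undefined;

function asTotal(value: TokenCount): number {
  if (typeof value === "number") return value; // flat v2-style count
  return value?.total ?? 0; // nested shape, or 0 when usage is unreported
}
```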
Force-pushed from ac2666f to f172c5f
```diff
  const result = {
-   data: objectResponse.object,
+   data: objectResponse.output,
    usage: {
```
Suggested change:

```diff
  usage: {
+   ...usage,
```
can we just return the whole object that aisdk returns as well, so users can use fields they might see in the aisdk docs?
```ts
const usage = textResponse.usage;
return {
  prompt_tokens: usage.inputTokens ?? 0,
```
Suggested change:

```diff
  const usage = textResponse.usage;
  return {
+   ...usage,
    prompt_tokens: usage.inputTokens ?? 0,
```
ditto here
2 issues found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/evals/lib/AISdkClientWrapped.ts">
<violation number="1" location="packages/evals/lib/AISdkClientWrapped.ts:219">
P2: Spreading `...usage` leaks raw AI SDK v6 properties (`inputTokens`, `outputTokens`, `inputTokenDetails`, `outputTokenDetails`, etc.) into what should be an OpenAI `ChatCompletion`-compatible usage object. The core implementation in `packages/core/lib/v3/llm/aisdk.ts` does **not** spread the raw usage — it only includes the explicitly mapped properties. This creates an inconsistency between the evals wrapper and the core SDK, and pollutes the typed response with undocumented extra fields. Consider removing the spread to match the core implementation.</violation>
</file>
<file name="packages/core/lib/v3/flowLogger.ts">
<violation number="1" location="packages/core/lib/v3/flowLogger.ts:1120">
P1: Bug: `result.usage.inputTokens` is a plain `number` in the AI SDK v6 middleware result (as confirmed by every other usage site in this codebase). Accessing `.total` on a number returns `undefined`, so `undefined ?? 0` always evaluates to `0` — silently losing all token usage data in flow logs.
This should match the pattern used everywhere else in the codebase (e.g., `aisdk.ts:242`): treat `inputTokens`/`outputTokens` as numbers directly.</violation>
</file>
```diff
@@ -56,7 +53,7 @@ export class AISdkClient extends LLMClient {
   });

   if (message.role === "user") {
-    const userMessage: CoreUserMessage = {
+    const userMessage: ModelMessage = {
       role: "user",
       content: contentParts,
     };
@@ -66,7 +63,7 @@ export class AISdkClient extends LLMClient {
       type: "text" as const,
       text: part.type === "image" ? "[Image]" : part.text,
     }));
-    const assistantMessage: CoreAssistantMessage = {
+    const assistantMessage: ModelMessage = {
       role: "assistant",
       content: textOnlyParts,
     };
@@ -82,20 +79,29 @@ export class AISdkClient extends LLMClient {
   );
```
nit: we can continue using the specific types for each message role rather than the union; seems they get imported as this now: SystemModelMessage, UserModelMessage, AssistantModelMessage
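A small sketch of this nit, assuming the per-role types are exported from `ai` under the names given above:

```ts
import type {
  AssistantModelMessage,
  SystemModelMessage,
  UserModelMessage,
} from "ai";

// The narrow per-role types let the compiler check the role/content pairing,
// unlike the broad ModelMessage union:
const system: SystemModelMessage = { role: "system", content: "Be concise." };
const user: UserModelMessage = { role: "user", content: "Hello" };
const assistant: AssistantModelMessage = { role: "assistant", content: "Hi!" };
```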
```ts
  usage.reasoningTokens ??
  0,
cached_input_tokens:
  usage.inputTokenDetails?.cacheReadTokens ??
  usage.cachedInputTokens ??
```
It seems these fallbacks (usage.reasoningTokens, usage.cachedInputTokens) are deprecated. We probably do not need the fallback of reading from them when we already check the new fields.
could someone pass in an older client object when using a custom client?
I do not think so, since it's scoped to v3 providers. Old ones would be v2.
```diff
@@ -553,8 +557,14 @@ export class V3AgentHandler {
     ? {
         input_tokens: result.totalUsage.inputTokens || 0,
         output_tokens: result.totalUsage.outputTokens || 0,
-        reasoning_tokens: result.totalUsage.reasoningTokens || 0,
-        cached_input_tokens: result.totalUsage.cachedInputTokens || 0,
+        reasoning_tokens:
+          result.totalUsage.outputTokenDetails?.reasoningTokens ??
+          result.totalUsage.reasoningTokens ??
+          0,
+        cached_input_tokens:
+          result.totalUsage.inputTokenDetails?.cacheReadTokens ??
+          result.totalUsage.cachedInputTokens ??
+          0,
```
same for the fallbacks here
```ts
  specificationVersion: "v3" as const,
  wrapGenerate: async ({ doGenerate }) => doGenerate(),
};
}

return {
  specificationVersion: "v3" as const,
```
casting to const is unnecessary in both cases
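To illustrate why, assuming the middleware type declares the field as a literal (the `MiddlewareLike` interface here is a hypothetical stand-in for the real type):

```ts
// Hypothetical stand-in for the middleware's declared type.
interface MiddlewareLike {
  specificationVersion: "v3";
  wrapGenerate?: (args: { doGenerate: () => unknown }) => unknown;
}

// Contextual typing keeps the string literal from widening to `string`,
// so `as const` adds nothing here.
const mw: MiddlewareLike = {
  specificationVersion: "v3",
};
```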
```ts
  Output,
  TextPart,
  ToolSet,
  Tool,
} from "ai";
import * as ai from "ai";
import { wrapAISDK } from "braintrust";
```
Have we confirmed this works with the evals CLI? I know braintrust changed the way its ai sdk integration works between v4 and v5. Unsure if they changed anything for v6 that would cause issues, though.
```diff
@@ -107,7 +105,7 @@ export class AISdkClientWrapped extends LLMClient {
   });

   if (message.role === "user") {
-    const userMessage: CoreUserMessage = {
+    const userMessage: ModelMessage = {
       role: "user",
       content: contentParts,
     };
@@ -117,7 +115,7 @@ export class AISdkClientWrapped extends LLMClient {
       type: "text" as const,
       text: part.type === "image" ? "[Image]" : part.text,
     }));
-    const assistantMessage: CoreAssistantMessage = {
+    const assistantMessage: ModelMessage = {
       role: "assistant",
       content: textOnlyParts,
     };
```
nit: use the types specific to user, system, and assistant messages rather than the union
```ts
/**
 * Thin shims that wrap generateText and streamText with Output.object({ schema })
 * as the replacement for deprecated generateObject and streamObject from the AI SDK.
 * Callers must supply a schema; it is passed through to Output.object() for structured output.
 */
```
Instead of the shim I would just recommend leaving generateObject as is. It is deprecated so it can still be used, and is currently not used in our package itself. When we eventually move to a version that no longer has it, we can likely just tell people we no longer support it and point them to the newer methods.
seanmcguire12 left a comment:

this is breaking for users passing in tools to agent, & also for people passing in custom aisdk clients.

our options are:
- figure out a way to make this backward compatible
- hold off on this migration until the next major release

I would rather us hold off until the next major given the complexity & branching logic required to support both
Summary
- Upgrades `ai` from ^5.0.133 to ^6.0.0, `@ai-sdk/provider` from ^2.0.0 to ^3.0.0, and all optional AI provider packages to their latest major versions.
- Migrates `LanguageModelV2` to `LanguageModelV3`, `CoreSystemMessage`/`CoreUserMessage`/`CoreAssistantMessage` to `ModelMessage`, and `experimental_generateImage` to `generateImage`.
- Replaces `generateObject`/`streamObject` with `generateText`/`streamText` + `Output.object()`, with backwards-compatible shims (`objectShims.ts`) to preserve the existing `LLMClient` API surface.
- Updates `toModelOutput` callbacks from `(result)` to `({ output })` to match the v6 tool result shape (see the sketch after this list).
- Adds `specificationVersion: "v3"` to the LLM logging middleware.
- Adds defensive `outputTokenDetails` and `inputTokenDetails` access in both `aisdk.ts` and `AISdkClientWrapped.ts`, making token usage handling consistent across generateObject and generateText code paths.
- Renames the `u` → `usage` variable in the generateText IIFE for consistency with the rest of the codebase.
- Removes a `LanguageModelV2` comment in a test file.

Based on #1689 by @dylnslck — thank you for the original upgrade work!
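A hedged sketch of the `toModelOutput` change, assuming the v5-style `tool()` helper and a `{ type: "text", value }` output shape carry over to v6; the weather tool itself is invented for illustration:

```ts
import { tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get the weather for a city",
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ city }) => ({ city, tempC: 21 }),
  // v5 passed the tool result directly: toModelOutput: (result) => ...
  // v6 wraps it, so the callback destructures { output } instead:
  toModelOutput: ({ output }) => ({
    type: "text",
    value: `${output.city}: ${output.tempC}°C`,
  }),
});
```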
Test plan
- `pnpm install` and `pnpm build` succeed
- Type checking passes (`tsc --noEmit`)
- `pnpm e2e:local` passes — 326 passed, 2 skipped, 0 failures
- `AISdkClient` accepts `LanguageModelV3` models from current provider packages without TypeScript errors

Breaking changes for external users
- `AISdkClient` constructor now requires `LanguageModelV3` instead of `LanguageModelV2`. Users must upgrade their `@ai-sdk/*` provider packages to v3+.

🤖 Generated with Claude Code