Upgrade AI SDK v5 → v6 with usage null safety fixes#1694

Draft
shrey150 wants to merge 9 commits into main from shrey/upgrade-ai-sdk-v6

Conversation


@shrey150 shrey150 commented Feb 18, 2026

Summary

  • Upgrades ai from ^5.0.133 to ^6.0.0, @ai-sdk/provider from ^2.0.0 to ^3.0.0, and all optional AI provider packages to their latest major versions.
  • Migrates from LanguageModelV2 to LanguageModelV3, CoreSystemMessage/CoreUserMessage/CoreAssistantMessage to ModelMessage, and experimental_generateImage to generateImage.
  • Replaces deprecated generateObject/streamObject with generateText/streamText + Output.object(), with backwards-compatible shims (objectShims.ts) to preserve the existing LLMClient API surface.
  • Updates all agent tool toModelOutput callbacks from (result) to ({ output }) to match the v6 tool result shape.
  • Adds specificationVersion: "v3" to LLM logging middleware.
  • Fixes missing optional chaining and deprecated fallbacks on outputTokenDetails and inputTokenDetails access in both aisdk.ts and AISdkClientWrapped.ts, making token usage handling consistent across generateObject and generateText code paths.
  • Renames the `u` variable to `usage` in the generateText IIFE for consistency with the rest of the codebase.
  • Updates stale LanguageModelV2 comment in test file.

Based on #1689 by @dylnslck — thank you for the original upgrade work!
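The null-safety fixes described above follow one pattern: prefer the v6 nested token detail fields, fall back to the deprecated flat fields, and default to 0. A minimal sketch of that pattern — the `Usage` interface here only mirrors the field names mentioned in this PR and is an illustration, not the exact AI SDK v6 type:

```typescript
// Illustrative usage shape: v6 nests reasoning/cache counts under
// *TokenDetails, while the old flat fields remain as deprecated fallbacks.
interface Usage {
  inputTokens?: number;
  outputTokens?: number;
  outputTokenDetails?: { reasoningTokens?: number };
  inputTokenDetails?: { cacheReadTokens?: number };
  reasoningTokens?: number; // deprecated flat field
  cachedInputTokens?: number; // deprecated flat field
}

function extractTokenCounts(usage: Usage) {
  return {
    input_tokens: usage.inputTokens ?? 0,
    output_tokens: usage.outputTokens ?? 0,
    // Optional chaining guards models that report no detail objects at all.
    reasoning_tokens:
      usage.outputTokenDetails?.reasoningTokens ?? usage.reasoningTokens ?? 0,
    cached_input_tokens:
      usage.inputTokenDetails?.cacheReadTokens ?? usage.cachedInputTokens ?? 0,
  };
}
```

Without the optional chaining, a provider that omits `outputTokenDetails` would throw a `TypeError` instead of reporting zero.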

Test plan

  • pnpm install and pnpm build succeed
  • TypeScript typechecks pass (tsc --noEmit)
  • pnpm e2e:local passes — 326 passed, 2 skipped, 0 failures
  • Verify AISdkClient accepts LanguageModelV3 models from current provider packages without TypeScript errors

Breaking changes for external users

  • AISdkClient constructor now requires LanguageModelV3 instead of LanguageModelV2. Users must upgrade their @ai-sdk/* provider packages to v3+.

🤖 Generated with Claude Code

@changeset-bot

changeset-bot Bot commented Feb 18, 2026

🦋 Changeset detected

Latest commit: c410160

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 3 packages
Name Type
@browserbasehq/stagehand Minor
@browserbasehq/stagehand-evals Patch
@browserbasehq/stagehand-server Patch


@greptile-apps

greptile-apps Bot commented Feb 18, 2026

Greptile Summary

This PR successfully migrates from AI SDK v5 to v6, upgrading all core dependencies and adapting to breaking API changes while maintaining backwards compatibility.

Key changes:

  • Replaces deprecated generateObject/streamObject with generateText/streamText + Output.object(), wrapped in backwards-compatible shims (objectShims.ts) to preserve the existing LLMClient API
  • Migrates from LanguageModelV2 to LanguageModelV3 and consolidates Core*Message types to unified ModelMessage
  • Updates all 8 agent tools (click, type, screenshot, etc.) to use v6 tool callback signature: toModelOutput: ({ output }) => ...
  • Fixes token usage access with proper null-safety: adds optional chaining for outputTokenDetails?.reasoningTokens and inputTokenDetails?.cacheReadTokens with backwards-compatible fallbacks
  • Adds specificationVersion: "v3" to LLM logging middleware and handles both v2 (flat numbers) and v3 (nested objects) usage shapes
  • Upgrades ai to ^6.0.0, @ai-sdk/provider to ^3.0.0, and all optional provider packages to their latest major versions

The migration is comprehensive and well-tested (326 e2e tests passing). The backwards-compatible shim layer ensures existing code that destructures { object } from generateObject continues to work seamlessly.
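The tool-callback change called out above can be illustrated with a minimal sketch. The simplified tool objects and `ModelOutput` type here are illustrative stand-ins, not the AI SDK's actual `tool()` helper types:

```typescript
type ModelOutput = { type: "text"; value: string };

// v5 style: the callback received the tool result value directly.
const clickToolV5 = {
  toModelOutput: (result: string): ModelOutput => ({
    type: "text",
    value: result,
  }),
};

// v6 style: the callback receives a wrapper object and destructures `output`.
const clickToolV6 = {
  toModelOutput: ({ output }: { output: string }): ModelOutput => ({
    type: "text",
    value: output,
  }),
};
```

The body of each callback is unchanged; only the parameter shape differs, which is why the migration touches every agent tool but is mechanical.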

Confidence Score: 5/5

  • This PR is safe to merge with high confidence
  • The migration is systematic and comprehensive with thorough testing (326 e2e tests passing). All breaking changes are properly handled through backwards-compatible shims, token usage access is properly guarded against null values, and type migrations are complete across the entire codebase. The PR builds on previous work and includes defensive code for handling both v2 and v3 usage shapes.
  • No files require special attention

Important Files Changed

Filename Overview
packages/core/lib/v3/llm/aisdk.ts Migrates from LanguageModelV2 to LanguageModelV3, replaces deprecated generateObject with generateText + Output.object(), adds proper null-safety for token detail access
packages/core/lib/v3/llm/objectShims.ts New shim layer preserves backwards-compatible API surface for generateObject/streamObject callers while using v6 generateText/streamText internally
packages/core/lib/v3/external_clients/aisdk.ts Mirrors main client migration: v2→v3 model types, generateObject → generateText + Output.object(), fixes token detail access patterns
packages/core/lib/v3/flowLogger.ts Adds specificationVersion: "v3" to middleware, handles both v2 (flat numbers) and v3 (nested objects) usage shapes for backwards compatibility
packages/evals/lib/AISdkClientWrapped.ts Applies same v5→v6 migration as core client with Braintrust tracing wrapper intact
packages/core/package.json Upgrades ai to ^6.0.0, @ai-sdk/provider to ^3.0.0, and all provider packages to their latest major versions

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[AI SDK v5] --> B[AI SDK v6 Migration]
    B --> C[Update Dependencies]
    C --> D[ai package v5 to v6]
    C --> E[provider packages v2 to v3]
    C --> F[All optional providers upgraded]
    
    B --> G[Type Migrations]
    G --> H[LanguageModelV2 to V3]
    G --> I[Core*Message to ModelMessage]
    
    B --> J[API Changes]
    J --> K[generateObject deprecated]
    J --> L[streamObject deprecated]
    J --> M[Tool callback signature change]
    
    K --> N[generateText + Output.object]
    L --> O[streamText + Output.object]
    
    N --> P[objectShims.ts]
    O --> P
    P --> Q[Preserve backwards API]
    
    B --> R[Usage Shape Handling]
    R --> S[v2: flat token numbers]
    R --> T[v3: nested token objects]
    R --> U[Defensive guards added]
    
    U --> V[flowLogger middleware]
    U --> W[aisdk client]
    U --> X[external client]
    
    M --> Y[result param to output param]
    Y --> Z[All 8 agent tools updated]

Last reviewed commit: ac2666f


@greptile-apps greptile-apps Bot left a comment


19 files reviewed, 1 comment


Comment thread packages/core/lib/v3/llm/objectShims.ts
@shrey150

@greptileai


@greptile-apps greptile-apps Bot left a comment


21 files reviewed, 1 comment


Comment thread packages/core/lib/v3/flowLogger.ts Outdated
@shrey150 force-pushed the shrey/upgrade-ai-sdk-v6 branch from 8bb0449 to 9dadf0d on February 18, 2026 at 22:10
dylnslck and others added 5 commits February 20, 2026 07:47
Based on #1689 by @dylnslck. Adds optional chaining and deprecated
property fallbacks to token detail access across both generateObject
and generateText code paths, matching the safe pattern already used
in the generateObject path of aisdk.ts. Renames `u` to `usage` for
consistency with the rest of the codebase.

Co-Authored-By: Dylan Slack <dylnslck@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ge fields

Collapse identical prompt/messages branches in generateObjectShim and
streamObjectShim into single conditions. Update v3AgentHandler to use
the new v6 nested token detail fields (outputTokenDetails.reasoningTokens,
inputTokenDetails.cacheReadTokens) with fallback to deprecated flat fields.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@shrey150 force-pushed the shrey/upgrade-ai-sdk-v6 branch from dcb29e8 to ddcea89 on February 20, 2026 at 15:48
@shrey150

@greptileai


@greptile-apps greptile-apps Bot left a comment


21 files reviewed, 1 comment


Comment thread packages/core/lib/v3/flowLogger.ts Outdated
AI SDK v6's generateText/streamText return usage, finishReason, output,
text, etc. as prototype getters that are lost when spread into a new
object. Explicitly copy these properties so callers who destructure
{ object, usage, finishReason } from the shim get valid values.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
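The getter-loss problem described in this commit message comes from object spread semantics: spread copies own enumerable properties only, so getters defined on a class prototype vanish. A self-contained sketch — `FakeResult` is a stand-in for the real AI SDK result object:

```typescript
// Stand-in for a generateText result: `usage` and `finishReason` are
// prototype getters, as the commit message says they are in AI SDK v6.
class FakeResult {
  get usage() {
    return { inputTokens: 10 };
  }
  get finishReason() {
    return "stop";
  }
}

const result = new FakeResult();

// Spread copies own enumerable props only; the prototype getters are dropped.
const spread = { ...result } as Partial<FakeResult>;

// Explicitly copying the properties (as the shim now does) preserves them.
const copied = {
  usage: result.usage,
  finishReason: result.finishReason,
};
```

This is why callers destructuring `{ object, usage, finishReason }` from the shim were getting `undefined` before the fix.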
@shrey150

@greptileai


@greptile-apps greptile-apps Bot left a comment


21 files reviewed, no comments



@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 2 files (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/core/lib/v3/flowLogger.ts">

<violation number="1" location="packages/core/lib/v3/flowLogger.ts:1121">
P2: The fallback branch accessing `.total` lacks null safety. If `inputTokens` is `undefined` or `null` (e.g., a model that doesn't report usage), the `typeof` check is `false` and `.total` is accessed on `undefined`, throwing a `TypeError`. Add optional chaining and a fallback default to be consistent with the `?? 0` pattern used elsewhere in the codebase (e.g., `aisdk.ts`).</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread packages/core/lib/v3/flowLogger.ts Outdated
The wrapLanguageModel middleware doesn't auto-adapt v2 provider results
to v3 format, so if a user passes a custom llmClient with a v2-spec
model, result.usage.inputTokens is a flat number (not { total, ... }).
Add typeof guards so token logging works with both spec versions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
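The typeof guard this commit message describes can be sketched as follows, with the optional chaining a later review asked for. Note that a subsequent review comment disputes whether the nested `{ total }` shape actually appears in the v6 middleware result, so treat this as an illustration of the guard, not a claim about the SDK's types:

```typescript
// v2-spec models report inputTokens as a flat number; the commit message
// describes v3 as nesting it in an object; some models report no usage.
type InputTokens = number | { total?: number } | undefined;

function normalizeInputTokens(inputTokens: InputTokens): number {
  return typeof inputTokens === "number"
    ? inputTokens
    : inputTokens?.total ?? 0; // ?. avoids the TypeError flagged in review
}
```

Without the `?.`, a model that reports no usage at all would make the fallback branch throw rather than log zero tokens.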
@shrey150 force-pushed the shrey/upgrade-ai-sdk-v6 branch from ac2666f to f172c5f on February 20, 2026 at 16:53
Comment thread packages/core/lib/v3/flowLogger.ts Outdated
  const result = {
-   data: objectResponse.object,
+   data: objectResponse.output,
    usage: {


Suggested change:
- usage: {
+ usage: {
+   ...usage,

can we just return the whole obj that aisdk returns as well so users can use fields they might see in aisdk docs

Comment on lines +313 to +315
const usage = textResponse.usage;
return {
prompt_tokens: usage.inputTokens ?? 0,

Suggested change:
  const usage = textResponse.usage;
  return {
+   ...usage,
    prompt_tokens: usage.inputTokens ?? 0,

ditto here


@cubic-dev-ai cubic-dev-ai Bot left a comment


2 issues found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/evals/lib/AISdkClientWrapped.ts">

<violation number="1" location="packages/evals/lib/AISdkClientWrapped.ts:219">
P2: Spreading `...usage` leaks raw AI SDK v6 properties (`inputTokens`, `outputTokens`, `inputTokenDetails`, `outputTokenDetails`, etc.) into what should be an OpenAI `ChatCompletion`-compatible usage object. The core implementation in `packages/core/lib/v3/llm/aisdk.ts` does **not** spread the raw usage — it only includes the explicitly mapped properties. This creates an inconsistency between the evals wrapper and the core SDK, and pollutes the typed response with undocumented extra fields. Consider removing the spread to match the core implementation.</violation>
</file>

<file name="packages/core/lib/v3/flowLogger.ts">

<violation number="1" location="packages/core/lib/v3/flowLogger.ts:1120">
P1: Bug: `result.usage.inputTokens` is a plain `number` in the AI SDK v6 middleware result (as confirmed by every other usage site in this codebase). Accessing `.total` on a number returns `undefined`, so `undefined ?? 0` always evaluates to `0` — silently losing all token usage data in flow logs.

This should match the pattern used everywhere else in the codebase (e.g., `aisdk.ts:242`): treat `inputTokens`/`outputTokens` as numbers directly.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread packages/core/lib/v3/flowLogger.ts
Comment thread packages/evals/lib/AISdkClientWrapped.ts
Comment on lines 30 to 79
@@ -56,7 +53,7 @@ export class AISdkClient extends LLMClient {
});

if (message.role === "user") {
-       const userMessage: CoreUserMessage = {
+       const userMessage: ModelMessage = {
role: "user",
content: contentParts,
};
@@ -66,7 +63,7 @@ export class AISdkClient extends LLMClient {
type: "text" as const,
text: part.type === "image" ? "[Image]" : part.text,
}));
-       const assistantMessage: CoreAssistantMessage = {
+       const assistantMessage: ModelMessage = {
role: "assistant",
content: textOnlyParts,
};
@@ -82,20 +79,29 @@ export class AISdkClient extends LLMClient {
);

nit: we can continue using the specific types for each of the types of messages rather than the union

seems they get imported as this now: SystemModelMessage, UserModelMessage, AssistantModelMessage

Comment on lines +132 to +136
usage.reasoningTokens ??
0,
cached_input_tokens:
usage.inputTokenDetails?.cacheReadTokens ??
usage.cachedInputTokens ??

It seems these fallbacks (usage.reasoningTokens, usage.cachedInputTokens) are deprecated. We probably do not need to fall back to reading from them when we already check the new fields.


could someone pass in an older client object when using a custom client?


I do not think so, since it's scoped to v3 providers; old ones would be v2.

Comment on lines 540 to +567
@@ -553,8 +557,14 @@ export class V3AgentHandler {
? {
input_tokens: result.totalUsage.inputTokens || 0,
output_tokens: result.totalUsage.outputTokens || 0,
-           reasoning_tokens: result.totalUsage.reasoningTokens || 0,
-           cached_input_tokens: result.totalUsage.cachedInputTokens || 0,
+           reasoning_tokens:
+             result.totalUsage.outputTokenDetails?.reasoningTokens ??
+             result.totalUsage.reasoningTokens ??
+             0,
+           cached_input_tokens:
+             result.totalUsage.inputTokenDetails?.cacheReadTokens ??
+             result.totalUsage.cachedInputTokens ??
+             0,

same for the fallbacks here

Comment on lines +1013 to +1019
specificationVersion: "v3" as const,
wrapGenerate: async ({ doGenerate }) => doGenerate(),
};
}

return {
specificationVersion: "v3" as const,

the `as const` cast is unnecessary in both cases

Comment on lines +5 to 11
Output,
TextPart,
ToolSet,
Tool,
} from "ai";
import * as ai from "ai";
import { wrapAISDK } from "braintrust";

Have we confirmed this works with the evals CLI? I know that between v4 and v5 Braintrust changed the way its AI SDK integration works. Unsure if they changed anything for v6 that would cause issues, though.

Comment on lines 79 to 121
@@ -107,7 +105,7 @@ export class AISdkClientWrapped extends LLMClient {
});

if (message.role === "user") {
-       const userMessage: CoreUserMessage = {
+       const userMessage: ModelMessage = {
role: "user",
content: contentParts,
};
@@ -117,7 +115,7 @@ export class AISdkClientWrapped extends LLMClient {
type: "text" as const,
text: part.type === "image" ? "[Image]" : part.text,
}));
-       const assistantMessage: CoreAssistantMessage = {
+       const assistantMessage: ModelMessage = {
role: "assistant",
content: textOnlyParts,
};

nit: use types specific to user, system, and assistant messages rather than the union

Comment on lines +1 to +6
/**
* Thin shims that wrap generateText and streamText with Output.object({ schema })
* as the replacement for deprecated generateObject and streamObject from the AI SDK.
* Callers must supply a schema; it is passed through to Output.object() for structured output.
*/


Instead of the shim I would just recommend leaving generateObject as is. It is deprecated so it can still be used, and is currently not used in our package itself. When we eventually move to a version that does not have it, we can likely just tell people we no longer support it and point them to newer methods


@seanmcguire12 seanmcguire12 left a comment


this is breaking for users passing in tools to agent, & also people passing in custom aisdk clients.

our options are:

  1. figure out a way to make this backward compatible
  2. hold off on this migration until the next major release

I would rather us hold off until the next major, given the complexity & branching logic required to support both

@shrey150 shrey150 marked this pull request as draft March 10, 2026 21:46
