
Fix reasoning_effort error for GPT-5.1/5.2 in local mode #317

Merged

shrey150 merged 1 commit into v0 from fix/gpt5-reasoning-effort on Mar 10, 2026

Conversation

@shrey150 (Contributor) commented Mar 9, 2026

Summary

  • GPT-5.1 and GPT-5.2 reject reasoning_effort: "minimal" — valid values are none, low, medium, high, xhigh
  • Sets reasoning_effort: "low" for GPT-5.1/5.2 models in the local mode LLM client (litellm path)
  • Other GPT-5 variants (e.g. gpt-5-nano) are unaffected — they don't get reasoning_effort set

Context

Users on the v0 SDK hit this error when using GPT-5.2:

Unsupported value: 'reasoning_effort' does not support 'minimal' with this model.
Supported values are: 'none', 'low', 'medium', 'high', and 'xhigh'.

This matches the fix already shipped in the v3 TypeScript core (aisdk.ts) which differentiates GPT-5.1/5.2 from other GPT-5 models.

Note: For API mode (BROWSERBASE) users, the reasoning_effort is set server-side. Users hitting this in API mode should upgrade to stagehand>=3.6.0 which connects to a server with the fix.

Test plan

  • Unit logic verified for GPT-5.2 ("low"), GPT-5.1 ("low"), GPT-5-nano (no override), GPT-4o (no override)
  • E2E tested locally: extract + observe with openai/gpt-5.2 in LOCAL mode passes
  • 57 unit tests pass (1 pre-existing failure, unrelated to this change)
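The unit logic cases in the test plan above can be sketched as a small helper; `reasoning_effort_override` is a hypothetical function for illustration, not the actual client code:

```python
def reasoning_effort_override(model: str):
    """Return the reasoning_effort to send for a model name, or None.

    Mirrors the behavior described in the PR summary: GPT-5.1/5.2 get
    "low" (they reject "minimal"); other models get no override.
    """
    if "gpt-5.1" in model or "gpt-5.2" in model:
        return "low"
    return None
```

Under this sketch, `openai/gpt-5.2` and `gpt-5.1` map to `"low"`, while `gpt-5-nano` and `gpt-4o` get no `reasoning_effort` at all.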

🤖 Generated with Claude Code


Summary by cubic

Fixes OpenAI errors for gpt-5.1 and gpt-5.2 in local mode by setting reasoning_effort to "low" instead of the unsupported "minimal". Prevents failed requests when using these models via the local litellm client.

  • Bug Fixes
    • Apply reasoning_effort: "low" only for gpt-5.1 and gpt-5.2 in local (litellm) mode.
    • Other GPT-5 variants (e.g., gpt-5-nano) are unchanged.
    • For API mode, this is handled server-side; upgrade to stagehand>=3.6.0 if you still see the error.

Written for commit 6f348f2.

GPT-5.1 and GPT-5.2 reject reasoning_effort: "minimal" with error:
"Unsupported value: 'reasoning_effort' does not support 'minimal'".
Set reasoning_effort to "low" for these models to avoid the error.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file

Confidence score: 3/5

  • There is a concrete behavior risk in stagehand/llm/client.py: reasoning_effort is always overridden for GPT-5.1/5.2, even when callers pass valid explicit values like "high" or "medium".
  • Given the issue’s high severity/confidence (7/10, 9/10), this could cause user-visible regressions in model configuration rather than being a purely internal refactor concern.
  • Pay close attention to stagehand/llm/client.py - ensure the GPT-5.1/5.2 fallback only handles the rejected "minimal" case and preserves valid user-provided reasoning_effort values.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="stagehand/llm/client.py">

<violation number="1" location="stagehand/llm/client.py:120">
P1: This unconditionally overrides `reasoning_effort` for GPT-5.1/5.2, even when the user explicitly passes a valid value (e.g., `"high"`, `"medium"`). Since the bug is specifically about `"minimal"` being rejected, only that value should be replaced.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

# GPT-5.1 and GPT-5.2 don't support "minimal" reasoning_effort.
# Set "low" for these models to avoid OpenAI API errors.
if "gpt-5.1" in completion_model or "gpt-5.2" in completion_model:
    filtered_params["reasoning_effort"] = "low"

@cubic-dev-ai cubic-dev-ai bot Mar 9, 2026


P1: This unconditionally overrides reasoning_effort for GPT-5.1/5.2, even when the user explicitly passes a valid value (e.g., "high", "medium"). Since the bug is specifically about "minimal" being rejected, only that value should be replaced.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stagehand/llm/client.py, line 120:

<comment>This unconditionally overrides `reasoning_effort` for GPT-5.1/5.2, even when the user explicitly passes a valid value (e.g., `"high"`, `"medium"`). Since the bug is specifically about `"minimal"` being rejected, only that value should be replaced.</comment>

<file context>
@@ -114,6 +114,11 @@ async def create_response(
+            # GPT-5.1 and GPT-5.2 don't support "minimal" reasoning_effort.
+            # Set "low" for these models to avoid OpenAI API errors.
+            if "gpt-5.1" in completion_model or "gpt-5.2" in completion_model:
+                filtered_params["reasoning_effort"] = "low"
+
         self.logger.debug(
</file context>
Suggested change
-    filtered_params["reasoning_effort"] = "low"
+    if filtered_params.get("reasoning_effort") == "minimal":
+        filtered_params["reasoning_effort"] = "low"
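The reviewer's narrower fix can be sketched as a standalone function; `fix_reasoning_effort` is a hypothetical helper (borrowing the `filtered_params`/`completion_model` names from the diff), not the shipped code:

```python
def fix_reasoning_effort(filtered_params: dict, completion_model: str) -> dict:
    """Only replace the rejected "minimal" value for GPT-5.1/5.2.

    Unlike an unconditional override, an explicit user-provided value
    such as "high" or "medium" passes through untouched, and other
    models are never modified.
    """
    is_gpt_51_52 = "gpt-5.1" in completion_model or "gpt-5.2" in completion_model
    if is_gpt_51_52 and filtered_params.get("reasoning_effort") == "minimal":
        # Copy rather than mutate, so callers' dicts are left intact.
        filtered_params = {**filtered_params, "reasoning_effort": "low"}
    return filtered_params
```

Note this distinction only matters if callers can actually set `reasoning_effort`; per the exchange below, they cannot on v2 or v3, which is why the unconditional override was merged as-is.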

Collaborator commented:

@shrey150 can users pass specific reasoning effort?

@shrey150 (Contributor, Author) replied:

no, not possible on v3 nor in v2

@shrey150 merged commit 9488f88 into v0 on Mar 10, 2026
3 checks passed