[BUG] GPT-OSS in Strands filtered out reasoning content, impacting performance and token efficiency #1950

@WangHong-yang

Description

Checks

  • I have updated to the latest minor and patch version of Strands
  • I have checked the documentation and this is not expected behavior
  • I have searched ./issues and there are no duplicates of my issue

Strands Version

v1.32.0

Python Version

3.11

Operating System

AL2

Installation Method

pip

Steps to Reproduce

I saw this warning when using the GPT-OSS model in Strands:

WARNING:strands.models.openai:reasoningContent is not supported in multi-turn conversations with the Chat Completions API.

Strands throws away reasoning content in this code:

# Check for reasoningContent and warn user
if any("reasoningContent" in content for content in contents):
    logger.warning(
        "reasoningContent is not supported in multi-turn conversations with the Chat Completions API."
    )

# Filter out content blocks that shouldn't be formatted
filtered_contents = []
for content in contents:
    if any(block_type in content for block_type in ["toolResult", "toolUse", "reasoningContent"]):
        continue
    if _has_location_source(content):
        logger.warning("Location sources are not supported by OpenAI | skipping content block")
        continue
    filtered_contents.append(content)
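A minimal, self-contained sketch of the filtering above makes the bug easy to reproduce. Here `contents`, the block shapes, and `_has_location_source` are illustrative stand-ins, not the exact Strands internals; the filter logic itself is copied from the snippet:

```python
import logging

logger = logging.getLogger("strands.models.openai")

def _has_location_source(content):
    # Stub for illustration; the real helper checks for location-based sources.
    return "location" in content

def filter_contents(contents):
    # Mirrors the filtering logic quoted above.
    if any("reasoningContent" in content for content in contents):
        logger.warning(
            "reasoningContent is not supported in multi-turn conversations with the Chat Completions API."
        )
    filtered_contents = []
    for content in contents:
        # reasoningContent is dropped alongside tool blocks, so it never
        # makes it back into the next request.
        if any(block_type in content for block_type in ["toolResult", "toolUse", "reasoningContent"]):
            continue
        if _has_location_source(content):
            logger.warning("Location sources are not supported by OpenAI | skipping content block")
            continue
        filtered_contents.append(content)
    return filtered_contents

contents = [
    {"text": "What's the weather?"},
    {"reasoningContent": {"reasoningText": {"text": "The user wants current weather..."}}},
    {"toolUse": {"name": "get_weather", "input": {}}},
]
print(filter_contents(contents))  # only the plain text block survives
```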

Expected Behavior

To optimize performance and maximize token efficiency, OpenAI suggests keeping reasoning items in context:

When doing function calling with a reasoning model in the Responses API, we highly recommend you pass back any reasoning items returned with the last function call (in addition to the output of your function). If the model calls multiple functions consecutively, you should pass back all reasoning items, function call items, and function call output items, since the last user message. This allows the model to continue its reasoning process to produce better results in the most token-efficient manner.

Actual Behavior

WARNING:strands.models.openai:reasoningContent is not supported in multi-turn conversations with the Chat Completions API.

Additional Context

No response

Possible Solution

Keep reasoning content when formatting messages, by removing the warning and dropping "reasoningContent" from the filter list:

# Keep reasoningContent so the model can continue its reasoning in later turns
filtered_contents = []
for content in contents:
    if any(block_type in content for block_type in ["toolResult", "toolUse"]):
        continue
    if _has_location_source(content):
        logger.warning("Location sources are not supported by OpenAI | skipping content block")
        continue
    filtered_contents.append(content)

Related Issues

No response

Labels

bug: Something isn't working