### Confirm this is an issue with the Python library and not an underlying OpenAI API

### Describe the bug
- SDK: openai-python (latest)
- Endpoint: `client.responses.create(..., stream=True)`
- Model: `gpt-5.3-codex` (also reproduced on `gpt-5.1-codex`, `gpt-5.1-codex-max`)
When calling the Responses API with:

- `reasoning={"effort": "high", "summary": "auto"}`
- `stream=True`

the codex variants never emit `response.reasoning_summary_part.added` / `response.reasoning_summary_text.delta` / `response.reasoning_summary_part.done` events for code-generation prompts. The stream goes straight from the request to `response.output_text.delta`, with no reasoning summary in between.
The exact same calling code on `gpt-5-mini` emits reasoning summary events reliably.
### To Reproduce
1. Call `client.responses.create` on `gpt-5.3-codex` with `reasoning={"effort": "high", "summary": "auto"}` and `stream=True`, using a code-generation prompt.
2. Iterate over the streamed events: no `response.reasoning_summary_*` events arrive before `response.output_text.delta`.
3. Run the same code with `model="gpt-5-mini"`: reasoning summary events are emitted.
### Code snippets
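A minimal repro sketch based on the description above. The prompt text and the `RUN_LIVE_REPRO` environment flag are placeholders of mine, not from the original report; the check logic itself is duck-typed so it can be exercised without a live API key:

```python
import os


def saw_reasoning_summary(client, model: str) -> bool:
    """Stream one code-generation prompt and report whether any
    response.reasoning_summary_* event arrived. The client is
    duck-typed, so a stub works for offline checking."""
    stream = client.responses.create(
        model=model,
        input="Write a Python function that parses an ISO 8601 date.",  # placeholder prompt
        reasoning={"effort": "high", "summary": "auto"},
        stream=True,
    )
    return any(event.type.startswith("response.reasoning_summary") for event in stream)


if os.environ.get("RUN_LIVE_REPRO"):  # hypothetical opt-in flag; needs OPENAI_API_KEY set
    from openai import OpenAI

    client = OpenAI()
    # Expected True for both; observed False on the codex variants.
    print("gpt-5.3-codex:", saw_reasoning_summary(client, "gpt-5.3-codex"))
    print("gpt-5-mini:", saw_reasoning_summary(client, "gpt-5-mini"))
```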
### OS

windows

### Python version

Python 3.14.2

### Library version

2.21.0