Is your feature request related to a problem? Please describe.
When using the new OpenAI GPT-5 models in the Obsidian Copilot plugin, requests fail with the following error:
Error: Model request failed: 400
Unsupported parameter: 'max_tokens' is not supported with this model.
Use 'max_completion_tokens' instead.
This happens because GPT-5 models no longer support the legacy max_tokens parameter, but the plugin still sends it in the request payload.
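To illustrate, assuming the standard Chat Completions request shape, the difference comes down to a single key in the request body (model name and token limit below are example values, not taken from the plugin):

```typescript
// Rejected by GPT-5 models with the 400 error above:
const legacyPayload = {
  model: "gpt-5",
  messages: [{ role: "user", content: "Hello" }],
  max_tokens: 1024, // unsupported parameter for GPT-5
};

// Accepted: the same limit under the newer parameter name:
const updatedPayload = {
  model: "gpt-5",
  messages: [{ role: "user", content: "Hello" }],
  max_completion_tokens: 1024,
};
```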
Describe the solution you'd like
Update the OpenAI request handling so that:
- GPT-5 models use max_completion_tokens
- Legacy and older models continue using max_tokens, where required
Ideally, this should be handled automatically based on the selected model, without requiring manual configuration from the user.
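A minimal sketch of what that automatic handling could look like, assuming model-name prefixes are a reliable signal (the prefix list and helper name here are illustrative assumptions, not the plugin's actual code):

```typescript
// Models assumed to reject the legacy `max_tokens` parameter.
// This prefix list is an assumption for illustration only.
const COMPLETION_TOKENS_PREFIXES = ["gpt-5", "o1", "o3"];

type TokenLimitParams =
  | { max_tokens: number }
  | { max_completion_tokens: number };

// Pick the supported token-limit parameter based on the model name.
function tokenLimitParam(model: string, limit: number): TokenLimitParams {
  const needsNewParam = COMPLETION_TOKENS_PREFIXES.some((prefix) =>
    model.startsWith(prefix)
  );
  return needsNewParam
    ? { max_completion_tokens: limit }
    : { max_tokens: limit };
}

// Spread into the request body so only the supported key is sent:
const body = {
  model: "gpt-5",
  messages: [{ role: "user", content: "Hi" }],
  ...tokenLimitParam("gpt-5", 1024),
};
```

Keeping the decision inside one helper means no per-model configuration is exposed to the user, matching the behavior requested above.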
Describe alternatives you've considered
- Manually modifying the plugin source code to replace max_tokens with max_completion_tokens
- Downgrading to older OpenAI models that still support max_tokens
Neither alternative is ideal for long-term maintenance or for users who want access to newer models.
Additional context
According to OpenAI’s updated API behavior, GPT-5 models require max_completion_tokens instead of max_tokens.
This change currently prevents GPT-5 models from being used at all in the Copilot plugin.