
[Issue]: <title> #2129

@J-zeze

Description

Do you need to file an issue?

  • I have searched the existing issues and this bug is not already filed.
  • My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
  • I believe this is a legitimate bug, not just a question. If this is a question, please use the Discussions area.

Describe the issue

🐛 Bug: LiteLLM enable_thinking Parameter Failing to Propagate with DashScope (Qwen) Model

📝 Description

When attempting to run the graphrag.index command using a model hosted on Alibaba Cloud DashScope (specifically Qwen3-32B, accessed via LiteLLM), the process fails with a litellm.BadRequestError.

This error indicates that the enable_thinking parameter, which the underlying API apparently requires to be set to false for non-streaming calls, is not being passed through to the LiteLLM client from the settings.yaml file.

Despite explicitly setting this parameter in the configuration, the error persists.
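For context, the same non-streaming request appears to succeed when the parameter is passed directly to the DashScope OpenAI-compatible endpoint. Below is a minimal sketch outside GraphRAG, assuming the OpenAI Python SDK (1.x) and DashScope's compatible-mode base URL; the `extra_body` key simply mirrors the parameter named in the error message:

```python
# Minimal standalone check of the DashScope requirement, outside GraphRAG.
# Assumptions: OpenAI Python SDK >= 1.x and DashScope's OpenAI-compatible
# endpoint; the enable_thinking name comes from the error message above.
from openai import OpenAI

client = OpenAI(
    api_key="<YOUR_DASH_SCOPE_API_KEY>",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

resp = client.chat.completions.create(
    model="qwen3-32b",
    messages=[{"role": "user", "content": "ping"}],
    # Without this, the non-streaming call fails with the same BadRequestError.
    extra_body={"enable_thinking": False},
)
print(resp.choices[0].message.content)
```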

💻 Steps to Reproduce

Environment:

GraphRAG Version: (Please fill in your current GraphRAG version, e.g., pip show graphrag)

LiteLLM Version: (Please fill in your current LiteLLM version, e.g., pip show litellm)

Python Version: (e.g., Python 3.10)

Operating System: (e.g., Windows 11 / Ubuntu 22.04)

Configuration (settings.yaml):
Configure the llm (and/or embeddings) section to use the DashScope model and explicitly include the necessary LiteLLM parameter:

```yaml
llm:
  model: "dashscope/qwen3-32b"   # Or "qwen3-32b" if using the shorter format
  api_key: "<YOUR_DASH_SCOPE_API_KEY>"
  type: "openai"                 # Assuming this is the configured model type for LiteLLM routing

  # Explicitly setting the required parameter as per the LiteLLM error message
  litellm_params:
    enable_thinking: false
```

Execution:
Run the indexing command:

```bash
python -m graphrag.index --root <project_root>
```

❌ Observed Error

The execution immediately fails with the following traceback snippet:

```
litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - parameter.enable_thinking must be set to false for non-streaming calls
```

✅ Expected Behavior

The litellm_params specified in the settings.yaml should be correctly merged into the final LiteLLM API call, allowing the indexing process to proceed without the BadRequestError.
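For reference, this is roughly the call shape I would expect to reach LiteLLM once litellm_params is merged in. A minimal sketch, not GraphRAG's actual code, assuming LiteLLM's dashscope/ provider prefix works as in the config above and that provider-specific options can be forwarded via extra_body as with other OpenAI-compatible providers:

```python
# Hedged sketch of the expected downstream call, not GraphRAG's actual code.
# Assumption: litellm_params from settings.yaml is merged into the kwargs of
# the underlying LiteLLM completion call.
import litellm

litellm_params = {"enable_thinking": False}  # as configured in settings.yaml

response = litellm.completion(
    model="dashscope/qwen3-32b",
    messages=[{"role": "user", "content": "ping"}],
    api_key="<YOUR_DASH_SCOPE_API_KEY>",
    # Expected merge: provider-specific options forwarded into the request body.
    extra_body=litellm_params,
)
print(response.choices[0].message.content)
```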

❓ Workarounds Attempted

  • Explicitly setting litellm_params: {enable_thinking: false} in the llm block of settings.yaml.
  • (If applicable) Explicitly setting litellm_params: {enable_thinking: false} in the embeddings block of settings.yaml.

[Note to Reporter: Please ensure you replace the placeholder values (e.g., version numbers, OS) with your actual environment details before submitting the issue.]

Steps to reproduce

No response

GraphRAG Config Used

# Paste your config here

Logs and screenshots

No response

Additional Information

  • GraphRAG Version:
  • Operating System:
  • Python Version:
  • Related Issues:

Metadata

Assignees

No one assigned

Labels

backlog: We've confirmed some action is needed on this and will plan it
v3: Issues that we know should be closed with the v3 release in late 2025.
