feat: add MiniMax chat model integration #5959

octo-patch wants to merge 1 commit into FlowiseAI:main from
Conversation
Add MiniMax as a new LLM chat model provider, supporting the MiniMax-M2.5 and MiniMax-M2.5-highspeed models via their OpenAI-compatible API.

- New ChatMiniMax node using ChatOpenAI with the MiniMax base URL
- MiniMax API credential for secure key management
- Model definitions in models.json for model selection
- Custom SVG icon for the MiniMax node
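The exact schema of `models.json` is defined by the Flowise repository, so the following is only a rough, hypothetical sketch of what the new entries could look like (field names are assumed, not taken from the actual file — verify against the existing `models.json` before relying on them):

```json
{
    "name": "chatMiniMax",
    "models": [
        { "label": "MiniMax-M2.5", "name": "MiniMax-M2.5" },
        { "label": "MiniMax-M2.5-highspeed", "name": "MiniMax-M2.5-highspeed" }
    ]
}
```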
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances Flowise by adding support for MiniMax as a new large language model chat provider. It introduces the components needed to integrate MiniMax's M2.5 and M2.5-highspeed models, allowing users to use these LLMs through a dedicated node. The changes include API key management via a new credential type and follow Flowise's existing architecture, broadening the platform's generative AI capabilities.

Highlights
Changelog
Code Review
This pull request introduces support for the MiniMax chat model provider. The implementation correctly follows the existing pattern for OpenAI-compatible providers by reusing the `ChatOpenAI` class. The changes include a new credential type, the chat model node itself, and additions to the model list.

I've identified two high-severity issues in the `ChatMiniMax.ts` implementation, related to handling of the `temperature` parameter and prevention of `baseURL` overrides, which could lead to runtime errors or requests hitting the wrong API endpoint. My review includes suggestions to fix both.
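For context, reusing `ChatOpenAI` for an OpenAI-compatible provider typically amounts to pinning the base URL and passing the standard fields through. A minimal sketch, with both review fixes applied — `buildMiniMaxFields` and the `MiniMaxFields` interface are illustrative names for this sketch, not the actual Flowise node code:

```typescript
// Hypothetical sketch of the fields an OpenAI-compatible MiniMax node
// would pass to ChatOpenAI; not the real ChatMiniMax.ts implementation.
interface MiniMaxFields {
    modelName: string
    temperature: number
    openAIApiKey: string
    configuration: { baseURL: string }
}

function buildMiniMaxFields(modelName: string, temperature: string, apiKey: string): MiniMaxFields {
    return {
        modelName,
        // guard against NaN (empty input) and 0, which MiniMax rejects
        temperature: parseFloat(temperature) || 0.9,
        openAIApiKey: apiKey,
        // pin the OpenAI-compatible endpoint to MiniMax
        configuration: { baseURL: 'https://api.minimax.io/v1' }
    }
}

const fields = buildMiniMaxFields('MiniMax-M2.5', '', 'sk-test')
console.log(fields.temperature, fields.configuration.baseURL)
// prints: 0.9 https://api.minimax.io/v1
```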
```ts
const cache = nodeData.inputs?.cache as BaseCache

const obj: ChatOpenAIFields = {
    temperature: parseFloat(temperature),
```
The current implementation, `parseFloat(temperature)`, yields `NaN` when the temperature input is empty, and `0` when the user enters 0. A `NaN` value will likely cause an error, and the MiniMax API does not accept `0` for temperature. Using `|| 0.9` falls back to the default value in both cases, ensuring a valid temperature is always sent.
```diff
- temperature: parseFloat(temperature),
+ temperature: parseFloat(temperature) || 0.9,
```
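To illustrate the point: `parseFloat` returns `NaN` on an empty string, and `||` treats both `NaN` and `0` as falsy, so a single fallback covers both cases:

```typescript
// || replaces any falsy parse result (NaN, 0) with the default 0.9
const withFallback = (s: string): number => parseFloat(s) || 0.9

console.log(Number.isNaN(parseFloat(''))) // true — empty input parses to NaN
console.log(withFallback(''))             // 0.9 — NaN falls back to the default
console.log(withFallback('0'))            // 0.9 — 0 is falsy, also falls back
console.log(withFallback('0.5'))          // 0.5 — valid values pass through
```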
```ts
parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
if (parsedBaseOptions.baseURL) {
    console.warn("The 'baseURL' parameter is not allowed when using the ChatMiniMax node.")
    parsedBaseOptions.baseURL = undefined
```
Setting `parsedBaseOptions.baseURL = undefined` does not remove the property from the object. When `parsedBaseOptions` is spread into the configuration, the key is still copied and overrides the correct `baseURL` with `undefined`, causing API requests to fail. Using `delete` properly removes the property, ensuring `this.baseURL` is not overridden.
```diff
- parsedBaseOptions.baseURL = undefined
+ delete parsedBaseOptions.baseURL
```
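The difference matters because object spread copies own enumerable keys even when their value is `undefined`. A minimal demonstration:

```typescript
const opts: Record<string, unknown> = { baseURL: undefined, timeout: 5000 }

// The spread still copies baseURL even though its value is undefined,
// so the later key wins and clobbers the correct endpoint.
const clobbered = { baseURL: 'https://api.minimax.io/v1', ...opts }
console.log(clobbered.baseURL) // undefined

// delete removes the key entirely, so the spread leaves baseURL alone.
delete opts.baseURL
const intact = { baseURL: 'https://api.minimax.io/v1', ...opts }
console.log(intact.baseURL) // 'https://api.minimax.io/v1'
```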
Summary
Adds MiniMax as a new LLM chat model provider in Flowise, enabling users to use MiniMax's language models (MiniMax-M2.5 and MiniMax-M2.5-highspeed) through the familiar Flowise node interface.
Changes
- New `ChatMiniMax` node in `packages/components/nodes/chatmodels/ChatMiniMax/` using `ChatOpenAI` from `@langchain/openai` (the MiniMax API is OpenAI-compatible)
- `MiniMaxApi.credential.ts` for secure API key management
- Model definitions in `models.json` for the model selection dropdown

Key Details
- Base URL: `https://api.minimax.io/v1`
- Models: `MiniMax-M2.5` (flagship, 204K context), `MiniMax-M2.5-highspeed` (optimized for speed)

Test Plan