feat: add MiniMax chat model integration#5959

Open
octo-patch wants to merge 1 commit into FlowiseAI:main from octo-patch:feat/add-minimax-chat-model

Conversation

@octo-patch

Summary

Adds MiniMax as a new LLM chat model provider in Flowise, giving users access to MiniMax's language models (MiniMax-M2.5 and MiniMax-M2.5-highspeed) through the familiar Flowise node interface.

Changes

  • New chat model node: ChatMiniMax in packages/components/nodes/chatmodels/ChatMiniMax/ using ChatOpenAI from @langchain/openai (MiniMax API is OpenAI-compatible)
  • Credential: MiniMaxApi.credential.ts for secure API key management
  • Model definitions: Added MiniMax models to models.json for model selection dropdown
  • Icon: Custom SVG icon for the MiniMax node

Key Details

  • MiniMax API base: https://api.minimax.io/v1
  • Supported models: MiniMax-M2.5 (flagship, 204K context), MiniMax-M2.5-highspeed (optimized for speed)
  • Temperature range: (0.0, 1.0] (MiniMax does not accept zero)
  • Follows the same pattern as existing OpenAI-compatible providers (Deepseek, Groq, etc.)
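To make the constraints above concrete, here is a minimal sketch of the configuration shape an OpenAI-compatible client wrapper might receive for MiniMax. The interface, function, and field names (`MiniMaxChatConfig`, `buildMiniMaxConfig`) are illustrative assumptions, not the actual Flowise ChatMiniMax code:

```typescript
// Illustrative sketch only: field names are assumptions, not Flowise code.
const MINIMAX_BASE_URL = 'https://api.minimax.io/v1'

interface MiniMaxChatConfig {
    modelName: string
    temperature: number
    configuration: { baseURL: string }
}

function buildMiniMaxConfig(modelName: string, temperature: number): MiniMaxChatConfig {
    // MiniMax accepts temperature in (0.0, 1.0] only; zero is rejected.
    if (!(temperature > 0 && temperature <= 1)) {
        throw new Error(`temperature must be in (0.0, 1.0], got ${temperature}`)
    }
    return { modelName, temperature, configuration: { baseURL: MINIMAX_BASE_URL } }
}
```

Pinning the base URL in the node (rather than exposing it as an input) is what keeps the provider node equivalent to the Deepseek/Groq pattern mentioned above.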

Test Plan

  • Verify ChatMiniMax node appears in the Chat Models category
  • Test credential configuration with MiniMax API key
  • Test model selection dropdown loads MiniMax models
  • Test text generation with MiniMax-M2.5
  • Verify streaming works correctly

Commit message:

Add MiniMax as a new LLM chat model provider, supporting MiniMax-M2.5
and MiniMax-M2.5-highspeed models via their OpenAI-compatible API.

- New ChatMiniMax node using ChatOpenAI with MiniMax base URL
- MiniMax API credential for secure key management
- Model definitions in models.json for model selection
- Custom SVG icon for the MiniMax node
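The credential described above might be shaped roughly like the following sketch. This is hypothetical: the real Flowise `INodeCredential` interface and the exact field names in `MiniMaxApi.credential.ts` may differ.

```typescript
// Hypothetical sketch of an API-key credential definition; the real
// Flowise credential interface and field names may differ.
interface CredentialInputField {
    label: string
    name: string
    type: 'password' | 'string'
}

class MiniMaxApiCredential {
    label = 'MiniMax API'
    name = 'minimaxApi'
    inputs: CredentialInputField[] = [
        {
            label: 'MiniMax API Key',
            name: 'minimaxApiKey',
            type: 'password' // stored encrypted, never echoed back to the UI
        }
    ]
}
```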
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances Flowise by adding support for MiniMax as a new large language model chat provider. It introduces the necessary components to integrate MiniMax's M2.5 and M2.5-highspeed models, allowing users to leverage these powerful LLMs through a dedicated node. The changes include robust API key management and ensure compatibility with Flowise's existing architecture, broadening the platform's generative AI capabilities.

Highlights

  • MiniMax Chat Model Integration: Integrated MiniMax as a new chat model provider, enabling access to MiniMax-M2.5 and MiniMax-M2.5-highspeed models.
  • New ChatMiniMax Node: Introduced a dedicated ChatMiniMax node within Flowise for seamless interaction with MiniMax language models.
  • Credential Management: Implemented a new credential class (MiniMaxApi.credential.ts) for secure handling of MiniMax API keys.
  • Model Definition Updates: Updated models.json to include MiniMax models and standardized cost notations for various existing models.
Changelog
  • packages/components/credentials/MiniMaxApi.credential.ts
    • Added a new credential class for MiniMax API key management.
  • packages/components/models.json
    • Standardized cost values to scientific notation for various models.
    • Added MiniMax chat models (MiniMax-M2.5 and MiniMax-M2.5-highspeed) to the available model list.
  • packages/components/nodes/chatmodels/ChatMiniMax/ChatMiniMax.ts
    • Added a new ChatMiniMax node for integrating MiniMax chat models, utilizing the OpenAI-compatible API.
    • Configured the node with parameters such as temperature, max tokens, streaming, and penalty controls.
  • packages/components/nodes/chatmodels/ChatMiniMax/minimax.svg
    • Added a custom SVG icon for the new ChatMiniMax node.

@gemini-code-assist (bot) left a comment:

Code Review

This pull request introduces support for the MiniMax chat model provider. The implementation correctly follows the existing pattern for OpenAI-compatible providers by reusing the ChatOpenAI class. The changes include a new credential type, the chat model node itself, and additions to the model list.

I've identified two high-severity issues in the ChatMiniMax.ts implementation related to handling the temperature parameter and preventing baseURL overrides, which could lead to runtime errors or incorrect API endpoint usage. My review includes suggestions to fix these issues.

In packages/components/nodes/chatmodels/ChatMiniMax/ChatMiniMax.ts:

    const cache = nodeData.inputs?.cache as BaseCache

    const obj: ChatOpenAIFields = {
        temperature: parseFloat(temperature),
Severity: high

The current implementation parseFloat(temperature) can result in NaN if the temperature input is empty, or 0 if the user enters 0. A NaN value will likely cause an error, and the MiniMax API does not accept 0 for temperature. Using || 0.9 provides a fallback to the default value in these cases, ensuring a valid temperature is always sent.

Suggested change:

    - temperature: parseFloat(temperature),
    + temperature: parseFloat(temperature) || 0.9,
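The edge cases behind this suggestion can be demonstrated in isolation. The helper name `resolveTemperature` below is illustrative, not the Flowise code:

```typescript
// Demonstration of the parseFloat edge cases described in the review.
function resolveTemperature(input: string): number {
    // parseFloat('') -> NaN and parseFloat('0') -> 0; both are falsy,
    // so `|| 0.9` replaces them with the default. Coercing an explicit
    // 0 is intended here, since the MiniMax API rejects temperature 0.
    return parseFloat(input) || 0.9
}
```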

    parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
    if (parsedBaseOptions.baseURL) {
        console.warn("The 'baseURL' parameter is not allowed when using the ChatMiniMax node.")
        parsedBaseOptions.baseURL = undefined
Severity: high

Setting parsedBaseOptions.baseURL = undefined does not remove the property from the object. When parsedBaseOptions is spread into the configuration, it will override the correct baseURL with undefined, causing API requests to fail. Using delete will properly remove the property, ensuring this.baseURL is not overridden.

Suggested change:

    - parsedBaseOptions.baseURL = undefined
    + delete parsedBaseOptions.baseURL
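The difference between assigning `undefined` and using `delete` is easy to see with plain object spreads. The object names and the `timeout` field below are illustrative, not the Flowise code:

```typescript
// Why `= undefined` is not enough: spread copies own enumerable keys,
// and a key that is merely set to undefined still overwrites the default.
const defaults = { baseURL: 'https://api.minimax.io/v1' }

const viaUndefined: any = { baseURL: 'https://example.invalid', timeout: 30000 }
viaUndefined.baseURL = undefined
const merged1 = { ...defaults, ...viaUndefined }
// merged1.baseURL is undefined: the key survived and clobbered the default.

const viaDelete: any = { baseURL: 'https://example.invalid', timeout: 30000 }
delete viaDelete.baseURL
const merged2 = { ...defaults, ...viaDelete }
// merged2.baseURL is the MiniMax default: the key is gone, so defaults win.
```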
