
support minimax-m2.5 #1848

Merged

ovowei merged 1 commit into main from minimax-m2.5 on Feb 13, 2026

Conversation

@ovowei (Collaborator) commented Feb 13, 2026

No description provided.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @ovowei, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces new documentation that enables users to deploy and run the MiniMax-M2.5 model. It outlines the necessary steps and configurations for performing inference with this model, specifically highlighting its integration with KT-Kernel for efficient CPU-GPU heterogeneous processing of large Mixture-of-Experts (MoE) models.

Highlights

  • New Model Support: Added comprehensive documentation for running the MiniMax-M2.5 model.
  • KT-Kernel Integration: Detailed steps for leveraging KT-Kernel with SGLang for CPU-GPU heterogeneous inference, optimizing large MoE models.
  • Inference Guide: Provided a complete guide including hardware requirements, prerequisites, model download, server launch commands, and example API requests for MiniMax-M2.5.
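The highlights mention example API requests for the served model. As a rough sketch of what such a request body might look like (the endpoint shape and parameters follow SGLang's usual OpenAI-compatible server and are illustrative assumptions, not taken from this PR):

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions
# endpoint; model name and parameters are illustrative assumptions.
payload = {
    "model": "MiniMaxAI/MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 64,
}
body = json.dumps(payload)
print(body)
```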


Changelog
  • doc/en/MiniMax-M2.5.md
    • Added a new tutorial for running MiniMax-M2.5 with SGLang and KT-Kernel.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request adds documentation for running the MiniMax-M2.5 model. The new markdown file provides a tutorial, but it contains several issues that could confuse users or cause errors, such as incorrect command sequences, broken links, and inconsistencies in the provided examples. I've added comments to address these issues and improve the clarity and correctness of the documentation.

Comment on lines +28 to +30:

```bash
git clone https://github.com/kvcache-ai/ktransformers.git
git submodule update --init --recursive
cd kt-kernel && ./install.sh
```

high

The installation instructions for KT-Kernel are missing a `cd ktransformers` command after cloning the repository. The `kt-kernel` directory is inside the `ktransformers` repository, so users need to change into it before they can run `cd kt-kernel`. Following the current instructions will result in an error.

Suggested change:

```diff
 git clone https://github.com/kvcache-ai/ktransformers.git
+cd ktransformers
 git submodule update --init --recursive
 cd kt-kernel && ./install.sh
```
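The directory problem behind this suggestion can be reproduced with a toy filesystem (the paths are hypothetical stand-ins for the real clone): `kt-kernel` is only visible after changing into the cloned repository.

```python
import os
import tempfile

# Toy layout mirroring the review comment: kt-kernel lives *inside*
# the cloned ktransformers repository, not beside it.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "ktransformers", "kt-kernel"))

os.chdir(root)
visible_before = os.path.isdir("kt-kernel")   # False: not visible from here

os.chdir("ktransformers")                     # the step the docs omitted
visible_after = os.path.isdir("kt-kernel")    # True: now `cd kt-kernel` works

print(visible_before, visible_after)
```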

Comment on the prerequisites line:

```markdown
2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)
```

high

The relative link to the SGLang integration steps appears to be broken. The `kt-kernel_intro.md` file is located in a `kt-kernel` subdirectory. Please correct the path to ensure the link works correctly.

Suggested change:

```diff
-2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)
+2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel/kt-kernel_intro.md#integration-with-sglang)
```

Comment on lines +54 to +61:

```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download MiniMax-M2.5 (FP8 for both CPU and GPU)
huggingface-cli download MiniMaxAI/MiniMax-M2.5 \
  --local-dir /path/to/minimax-m2.5
```

high

The instructions for downloading model weights are confusing. The user is instructed to `cd /path/to/models`, but then the `huggingface-cli` command uses an absolute path, `--local-dir /path/to/minimax-m2.5`, which makes the `cd` command irrelevant and may lead to the model being saved in an unexpected location. It would be clearer to use a relative path for the download directory.

Suggested change:

```diff
 # Create a directory for models
 mkdir -p /path/to/models
 cd /path/to/models

 # Download MiniMax-M2.5 (FP8 for both CPU and GPU)
 huggingface-cli download MiniMaxAI/MiniMax-M2.5 \
-  --local-dir /path/to/minimax-m2.5
+  --local-dir minimax-m2.5
```
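The absolute-vs-relative path point can be checked directly with `posixpath`, using the placeholder paths from the docs: joining a working directory with an absolute path discards the working directory entirely.

```python
import posixpath

# Illustrates the review comment: an absolute --local-dir ignores the
# current working directory, so the preceding `cd` has no effect on
# where the model lands. Paths are the placeholders from the docs.
cwd = "/path/to/models"

absolute_target = posixpath.join(cwd, "/path/to/minimax-m2.5")
relative_target = posixpath.join(cwd, "minimax-m2.5")

print(absolute_target)  # /path/to/minimax-m2.5 (cwd discarded)
print(relative_target)  # /path/to/models/minimax-m2.5
```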

Comment on the hardware requirements section:

```markdown
## Hardware Requirements

**Minimum Configuration:**
- **GPU**: NVIDIA RTX 2x4090 48GB (or equivalent with at least total 48GB VRAM available)
```

medium

The description "NVIDIA RTX 2x4090 48GB" can be slightly ambiguous. It could be misinterpreted as two 48GB cards. For clarity, it's better to specify the memory per card and the total VRAM.

Suggested change:

```diff
-- **GPU**: NVIDIA RTX 2x4090 48GB (or equivalent with at least total 48GB VRAM available)
+- **GPU**: 2x NVIDIA RTX 4090 (24GB each, for a total of 48GB VRAM available)
```
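The arithmetic behind the clarified wording, assuming 24 GB per RTX 4090 card as in the suggested text:

```python
# Two 24 GB cards give 48 GB of *total* VRAM, not 48 GB per card,
# which is the ambiguity the reviewer is flagging.
cards = 2
vram_per_card_gb = 24
total_vram_gb = cards * vram_per_card_gb
print(total_vram_gb)  # 48
```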

Comment on lines +40 to +41:

```bash
// maybe need to reinstall cudnn according to the issue when launching SGLang
// pip install nvidia-cudnn-cu12==9.16.0.29
```

medium

The `//` syntax is not valid for comments in a bash code block in Markdown. Please use `#` for comments to ensure correctness and prevent potential copy-paste errors for users. Additionally, the comment "according to the issue" is vague; it would be more helpful to link to the specific issue if possible.

Suggested change:

```diff
-// maybe need to reinstall cudnn according to the issue when launching SGLang
-// pip install nvidia-cudnn-cu12==9.16.0.29
+# maybe need to reinstall cudnn according to the issue when launching SGLang
+# pip install nvidia-cudnn-cu12==9.16.0.29
```
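A tiny lint-style check illustrating the point (the function is illustrative, not part of any real tool): bash comments start with `#`, while a `//` line is not a comment and would be executed, and typically fail, if the block were copy-pasted into a shell.

```python
# Minimal sketch of a check for valid bash comment syntax.
def is_valid_bash_comment(line: str) -> bool:
    """Return True if the line is a '#' comment, False for '//' and others."""
    return line.lstrip().startswith("#")

print(is_valid_bash_comment("// pip install nvidia-cudnn-cu12==9.16.0.29"))  # False
print(is_valid_bash_comment("# pip install nvidia-cudnn-cu12==9.16.0.29"))   # True
```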

Comment on the server launch section:

```markdown
Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

### Launch Command (4x RTX 4090 Example)
```

medium

There is an inconsistency between the minimum hardware requirements and the launch command example. The requirements state a minimum of 2x RTX 4090, but the example is for a 4x RTX 4090 setup (as indicated by `--tensor-parallel-size 4`). This could be confusing for users. Please clarify whether 4 GPUs are required for this example, or provide an example that matches the minimum configuration.
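The mismatch can be expressed as a simple validity check (the function below is illustrative, not part of SGLang's API): the tensor-parallel degree cannot exceed the number of available GPUs, so a `tp=4` example cannot run on the documented 2-GPU minimum.

```python
# Illustrative consistency check between GPU count and tensor parallelism.
def example_runs_on(num_gpus: int, tensor_parallel_size: int) -> bool:
    """A tensor-parallel launch needs at least tensor_parallel_size GPUs."""
    return 1 <= tensor_parallel_size <= num_gpus

print(example_runs_on(4, 4))  # True: the documented 4x RTX 4090 example
print(example_runs_on(2, 4))  # False: the stated 2-GPU minimum cannot run it
```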


Comment on the parameter tuning note:

```markdown
It takes about 2~3 minutes to start the server.

See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.
```

medium

Instead of linking to the `README.md` on the main branch of the GitHub repository, consider using a relative link to the documentation file within this repository (e.g., `../kt-kernel/kt-kernel_intro.md#kt-kernel-parameters`). This ensures that the documentation remains consistent with the code version and works offline.

Suggested change:

```diff
-See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.
+See [KT-Kernel Parameters](../kt-kernel/kt-kernel_intro.md#kt-kernel-parameters) for detailed parameter tuning guidelines.
```
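How the suggested relative link would resolve from the new document's location (the repo layout here is assumed from the review comments, not verified against the repository):

```python
import posixpath

# Resolve a relative markdown link against the linking document's directory.
doc_path = "doc/en/MiniMax-M2.5.md"                 # the file added by this PR
link = "../kt-kernel/kt-kernel_intro.md"            # the suggested relative link
target = posixpath.normpath(posixpath.join(posixpath.dirname(doc_path), link))
print(target)  # doc/kt-kernel/kt-kernel_intro.md
```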

@ovowei merged commit f0e4fc6 into main on Feb 13, 2026
6 checks passed
