Conversation
**Summary of Changes**

Hello @ovowei, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces new documentation that enables users to deploy and run the MiniMax-M2.5 model. It outlines the necessary steps and configurations for performing inference with this model, specifically highlighting its integration with KT-Kernel for efficient CPU-GPU heterogeneous processing of large Mixture-of-Experts (MoE) models.
Activity
Code Review
This pull request adds documentation for running the MiniMax-M2.5 model. The new markdown file provides a tutorial, but it contains several issues that could confuse users or cause errors, such as incorrect command sequences, broken links, and inconsistencies in the provided examples. I've added comments to address these issues and improve the clarity and correctness of the documentation.
```bash
git clone https://github.com/kvcache-ai/ktransformers.git
git submodule update --init --recursive
cd kt-kernel && ./install.sh
```
The installation instructions for KT-Kernel are missing a `cd ktransformers` command after cloning the repository. The `kt-kernel` directory lives inside the cloned `ktransformers` repository, so users must first change into the repository before `cd kt-kernel` can succeed. Following the current instructions as written will result in an error.
Suggested change:

```diff
 git clone https://github.com/kvcache-ai/ktransformers.git
+cd ktransformers
 git submodule update --init --recursive
 cd kt-kernel && ./install.sh
```
```markdown
2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)
```
The relative link to the SGLang integration steps appears to be broken. The kt-kernel_intro.md file is located in a kt-kernel subdirectory. Please correct the path to ensure the link works correctly.
Suggested change:

```diff
-2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)
+2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel/kt-kernel_intro.md#integration-with-sglang)
```
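Relative links in Markdown resolve against the directory of the file that contains them, which is why the original path breaks. A minimal sketch of the resolution logic (the document paths below are illustrative assumptions, not the repository's actual layout):

```python
import posixpath

# Hypothetical location of the new tutorial (illustrative path only).
tutorial = "doc/en/minimax-m2.5.md"
base = posixpath.dirname(tutorial)  # "doc/en"

# The original link resolves to a file one level too high...
broken = posixpath.normpath(posixpath.join(base, "./kt-kernel_intro.md"))
# ...while the corrected link points into the kt-kernel subdirectory.
fixed = posixpath.normpath(posixpath.join(base, "./kt-kernel/kt-kernel_intro.md"))

print(broken)  # doc/en/kt-kernel_intro.md
print(fixed)   # doc/en/kt-kernel/kt-kernel_intro.md
```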
```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download MiniMax-M2.5 (FP8 for both CPU and GPU)
huggingface-cli download MiniMaxAI/MiniMax-M2.5 \
  --local-dir /path/to/minimax-m2.5
```
The instructions for downloading model weights are confusing. The user is instructed to cd /path/to/models, but then the huggingface-cli command uses an absolute path --local-dir /path/to/minimax-m2.5, which makes the cd command irrelevant and may lead to the model being saved in an unexpected location. It would be clearer to use a relative path for the download directory.
Suggested change:

```diff
 # Create a directory for models
 mkdir -p /path/to/models
 cd /path/to/models

 # Download MiniMax-M2.5 (FP8 for both CPU and GPU)
 huggingface-cli download MiniMaxAI/MiniMax-M2.5 \
-  --local-dir /path/to/minimax-m2.5
+  --local-dir minimax-m2.5
```
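The difference between the two `--local-dir` values comes down to how absolute and relative paths interact with the working directory set by `cd`. A quick sketch using the tutorial's placeholder paths (they are placeholders, not real directories):

```python
import posixpath

cwd = "/path/to/models"  # working directory after `cd /path/to/models`

# A relative --local-dir lands inside the working directory:
relative = posixpath.normpath(posixpath.join(cwd, "minimax-m2.5"))
print(relative)  # /path/to/models/minimax-m2.5

# An absolute --local-dir ignores the working directory entirely,
# which is why the cd in the original snippet had no effect on it:
absolute = posixpath.join(cwd, "/path/to/minimax-m2.5")
print(absolute)  # /path/to/minimax-m2.5
```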
```markdown
## Hardware Requirements

**Minimum Configuration:**
- **GPU**: NVIDIA RTX 2x4090 48GB (or equivalent with at least total 48GB VRAM available)
```
The description "NVIDIA RTX 2x4090 48GB" can be slightly ambiguous. It could be misinterpreted as two 48GB cards. For clarity, it's better to specify the memory per card and the total VRAM.
Suggested change:

```diff
-- **GPU**: NVIDIA RTX 2x4090 48GB (or equivalent with at least total 48GB VRAM available)
+- **GPU**: 2x NVIDIA RTX 4090 (24GB each, for a total of 48GB VRAM available)
```
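The arithmetic behind the ambiguity, as a sanity check (the card count and per-card memory are the figures from the requirement; nothing model-specific is assumed):

```python
# Intended reading: two RTX 4090 cards at 24 GB each.
cards = 2
vram_per_card_gb = 24
total_vram_gb = cards * vram_per_card_gb
print(total_vram_gb)  # 48

# Misreading "2x4090 48GB" as two 48 GB cards would double the budget:
misread_total_gb = 2 * 48
print(misread_total_gb)  # 96
```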
```bash
// maybe need to reinstall cudnn according to the issue when launching SGLang
// pip install nvidia-cudnn-cu12==9.16.0.29
```
The // syntax is not valid for comments in a bash code block in Markdown. Please use # for comments to ensure correctness and prevent potential copy-paste errors for users. Additionally, the comment "according to the issue" is vague. It would be more helpful to link to the specific issue if possible.
Suggested change:

```diff
-// maybe need to reinstall cudnn according to the issue when launching SGLang
-// pip install nvidia-cudnn-cu12==9.16.0.29
+# maybe need to reinstall cudnn according to the issue when launching SGLang
+# pip install nvidia-cudnn-cu12==9.16.0.29
```
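The distinction matters when users paste the block verbatim: in a shell, `#` begins a comment, while `//` is parsed as a command name and typically produces an error. A tiny illustration of the corrected form:

```shell
# '#' starts a comment, so this pip line is safely inert when pasted:
# pip install nvidia-cudnn-cu12==9.16.0.29

echo "script continues past the comment"
```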
```markdown
Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

### Launch Command (4x RTX 4090 Example)
```
There is an inconsistency between the minimum hardware requirements and the launch command example. The requirements state a minimum of 2x RTX 4090, but the example is for a 4x RTX 4090 setup (as indicated by --tensor-parallel-size 4). This could be confusing for users. Please clarify if 4 GPUs are required for this example, or provide an example that matches the minimum configuration.
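One way to resolve the mismatch would be to also show a variant matching the stated minimum. A hypothetical 2-GPU invocation, assuming the standard SGLang entrypoint and keeping the tutorial's placeholder model path; the KT-Kernel-specific flags from the 4-GPU example would carry over unchanged and are omitted here:

```shell
# Hypothetical minimum-configuration launch (2x RTX 4090), not from the tutorial.
# Append the KT-Kernel flags from the 4-GPU example as-is.
python -m sglang.launch_server \
  --model-path /path/to/minimax-m2.5 \
  --tensor-parallel-size 2
```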
```markdown
It takes about 2~3 minutes to start the server.

See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.
```
Instead of linking to the README.md on the main branch of the GitHub repository, consider using a relative link to the documentation file within this repository (e.g., ../kt-kernel/kt-kernel_intro.md#kt-kernel-parameters). This ensures that the documentation remains consistent with the code version and works offline.
Suggested change:

```diff
-See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.
+See [KT-Kernel Parameters](../kt-kernel/kt-kernel_intro.md#kt-kernel-parameters) for detailed parameter tuning guidelines.
```