
CPU weight conversion for GLM-5 and MiniMax-M2.5#1853

Open
alin899992 wants to merge 2 commits into kvcache-ai:main from alin899992:main

Conversation

@alin899992

What does this PR do?

Add AMX CPU weight conversion support for GLM-5-FP8 and MiniMax-M2.5.
Tested weights: GLM-5-FP8, MiniMax-M2.5, MiniMax-M2.1.

Note: for GLM-5, only the FP8 weight conversion has been tested; BF16 has not.

Also tested the weight conversion of DeepSeek-V3.2 and Qwen-Next-Coder to confirm that the existing functionality is unaffected.

Fixes #1818


Add CPU weight conversion support for GLM-5 and Minimax-M2.5
@gemini-code-assist
Contributor

Summary of Changes

Hello @alin899992, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly extends the CPU weight conversion utility by integrating support for GLM-5 and MiniMax model architectures. The changes involve introducing model-specific configuration detection, refactoring base conversion logic for better reusability, and creating a dedicated converter for MiniMax models to handle their unique weight structures. This ensures that these new models can be efficiently quantized for CPU inference, maintaining compatibility with existing conversion processes for other models like DeepSeek and Qwen-Next-Coder.

Highlights

  • Expanded Model Support: Added AMX CPU weight conversion support for GLM-5-FP8 and MiniMax-M2.5 models, enabling broader compatibility for quantization workflows.
  • Architectural Adaptations for GLM-5 and MiniMax: Introduced model-specific detection (is_minimax, is_glm5) and handling for GLM-5's initial dense layers and MiniMax's unique weight naming conventions (w1, w2, w3 instead of gate_proj, up_proj, down_proj).
  • Code Refactoring and Modularity: Refactored the ConverterBase class to include common utility methods like _load_binary_tensor, _remove_layer_folder, and _load_layer_tensors_from_disk, which were previously duplicated in OnlineQuantConverter. This improves code reusability and maintainability.
  • Improved FP8 Dequantization: Enhanced the weight_dequant function with a detailed docstring and streamlined the FP8 dequantization process within the OnlineQuantConverter.
  • Dedicated MiniMax Converter: Implemented a new MiniMaxConverter class, inheriting from ConverterBase, to specifically manage the weight conversion process for MiniMax models, addressing their distinct structural requirements.
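The model-type detection and weight-name mapping described above can be sketched roughly as follows. The flag names (is_minimax, is_glm5, first_k_dense_replace) and the constant MINIMAX_WEIGHT_MAP appear in this PR's changelog, but the detection logic and map contents below are assumptions, not the script's actual code:

```python
# Rough sketch of the config detection described above; an assumption,
# not the actual code in convert_cpu_weights.py.

# MiniMax weight naming -> conventional projection naming (assumed contents)
MINIMAX_WEIGHT_MAP = {
    "w1": "gate_proj",
    "w3": "up_proj",
    "w2": "down_proj",
}

def detect_model_type(config: dict) -> dict:
    """Derive model-type flags from an HF-style config.json dict."""
    arch = (config.get("architectures") or [""])[0].lower()
    return {
        "is_minimax": "minimax" in arch,
        "is_glm5": "glm" in arch,
        # GLM-5 keeps its first k layers as dense MLPs rather than MoE
        "first_k_dense_replace": config.get("first_k_dense_replace", 0),
    }
```

With this shape, a single config read yields everything the converter selection in main needs.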


Changelog
  • kt-kernel/scripts/convert_cpu_weights.py
    • Removed unnecessary blank lines for cleaner code.
    • Added module-level constants QUANT_TO_AMX_MAP, PROJ_MAPPINGS, and MINIMAX_WEIGHT_MAP for better organization and reusability.
    • Updated the weight_dequant function with a comprehensive docstring.
    • Modified load_model_config to detect and include is_minimax, is_glm5, and first_k_dense_replace in the model configuration.
    • Introduced _is_dense_layer method in ConverterBase to identify and skip dense MLP layers in GLM-5 models during conversion.
    • Moved _load_binary_tensor, _remove_layer_folder, and _load_layer_tensors_from_disk methods from OnlineQuantConverter to ConverterBase to centralize common functionalities.
    • Updated the convert method in ConverterBase to utilize the new _is_dense_layer check and the expert_key_filter attribute.
    • Refactored OnlineQuantConverter._find_expert_layers to correctly identify expert layers based on fused or base layouts.
    • Updated OnlineQuantConverter._convert_layer_experts to use the new module-level QUANT_TO_AMX_MAP and pass num_experts to _load_layer_tensors_from_disk.
    • Added a new MiniMaxConverter class, inheriting from ConverterBase, to handle the specific weight conversion logic for MiniMax models, including custom _find_expert_layers and _get_tensor_key methods.
    • Modified the main function to dynamically select between OnlineQuantConverter and MiniMaxConverter based on the detected model type.
Activity
  • The pull request aims to fix issue Convert MiniMax M2.1 or DeepSeek V3.2 to CPU weight failed #1818.
  • The author has tested the weight conversion for GLM-5-FP8, MiniMax-M2.5, MiniMax-M2.1, DeepSeek-V3.2, and Qwen-Next-Coder models.
  • Specifically, GLM-5-FP8 weight conversion has been tested, but BF16 conversion for GLM-5 is noted as not yet tested.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for CPU weight conversion for GLM-5 and MiniMax models. The changes include significant refactoring of the conversion script to better handle different model architectures and quantization schemes by introducing a ConverterBase class and specific implementations like MiniMaxConverter. The logic for detecting model types from their configuration has also been added.

The refactoring is well done and improves the structure of the code. I've identified a bug in the logic for selecting the MiniMaxConverter which could cause failures for certain input types. I also found a couple of instances of code duplication that could be cleaned up to improve maintainability. My detailed comments are below.

quant_method,
merge_to_safetensor,
)
if is_minimax and args.input_type == "fp8":

high

The condition to select MiniMaxConverter is too restrictive. It only triggers for fp8 input type, but the converter itself supports fp16 and bf16 as well. If a MiniMax model with bf16 or fp16 input is provided, it will fall back to OnlineQuantConverter, which will likely fail because it doesn't handle the w1, w2, w3 weight naming scheme of MiniMax models. The condition should be simplified to select MiniMaxConverter whenever is_minimax is true.

Suggested change
if is_minimax and args.input_type == "fp8":
if is_minimax:

)
self.quant_method = quant_method

self.expert_key_filter = ".mlp.experts."

medium

The expert_key_filter is initialized here with ".mlp.experts." which is the same as the default value set in the ConverterBase constructor. This line is redundant and can be removed.

Comment on lines +656 to +695
def _find_expert_layers(self) -> Dict[int, List[int]]:
    """Find all layers and experts in the model."""
    layers = defaultdict(set)
    # detect layout
    for key in self.tensor_file_map.keys():
        if "mlp.experts" in key and "gate_up" in key:
            self.layout = "fused"
            break

    if self.layout == "fused":
        layers = set()
        for key in self.tensor_file_map.keys():
            if "model.layers." in key and ".mlp.experts." in key:
                parts = key.split(".")
                if len(parts) >= 6:
                    layer_idx = int(parts[2])
                    layers.add(layer_idx)
        result: Dict[int, List[int]] = {}
        for layer_idx in sorted(layers):
            result[layer_idx] = [-1]
        print(f"Found {len(result)} layers with fused MoE experts")
        return result

    # Pattern: model.layers.{layer}.mlp.experts.{expert}.{proj}.{type}
    for key in self.tensor_file_map.keys():
        if "model.layers." in key and ".mlp.experts." in key:
            parts = key.split(".")
            if len(parts) >= 6:
                layer_idx = int(parts[2])
                expert_idx = int(parts[5])
                layers[layer_idx].add(expert_idx)

    # Convert to sorted lists
    result: Dict[int, List[int]] = {}
    for layer_idx, expert_set in layers.items():
        result[layer_idx] = sorted(list(expert_set))
    print(f"Found {len(result)} layers with MoE experts:")
    for layer_idx, experts in sorted(result.items()):
        print(f"  Layer {layer_idx}: {len(experts)} experts (0-{max(experts)})")
    return result

def _remove_layer_folder(self, layer_idx: int):
    """Remove _layer_{layer_idx} folder and all its contents

    Args:
        layer_idx: Layer index
    """
    import shutil

    layer_path = os.path.join(self.output_path, f"_layer_{layer_idx}")
    if os.path.exists(layer_path):
        shutil.rmtree(layer_path)
        print(f"  Removed temporary folder: {layer_path}")

medium

The _find_expert_layers method is identical to the one in the base class ConverterBase. This code duplication can be removed to improve maintainability. The method can be inherited from ConverterBase directly.

Collaborator

@ErvinXie ErvinXie left a comment


Thanks for adding GLM-5 and MiniMax-M2.5 support! The refactoring direction (pulling common methods into ConverterBase) is solid. A few issues need to be addressed before merging:


Must Fix

1. MiniMaxConverter selection condition is too restrictive

# Line 1167
if is_minimax and args.input_type == "fp8":
    converter = MiniMaxConverter(...)

MiniMaxConverter internally supports fp16 and bf16, but the entry condition only routes fp8 MiniMax models to it. If a user passes a bf16 or fp16 MiniMax model, it will fall through to OnlineQuantConverter, which doesn't handle the w1/w2/w3 weight naming — this will fail at runtime.

Fix: remove the fp8 restriction:

if is_minimax:
    converter = MiniMaxConverter(...)

2. OnlineQuantConverter._find_expert_layers duplicates base class

The overridden _find_expert_layers in OnlineQuantConverter (around line 695) is identical to ConverterBase._find_expert_layers. Please remove the override and inherit directly.


Should Fix

3. num_experts inconsistency between converters

MiniMaxConverter passes len(expert_ids) to KTMoEWrapper, while OnlineQuantConverter passes self.num_experts (from config). If any experts are skipped (the missing_keys warning path), len(expert_ids) could differ from config, and the subsequent _load_layer_tensors_from_disk(layer_idx, len(expert_ids)) may iterate over the wrong range. Please verify this is intentional, or unify the behavior.
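A toy illustration of why the two counts can diverge (all names below are hypothetical, chosen only to mirror the description above):

```python
# Toy illustration of the count mismatch flagged above; names are hypothetical.
num_experts = 8                       # value from config (OnlineQuantConverter side)
expert_ids = [0, 1, 2, 3, 4, 6, 7]   # expert 5 skipped via the missing-keys path

# MiniMaxConverter-style count:
found = len(expert_ids)              # 7, not 8

# A loader that iterates range(found) as expert *indices* reads experts 0..6:
touched = list(range(found))
# expert 7 is never loaded, while index 5 is read despite being skipped.
```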

4. Redundant expert_key_filter assignment

In OnlineQuantConverter.__init__ (line 649):

self.expert_key_filter = ".mlp.experts."

This is the same default already set in ConverterBase.__init__. Can be removed.

5. Missing newline at end of file

The last line exit(main()) is missing a trailing newline — please add one.


Suggestions

  • Separate whitespace changes from functional changes. This PR removes many blank lines throughout the file (PEP 8 recommends 2 blank lines between top-level definitions). Mixing formatting changes with feature work makes the diff harder to review (~650 lines changed, but a significant portion is just blank line removal). Consider reverting the unrelated whitespace changes, or submitting them as a separate PR.

  • amx_method naming ambiguity. In _load_layer_tensors_from_disk, amx_method has "AMX" stripped (for file name matching), while in _convert_layer_experts it keeps the full "AMXINT4" form (for wrapper init). Same variable name, different semantics. Consider renaming one (e.g., amx_file_prefix) to avoid future confusion.
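For example, the rename could look like this (a sketch only; the surrounding variable names and values are assumed):

```python
# Sketch of the suggested rename; surrounding names are assumed.
quant_method = "AMXINT4"

# Full method name, as passed to wrapper initialization:
amx_method = quant_method                             # "AMXINT4"

# Distinct name for the file-matching form with the "AMX" prefix stripped:
amx_file_prefix = quant_method.replace("AMX", "", 1)  # "INT4"
```

Keeping two distinct names makes it obvious at each call site whether the full method name or the stripped file prefix is expected.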


Overall the design is good — just needs the bug fix in the converter selection logic and the code dedup cleanup. Looking forward to the updated version!

@ErvinXie
Collaborator

Or you can let me do this for you.

@alin899992
Author

Or you can let me do this for you.

Thank you for the review and the kind offer! Please go ahead and make the changes.

- Remove `args.input_type == "fp8"` from MiniMaxConverter selection so
  bf16/fp16 MiniMax models no longer fall through to OnlineQuantConverter
  (which doesn't handle w1/w2/w3 naming and would fail).
- Remove OnlineQuantConverter._find_expert_layers() which is identical
  to the inherited ConverterBase._find_expert_layers().
- Remove redundant expert_key_filter assignment (same as base default).
@ErvinXie
Copy link
Collaborator

I've pushed a fix commit (9a3043a) addressing the review issues:

  1. Removed args.input_type == "fp8" from the MiniMaxConverter selection. MiniMaxConverter._convert_layer_experts already handles fp8/fp16/bf16 (lines 983–1014), so the condition was unnecessarily restrictive. Without this fix, bf16/fp16 MiniMax models would fall through to OnlineQuantConverter, which doesn't handle w1/w2/w3 naming and would fail.

  2. Removed OnlineQuantConverter._find_expert_layers() — it was identical to the inherited ConverterBase._find_expert_layers().

  3. Removed redundant self.expert_key_filter = ".mlp.experts." in OnlineQuantConverter — same as the base class default.

Net change: +1 / -43 lines.
