CPU weight conversion for GLM-5 and MiniMax-M2.5 #1853
alin899992 wants to merge 2 commits into kvcache-ai:main
Conversation
Add CPU weight conversion support for GLM-5 and Minimax-M2.5
Summary of Changes

Hello @alin899992, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request extends the CPU weight conversion utility with support for the GLM-5 and MiniMax model architectures. The changes introduce model-specific configuration detection, refactor the base conversion logic for better reusability, and add a dedicated converter for MiniMax models to handle their distinct weight structures. This allows these new models to be efficiently quantized for CPU inference while maintaining compatibility with existing conversion paths for other models such as DeepSeek and Qwen-Next-Coder.
Activity
Code Review
This pull request adds support for CPU weight conversion for GLM-5 and MiniMax models. The changes include significant refactoring of the conversion script to better handle different model architectures and quantization schemes by introducing a ConverterBase class and specific implementations like MiniMaxConverter. The logic for detecting model types from their configuration has also been added.
The refactoring is well done and improves the structure of the code. I've identified a bug in the logic for selecting the MiniMaxConverter which could cause failures for certain input types. I also found a couple of instances of code duplication that could be cleaned up to improve maintainability. My detailed comments are below.
```python
        quant_method,
        merge_to_safetensor,
    )
    if is_minimax and args.input_type == "fp8":
```
The condition to select MiniMaxConverter is too restrictive. It only triggers for fp8 input type, but the converter itself supports fp16 and bf16 as well. If a MiniMax model with bf16 or fp16 input is provided, it will fall back to OnlineQuantConverter, which will likely fail because it doesn't handle the w1, w2, w3 weight naming scheme of MiniMax models. The condition should be simplified to select MiniMaxConverter whenever is_minimax is true.
Suggested change:

```diff
-    if is_minimax and args.input_type == "fp8":
+    if is_minimax:
```
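As a self-contained sketch of the fixed dispatch (the class names follow the PR, but the bodies here are hypothetical stand-ins, not the real converter code):

```python
class OnlineQuantConverter:
    """Stand-in for the generic converter (hypothetical body)."""

class MiniMaxConverter:
    """Stand-in for the MiniMax-specific converter (hypothetical body)."""

def select_converter(is_minimax: bool, input_type: str):
    # MiniMax models use the w1/w2/w3 expert naming for every input dtype,
    # so all of them (fp8, fp16, bf16) must route to MiniMaxConverter.
    if is_minimax:
        return MiniMaxConverter()
    return OnlineQuantConverter()
```

With the old `and args.input_type == "fp8"` condition, a bf16 or fp16 MiniMax model would have been routed to `OnlineQuantConverter` instead.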
```python
    )
    self.quant_method = quant_method

    self.expert_key_filter = ".mlp.experts."
```
```python
def _find_expert_layers(self) -> Dict[int, List[int]]:
    """Find all layers and experts in the model."""
    layers = defaultdict(set)
    # detect layout
    for key in self.tensor_file_map.keys():
        if "mlp.experts" in key and "gate_up" in key:
            self.layout = "fused"
            break
    if self.layout == "fused":
        layers = set()
        for key in self.tensor_file_map.keys():
            if "model.layers." in key and ".mlp.experts." in key:
                parts = key.split(".")
                if len(parts) >= 6:
                    layer_idx = int(parts[2])
                    layers.add(layer_idx)
        result: Dict[int, List[int]] = {}
        for layer_idx in sorted(layers):
            result[layer_idx] = [-1]
        print(f"Found {len(result)} layers with fused MoE experts")
        return result
    # Pattern: model.layers.{layer}.mlp.experts.{expert}.{proj}.{type}
    for key in self.tensor_file_map.keys():
        if "model.layers." in key and ".mlp.experts." in key:
            parts = key.split(".")
            if len(parts) >= 6:
                layer_idx = int(parts[2])
                expert_idx = int(parts[5])
                layers[layer_idx].add(expert_idx)
    # Convert to sorted lists
    result: Dict[int, List[int]] = {}
    for layer_idx, expert_set in layers.items():
        result[layer_idx] = sorted(list(expert_set))
    print(f"Found {len(result)} layers with MoE experts:")
    for layer_idx, experts in sorted(result.items()):
        print(f"  Layer {layer_idx}: {len(experts)} experts (0-{max(experts)})")
    return result

def _remove_layer_folder(self, layer_idx: int):
    """Remove _layer_{layer_idx} folder and all its contents

    Args:
        layer_idx: Layer index
    """
    import shutil
    layer_path = os.path.join(self.output_path, f"_layer_{layer_idx}")
    if os.path.exists(layer_path):
        shutil.rmtree(layer_path)
        print(f"  Removed temporary folder: {layer_path}")
```
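As a standalone illustration of the key-parsing scheme above, a stripped-down version of the grouping logic can be run against made-up tensor key names (the keys below are hypothetical examples, not from a real checkpoint):

```python
from collections import defaultdict
from typing import Dict, List

def find_expert_layers(keys) -> Dict[int, List[int]]:
    # Group expert indices by layer from keys shaped like
    # model.layers.{layer}.mlp.experts.{expert}.{proj}.weight
    layers = defaultdict(set)
    for key in keys:
        if "model.layers." in key and ".mlp.experts." in key:
            parts = key.split(".")
            if len(parts) >= 6:
                layers[int(parts[2])].add(int(parts[5]))
    return {layer: sorted(experts) for layer, experts in sorted(layers.items())}

keys = [
    "model.layers.0.mlp.experts.0.w1.weight",
    "model.layers.0.mlp.experts.1.w1.weight",
    "model.layers.1.mlp.experts.0.gate_proj.weight",
]
print(find_expert_layers(keys))  # {0: [0, 1], 1: [0]}
```

The fused layout takes a different branch: because fused checkpoints store all experts in one tensor per layer, the expert list collapses to the sentinel `[-1]`.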
ErvinXie left a comment
Thanks for adding GLM-5 and MiniMax-M2.5 support! The refactoring direction (pulling common methods into ConverterBase) is solid. A few issues need to be addressed before merging:
Must Fix
1. MiniMaxConverter selection condition is too restrictive
```python
# Line 1167
if is_minimax and args.input_type == "fp8":
    converter = MiniMaxConverter(...)
```

MiniMaxConverter internally supports fp16 and bf16, but the entry condition only routes fp8 MiniMax models to it. If a user passes a bf16 or fp16 MiniMax model, it will fall through to OnlineQuantConverter, which doesn't handle the w1/w2/w3 weight naming — this will fail at runtime.

Fix: remove the fp8 restriction:

```python
if is_minimax:
    converter = MiniMaxConverter(...)
```

2. OnlineQuantConverter._find_expert_layers duplicates the base class
The overridden _find_expert_layers in OnlineQuantConverter (around line 695) is identical to ConverterBase._find_expert_layers. Please remove the override and inherit directly.
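The dedup is safe because Python's method resolution falls back to the base class when a subclass defines no override. A minimal sketch (toy bodies, not the real converter code):

```python
class ConverterBase:
    def _find_expert_layers(self):
        return {0: [0, 1]}  # toy result standing in for the real tensor scan

class OnlineQuantConverter(ConverterBase):
    pass  # no override: _find_expert_layers is inherited unchanged

print(OnlineQuantConverter()._find_expert_layers())  # {0: [0, 1]}
```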
Should Fix
3. num_experts inconsistency between converters
MiniMaxConverter passes len(expert_ids) to KTMoEWrapper, while OnlineQuantConverter passes self.num_experts (from config). If any experts are skipped (the missing_keys warning path), len(expert_ids) could differ from config, and the subsequent _load_layer_tensors_from_disk(layer_idx, len(expert_ids)) may iterate over the wrong range. Please verify this is intentional, or unify the behavior.
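A toy illustration of the divergence the comment describes (the numbers are made up):

```python
num_experts_from_config = 8          # what OnlineQuantConverter would pass
expert_ids = [0, 1, 2, 3, 5, 6, 7]   # expert 4 skipped via the missing_keys path

# MiniMaxConverter would pass len(expert_ids) == 7 to the wrapper, so a
# downstream loop over range(len(expert_ids)) covers one slot fewer than the
# config expects, and positions past the gap no longer line up with expert ids.
print(num_experts_from_config, len(expert_ids))
```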
4. Redundant expert_key_filter assignment
In OnlineQuantConverter.__init__ (line 649):

```python
self.expert_key_filter = ".mlp.experts."
```

This is the same default already set in ConverterBase.__init__. Can be removed.
5. Missing newline at end of file
The last line `exit(main())` is missing a trailing newline — please add one.
Suggestions
- Separate whitespace changes from functional changes. This PR removes many blank lines throughout the file (PEP 8 recommends 2 blank lines between top-level definitions). Mixing formatting changes with feature work makes the diff harder to review (~650 lines changed, but a significant portion is just blank-line removal). Consider reverting the unrelated whitespace changes, or submitting them as a separate PR.
- `amx_method` naming ambiguity. In `_load_layer_tensors_from_disk`, `amx_method` has `"AMX"` stripped (for file-name matching), while in `_convert_layer_experts` it keeps the full `"AMXINT4"` form (for wrapper init). Same variable name, different semantics. Consider renaming one (e.g., `amx_file_prefix`) to avoid future confusion.
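A sketch of the suggested split (the name `amx_file_prefix` is the review's proposal; `wrapper_method` is a hypothetical counterpart added here for contrast):

```python
quant_method = "AMXINT4"

# For file-name matching: strip the "AMX" prefix once, under a distinct name.
amx_file_prefix = quant_method.replace("AMX", "")  # "INT4"

# For wrapper initialization: keep the full method string.
wrapper_method = quant_method                      # "AMXINT4"
```

Giving each derivation its own name keeps the two semantics from ever sharing the variable `amx_method`.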
Overall the design is good — just needs the bug fix in the converter selection logic and the code dedup cleanup. Looking forward to the updated version!
Or you can let me do this for you.

Thank you for the review and the kind offer! Please go ahead and make the changes.
- Remove `args.input_type == "fp8"` from MiniMaxConverter selection so bf16/fp16 MiniMax models no longer fall through to OnlineQuantConverter (which doesn't handle w1/w2/w3 naming and would fail).
- Remove OnlineQuantConverter._find_expert_layers(), which is identical to the inherited ConverterBase._find_expert_layers().
- Remove the redundant expert_key_filter assignment (same as the base default).
I've pushed a fix commit (9a3043a) addressing the review issues.

Net change: +1 / -43 lines.
What does this PR do?
Add AMX CPU weight conversion support for GLM-5-FP8 and MiniMax-M2.5.
Tested weights: GLM-5-FP8, MiniMax-M2.5, MiniMax-M2.1

Note: only the GLM-5 FP8 weight conversion has been tested; BF16 has not.

Also tested weight conversion for DeepSeek-V3.2 and Qwen-Next-Coder; the original functionality is unaffected.

Fixes #1818
Before submitting