
Fix moe bug.#1783

Merged
ouqingliang merged 4 commits into main from fix-bug-oql on Jan 5, 2026

Conversation

@ouqingliang
Contributor

What does this PR do?

Fixes # (issue)

Before submitting

@gemini-code-assist
Contributor

Summary of Changes

Hello @ouqingliang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an underlying bug in the Mixture of Experts (MoE) implementation by refactoring the object initialization process across various AMX-optimized MoE classes. By introducing a derived_init() method in the base class, it establishes a more robust and predictable initialization sequence for specialized MoE variants like AWQ, FP8, and K2. This change ensures that derived classes perform their setup at the appropriate stage of construction, enhancing the stability and maintainability of the MoE kernel operators.

Highlights

  • Refactored MoE Initialization: Introduced a derived_init() method in the AMX_MOE_BASE class, which is called by the base constructor. This provides a standardized hook for derived MoE classes to perform their specific initialization logic after the base class is fully constructed.
  • Streamlined Derived Class Constructors: The constructors of AMX_AWQ_MOE_TP, AMX_FP8_MOE_TP, AMX_K2_MOE_TP, and AMX_MOE_TP have been simplified. Their specific initialization steps, such as configuration checks and logging, are now moved into their respective derived_init() methods.
  • Improved Variable Scoping and Management: The prefix path variable in AMX_AWQ_MOE_TP and AMX_MOE_TP is no longer a class member but is now declared locally within the derived_init() and load_weights_impl() methods, improving encapsulation. Additionally, gate_proj_, up_proj_, and down_proj_ member variables were removed from AMX_MOE_TP.
  • Minor Code Cleanups: Unnecessary #include directives were removed from fp8-moe.hpp, buffer_required_size_impl methods were condensed, and a printf format string was corrected in fp8-moe.hpp.
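The initialization hook described in the highlights can be sketched as follows. One C++ caveat is worth noting: a virtual call made directly inside a base-class constructor resolves to the base version, not the override, so in practice such a hook is triggered from the derived constructor (or via a non-virtual two-phase scheme). The sketch below is a minimal illustration assuming the derived-constructor variant; class and member names only loosely follow the PR and are not the actual code.

```cpp
#include <cassert>
#include <string>

// Illustrative sketch of the two-phase initialization pattern.
class AMX_MOE_BASE {
public:
    virtual ~AMX_MOE_BASE() = default;
protected:
    // Hook for derived-class setup. Calling a virtual function from
    // the *base* constructor would not dispatch to overrides in C++,
    // so the hook is invoked from the most-derived constructor below.
    virtual void derived_init() {}
};

class AMX_FP8_MOE_TP : public AMX_MOE_BASE {
public:
    AMX_FP8_MOE_TP() { derived_init(); }  // second phase of construction
    std::string mode;
protected:
    void derived_init() override { mode = "fp8"; }
};
```

Because the base subobject is fully constructed by the time the derived constructor body runs, `derived_init()` can safely rely on base-class state, which matches the ordering guarantee the summary describes.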


@gemini-code-assist bot left a comment

Code Review

This pull request refactors the Mixture of Experts (MoE) operators by introducing a derived_init() method in the AMX_MOE_BASE class. This change centralizes common initialization logic and improves the overall structure of the MoE classes. The implementation correctly moves derived-class-specific initialization from constructors into the new derived_init() method. My review includes suggestions to address some code duplication for file path construction that was introduced during this refactoring. Additionally, the PR fixes a bug in an error message within fp8-moe.hpp where std::runtime_error was used with incorrect arguments.
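The `std::runtime_error` misuse called out above is a common slip: the exception type takes a single message string and does not interpret printf-style format arguments, so formatting must happen before the exception is constructed. A hedged sketch of the usual fix pattern follows; the helper name and message text are illustrative, not taken from the PR.

```cpp
#include <cstddef>
#include <cstdio>
#include <stdexcept>
#include <string>

// Format the message first, then hand a single string to the
// exception constructor.
std::string format_size_error(std::size_t expected, std::size_t actual) {
    char buf[128];
    std::snprintf(buf, sizeof(buf),
                  "buffer size mismatch: expected %zu, got %zu",
                  expected, actual);
    return std::string(buf);
}

// usage: throw std::runtime_error(format_size_error(expected, actual));
```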

Comment on lines +499 to +500
std::filesystem::path prefix = config_.path;
prefix = prefix / ("_layer_" + std::to_string(config_.layer_idx)) / ("_numa_" + std::to_string(tp_part_idx));
Severity: medium

This logic for constructing the prefix path is a duplicate of the code in derived_init() (lines 407-408). This code duplication can lead to maintenance issues, as any change to the path structure would need to be updated in two places.

To improve this, you could re-introduce prefix as a private member variable, initialize it once in derived_init(), and then reuse it in this load_weights() method. This would centralize the path construction logic.
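The suggested centralization could also take the form of a small helper instead of a member variable, so both call sites share one implementation. The sketch below is hypothetical — the helper name `make_prefix` is not from the PR, and the field names mirror the snippet above (`config_.path`, `config_.layer_idx`, `tp_part_idx`).

```cpp
#include <filesystem>
#include <string>

// Build the per-layer, per-NUMA-node weight-file prefix in one place,
// for reuse by derived_init() and load_weights().
std::filesystem::path make_prefix(const std::filesystem::path& base,
                                  int layer_idx, int tp_part_idx) {
    return base / ("_layer_" + std::to_string(layer_idx))
                / ("_numa_" + std::to_string(tp_part_idx));
}
```

A free function like this keeps the path structure in a single spot without reintroducing mutable class state, though the member-variable approach the review suggests would additionally let lambdas capture just `this`.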

Comment on lines +248 to +249
std::filesystem::path prefix = config_.path;
prefix = prefix / ("_layer_" + std::to_string(config_.layer_idx)) / ("_numa_" + std::to_string(tp_part_idx));
Severity: medium

This logic for constructing the prefix path is a duplicate of the code in derived_init() (lines 142-143). This code duplication can lead to maintenance issues.

To improve this, you could re-introduce prefix as a private member variable, initialize it once in derived_init(), and then reuse it here. If you do this, the lambda below can capture this and access this->prefix (or just prefix), simplifying the capture list from [this, physical_to_logical_map, prefix] to [this, physical_to_logical_map].

@ouqingliang ouqingliang closed this Jan 5, 2026
@ouqingliang ouqingliang deleted the fix-bug-oql branch January 5, 2026 08:48
@ouqingliang ouqingliang restored the fix-bug-oql branch January 5, 2026 08:57
@ouqingliang ouqingliang reopened this Jan 5, 2026
@ouqingliang ouqingliang merged commit ddb9575 into main Jan 5, 2026
7 of 9 checks passed
JimmyPeilinLi added a commit that referenced this pull request Jan 20, 2026
* fix pypi cuda install (#1763)

* Update release-pypi.yml (#1764)

* fix cuda wheel build (#1766)

* Cli (#1765)

* [feat]: add custom option for kt run

* [feat]: depth 3

* [docs]: add kt-cli doc and update corresponding website (#1768)

* Remove kt-kernel-cuda, kt-kernel uses the version with cuda (#1769)

* Update release-pypi.yml (#1770)

* bump to 0.5.0.post1 (#1771)

* [ci]: Patch ci (#1772)

* [docs]: add kt-cli doc and update corresponding website

* [feat]: update issue template

* [fix]: fix moe hpp bug. (#1780)

fix moe hpp init bug.

* Fix moe bug. (#1783)

* [fix]: fix moe.hpp load from file bug.

* [fix]: fix all moe hpp init bug.

* [fix]: fix moe & awq-moe bug.

* [feat](kt-sft-refactor): load from huggingface safetensor file

* [fix]: fix bugs for activation, sft forward and backward

---------

Co-authored-by: Jianwei Dong <dongjw24@mails.tsinghua.edu.cn>
Co-authored-by: ErvinXie <ervinxie@qq.com>
Co-authored-by: ZiWei Yuan <yzwliam@126.com>
Co-authored-by: Oql <1692110604@qq.com>
Co-authored-by: mrhaoxx <mr.haoxx@gmail.com>
@ouqingliang ouqingliang deleted the fix-bug-oql branch February 3, 2026 06:23