Conversation
Summary of Changes — Hello @ouqingliang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses an underlying bug in the Mixture of Experts (MoE) implementation by refactoring the object initialization process across various AMX-optimized MoE classes, introducing a `derived_init()` method to centralize common initialization logic.
Code Review
This pull request refactors the Mixture of Experts (MoE) operators by introducing a derived_init() method in the AMX_MOE_BASE class. This change centralizes common initialization logic and improves the overall structure of the MoE classes. The implementation correctly moves derived-class-specific initialization from constructors into the new derived_init() method. My review includes suggestions to address some code duplication for file path construction that was introduced during this refactoring. Additionally, the PR fixes a bug in an error message within fp8-moe.hpp where std::runtime_error was used with incorrect arguments.
```cpp
std::filesystem::path prefix = config_.path;
prefix = prefix / ("_layer_" + std::to_string(config_.layer_idx)) / ("_numa_" + std::to_string(tp_part_idx));
```
This logic for constructing the prefix path is a duplicate of the code in derived_init() (lines 407-408). This code duplication can lead to maintenance issues, as any change to the path structure would need to be updated in two places.
To improve this, you could re-introduce prefix as a private member variable, initialize it once in derived_init(), and then reuse it in this load_weights() method. This would centralize the path construction logic.
```cpp
std::filesystem::path prefix = config_.path;
prefix = prefix / ("_layer_" + std::to_string(config_.layer_idx)) / ("_numa_" + std::to_string(tp_part_idx));
```
This logic for constructing the prefix path is a duplicate of the code in derived_init() (lines 142-143). This code duplication can lead to maintenance issues.
To improve this, you could re-introduce prefix as a private member variable, initialize it once in derived_init(), and then reuse it here. If you do this, the lambda below can capture this and access this->prefix (or just prefix), simplifying the capture list from [this, physical_to_logical_map, prefix] to [this, physical_to_logical_map].
* fix pypi cuda install (#1763)
* Update release-pypi.yml (#1764)
* fix cuda wheel build (#1766)
* Cli (#1765)
* [feat]: add custom option for kt run
* [feat]: depth 3
* [docs]: add kt-cli doc and update corresponding website (#1768)
* Remove kt-kernel-cuda, kt-kernel uses the version with cuda (#1769)
* Update release-pypi.yml (#1770)
* bump to 0.5.0.post1 (#1771)
* [ci]: Patch ci (#1772)
* [docs]: add kt-cli doc and update corresponding website
* [feat]: update issue template
* [fix]: fix moe hpp bug. (#1780) fix moe hpp init bug.
* Fix moe bug. (#1783)
* [fix]: fix moe.hpp load from file bug.
* [fix]: fix all moe hpp init bug.
* [fix]: fix moe & awq-moe bug.
* [feat](kt-sft-refactor): load from huggingface safetensor file
* [fix]: fix bugs for activation, sft forward and backward

---------

Co-authored-by: Jianwei Dong <dongjw24@mails.tsinghua.edu.cn>
Co-authored-by: ErvinXie <ervinxie@qq.com>
Co-authored-by: ZiWei Yuan <yzwliam@126.com>
Co-authored-by: Oql <1692110604@qq.com>
Co-authored-by: mrhaoxx <mr.haoxx@gmail.com>
What does this PR do?
Fixes # (issue)
Before submitting