[EPLB] Display the expert hotness comparison before and after eplb. #6877

Merged
weijinqian0 merged 1 commit into vllm-project:main from shenchuxiaofugui:hotness
Mar 6, 2026

Conversation

@shenchuxiaofugui
Collaborator

@shenchuxiaofugui shenchuxiaofugui commented Feb 28, 2026

What this PR does / why we need it?

To intuitively show the effect of the EPLB algorithm, we log the expert hotness before and after EPLB runs.

Does this PR introduce any user-facing change?

How was this patch tested?

[Screenshot: Snipaste_2026-02-28_17-23-42]

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the EPLB system by adding clear, real-time feedback on its performance. By displaying expert hotness metrics before and after rebalancing, it allows developers to immediately observe the impact of the load balancing algorithm, making its effects more transparent and easier to evaluate. This change focuses on improving observability rather than altering core functionality.

Highlights

  • Expert Hotness Comparison: The pull request introduces logging to display the expert hotness comparison, showing the mean and maximum imbalance before and after the EPLB (Expert Parallel Load Balancing) algorithm is applied. This provides intuitive feedback on the algorithm's effectiveness.
  • Refactored Imbalance Calculation: New static methods, _compute_imbalance and _calculate_hotness, have been added to eplb_worker.py to centralize and standardize the logic for calculating expert load imbalance and hotness.
  • Removed Redundant Logic: The previous compute_moe_imbalance and summarize_moe_imbalance methods, along with the moe_imbalance_dict, have been removed from eplb_updator.py as their functionality is now handled by the new hotness comparison mechanism.


Changelog
  • vllm_ascend/eplb/core/eplb_worker.py
    • Imported the numpy library for numerical operations.
    • Added logic to log the mean and max expert hotness imbalance before and after rebalancing, specifically for rank 0.
    • Implemented a new static method _compute_imbalance to calculate mean and max imbalance from deployment and hotness data.
    • Implemented a new static method _calculate_hotness to compute expert hotness based on deployment and MOE load.
  • vllm_ascend/eplb/eplb_updator.py
    • Removed the moe_imbalance_dict attribute, which stored MOE imbalance statistics.
    • Removed the compute_moe_imbalance method, which previously calculated MOE imbalance.
    • Removed the summarize_moe_imbalance method, which previously logged MOE imbalance summaries.
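To make the changelog concrete, here is a hedged sketch of what `_calculate_hotness` likely does: scatter-add each deployed slot's MoE load back onto its logical expert. The standalone function wrapper, signature, and example arrays below are assumptions for illustration, not the PR's actual code.

```python
import numpy as np

# Hypothetical sketch: aggregate per-slot MoE load (rank_load) back onto
# logical experts according to the deployment map.
def calculate_hotness(deployment: np.ndarray, rank_load: np.ndarray,
                      num_of_expert: int) -> np.ndarray:
    hotness = np.zeros(num_of_expert, dtype=rank_load.dtype)
    deployment_flat = deployment.ravel()
    rank_load_flat = rank_load.ravel()
    # Unbuffered scatter-add: slots replicating the same expert accumulate.
    np.add.at(hotness, deployment_flat, rank_load_flat)
    return hotness

# 2 ranks x 2 slots; expert 1 is replicated on both ranks.
hot = calculate_hotness(np.array([[0, 1], [1, 2]]),
                        np.array([[4.0, 2.0], [3.0, 1.0]]), 4)
# expert 1 receives load from two replicas: 2.0 + 3.0 = 5.0
```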

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request adds logging to display the expert hotness comparison before and after the EPLB algorithm runs, which is a great addition for observability. However, the new helper functions _calculate_hotness and _compute_imbalance contain several critical bugs that could lead to application crashes due to unhandled edge cases, such as negative indices for unassigned experts and division by zero. I've provided suggestions to make these calculations more robust.

Comment on lines +265 to +277
imbalance_list = []
for deployment, hotness in zip(deployment_all_layer, hotness_all_layer):
counts = np.bincount(deployment.reshape(-1), minlength=hotness.shape[0])

unit_hotness = np.divide(hotness, counts, out=np.zeros_like(hotness, dtype=float), where=counts != 0)

stage_load = unit_hotness[deployment].sum(-1)
stage_par = stage_load.max() / stage_load.mean()
imbalance_list.append(stage_par)

max_val = max(imbalance_list)
mean_val = sum(imbalance_list) / len(imbalance_list)
return mean_val, max_val

critical

This method has several critical issues that can lead to crashes or incorrect calculations:

  1. np.bincount will raise a ValueError if deployment contains negative values, which it can for unassigned experts.
  2. unit_hotness[deployment] will incorrectly index from the end of the array if deployment contains -1.
  3. stage_load.mean() can be zero, leading to a ZeroDivisionError when calculating stage_par.
  4. If deployment_all_layer is empty, imbalance_list will be empty, causing max() to raise a ValueError.

Please refactor this method to handle these edge cases gracefully.

        imbalance_list = []
        for deployment, hotness in zip(deployment_all_layer, hotness_all_layer):
            deployment_flat = deployment.ravel()
            valid_mask = deployment_flat >= 0
            if not np.any(valid_mask):
                imbalance_list.append(1.0)
                continue

            counts = np.bincount(deployment_flat[valid_mask], minlength=hotness.shape[0])
            unit_hotness = np.divide(hotness, counts, out=np.zeros_like(hotness, dtype=float), where=counts != 0)

            temp_deployment = np.where(deployment >= 0, deployment, 0)
            stage_load_per_expert = unit_hotness[temp_deployment]
            stage_load_per_expert[deployment < 0] = 0
            stage_load = stage_load_per_expert.sum(-1)

            mean_load = stage_load.mean()
            stage_par = stage_load.max() / mean_load if mean_load > 0 else 1.0
            imbalance_list.append(stage_par)

        if not imbalance_list:
            return 0.0, 0.0

        max_val = np.max(imbalance_list)
        mean_val = np.mean(imbalance_list)
        return mean_val, max_val
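To see what the suggested calculation produces, here is a self-contained toy run of the same loop body. The deployment/hotness arrays are made up for illustration (2 layers, 2 ranks, 2 slots per rank, 4 logical experts; `-1` marks an unassigned slot, as in the review):

```python
import numpy as np

# Layer 0: perfectly balanced placement. Layer 1: expert 0 replicated on
# rank 0, one slot on rank 1 left unassigned (-1).
deployment_all_layer = [
    np.array([[0, 1], [2, 3]]),
    np.array([[0, 0], [1, -1]]),
]
hotness_all_layer = [
    np.array([10.0, 10.0, 10.0, 10.0]),
    np.array([40.0, 10.0, 0.0, 0.0]),
]

imbalance_list = []
for deployment, hotness in zip(deployment_all_layer, hotness_all_layer):
    deployment_flat = deployment.ravel()
    valid_mask = deployment_flat >= 0
    counts = np.bincount(deployment_flat[valid_mask], minlength=hotness.shape[0])
    # Per-replica share of each expert's load; zero where an expert is not deployed.
    unit_hotness = np.divide(hotness, counts,
                             out=np.zeros_like(hotness, dtype=float),
                             where=counts != 0)
    temp_deployment = np.where(deployment >= 0, deployment, 0)
    stage_load_per_expert = unit_hotness[temp_deployment]
    stage_load_per_expert[deployment < 0] = 0
    stage_load = stage_load_per_expert.sum(-1)  # total load per rank
    mean_load = stage_load.mean()
    imbalance_list.append(stage_load.max() / mean_load if mean_load > 0 else 1.0)

# Layer 0 yields 1.0 (balanced); layer 1 yields 40 / 25 = 1.6 (rank 0 overloaded).
```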

hotness = np.zeros(num_of_expert, dtype=rank_load.dtype)
deployment_flat = deployment.ravel()
rank_load_flat = rank_load.ravel()
np.add.at(hotness, deployment_flat, rank_load_flat)

critical

The deployment_flat array can contain -1 for unassigned expert slots. Using np.add.at with negative indices will cause it to wrap around and incorrectly add to the hotness of the wrong expert. Please filter for non-negative indices before this operation.

Suggested change
np.add.at(hotness, deployment_flat, rank_load_flat)
valid_mask = deployment_flat >= 0
np.add.at(hotness, deployment_flat[valid_mask], rank_load_flat[valid_mask])
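The wrap-around behavior the reviewer describes is easy to reproduce: with `np.add.at`, an index of `-1` targets the last element rather than being discarded. A minimal demonstration with made-up values (the `-1` unassigned-slot convention is taken from the review):

```python
import numpy as np

num_of_expert = 4
deployment_flat = np.array([0, 1, -1])     # -1 marks an unassigned slot
rank_load_flat = np.array([5.0, 3.0, 7.0])

# Buggy version: negative index wraps around to the last expert.
hotness = np.zeros(num_of_expert)
np.add.at(hotness, deployment_flat, rank_load_flat)
# hotness is now [5. 3. 0. 7.] -- the 7.0 was credited to expert 3.

# Suggested fix: mask out unassigned slots so their load is dropped.
hotness_fixed = np.zeros(num_of_expert)
valid_mask = deployment_flat >= 0
np.add.at(hotness_fixed, deployment_flat[valid_mask], rank_load_flat[valid_mask])
# hotness_fixed is [5. 3. 0. 0.]
```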

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

update_mean, update_max = self._compute_imbalance(new_placement, hotness)
logger.info(
    f"[Expert Hotness] Current: mean={current_mean:.3f}, max={current_max:.3f}, "
    f"Updated: mean={update_mean:.3f}, max={update_max:.3f}"
)

don't use f-string in logger.
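The point of this review comment: an f-string is evaluated before `logger.info` is even called, while `%`-style arguments are only formatted if the log level is enabled. A minimal sketch of the lazy form (the numeric values are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("eplb_worker")

current_mean, current_max = 1.42, 1.91  # hypothetical imbalance values
update_mean, update_max = 1.05, 1.18

# Formatting is deferred to the logging framework; skipped entirely
# if INFO is disabled for this logger.
logger.info(
    "[Expert Hotness] Current: mean=%.3f, max=%.3f, Updated: mean=%.3f, max=%.3f",
    current_mean, current_max, update_mean, update_max)
```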

@shenchuxiaofugui shenchuxiaofugui force-pushed the hotness branch 3 times, most recently from f0fdb15 to 3023769, on March 3, 2026 06:27
Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
@weijinqian0 weijinqian0 merged commit ccd0079 into vllm-project:main Mar 6, 2026
37 of 38 checks passed

Labels

ready (ready for review) · ready-for-test (start test by label for PR)


4 participants