[EPLB] Display the expert hotness comparison before and after eplb. #6877

weijinqian0 merged 1 commit into vllm-project:main
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the EPLB system by adding clear, real-time feedback on its performance. By displaying expert hotness metrics before and after rebalancing, it allows developers to immediately observe the impact of the load-balancing algorithm, making its effects more transparent and easier to evaluate. This change focuses on improving observability rather than altering core functionality.
Code Review
This pull request adds logging to display the expert hotness comparison before and after the EPLB algorithm runs, which is a great addition for observability. However, the new helper functions _calculate_hotness and _compute_imbalance contain several critical bugs that could lead to application crashes due to unhandled edge cases, such as negative indices for unassigned experts and division by zero. I've provided suggestions to make these calculations more robust.
imbalance_list = []
for deployment, hotness in zip(deployment_all_layer, hotness_all_layer):
    counts = np.bincount(deployment.reshape(-1), minlength=hotness.shape[0])
    unit_hotness = np.divide(hotness, counts, out=np.zeros_like(hotness, dtype=float), where=counts != 0)
    stage_load = unit_hotness[deployment].sum(-1)
    stage_par = stage_load.max() / stage_load.mean()
    imbalance_list.append(stage_par)
max_val = max(imbalance_list)
mean_val = sum(imbalance_list) / len(imbalance_list)
return mean_val, max_val
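As context for the `np.divide(..., out=..., where=...)` call above: positions where the `where` mask is False are skipped and keep whatever value was pre-filled in `out`, which is what makes the zero-count case safe. A minimal sketch with toy numbers (the values here are made up for illustration):

```python
import numpy as np

# Toy per-expert load totals and replica counts (hypothetical values)
hotness = np.array([10.0, 6.0, 0.0, 8.0])
counts = np.array([2, 3, 0, 4])

# Where counts == 0 the division is skipped and the position keeps the
# 0.0 pre-filled via `out`, so no divide-by-zero warning is raised
unit_hotness = np.divide(hotness, counts,
                         out=np.zeros_like(hotness, dtype=float),
                         where=counts != 0)
print(unit_hotness)  # [5. 2. 0. 2.]
```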
This method has several critical issues that can lead to crashes or incorrect calculations:

- `np.bincount` will raise a `ValueError` if `deployment` contains negative values, which it can for unassigned experts.
- `unit_hotness[deployment]` will incorrectly index from the end of the array if `deployment` contains `-1`.
- `stage_load.mean()` can be zero, leading to a `ZeroDivisionError` when calculating `stage_par`.
- If `deployment_all_layer` is empty, `imbalance_list` will be empty, causing `max()` to raise a `ValueError`.

Please refactor this method to handle these edge cases gracefully.
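The first bullet is easy to reproduce in isolation; a toy demo (values are made up) showing both the failure and the filtered call:

```python
import numpy as np

deployment = np.array([0, 2, -1, 1])  # -1 marks an unassigned slot

raised = False
try:
    np.bincount(deployment)
except ValueError:
    # bincount rejects any negative entry outright
    raised = True

# Filtering to non-negative indices first avoids the error
counts = np.bincount(deployment[deployment >= 0], minlength=4)
print(raised, counts)  # True [1 1 1 0]
```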
imbalance_list = []
for deployment, hotness in zip(deployment_all_layer, hotness_all_layer):
    deployment_flat = deployment.ravel()
    valid_mask = deployment_flat >= 0
    if not np.any(valid_mask):
        imbalance_list.append(1.0)
        continue
    counts = np.bincount(deployment_flat[valid_mask], minlength=hotness.shape[0])
    unit_hotness = np.divide(hotness, counts, out=np.zeros_like(hotness, dtype=float), where=counts != 0)
    temp_deployment = np.where(deployment >= 0, deployment, 0)
    stage_load_per_expert = unit_hotness[temp_deployment]
    stage_load_per_expert[deployment < 0] = 0
    stage_load = stage_load_per_expert.sum(-1)
    mean_load = stage_load.mean()
    stage_par = stage_load.max() / mean_load if mean_load > 0 else 1.0
    imbalance_list.append(stage_par)
if not imbalance_list:
    return 0.0, 0.0
max_val = np.max(imbalance_list)
mean_val = np.mean(imbalance_list)
return mean_val, max_val

hotness = np.zeros(num_of_expert, dtype=rank_load.dtype)
deployment_flat = deployment.ravel()
rank_load_flat = rank_load.ravel()
np.add.at(hotness, deployment_flat, rank_load_flat)
The deployment_flat array can contain -1 for unassigned expert slots. Using np.add.at with negative indices will cause it to wrap around and incorrectly add to the hotness of the wrong expert. Please filter for non-negative indices before this operation.
Suggested change:

valid_mask = deployment_flat >= 0
np.add.at(hotness, deployment_flat[valid_mask], rank_load_flat[valid_mask])
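A small self-contained demo (toy values, four experts assumed) of the wrap-around behaviour the comment describes, and how the mask fixes it:

```python
import numpy as np

num_of_expert = 4
deployment_flat = np.array([0, 1, -1, 1])   # -1 marks an unassigned slot
rank_load_flat = np.array([5.0, 3.0, 7.0, 2.0])

# Unfiltered: index -1 wraps around to the last expert,
# wrongly crediting it with a load of 7.0
wrong = np.zeros(num_of_expert)
np.add.at(wrong, deployment_flat, rank_load_flat)
print(wrong)  # [5. 5. 0. 7.]

# Filtered: the unassigned slot's load is dropped instead
valid_mask = deployment_flat >= 0
hotness = np.zeros(num_of_expert)
np.add.at(hotness, deployment_flat[valid_mask], rank_load_flat[valid_mask])
print(hotness)  # [5. 5. 0. 0.]
```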
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
vllm_ascend/eplb/core/eplb_worker.py
Outdated
update_mean, update_max = self._compute_imbalance(new_placement, hotness)
logger.info(
    f"[Expert Hotness] Current: mean={current_mean:.3f}, max={current_max:.3f}, "
    f"Updated: mean={update_mean:.3f}, max={update_max:.3f}"
)
Don't use f-strings in logger calls; pass a %-style format string and arguments instead, so the message is only formatted when the log level is enabled.
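For illustration, the lazy %-style form the comment asks for; the logger name and the numeric values below are made up, and a StringIO handler is attached only so the output can be inspected:

```python
import io
import logging

logger = logging.getLogger("eplb_worker_demo")  # hypothetical name
logger.setLevel(logging.INFO)
buf = io.StringIO()
logger.addHandler(logging.StreamHandler(buf))

current_mean, current_max = 1.234, 2.345
update_mean, update_max = 1.012, 1.567

# %-style args are passed separately and only interpolated if the
# record passes the level check, unlike an f-string which is always
# evaluated at the call site
logger.info(
    "[Expert Hotness] Current: mean=%.3f, max=%.3f, "
    "Updated: mean=%.3f, max=%.3f",
    current_mean, current_max, update_mean, update_max,
)

message = buf.getvalue().strip()
print(message)
```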
Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
What this PR does / why we need it?
To intuitively show the effect of the EPLB algorithm, we print the expert hotness before and after rebalancing.
Does this PR introduce any user-facing change?
How was this patch tested?