
Quantization and MoE configs for GH200 machines #12717

Merged on Feb 6, 2025

Conversation

@arvindsun commented Feb 4, 2025

The H200 configs work for the GH200 (it has 92 GB of GPU memory).
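
For context on why new files are needed at all: vLLM resolves these tuning configs from JSON files keyed on the CUDA device name, so a GH200 will not pick up files tuned for the H200 even when the same values perform well on both. A minimal sketch of that lookup (simplified paraphrase, not vLLM's exact code):

```python
# Simplified sketch of vLLM's fused-MoE config lookup (paraphrased, not the
# exact implementation): the JSON filename embeds the CUDA device name, so
# each GPU model needs its own file even if the tuned values are identical.
import torch

def get_config_file_name(E: int, N: int, dtype: str | None) -> str:
    # e.g. "NVIDIA GH200 480GB" -> "NVIDIA_GH200_480GB"
    device_name = torch.cuda.get_device_name().replace(" ", "_")
    dtype_selector = "" if dtype is None else f",dtype={dtype}"
    return f"E={E},N={N},device_name={device_name}{dtype_selector}.json"
```

On a GH200 this resolves to a GH200-specific filename, which is why this PR adds GH200-named copies of the H200-tuned configs.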


github-actions bot commented Feb 4, 2025

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only the fastcheck CI, which covers a small and essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@robertgshaw2-redhat (Collaborator) commented Feb 4, 2025

Any end-to-end performance benchmark?

@robertgshaw2-redhat (Collaborator) left a comment

Can you post an end-to-end performance benchmark?

These configs are usually tuned on a GPU-by-GPU basis.
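
For readers unfamiliar with these files: each fused-MoE config JSON maps a token batch size to Triton tile and launch parameters, and the best values depend on the specific GPU's SM count and memory bandwidth, which is why they are tuned per device. An illustrative entry (values made up for exposition, not taken from this PR):

```python
# Illustrative only -- these numbers are invented for exposition, not copied
# from the PR. Keys are token batch sizes; values are Triton kernel
# parameters for the fused-MoE matmul at that batch size.
EXAMPLE_MOE_CONFIG = {
    "1":  {"BLOCK_SIZE_M": 16, "BLOCK_SIZE_N": 64,  "BLOCK_SIZE_K": 128,
           "GROUP_SIZE_M": 1,  "num_warps": 4, "num_stages": 3},
    "64": {"BLOCK_SIZE_M": 32, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128,
           "GROUP_SIZE_M": 8,  "num_warps": 8, "num_stages": 4},
}
```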

@mgoin (Member) commented Feb 4, 2025

Is it true that the GH200 only comes in single- or dual-device configurations? So in order to fit DeepSeek-V3 (assuming this, since you specify block_shape 128x128) you would need 5 nodes (2xGH200) or 10 nodes (1xGH200) distributed?

@arvindsun (Author) commented

> Is it true that the GH200 only comes in single- or dual-device configurations? So in order to fit DeepSeek-V3 (assuming this, since you specify block_shape 128x128) you would need 5 nodes (2xGH200) or 10 nodes (1xGH200) distributed?

We have a configuration with multiple nodes connected over InfiniBand. Tested this on 8 nodes (each with 1xGH200); we get around 25-27 tokens per second.
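
A back-of-the-envelope capacity check (hedged: it uses the ~92 GB figure from the PR description and DeepSeek-V3's ~671B parameters in FP8, and ignores fragmentation) shows why 8 single-GPU nodes is about the minimum:

```python
# Rough capacity estimate, not a measurement. Assumes FP8 weights (1 byte
# per parameter) and the ~92 GB usable memory per GH200 cited in this PR.
params = 671e9                      # DeepSeek-V3 total parameters (approx.)
weights_gb = params * 1 / 1e9       # ~671 GB of FP8 weights
capacity_gb = 8 * 92                # 8 nodes x 1 GH200 each = 736 GB
headroom_gb = capacity_gb - weights_gb  # ~65 GB left for KV cache, activations
print(f"weights ~{weights_gb:.0f} GB, capacity {capacity_gb} GB, "
      f"headroom ~{headroom_gb:.0f} GB")
```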

@arvindsun (Author) commented

> Can you post an end-to-end performance benchmark?
>
> These configs are usually tuned on a GPU-by-GPU basis.

I am not sure I can do a full e2e perf run for all the configs. I am only able to test 2 of them: the fused_moe one and the N=576,K=7168 one for the quant config. Should I revert the rest from this PR?
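
For reference, the two kinds of files under discussion follow a shape-and-device naming convention; a hedged sketch of what the GH200 variants might be called (the device-name string and the MoE shape values are assumptions for illustration, not taken from this PR):

```python
# Hypothetical filenames for illustration. The E/N values in the MoE name are
# invented, and the device-name suffix assumes torch.cuda.get_device_name()
# reports "NVIDIA GH200 480GB" on these machines.
fused_moe_config = "E=256,N=128,device_name=NVIDIA_GH200_480GB.json"
block_fp8_gemm_config = (
    "N=576,K=7168,device_name=NVIDIA_GH200_480GB,"
    "dtype=fp8_w8a8,block_shape=[128,128].json"
)
```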

@simon-mo merged commit bf3b79e into vllm-project:main on Feb 6, 2025
48 of 57 checks passed