fix(docs): update DeepGEMM usage instructions for H100 GPUs and MiniMax-M2 #112
Conversation
Signed-off-by: minatoaquaMK2 <[email protected]>
Summary of Changes (Gemini Code Assist): This pull request updates the documentation for DeepGEMM usage, specifically for H100 GPUs and the MiniMax-M2 model. The change is driven by benchmark results showing that disabling DeepGEMM on H100 GPUs yields substantial performance improvements, including nearly double the token generation speed and throughput. The update provides guidance for optimizing performance in production environments that use H100 hardware.
Code Review
This pull request updates the documentation for MiniMax-M2 to recommend disabling DeepGEMM on H100 GPUs, in addition to H20 GPUs. The change is well-supported by the extensive benchmark data provided in the description, which clearly shows significant performance improvements. The documentation update is clear and direct. I have one minor suggestion to add a newline at the end of the file, which is a common best practice for file formatting and compatibility with various tools.
### DeepGEMM Usage

- vLLM has DeepGEMM enabled by default, follow the [setup instructions](https://github.com/vllm-project/vllm/blob/v0.11.0/benchmarks/kernels/deepgemm/README.md#setup) to install it. However, on H20 GPUs, we've found that disabling DeepGEMM yields better performance for this model. To disable `DeepGEMM`, set the environment variable `VLLM_USE_DEEP_GEMM=0`.
+ vLLM has DeepGEMM enabled by default, follow the [setup instructions](https://github.com/vllm-project/vllm/blob/v0.11.0/benchmarks/kernels/deepgemm/README.md#setup) to install it. However, on H100 and H20 GPUs, we've found that disabling DeepGEMM yields better performance for this model. To disable `DeepGEMM`, set the environment variable `VLLM_USE_DEEP_GEMM=0`.
It's a good practice for text files to end with a newline character. This is recommended by POSIX standards and can prevent issues with some tools (like cat) and version control systems. Please add a newline at the end of this file.
Suggested change: keep the line "vLLM has DeepGEMM enabled by default, follow the [setup instructions](https://github.com/vllm-project/vllm/blob/v0.11.0/benchmarks/kernels/deepgemm/README.md#setup) to install it. However, on H100 and H20 GPUs, we've found that disabling DeepGEMM yields better performance for this model. To disable `DeepGEMM`, set the environment variable `VLLM_USE_DEEP_GEMM=0`." unchanged, and add a trailing newline at the end of the file.
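For reference, an environment-variable gate like `VLLM_USE_DEEP_GEMM` typically behaves as sketched below. The parsing logic here is an illustration, not vLLM's actual implementation:

```python
import os

def deep_gemm_enabled() -> bool:
    # Unset defaults to enabled; setting VLLM_USE_DEEP_GEMM=0 disables it.
    # (Illustrative parsing only -- not vLLM's actual code.)
    return os.environ.get("VLLM_USE_DEEP_GEMM", "1") != "0"

os.environ["VLLM_USE_DEEP_GEMM"] = "0"
print(deep_gemm_enabled())  # False
```

In practice you would export the variable before launching the server, e.g. `VLLM_USE_DEEP_GEMM=0 vllm serve ...`.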
Performance Improvements
Based on comprehensive benchmark testing with 100 requests at concurrency level 10, disabling DeepGEMM shows substantial performance gains:
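A setup like the one above can be reproduced with vLLM's serving benchmark script; the script path, model identifier, and flags below are assumptions based on the vLLM repository layout, not the exact command used for these results:

```shell
# Hypothetical reproduction of the benchmark setup above
# (100 requests at concurrency 10); paths and flags are assumptions.
python benchmarks/benchmark_serving.py \
  --model MiniMaxAI/MiniMax-M2 \
  --num-prompts 100 \
  --max-concurrency 10
```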
Key Metrics Comparison
Latency Metrics
TTFT (Time to First Token)
Detailed Benchmark Results
Configuration Without DeepGEMM (Recommended)
Configuration With DeepGEMM (Baseline)
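The latency figures summarized above are typically reduced from per-request timings; a minimal sketch of the percentile reduction, with illustrative data rather than the actual benchmark numbers:

```python
def percentile(values, p):
    """Nearest-rank percentile of a non-empty list (illustrative helper)."""
    vs = sorted(values)
    k = round(p / 100 * (len(vs) - 1))
    return vs[min(max(k, 0), len(vs) - 1)]

# Illustrative TTFT samples in milliseconds -- not the real benchmark data.
ttfts_ms = [42.0, 48.0, 55.0, 61.0, 300.0]
print(percentile(ttfts_ms, 50))  # 55.0 (median)
print(percentile(ttfts_ms, 99))  # 300.0 -- P99 is dominated by outliers
```

Because P99 tracks the slowest requests, it can regress even while mean and median latency improve.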
Conclusion
The benchmark results demonstrate that disabling DeepGEMM on H100 GPUs provides a nearly 2x performance improvement across all key metrics.
While P99 TTFT shows some regression, the overall performance gains in throughput and latency make this change highly beneficial for production workloads on H100 hardware.