Conversation

@minatoaquaMK2
Performance Improvements

Based on benchmark testing with 100 requests at a concurrency of 10, disabling DeepGEMM yields substantial performance gains:

Key Metrics Comparison

| Metric | With DeepGEMM | Without DeepGEMM | Improvement |
| --- | --- | --- | --- |
| Output Token Throughput | 288.31 tok/s | 566.84 tok/s | +96.6% |
| Peak Output Throughput | 320.00 tok/s | 650.00 tok/s | +103.1% |
| Request Throughput | 0.30 req/s | 0.60 req/s | +100% |
| Total Token Throughput | 907.58 tok/s | 1798.96 tok/s | +98.2% |
| Benchmark Duration | 330.71 s | 166.22 s | -49.7% |
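The headline percentages can be recomputed directly from the raw throughput numbers in the table; a quick sanity check (values taken from this PR's benchmark run):

```python
# Cross-check the claimed throughput gains from the raw benchmark values above.
with_deepgemm = {"output_tok_s": 288.31, "total_tok_s": 907.58, "duration_s": 330.71}
without_deepgemm = {"output_tok_s": 566.84, "total_tok_s": 1798.96, "duration_s": 166.22}

output_gain = (without_deepgemm["output_tok_s"] / with_deepgemm["output_tok_s"] - 1) * 100
total_gain = (without_deepgemm["total_tok_s"] / with_deepgemm["total_tok_s"] - 1) * 100
duration_change = (without_deepgemm["duration_s"] / with_deepgemm["duration_s"] - 1) * 100

print(f"Output throughput: +{output_gain:.1f}%")    # +96.6%
print(f"Total throughput:  +{total_gain:.1f}%")     # +98.2%
print(f"Duration change:   {duration_change:.1f}%") # -49.7%
```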

Latency Metrics

| Metric | With DeepGEMM | Without DeepGEMM | Improvement |
| --- | --- | --- | --- |
| Mean TPOT | 32.21 ms | 16.39 ms | -49.1% |
| Median TPOT | 32.22 ms | 16.36 ms | -49.2% |
| P99 TPOT | 33.43 ms | 17.60 ms | -47.4% |
| Mean ITL | 32.15 ms | 16.35 ms | -49.2% |
| Median ITL | 31.87 ms | 15.90 ms | -50.1% |
| P99 ITL | 33.33 ms | 17.54 ms | -47.4% |
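The same arithmetic works for the latency reductions; two representative rows from the table above, recomputed:

```python
# Latency reductions, recomputed from the raw millisecond values in the table.
mean_tpot_reduction = (1 - 16.39 / 32.21) * 100   # mean TPOT, with vs. without DeepGEMM
median_itl_reduction = (1 - 15.90 / 31.87) * 100  # median ITL

print(f"Mean TPOT:  -{mean_tpot_reduction:.1f}%")   # -49.1%
print(f"Median ITL: -{median_itl_reduction:.1f}%")  # -50.1%
```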

TTFT (Time to First Token)

| Metric | With DeepGEMM | Without DeepGEMM |
| --- | --- | --- |
| Mean TTFT | 244.24 ms | 258.76 ms |
| Median TTFT | 216.79 ms | 161.44 ms |
| P99 TTFT | 639.05 ms | 1569.77 ms |
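Unlike the other metrics, TTFT is a trade-off: the median improves while the tail regresses. Quantifying both from the table values:

```python
# TTFT trade-off: median improves, P99 (tail) regresses. Negative = faster.
median_change = (161.44 / 216.79 - 1) * 100
p99_change = (1569.77 / 639.05 - 1) * 100

print(f"Median TTFT: {median_change:.1f}%")  # about -25.5% (better)
print(f"P99 TTFT:    +{p99_change:.1f}%")    # about +145.6% (worse)
```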

Detailed Benchmark Results

Configuration Without DeepGEMM (Recommended)

```
============ Serving Benchmark Result ============
Successful requests:                     100
Failed requests:                         0
Maximum request concurrency:             10
Benchmark duration (s):                  166.22
Total input tokens:                      204800
Total generated tokens:                  94218
Request throughput (req/s):              0.60
Output token throughput (tok/s):         566.84
Peak output token throughput (tok/s):    650.00
Peak concurrent requests:                20.00
Total Token throughput (tok/s):          1798.96

---------------Time to First Token----------------
Mean TTFT (ms):                          258.76
Median TTFT (ms):                        161.44
P99 TTFT (ms):                           1569.77

-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          16.39
Median TPOT (ms):                        16.36
P99 TPOT (ms):                           17.60

---------------Inter-token Latency----------------
Mean ITL (ms):                           16.35
Median ITL (ms):                         15.90
P99 ITL (ms):                            17.54
==================================================
```

Configuration With DeepGEMM (Baseline)

```
============ Serving Benchmark Result ============
Successful requests:                     100
Failed requests:                         0
Maximum request concurrency:             10
Benchmark duration (s):                  330.71
Total input tokens:                      204800
Total generated tokens:                  95346
Request throughput (req/s):              0.30
Output token throughput (tok/s):         288.31
Peak output token throughput (tok/s):    320.00
Peak concurrent requests:                20.00
Total Token throughput (tok/s):          907.58

---------------Time to First Token----------------
Mean TTFT (ms):                          244.24
Median TTFT (ms):                        216.79
P99 TTFT (ms):                           639.05

-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          32.21
Median TPOT (ms):                        32.22
P99 TPOT (ms):                           33.43

---------------Inter-token Latency----------------
Mean ITL (ms):                           32.15
Median ITL (ms):                         31.87
P99 ITL (ms):                            33.33
==================================================
```

Conclusion

The benchmark results clearly demonstrate that disabling DeepGEMM on H100 GPUs provides nearly 2x performance improvement across all key metrics:

  • 2x faster token generation (TPOT reduced by ~50%)
  • 2x higher throughput (both request and token throughput doubled)
  • 50% reduction in benchmark duration
  • Significantly lower inter-token latency

While P99 TTFT regresses (639.05 ms → 1569.77 ms, roughly 2.5× higher), the overall gains in throughput and per-token latency make this change highly beneficial for production workloads on H100 hardware.

@gemini-code-assist
Contributor

Summary of Changes

Hello @minatoaquaMK2, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly!

This pull request updates the documentation for DeepGEMM usage, specifically for H100 GPUs and the MiniMax-M2 model. The change is driven by extensive benchmark results showing that disabling DeepGEMM on H100 GPUs leads to substantial performance improvements, including nearly double the token generation speed and throughput. This update provides crucial guidance for optimizing performance in production environments utilizing H100 hardware.

Highlights

  • Documentation Update: The DeepGEMM usage instructions within the MiniMax-M2.md documentation have been updated to reflect new performance findings.
  • H100 GPU Performance: The documentation now explicitly states that disabling DeepGEMM on H100 GPUs, similar to H20 GPUs, yields significantly better performance.
  • Performance Gains: Comprehensive benchmarks demonstrate that disabling DeepGEMM on H100 GPUs results in nearly a 2x improvement in token generation speed and overall throughput, with a 50% reduction in benchmark duration.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request updates the documentation for MiniMax-M2 to recommend disabling DeepGEMM on H100 GPUs, in addition to H20 GPUs. The change is well-supported by the extensive benchmark data provided in the description, which clearly shows significant performance improvements. The documentation update is clear and direct. I have one minor suggestion to add a newline at the end of the file, which is a common best practice for file formatting and compatibility with various tools.

### DeepGEMM Usage

```diff
-vLLM has DeepGEMM enabled by default, follow the [setup instructions](https://github.com/vllm-project/vllm/blob/v0.11.0/benchmarks/kernels/deepgemm/README.md#setup) to install it. However, on H20 GPUs, we've found that disabling DeepGEMM yields better performance for this model. To disable `DeepGEMM`, set the environment variable `VLLM_USE_DEEP_GEMM=0`.
+vLLM has DeepGEMM enabled by default, follow the [setup instructions](https://github.com/vllm-project/vllm/blob/v0.11.0/benchmarks/kernels/deepgemm/README.md#setup) to install it. However, on H100 and H20 GPUs, we've found that disabling DeepGEMM yields better performance for this model. To disable `DeepGEMM`, set the environment variable `VLLM_USE_DEEP_GEMM=0`.
```

medium

It's a good practice for text files to end with a newline character. This is recommended by POSIX standards and can prevent issues with some tools (like cat) and version control systems. Please add a newline at the end of this file.

Suggested change (the two lines are textually identical; the suggestion only adds a trailing newline at the end of the file):

```diff
-vLLM has DeepGEMM enabled by default, follow the [setup instructions](https://github.com/vllm-project/vllm/blob/v0.11.0/benchmarks/kernels/deepgemm/README.md#setup) to install it. However, on H100 and H20 GPUs, we've found that disabling DeepGEMM yields better performance for this model. To disable `DeepGEMM`, set the environment variable `VLLM_USE_DEEP_GEMM=0`.
\ No newline at end of file
+vLLM has DeepGEMM enabled by default, follow the [setup instructions](https://github.com/vllm-project/vllm/blob/v0.11.0/benchmarks/kernels/deepgemm/README.md#setup) to install it. However, on H100 and H20 GPUs, we've found that disabling DeepGEMM yields better performance for this model. To disable `DeepGEMM`, set the environment variable `VLLM_USE_DEEP_GEMM=0`.
```
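In practice, the documented toggle is applied before launching the server. A minimal sketch, where the variable name comes from the docs above but the `vllm serve` invocation and model name are illustrative, not from this PR:

```shell
# Disable DeepGEMM for this shell session (variable name as documented above).
export VLLM_USE_DEEP_GEMM=0
echo "VLLM_USE_DEEP_GEMM=$VLLM_USE_DEEP_GEMM"   # confirm the setting

# Launch command is illustrative; adapt the model name and flags to your deployment:
# vllm serve MiniMaxAI/MiniMax-M2
```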
