
fix: Fix memory bandwidth calculation in MLA benchmarks #2479

Merged
yzh119 merged 6 commits into flashinfer-ai:main from bkryu:bench_mla_fix
Feb 4, 2026

Conversation


@bkryu bkryu commented Feb 3, 2026

📌 Description

Summary

  • Fixed incorrect memory bandwidth calculation in testBatchMLAPagedAttentionWrapper that was using full tensor allocations instead of actual bytes accessed based on sequence lengths
  • Updated bench_trtllm_gen_mla.py to use the unified bench_gpu_time() utility with CUPTI for consistent timing with the benchmark framework
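
The corrected accounting can be sketched as follows. This is an illustrative reconstruction, not the benchmark's exact code; the function name, head count, dimensions, and fp16 element size are assumed values for MLA decode:

```python
# Sketch of per-request byte accounting driven by actual sequence lengths.
# Names and shapes are illustrative, not the benchmark's exact code.
def accessed_bytes(seq_lens, num_q_heads, qk_dim, v_dim, elem_size=2):
    batch = len(seq_lens)
    # Query: one token per request in decode.
    q_mem_bytes = batch * num_q_heads * qk_dim * elem_size
    # KV cache: count only the tokens actually read, not the full allocation.
    actual_kv_tokens = sum(seq_lens)
    kv_mem_bytes = actual_kv_tokens * qk_dim * elem_size
    # Output: one token per request.
    o_mem_bytes = batch * num_q_heads * v_dim * elem_size
    return q_mem_bytes + kv_mem_bytes + o_mem_bytes

total = accessed_bytes([1024, 2048], num_q_heads=128, qk_dim=576, v_dim=512)
```

The key difference from the buggy version is `actual_kv_tokens`: a preallocated paged KV cache can be far larger than the tokens a given batch actually touches, so sizing bandwidth by allocation inflates the reported TB/s.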

cc @hypdeb

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Chores
  • Improved benchmarking: switched to CUDA/CUPTI-based timing with refined iteration controls (dry-run and repeat iteration counts) and optional CUDA graph support.
    • Updated performance reporting to use explicit memory accounting from actual token usage (query, KV, output), and adjusted bandwidth and FLOPs printouts for clearer, more accurate throughput metrics.


coderabbitai bot commented Feb 3, 2026

📝 Walkthrough

Replaces CUDA-graph timing with bench_gpu_time and CUPTI-style timing parameters; updates benchmark calls and attention routine to compute explicit q/kv/output memory bytes, use actual accessed KV tokens for FLOPs/bandwidth, and print updated throughput metrics.

Changes

benchmarks/bench_trtllm_gen_mla.py (TensorRT-LLM MLA Benchmark)
  • Replaced bench_gpu_time_with_cudagraph with bench_gpu_time; switched timing args to dry_run_iters, repeat_iters, enable_cupti, use_cuda_graph, cold_l2_cache; added explicit q_mem_bytes, kv_mem_bytes, o_mem_bytes, total_mem_bytes; adjusted FLOPs and bandwidth reporting to use actual_kv_tokens.
benchmarks/routines/attention.py (Attention Benchmarking Routine)
  • Refactored memory accounting to compute q_mem_bytes, kv_mem_bytes, and o_mem_bytes from actual token counts and dtype sizes; introduced total_mem_bytes and updated the TB/s calculation to use it (now based on accessed KV tokens rather than full allocations).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • Anerudhan
  • yzh119
  • cyx-6
  • kahyunnam
  • jiahanc
  • nvmbreughe

Poem

🐰 I nibble at bytes and count each token,
CUPTI hums while kernels are awoken.
Query, KV, output — tallies in line,
Benchmarks hum throughput, FLOPs align.
The rabbit hops, metrics refined 🥕✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Title check ✅ Passed: The PR title accurately describes the main fix: correcting memory bandwidth calculation in MLA benchmarks, which aligns with both file changes.
  • Description check ✅ Passed: The description includes all key template sections with substantive content: summary explaining the fixes, pre-commit checks marked complete, and tests confirmed as updated and passing.


@gemini-code-assist

Summary of Changes

Hello @bkryu, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the accuracy of performance metrics within the MLA benchmarks by addressing incorrect memory bandwidth and FLOPs calculations. It ensures that these metrics reflect actual data access patterns and computational work, providing a more realistic assessment of performance. Additionally, the benchmarking process in one script is standardized through the adoption of a unified GPU timing utility, enhancing consistency across benchmarks.

Highlights

  • Memory Bandwidth Calculation Fix: The memory bandwidth calculation in MLA benchmarks has been corrected to use actual bytes accessed based on sequence lengths, rather than full tensor allocations. This provides a more accurate representation of memory utilization for query, KV cache, and output tensors.
  • Unified Benchmarking Utility Adoption: The bench_trtllm_gen_mla.py script now utilizes the unified bench_gpu_time() utility, which incorporates CUPTI for consistent and precise GPU timing. This replaces the previous bench_gpu_time_with_cudagraph function.
  • FLOPs Calculation Refinement: The FLOPs calculation in bench_trtllm_gen_mla.py has been refined to use actual_kv_tokens, ensuring a more accurate measure of floating-point operations based on the actual number of KV tokens processed.


Changelog
  • benchmarks/bench_trtllm_gen_mla.py
    • Replaced bench_gpu_time_with_cudagraph with bench_gpu_time for GPU timing.
    • Updated benchmarking parameters for bench_gpu_time to include enable_cupti=True, use_cuda_graph=False, and cold_l2_cache=True for more controlled measurements.
    • Revised memory bandwidth calculation to sum q_mem_bytes, kv_mem_bytes (based on actual_kv_tokens), and o_mem_bytes.
    • Adjusted FLOPs calculation to use actual_kv_tokens instead of sum(seq_lens).
    • Updated print statements for memory bandwidth and FLOPs to reflect new calculations and units (TB/s for memory bandwidth, TFLOPs/s for FLOPs).
  • benchmarks/routines/attention.py
    • Corrected memory bandwidth calculation for q_mem_bytes, kv_mem_bytes (based on actual_kv_tokens), and o_mem_bytes to accurately reflect actual data accessed.
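
As a rough illustration of why using actual_kv_tokens matters for the FLOPs figure as well: decode attention cost scales with the KV tokens actually attended to, not with the allocated maximum. The function name and dimensions below are hypothetical, not the benchmark's code:

```python
# Illustrative decode-attention FLOPs estimate based on the KV tokens
# actually read. Names and dimensions are hypothetical.
def mla_decode_flops(seq_lens, num_q_heads, qk_dim, v_dim):
    # One query token per request attends to that request's full history.
    actual_kv_tokens = sum(seq_lens)
    # 2 FLOPs per multiply-add: QK^T score computation plus the
    # attention-weighted sum over V.
    return 2 * num_q_heads * actual_kv_tokens * (qk_dim + v_dim)

flops = mla_decode_flops([1024, 2048], num_q_heads=128, qk_dim=576, v_dim=512)
```

Sizing this by the allocated KV capacity instead of sum(seq_lens) would overstate both FLOPs and the derived TFLOPs/s whenever the cache is larger than the sequences in the batch.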

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment


Code Review

This pull request correctly fixes the memory bandwidth calculation in MLA benchmarks by using the actual number of accessed bytes instead of the full tensor allocations. It also standardizes the benchmark timing by switching to the unified bench_gpu_time utility. The changes in benchmarks/routines/attention.py are accurate. However, I've found a small issue in the updated bandwidth calculation in benchmarks/bench_trtllm_gen_mla.py which I've commented on.

```python
print(f"execution time: {ms} ms")
print(f"memory bandwidth: {io / ms / 1024 / 1024:.2f} GB/s")
print(f"FLOPs: {flops * 1e-9 / ms:.2f} TFLOPs/s")
print(f"memory bandwidth: {total_mem_bytes / ms / 1e12:.2f} TB/s")
```
Severity: high

The memory bandwidth calculation appears to have an incorrect divisor. To convert from bytes per millisecond to terabytes per second, the divisor should be 1e9, not 1e12.

The conversion is: bytes / (ms * 1e-3 s/ms) / (1e12 B/TB) = bytes / (ms * 1e9) TB/s.

Suggested change

```diff
-print(f"memory bandwidth: {total_mem_bytes / ms / 1e12:.2f} TB/s")
+print(f"memory bandwidth: {total_mem_bytes / ms / 1e9:.2f} TB/s")
```
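
The suggested divisor can be sanity-checked with a few lines of arithmetic (example numbers only):

```python
# Sanity check of the unit conversion: bytes per millisecond -> TB/s.
total_mem_bytes = 4.0e9   # example: 4 GB moved per kernel launch
ms = 2.0                  # example: 2 ms execution time

bytes_per_second = total_mem_bytes / (ms * 1e-3)   # ms -> s
tb_per_second = bytes_per_second / 1e12            # bytes -> TB: 2.0 TB/s

# Folding the two factors together gives a single divisor of 1e9, not 1e12.
same = total_mem_bytes / ms / 1e9
assert abs(tb_per_second - same) < 1e-12
```

The same reasoning explains why the untouched FLOPs line is already correct: flops / ms / 1e9 is FLOPs per second divided by 1e12, i.e. TFLOPs/s.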

@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `benchmarks/bench_trtllm_gen_mla.py`:
- Around line 132-133: The memory-bandwidth print uses the wrong unit divisor:
change the computation that prints bandwidth (the line using total_mem_bytes and
ms) to divide by 1e9 instead of 1e12 (or equivalently compute
total_mem_bytes/(ms*1e-3)/1e12) so it matches the logic used in attention.py;
update the print statement that references total_mem_bytes / ms / 1e12 to use
total_mem_bytes / ms / 1e9 (keeping the same f-string and formatting) while
leaving the FLOPs print (flops / ms / 1e9) untouched.

@bkryu bkryu self-assigned this Feb 3, 2026
@coderabbitai bot left a comment
Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
benchmarks/bench_trtllm_gen_mla.py (1)

85-105: ⚠️ Potential issue | 🟡 Minor

Comment contradicts the actual configuration.

Line 85 states "benchmark using CUPTI" but enable_cupti=False on line 102. With the current settings, this uses CUDA graph timing, not CUPTI.

Proposed fix

```diff
-    # benchmark using CUPTI
+    # benchmark using CUDA graphs
     measurements = bench_gpu_time(
```

@yzh119 yzh119 merged commit f84ac1c into flashinfer-ai:main Feb 4, 2026
28 of 33 checks passed
@bkryu bkryu deleted the bench_mla_fix branch February 4, 2026 18:23
raayandhar pushed a commit to raayandhar/flashinfer that referenced this pull request Feb 5, 2026
…i#2479)
