Conversation
Signed-off-by: vincentzed <207368749+vincentzed@users.noreply.github.com>
📝 Walkthrough

Replaces manual per-iteration CUDA-graph timing in the GEMM benchmark with the centralized `bench_gpu_time_with_cudagraph` utility.
Summary of Changes

Hello @vincentzed, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request refactors the TGV GEMM benchmarking script by integrating a new, centralized utility function for GPU time measurement. The change aims to standardize the benchmarking process, improve code readability, and ensure consistent performance evaluation across CUBLAS, TGV, and PDL GEMM implementations.
Code Review
This pull request refactors the bench_tgv_gemm.py benchmark script to use the bench_gpu_time_with_cudagraph utility function, which simplifies the code and improves maintainability by removing boilerplate CUDA graph benchmarking logic.
While the refactoring is a good improvement, I've noticed a change in the benchmarking methodology. The number of iterations captured within the CUDA graph has been implicitly changed from 100 to the default of 10. This can affect the benchmark results by changing how kernel launch overhead is amortized. I've added comments with suggestions to restore the original number of iterations to ensure benchmark consistency.
benchmarks/bench_tgv_gemm.py (Outdated)

```python
cublas_times = bench_gpu_time_with_cudagraph(
    lambda: F.linear(A, B.T, bias),
    dry_run_time_ms=100,
    repeat_time_ms=500,
    cold_l2_cache=False,
)
```
The previous implementation captured 100 iterations within the CUDA graph to amortize launch overhead. The bench_gpu_time_with_cudagraph function defaults to num_iters_within_graph=10. To maintain consistency with the previous benchmarking methodology and ensure better amortization of kernel launch overhead, it's recommended to explicitly set num_iters_within_graph=100.
```diff
 cublas_times = bench_gpu_time_with_cudagraph(
     lambda: F.linear(A, B.T, bias),
     dry_run_time_ms=100,
     repeat_time_ms=500,
     cold_l2_cache=False,
+    num_iters_within_graph=100,
 )
```
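To see why the captured iteration count matters, here is a toy amortization model (all numbers are illustrative, not measurements from this benchmark): the time attributed to each iteration is the kernel time plus the fixed graph-replay overhead divided by the number of iterations captured in the graph.

```python
# Toy model of launch-overhead amortization inside a CUDA graph.
# All numbers are illustrative, not measured.
kernel_us = 5.0  # assumed true kernel duration per iteration (microseconds)
launch_us = 2.0  # assumed fixed overhead per graph replay

for num_iters_within_graph in (10, 100):
    # One replay runs num_iters_within_graph kernels back to back,
    # so the fixed overhead is split across all of them.
    measured_per_iter = kernel_us + launch_us / num_iters_within_graph
    error_pct = 100.0 * (measured_per_iter - kernel_us) / kernel_us
    print(
        f"{num_iters_within_graph:3d} iters/graph: "
        f"{measured_per_iter:.3f} us/iter ({error_pct:.1f}% overhead)"
    )
```

With these assumed numbers, 10 iterations per graph inflates the per-iteration time noticeably more than 100, which is the consistency concern raised above.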
benchmarks/bench_tgv_gemm.py (Outdated)

```python
tgv_times = bench_gpu_time_with_cudagraph(
    lambda: tgv_gemm_sm100(A, B, bias),
    dry_run_time_ms=100,
    repeat_time_ms=500,
    cold_l2_cache=False,
)
```
The previous implementation captured 100 iterations within the CUDA graph to amortize launch overhead. The bench_gpu_time_with_cudagraph function defaults to num_iters_within_graph=10. To maintain consistency with the previous benchmarking methodology and ensure better amortization of kernel launch overhead, it's recommended to explicitly set num_iters_within_graph=100.
```diff
 tgv_times = bench_gpu_time_with_cudagraph(
     lambda: tgv_gemm_sm100(A, B, bias),
     dry_run_time_ms=100,
     repeat_time_ms=500,
     cold_l2_cache=False,
+    num_iters_within_graph=100,
 )
```
benchmarks/bench_tgv_gemm.py (Outdated)

```python
pdl_times = bench_gpu_time_with_cudagraph(
    lambda: tgv_gemm_sm100(A, B, bias, pdl=True),
    dry_run_time_ms=100,
    repeat_time_ms=500,
    cold_l2_cache=False,
)
```
The previous implementation captured 100 iterations within the CUDA graph to amortize launch overhead. The bench_gpu_time_with_cudagraph function defaults to num_iters_within_graph=10. To maintain consistency with the previous benchmarking methodology and ensure better amortization of kernel launch overhead, it's recommended to explicitly set num_iters_within_graph=100.
```diff
 pdl_times = bench_gpu_time_with_cudagraph(
     lambda: tgv_gemm_sm100(A, B, bias, pdl=True),
     dry_run_time_ms=100,
     repeat_time_ms=500,
     cold_l2_cache=False,
+    num_iters_within_graph=100,
 )
```
Actionable comments posted: 0
🧹 Nitpick comments (3)
benchmarks/bench_tgv_gemm.py (3)
83-89: Consider using `input_args` to avoid capturing loop variables in lambda.

The lambda captures `A`, `B`, and `bias` from the loop scope. While this works because `bench_gpu_time_with_cudagraph` executes immediately, using `input_args` would be more explicit and eliminate the static analysis warning.

🔎 Suggested refactor using input_args

Per the `bench_gpu_time_with_cudagraph` docstring, you can pass arguments explicitly:

```diff
-cublas_times = bench_gpu_time_with_cudagraph(
-    lambda: F.linear(A, B.T, bias),
-    dry_run_time_ms=100,
-    repeat_time_ms=500,
-    cold_l2_cache=False,
-)
+cublas_times = bench_gpu_time_with_cudagraph(
+    F.linear,
+    dry_run_time_ms=100,
+    repeat_time_ms=500,
+    cold_l2_cache=False,
+    input_args=(A, B.T, bias),
+)
```
101-107: Consider using `input_args` to avoid capturing loop variables in lambda.

Same pattern as the CUBLAS benchmark: the lambda captures loop-scoped variables. Using `input_args` would eliminate the static analysis warning.

🔎 Suggested refactor using input_args

```diff
-tgv_times = bench_gpu_time_with_cudagraph(
-    lambda: tgv_gemm_sm100(A, B, bias),
-    dry_run_time_ms=100,
-    repeat_time_ms=500,
-    cold_l2_cache=False,
-)
+tgv_times = bench_gpu_time_with_cudagraph(
+    tgv_gemm_sm100,
+    dry_run_time_ms=100,
+    repeat_time_ms=500,
+    cold_l2_cache=False,
+    input_args=(A, B, bias),
+)
```
114-120: Consider using `input_args` and `input_kwargs` to avoid capturing loop variables in lambda.

Same lambda closure pattern, but with a keyword argument. Using `input_args` and `input_kwargs` would eliminate the static analysis warning.

🔎 Suggested refactor using input_args and input_kwargs

```diff
-pdl_times = bench_gpu_time_with_cudagraph(
-    lambda: tgv_gemm_sm100(A, B, bias, pdl=True),
-    dry_run_time_ms=100,
-    repeat_time_ms=500,
-    cold_l2_cache=False,
-)
+pdl_times = bench_gpu_time_with_cudagraph(
+    tgv_gemm_sm100,
+    dry_run_time_ms=100,
+    repeat_time_ms=500,
+    cold_l2_cache=False,
+    input_args=(A, B, bias),
+    input_kwargs={"pdl": True},
+)
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
benchmarks/bench_tgv_gemm.py
🧰 Additional context used
🧬 Code graph analysis (1)
benchmarks/bench_tgv_gemm.py (1)
flashinfer/testing/utils.py (1)
bench_gpu_time_with_cudagraph(1259-1481)
🪛 Ruff (0.14.10)
benchmarks/bench_tgv_gemm.py
84-84: Function definition does not bind loop variable A
(B023)
84-84: Function definition does not bind loop variable B
(B023)
84-84: Function definition does not bind loop variable bias
(B023)
102-102: Function definition does not bind loop variable A
(B023)
102-102: Function definition does not bind loop variable B
(B023)
102-102: Function definition does not bind loop variable bias
(B023)
115-115: Function definition does not bind loop variable A
(B023)
115-115: Function definition does not bind loop variable B
(B023)
115-115: Function definition does not bind loop variable bias
(B023)
🔇 Additional comments (1)
benchmarks/bench_tgv_gemm.py (1)
10-10: LGTM! The import of `bench_gpu_time_with_cudagraph` enables cleaner timing logic by replacing manual CUDA graph capture and replay.
yzh119
left a comment
Shall we switch to bench_gpu_time_with_cupti? I suppose the motivation is to get kernel duration close to nsys measured results in end-to-end serving.
Yes
TGV + cupti ~ cold l2
Actionable comments posted: 0
🧹 Nitpick comments (1)
flashinfer/testing/utils.py (1)
1232-1247: Minor: Ambiguous variable name and import location.

Per the static analysis hint (Ruff E741), the variable `l` in the lambda and list comprehension can be confused with `1` (one) in some fonts. Consider using a more descriptive name like `launch`.

Additionally, the `bisect` import inside the function is unconventional for a standard library module. Consider moving it to the top of the file with other imports for consistency.

🔎 Proposed fix

Move the import to the top of the file (around line 17-20):

```python
import bisect
```

Then update lines 1235-1237:

```diff
-    import bisect
-    # Step 1: Sort launches by start timestamp - O(M log M)
-    sorted_launches = sorted(launches, key=lambda l: l[0])
-    launch_starts = [l[0] for l in sorted_launches]
+    sorted_launches = sorted(launches, key=lambda launch: launch[0])
+    launch_starts = [launch[0] for launch in sorted_launches]
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
flashinfer/testing/utils.py
🧰 Additional context used
📓 Path-based instructions (1)
flashinfer/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
`flashinfer/**/*.py`:
- Use `@functools.cache` decorator on Python API functions to implement module-level caching and avoid recompilation
- Use `@flashinfer_api` decorator for debugging API calls, enable via `FLASHINFER_LOGLEVEL` environment variable (0=off, 1=basic, 3=detailed, 5=with stats)
Files:
flashinfer/testing/utils.py
🪛 Ruff (0.14.10)
flashinfer/testing/utils.py
1236-1236: Ambiguous variable name: l
(E741)
1237-1237: Ambiguous variable name: l
(E741)
🔇 Additional comments (1)
flashinfer/testing/utils.py (1)
1249-1264: LGTM! Binary search optimization is correct.

The algorithm correctly uses:
- `bisect_left` to find the first launch with `start >= start_cpu`
- `bisect_right` to find the position after the last launch with `start <= end_cpu`

This gives O(log M) lookup per iteration instead of an O(M) linear scan, which addresses the performance concern from the commit message ("basically scan all cupti... too slow").
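The lookup pattern described above can be sketched in isolation (toy timestamps and a hypothetical `launches_in_window` helper, not the actual CUPTI data structures):

```python
import bisect

# Toy kernel-launch records as (start_timestamp, id), sorted by start.
launches = sorted([(5, "a"), (12, "b"), (20, "c"), (31, "d")])
launch_starts = [start for start, _ in launches]

def launches_in_window(start_cpu, end_cpu):
    """Return launches whose start falls inside [start_cpu, end_cpu]."""
    # First index with start >= start_cpu.
    lo = bisect.bisect_left(launch_starts, start_cpu)
    # One past the last index with start <= end_cpu.
    hi = bisect.bisect_right(launch_starts, end_cpu)
    return launches[lo:hi]

print(launches_in_window(10, 30))  # [(12, 'b'), (20, 'c')]
```

Each window query is two O(log M) searches plus the slice, versus scanning all M launches per iteration.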
📌 Description
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [ ] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests

- [ ] Tests have been added or updated as needed (unittest, etc.).

Reviewer Notes
Summary by CodeRabbit
Chores
Refactor