[Performance] Qwen3-Next: speed up update_mamba_state_after_mtp_verify by 10x; e2e up to 3.54% faster (#10586)
Conversation
Summary of Changes
Hello @byjiang1996, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request optimizes the Qwen3-Next model by refactoring the `update_mamba_state_after_mtp_verify` function: the state update, previously a loop of element-wise operations, is now performed with vectorized tensor operations. This speeds up the function itself by roughly 10x and yields a measurable end-to-end inference speedup, while leaving the model's accuracy unaffected.
Highlights
- Performance Improvement: The `update_mamba_state_after_mtp_verify` function now runs about 10x faster, with average execution time reduced from 3157.98 us to 307.05 us.
- End-to-End Speedup: Overall end-to-end performance of Qwen3-Next improves by up to 3.54% (3.53% for the TP4 DP2 MTP configuration) on the GSM8k benchmark.
- Accuracy Maintained: GSM8k accuracy stays at approximately 0.950, indicating no degradation from the speed optimization.
- Vectorized State Updates: The core change replaces iterative, chunk-based updates of `ssm_states` and `conv_states` with direct, vectorized tensor assignments, significantly reducing Python overhead (see the sketch after this list).
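To make the before/after contrast concrete, here is a minimal, self-contained PyTorch sketch of the two update patterns. Everything except the `ssm_states` name is a hypothetical stand-in (`new_states`, `accepted`, the toy shapes); the real code operates on SGLang's mamba cache and request indices.

```python
import torch

# Hypothetical shapes and names for illustration only; the real SGLang
# code operates on the model's mamba cache, not these toy tensors.
num_slots, d_state = 64, 128
ssm_states = torch.zeros(num_slots, d_state)   # cached per-request SSM states
new_states = torch.randn(num_slots, d_state)   # states computed during MTP verification
accepted = torch.rand(num_slots) > 0.5         # which requests' drafts were accepted

# Before (loop-based): one tiny copy per accepted request, each paying
# Python-interpreter and kernel-launch overhead.
for i in accepted.nonzero(as_tuple=True)[0].tolist():
    ssm_states[i] = new_states[i]

# After (vectorized): a single advanced-indexing assignment updates all
# accepted rows in one kernel launch.
idx = accepted.nonzero(as_tuple=True)[0]
ssm_states[idx] = new_states[idx]
```

The vectorized form wins because the loop pays interpreter and kernel-launch overhead once per request, while a single advanced-indexing assignment amortizes that cost across all accepted rows.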
Code Review
This pull request significantly improves the performance of update_mamba_state_after_mtp_verify by replacing iterative, element-wise state updates with vectorized PyTorch operations. This is a great optimization that leads to a more concise and much faster implementation. My only concern is the potential increase in peak memory usage, as the previous chunking mechanism, which was likely in place to manage memory, has been removed. I've added a comment with a suggestion to reintroduce chunking in a vectorized way, which could provide a good balance between performance and memory consumption if needed.
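If memory pressure does become a concern, one way to reintroduce chunking without giving up the vectorized fast path is sketched below. This is a generic illustration with hypothetical names (`chunked_index_copy`, `dst`, `src`, `idx`, `chunk_size`), not the reviewer's exact proposal or an SGLang API.

```python
import torch

def chunked_index_copy(dst: torch.Tensor, src: torch.Tensor,
                       idx: torch.Tensor, chunk_size: int = 4096) -> None:
    # Hypothetical helper: perform dst[idx] = src[idx] in fixed-size chunks.
    # Each chunk is still a single vectorized assignment, so Python overhead
    # is O(num_chunks) instead of O(num_rows), while the temporaries created
    # by advanced indexing stay bounded by chunk_size rows.
    for start in range(0, idx.numel(), chunk_size):
        sel = idx[start:start + chunk_size]
        dst[sel] = src[sel]
```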
Modifications
Replace the iterative, chunk-based updates of `ssm_states` and `conv_states` in `update_mamba_state_after_mtp_verify` with direct, vectorized tensor assignments.
Before: avg 3157.98 us
After: avg 307.05 us (3157.98 / 307.05 ≈ 10.3x faster)
Accuracy Tests - GSM8k
Accuracy remains ~0.950 before and after
Benchmark - GSM8k
End-to-end performance improves by up to 3.54% (3.53% for the TP4 DP2 MTP configuration).