[bugfix] avoid attention padding tokens computation in PCG #17706
Chen-0210 wants to merge 15 commits into sgl-project:main
Conversation
Summary of Changes (Gemini Code Assist): This pull request implements a bug fix to prevent padding tokens from being included in attention calculations within the Piecewise CUDA Graph (PCG) execution.
Code Review
This pull request introduces a bugfix for handling padding tokens within the Piecewise CUDA Graph (PCG) execution path. The core change is the addition of a real_num_tokens field to the ForwardBatch dataclass, which allows distinguishing actual tokens from padding. This field is then utilized in custom attention operations (unified_attention_with_output and gdn_with_output) to correctly slice input tensors, ensuring that only real tokens are processed during attention computation. Consequently, the logic in PiecewiseCudaGraphRunner for handling out_cache_loc has been simplified by removing pre-allocated tensors and passing them directly from the forward_batch.
The changes appear correct and effectively address the padding issue in PCG mode. I have one minor suggestion to improve code clarity by correcting a duplicated comment.
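To make the slicing concrete, here is a minimal sketch, assuming the padded buffer layout described above. The function name and the plain-SDPA call are stand-ins for the PR's actual ops and attention backends (e.g. FlashInfer); `real_num_tokens` follows the new `ForwardBatch` field:

```python
import torch

def attention_with_output_sketch(q, k, v, out, real_num_tokens):
    # q/k/v/out are [graph_bucket, head_dim] buffers padded up to the
    # captured CUDA-graph size; only the first `real_num_tokens` rows
    # hold real tokens.
    n = real_num_tokens
    # Slice off the padding so the backend never sees the garbage rows.
    attn = torch.nn.functional.scaled_dot_product_attention(
        q[:n].unsqueeze(0), k[:n].unsqueeze(0), v[:n].unsqueeze(0)
    ).squeeze(0)
    out[:n].copy_(attn)  # write back in place; padded rows stay untouched
    return out

# Usage: a bucket of 8 slots holding 5 real tokens.
q, k, v = (torch.randn(8, 64) for _ in range(3))
out = torch.zeros(8, 64)
attention_with_output_sketch(q, k, v, out, real_num_tokens=5)
```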
Oasis-Git left a comment:
Since these changes affect a broad range of functionality, the full unit test suite should be carefully validated before approving this PR.
Force-pushed from d56649a to 644da7a.
/tag-run-ci-label
```python
    if model_runner.is_hybrid_swa
    else None
)
self.mamba_track_indices = (
```
Why do we need to remove this code?
The relevant part is not included in PCG, so it does not require a fixed memory address.
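For context, a small PyTorch sketch of the fixed-address constraint (an assumed setup, not this PR's code): a captured CUDA graph replays the exact device pointers recorded at capture time, so anything inside the graph must live in pre-allocated buffers, while code outside the graph may allocate freely.

```python
import torch

# Buffers captured by the graph must keep a fixed address across replays.
static_in = torch.zeros(8, device="cuda")
static_out = torch.zeros(8, device="cuda")

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_out.copy_(static_in * 2)  # recorded against static_in's pointer

def run(x):
    static_in.copy_(x)   # refill the captured buffer with new data
    g.replay()           # replays the recorded kernels and pointers
    return static_out.clone()

# Anything that runs outside the captured region (like the code removed
# here) can allocate fresh tensors per call; no fixed address is required.
```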
@zminglei The occasional “all requests invalid” failure was already fixed by another PR, #17404, related to the mamba cache. This PR mainly fixes the rare precision issue related to padding tokens. Running the CI multiple times would be great, but I’m not sure the current CI can accept 10 runs due to time and resource constraints.
Thanks, that makes sense. Since we've already run it locally multiple times and verified the fix, that should be good enough. And for CI it's fine to only run it once, same as the others.
/rerun-failed-ci again
@zminglei It seems the re-run didn’t take effect |
/rerun-failed-ci again


Motivation
Fixes #17330.
When PCG is enabled, the attention metadata is initialized with real_num_tokens, but the input tensor still contains padded tokens. Attention backends such as FlashInfer cannot handle this mismatch well, which can lead to undefined behavior: NaN values and corrupted outputs (repeated !!!!!), resulting in abnormally long output lengths.
To fix this, exclude the padded tokens from the attention computation, making PCG more robust.
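A tiny illustration of the padding mismatch (a sketch with hypothetical sizes, not the PR's actual code):

```python
import torch

real_num_tokens = 5
graph_bucket = 8                      # hypothetical captured graph size
hidden = torch.randn(real_num_tokens, 64)
padded = torch.zeros(graph_bucket, 64)
padded[:real_num_tokens] = hidden     # rows 5..7 are padding
# Before this fix, the full `padded` tensor reached the attention backend
# while the metadata counted only 5 tokens; with the fix, only
# padded[:real_num_tokens] is handed to attention.
```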
Modifications

- Add a real_num_tokens field to the ForwardBatch dataclass so real tokens can be distinguished from CUDA-graph padding.
- Use it in the custom attention ops (unified_attention_with_output and gdn_with_output) to slice input tensors so that only real tokens are processed during attention computation.
- Simplify out_cache_loc handling in PiecewiseCudaGraphRunner by removing the pre-allocated tensors and passing them directly from the forward_batch.
Accuracy Tests
```bash
python3 -m sglang.launch_server --model Qwen/Qwen3-Next-80B-A3B-Instruct --tp 2 --enable-piecewise-cuda-graph --piecewise-cuda-graph-compiler eager --port 60002 --skip-server-warmup --log-requests --log-requests-level 3 --attention-backend flashinfer --mamba-scheduler-strategy extra_buffer
python3 benchmark/gsm8k/bench_sglang.py --parallel 1319 --num-questions 1319 --host http://127.0.0.1 --port 60002
```

Benchmarking and Profiling
Review Process
/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci