[main][bugfix] Fixed the problem that eagle3 will crash in FULL_DECODE_ONLY (#7290)
MengqingCao merged 1 commit into vllm-project:main
Conversation
Code Review
This pull request aims to fix a crash in eagle3 when using FULL_DECODE_ONLY. The changes in vllm_ascend/spec_decode/eagle_proposer.py correctly add missing seq_lens metadata for the CUDA graph path. However, the modification in vllm_ascend/worker/model_runner_v1.py introduces a critical bug by reordering operations, which causes a GPU tensor fill to be immediately overwritten. I have provided a comment with a suggested fix for this issue.
```python
self.seq_lens.gpu[num_reqs:].fill_(0)
self.seq_lens.copy_to_gpu()
```
The `fill_(0)` operation on the GPU tensor `self.seq_lens.gpu` is immediately overwritten by `self.seq_lens.copy_to_gpu()`, which copies the entire CPU buffer to the GPU. This makes the `fill_` operation ineffective.
To correctly zero out the tail of the sequence-lengths tensor, modify the CPU buffer before copying it to the GPU. This is also more efficient, as it avoids an extra GPU kernel launch.
```diff
-self.seq_lens.gpu[num_reqs:].fill_(0)
-self.seq_lens.copy_to_gpu()
+self.seq_lens.np[num_reqs:] = 0
+self.seq_lens.copy_to_gpu()
```
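The ordering bug can be demonstrated with a minimal sketch, using plain NumPy arrays as stand-ins for the CPU staging buffer and the device tensor (hypothetical illustration only; this is not the actual vLLM buffer API):

```python
import numpy as np

# Stand-ins for the CPU staging buffer and the device tensor.
max_reqs, num_reqs = 8, 3
cpu = np.arange(1, max_reqs + 1, dtype=np.int32)  # stale values past num_reqs
gpu = np.empty_like(cpu)

# Buggy order: zero the "device" tail, then copy the whole CPU buffer over it.
gpu[num_reqs:] = 0
gpu[:] = cpu                     # the full copy overwrites the fill above
print(gpu[num_reqs:].tolist())   # stale tail survives: [4, 5, 6, 7, 8]

# Fixed order: zero the CPU buffer first, then copy once.
cpu[num_reqs:] = 0
gpu[:] = cpu
print(gpu[num_reqs:].tolist())   # tail is zeroed: [0, 0, 0, 0, 0]
```

The fixed order also launches one device operation instead of two, which matches the reviewer's efficiency note.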
Related PR: documented an issue in the 2-node PD mixed deployment scenario where inference may hang when concurrency exceeds 8 (GLM5), noting that the issue has been fixed by #7235 and #7290. Signed-off-by: MrZ20 <2609716663@qq.com>; Signed-off-by: Mengqing Cao <cmq0113@163.com>; Co-authored-by: Mengqing Cao <cmq0113@163.com>
What this PR does / why we need it?
Two problems have been solved in this PR. Both occur in `FULL_DECODE_ONLY` mode, where `num_tokens` should be padded to some value in `cudagraph_capture_sizes`.

1. The length of `seq_lens_list` in the drafter's `attn_metadata` is 1 shorter than expected, which raises a kernel exception and crashes vLLM. E.g., with `num_reqs` = 3 and `cudagraph_capture_sizes` = [20], `actual_seq_lengths_q` is padded correctly to [4, 8, 12, 20], but `seq_lens_list` = [5742, 4700, 7996] is not padded.
2. Although the length of `seq_lens_list` in the target's `attn_metadata` matches the expected length in `FULL_DECODE_ONLY`, the data at the end of the list is corrupted. E.g., with `num_reqs` = 3 and `cudagraph_capture_sizes` = [20], `actual_seq_lengths_q` is padded correctly to [4, 8, 12, 20], but `seq_lens_list` = [5742, 4700, 7996, 5738] ends with a corrupted value.

Does this PR introduce any user-facing change?
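The padding the first fix requires can be sketched as a small helper (a hypothetical function for illustration, not the code vLLM actually uses):

```python
def pad_to_graph_size(seq_lens: list[int], graph_size: int,
                      pad_value: int = 0) -> list[int]:
    """Pad (or conservatively truncate) seq_lens so its length matches the
    batch size the CUDA graph was captured with. Hypothetical helper."""
    if len(seq_lens) >= graph_size:
        return seq_lens[:graph_size]
    return seq_lens + [pad_value] * (graph_size - len(seq_lens))

# num_reqs = 3, but the graph expects a list of 4 entries:
print(pad_to_graph_size([5742, 4700, 7996], 4))  # [5742, 4700, 7996, 0]
```

A fixed-length list is what the captured graph replays against, so the metadata must always be padded to the capture size rather than left at `num_reqs` entries.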
N/A
How was this patch tested?
The code to reproduce:

The program crashes with the exception shown below:

After the changes in this PR:

The result of the code is shown:
Unfortunately, the data used in CI is too short to reproduce the crash. Even if we increase the CI data length by 1000 times, the crash reproduces only with a very low probability.