[Bugfix] Fix potential EAGLE spec decode segfault during graph capture #32818
Merged
LucasWilkinson merged 1 commit into vllm-project:main on Jan 22, 2026
Conversation
Signed-off-by: Matthew Wong <Matthew.Wong2@amd.com>
Contributor
Code Review
This pull request addresses a bug in SpecDecodeBaseProposer.dummy_run where the use_cudagraphs parameter was being ignored, leading to unconditional CUDA graph dispatch and potential segmentation faults. The change correctly restores the intended logic by wrapping the cudagraph_dispatcher.dispatch call in a conditional block that respects the use_cudagraphs flag. When use_cudagraphs is false, it now correctly sets cudagraph_runtime_mode to CUDAGraphMode.NONE. The fix is correct, well-contained, and directly addresses the described bug. The code is clear and I have no further suggestions.
LucasWilkinson approved these changes on Jan 22, 2026
Collaborator
LucasWilkinson left a comment
LGTM, thanks for the fix!
monajafi-amd pushed a commit to monajafi-amd/vllm that referenced this pull request on Jan 23, 2026
[Bugfix] Fix potential EAGLE spec decode segfault during graph capture (vllm-project#32818)
Signed-off-by: Matthew Wong <Matthew.Wong2@amd.com>
Signed-off-by: mohammad najafi <mohammad.najafi@amd.com>
cwazai pushed a commit to cwazai/vllm that referenced this pull request on Jan 25, 2026
[Bugfix] Fix potential EAGLE spec decode segfault during graph capture (vllm-project#32818)
Signed-off-by: Matthew Wong <Matthew.Wong2@amd.com>
Signed-off-by: 陈建华 <1647430658@qq.com>
lapy pushed a commit to lapy/vllm that referenced this pull request on Jan 27, 2026
[Bugfix] Fix potential EAGLE spec decode segfault during graph capture (vllm-project#32818)
Signed-off-by: Matthew Wong <Matthew.Wong2@amd.com>
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request on Feb 19, 2026
[Bugfix] Fix potential EAGLE spec decode segfault during graph capture (vllm-project#32818)
Signed-off-by: Matthew Wong <Matthew.Wong2@amd.com>
Purpose
This PR restores the original logic in SpecDecodeBaseProposer's dummy_run function, which was changed by the refactor in #30143. Credits also to @micah-wil for the investigation and fix.

After the refactor, the use_cudagraphs parameter is left unused, which incorrectly leads to CUDA graph dispatch where it previously would not have occurred. We are seeing this manifest on MI300X as a segfault during CUDA graph capture with EAGLE spec decode on TP > 1, e.g. by running

vllm serve meta-llama/Llama-3.1-8B-Instruct --max-model-len 2048 -tp 2 --max-num-batched-tokens 2048 --speculative-config='{"method": "eagle", "model": "yuhuili/EAGLE-LLaMA3.1-Instruct-8B", "num_speculative_tokens": 3, "max_model_len": 2048}'

or

vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruct --max-model-len 2048 -tp 2 --max-num-batched-tokens 2048 --speculative-config='{"method": "eagle", "model": "morgendave/EAGLE-Llama-4-Scout-17B-16E-Instruct", "num_speculative_tokens": 3, "max_model_len": 2048}' --load-format dummy --max-num-seqs 1 --compilation-config='{"cudagraph_capture_sizes": [1]}'

This is also leading to failures in tests like tests/v1/e2e/test_spec_decode.py::test_eagle_correctness.

Test Plan

pytest -sv tests/v1/e2e/test_spec_decode.py -k test_eagle_correctness

Test Result

The test above should pass.