[Model Runner V2] Fix draft logits not populated during cudagraph replay#37639
Merged
WoosukKwon merged 1 commit into vllm-project:main on Mar 20, 2026
Conversation
Contributor
Code Review
This pull request fixes an issue with draft logits not being populated during CUDA graph replay in Eagle speculative decoding. The fix involves moving draft_logits from RequestState to EagleSpeculator, which is a sound approach. The implementation correctly updates the code to reflect this change. However, I've identified a potential precision issue with the data type of the newly located draft_logits tensor that could affect the correctness of probabilistic rejection sampling.
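To illustrate the kind of buffer the review is pointing at, here is a minimal sketch of a speculator-owned draft-logits buffer; the class and field names are illustrative assumptions, not vLLM's actual code. Keeping the persistent buffer in float32 avoids precision loss when the model itself runs in bf16/fp16 and the logits later feed probabilistic rejection sampling.

```python
import torch

class EagleSpeculatorBuffers:
    """Illustrative sketch only (not vLLM code): a persistent draft-logits
    buffer owned by the speculator so it can be baked into CUDA graph capture."""

    def __init__(self, max_num_reqs: int, num_spec_steps: int,
                 vocab_size: int, device: torch.device):
        # float32 (rather than the model's bf16/fp16 compute dtype) keeps the
        # softmax / rejection-sampling probabilities numerically stable.
        self.draft_logits = torch.empty(
            (max_num_reqs, num_spec_steps, vocab_size),
            dtype=torch.float32,
            device=device,
        )
```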
WoosukKwon reviewed Mar 20, 2026
Signed-off-by: Giancarlo Delfin <gdelfin@inferact.ai>
Force-pushed from 18c2f79 to 33b9e44
TLDR
When using probabilistic rejection sampling with Eagle speculative decoding and CUDA graphs enabled, the draft logits for speculative steps 1+ were not being written, causing incorrect rejection sampling behavior.
Root Cause
The draft logits tensor (`draft_logits_out`) passed into `EagleSpeculator` was not being passed into `EagleCudaGraphManager`, and was therefore not included in the CUDA graph capture.
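For context, here is a minimal standalone PyTorch sketch (not vLLM code) of the CUDA graph semantics behind this: replay reruns the captured kernels against the buffers that were captured, so a tensor that was never wired into the capture is never written during replay.

```python
import torch

# Static buffers that participate in the capture.
static_in = torch.zeros(4, device="cuda")
static_out = torch.empty(4, device="cuda")

# Warm up on a side stream before capture, as PyTorch recommends.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    static_out.copy_(static_in * 2)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_out.copy_(static_in * 2)

static_in.fill_(3.0)
g.replay()
print(static_out)  # tensor([6., 6., 6., 6.]) -- the captured buffer is updated

other_out = torch.zeros(4, device="cuda")  # allocated after capture
g.replay()
print(other_out)   # still all zeros: replay never writes into a tensor the graph
                   # did not capture, which is how the draft logits ended up empty
```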
Fix
Move `draft_logits` from `RequestState` to `EagleSpeculator`, matching the existing pattern used by `draft_tokens`. This approach also makes it easier for my upcoming PR to add FULL cudagraph support for Eagle prefill: #37588
Benchmark
Comprehensive accuracy & performance benchmark results across multiple models (llama3, mimo, qwen, glm), spec decode methods (eagle-1, eagle-3, MTP), parallelisms (TP, EP), output lengths (1, 1024), and concurrencies (1, 8, 64): https://gistpreview.github.io/?1dc71a0fa70ae78b1aa2a70e635a5a01.
We see clear improvements in acceptance rates, and increases in output token throughput for some models. For other models, however, the bump in acceptance rates does not offset the per-step overhead of probabilistic rejection sampling.
MRV2 + probabilistic rejection sampling yields better acceptance rates at every draft step than MRV2 + strict and MRV1. The improvement is not as dramatic as before, because the earlier numbers were measured with the incorrect draft logits for steps 1+. In retrospect, the draft acceptance rates for positions 1 and 2 in my previous PR (#37364) were suspicious; I should add some accuracy tests soon.
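For reference, the per-step work being traded off above is the standard probabilistic rejection-sampling rule from speculative decoding; the sketch below is a generic version, not vLLM's rejection sampler. It accepts a draft token with probability min(1, p_target / p_draft) and otherwise resamples from the normalized residual distribution, which is why correct draft logits (and hence draft probabilities) are needed at every speculative position.

```python
import torch

def rejection_sample_step(draft_token: int,
                          draft_probs: torch.Tensor,   # [vocab_size], from draft logits
                          target_probs: torch.Tensor,  # [vocab_size], from target logits
                          generator: torch.Generator | None = None) -> tuple[int, bool]:
    """Generic accept/reject rule for one speculative position (illustrative sketch)."""
    p_target = target_probs[draft_token]
    p_draft = draft_probs[draft_token]
    accept_prob = torch.clamp(p_target / p_draft, max=1.0)
    if torch.rand(1, generator=generator).item() < accept_prob:
        return draft_token, True
    # Rejected: resample from the normalized residual distribution
    # max(p_target - p_draft, 0) / sum(...), per standard speculative decoding.
    residual = torch.clamp(target_probs - draft_probs, min=0.0)
    residual = residual / residual.sum()
    new_token = int(torch.multinomial(residual, num_samples=1).item())
    return new_token, False
```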