[V1][spec decode] return logprobs for spec decoding #26060
Merged: 22quinn merged 3 commits into vllm-project:main on Oct 23, 2025
Conversation
The branch was force-pushed several times before merge (48bc380 → cc8bc92, 2cf8c8b → 3969a0c, 2124147 → 5797161, 8fe10f3 → fa5c0da, fa5c0da → 8636461).
Signed-off-by: Giancarlo Delfin <gdelfin@meta.com>
Author (Collaborator): @njhill I resolved the failing tests :)
22quinn (Collaborator) approved these changes on Oct 23, 2025: Thanks!
usberkeley pushed a commit to usberkeley/vllm that referenced this pull request on Oct 23, 2025: Signed-off-by: Giancarlo Delfin <gdelfin@meta.com> Co-authored-by: Nick Hill <nhill@redhat.com>
845473182 pushed a commit to raindaywhu/vllm that referenced this pull request on Oct 24, 2025: a merge of 148 commits into 'step_forward', including [V1][spec decode] return logprobs for spec decoding (vllm-project#26060).
0xrushi pushed a commit to 0xrushi/vllm that referenced this pull request on Oct 26, 2025: Signed-off-by: Giancarlo Delfin <gdelfin@meta.com> Co-authored-by: Nick Hill <nhill@redhat.com> Signed-off-by: 0xrushi <6279035+0xrushi@users.noreply.github.com>
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request on Oct 28, 2025:

### What this PR does / why we need it?
Sync with vllm-project/vllm@c9461e0:
- Fix `spec decode rejection sampler`, caused by vllm-project/vllm#26060
- Fix some `import`s, caused by vllm-project/vllm#27374
- Fix `scheduler_config.send_delta_data`, caused by #3719
- Fix `init_with_cudagraph_sizes`, caused by vllm-project/vllm#26016
- Fix `vl model` replacement of PatchEmbed's conv3d with a linear layer, caused by vllm-project/vllm#27418

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
CI passed with newly added and existing tests.

- vLLM version: v0.11.0rc3
- vLLM main: vllm-project/vllm@c9461e0

Signed-off-by: Icey <1790571317@qq.com>
ilmarkov pushed a commit to neuralmagic/vllm that referenced this pull request on Nov 7, 2025: Signed-off-by: Giancarlo Delfin <gdelfin@meta.com> Co-authored-by: Nick Hill <nhill@redhat.com>

rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request on Nov 10, 2025: Signed-off-by: Giancarlo Delfin <gdelfin@meta.com> Co-authored-by: Nick Hill <nhill@redhat.com>
luolun and hwhaokun pushed the same vllm-ascend fix commit (see the vllm-project/vllm-ascend entry above) to luolun/vllm-ascend and hwhaokun/vllm-ascend on Nov 19, 2025, each referencing this pull request.
Purpose
Add support for returning logprobs for v1 spec decoding.
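For context, a minimal sketch of how this feature is exercised from the offline API. The model name and the ngram drafter settings here are assumptions for illustration, not taken from this PR:

```python
from vllm import LLM, SamplingParams

# Assumed model and drafter settings, for illustration only.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "ngram",
        "num_speculative_tokens": 3,
        "prompt_lookup_max": 3,
    },
)

# logprobs=5 requests the top-5 logprobs for every sampled token,
# which previously was unsupported when spec decoding was enabled.
params = SamplingParams(temperature=0.0, max_tokens=32, logprobs=5)
outputs = llm.generate(["The capital of France is"], params)

for token_logprobs in outputs[0].outputs[0].logprobs:
    # Each entry maps token_id -> Logprob(logprob=..., rank=..., decoded_token=...)
    print(token_logprobs)
```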
Test Plan
Automated Tests
Added automated tests for logprobs that compare the spec decode LLM's per-token output logprobs with those of a reference model. The comparison is done for `raw_logits`, `raw_logprobs`, `processed_logits`, and `processed_logprobs`. The rejection sampler still works as expected.
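The test code itself is not reproduced here; a rough sketch of the comparison it performs, with the model and drafter choices as assumptions, could look like:

```python
import math

from vllm import LLM, SamplingParams

PROMPTS = ["Hello, my name is", "The future of AI is"]
params = SamplingParams(temperature=0.0, max_tokens=16, logprobs=5)

# Reference run: same target model, no speculative decoding.
ref_llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
ref_outputs = ref_llm.generate(PROMPTS, params)

# Spec decode run (drafter settings are assumptions).
spec_llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={"method": "ngram", "num_speculative_tokens": 3,
                        "prompt_lookup_max": 3},
)
spec_outputs = spec_llm.generate(PROMPTS, params)

# With greedy sampling the accepted tokens must match, and the per-token
# logprobs should agree up to numerical noise.
for ref, spec in zip(ref_outputs, spec_outputs):
    ref_seq, spec_seq = ref.outputs[0], spec.outputs[0]
    assert list(ref_seq.token_ids) == list(spec_seq.token_ids)
    for ref_lp, spec_lp, tok in zip(ref_seq.logprobs, spec_seq.logprobs,
                                    ref_seq.token_ids):
        assert math.isclose(ref_lp[tok].logprob, spec_lp[tok].logprob,
                            abs_tol=1e-2)
```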
Manual Test
Setup
Ran one LLM server with speculative decoding enabled and a second server with standard decoding as a reference, as sketched below.
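The original launch commands are not preserved above. A hypothetical equivalent setup, with the model, ports, and drafter settings all assumed:

```python
import subprocess

# Spec decode server on port 8000 (drafter settings are assumptions).
spec_server = subprocess.Popen([
    "vllm", "serve", "meta-llama/Llama-3.1-8B-Instruct",
    "--port", "8000",
    "--speculative-config",
    '{"method": "ngram", "num_speculative_tokens": 3, "prompt_lookup_max": 3}',
])

# Standard decode server on port 8001 for reference.
ref_server = subprocess.Popen([
    "vllm", "serve", "meta-llama/Llama-3.1-8B-Instruct",
    "--port", "8001",
])
```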
Single Request Test
Sent a single completion request with logprobs enabled to both servers.
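A request along these lines (prompt and parameter values are assumptions) exercises the new logprobs path against both servers:

```python
import requests

payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "prompt": "The capital of France is",
    "max_tokens": 32,
    "temperature": 0,
    "logprobs": 5,  # top-5 logprobs per generated token
}

# Port 8000 = spec decode server, port 8001 = standard server (see Setup).
for port in (8000, 8001):
    resp = requests.post(f"http://localhost:{port}/v1/completions", json=payload)
    choice = resp.json()["choices"][0]
    print(port, choice["logprobs"]["token_logprobs"])
```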
Verified that the standard LLM and the spec decode LLM returned identical logprobs.
Multiple Request Test
Sent four concurrent requests to each server with a short script.
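A sketch of such a script, with the prompts and parameters as placeholder assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

import requests

PROMPTS = [
    "The capital of France is",
    "Explain speculative decoding in one sentence:",
    "Write a haiku about GPUs:",
    "List three uses of logprobs:",
]

def send(prompt: str, port: int = 8000) -> list[float]:
    payload = {
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "prompt": prompt,
        "max_tokens": 32,
        "temperature": 0,
        "logprobs": 5,
    }
    resp = requests.post(f"http://localhost:{port}/v1/completions", json=payload)
    return resp.json()["choices"][0]["logprobs"]["token_logprobs"]

# Fire all four requests at once so the server batches them together.
with ThreadPoolExecutor(max_workers=4) as pool:
    for logprobs in pool.map(send, PROMPTS):
        print(logprobs)
```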
The output logprobs for both the standard and spec decode LLMs were approximately the same.