[Core] Support logprobs with spec decode + async scheduling #29223
njhill merged 1 commit into vllm-project:main
Conversation
Code Review
This pull request adds support for logprobs with speculative decoding and asynchronous scheduling. The changes involve refactoring how cumulative token counts are calculated and passed to correctly process logprobs in these scenarios. The modifications in vllm/v1/sample/rejection_sampler.py and vllm/v1/worker/gpu_model_runner.py seem correct and well-structured. New tests are added to cover these cases. My main concern is the significant increase in tolerance in tests/v1/sample/test_logprobs.py for comparing logprobs, which might hide numerical precision issues. Please see the specific comment for details.
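For reference, the kind of tolerance-based comparison at issue can be sketched as follows (hypothetical values and a hypothetical `logprobs_close` helper; the actual assertions live in tests/v1/sample/test_logprobs.py):

```python
import math

def logprobs_close(ref, got, atol=1e-2):
    """Elementwise closeness check of the sort the test relaxes.

    A larger atol accepts bigger per-token logprob differences, which is
    why loosening it can mask genuine numerical-precision regressions.
    """
    return all(math.isclose(r, g, abs_tol=atol) for r, g in zip(ref, got))

ref = [-1.2031, -0.4873]  # hypothetical reference logprobs
got = [-1.2109, -0.4902]  # hypothetical logprobs under spec decode

print(logprobs_close(ref, got, atol=1e-2))  # True
print(logprobs_close(ref, got, atol=1e-3))  # False
```

The trade-off is that speculative decoding can legitimately perturb logprobs slightly (different kernels, different batching), so some relaxation is expected; the reviewer's concern is about how much.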
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Nick Hill <nhill@redhat.com>
Force-pushed from 3caee35 to cc55c14.
```python
cu_num_tokens = None
if return_cu_num_tokens:
    cu_num_tokens = [0] + valid_mask.sum(axis=1).cumsum().tolist()
if len(discard_req_indices) > 0:
```
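For context, the cumulative-count computation can be illustrated with a small standalone sketch (using NumPy and a hypothetical validity mask, not vLLM's actual tensors):

```python
import numpy as np

# Hypothetical per-request validity mask: 3 requests, up to 4 draft tokens
# each. True marks a token that survived rejection sampling.
valid_mask = np.array([
    [True, True, False, False],   # request 0 kept 2 tokens
    [True, False, False, False],  # request 1 kept 1 token
    [True, True, True, True],     # request 2 kept 4 tokens
])

# Cumulative token counts, prefixed with 0, mirroring the line above:
# cu_num_tokens[i]:cu_num_tokens[i+1] slices request i's tokens out of a
# flattened per-token tensor such as logprobs.
cu_num_tokens = [0] + valid_mask.sum(axis=1).cumsum().tolist()
print(cu_num_tokens)  # [0, 2, 3, 7]
```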
Why is this done after computing cu_num_tokens?
Because `cu_num_tokens` is used to index into the logprobs tensors, which don't take the discarded indices into account, discarding beforehand results in incorrect output.
Originally it was done before, which was a bug; fixed by #29216.
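A minimal sketch of why the order matters (hypothetical shapes and values, not vLLM's code): the offsets must be computed against the full mask, because the flattened logprobs tensor still contains rows for the discarded requests.

```python
import numpy as np

valid_mask = np.array([
    [True, True, False],   # request 0 kept 2 tokens
    [True, True, True],    # request 1 kept 3 tokens
])
# Flattened logprobs laid out for ALL tokens kept by the mask: 2 + 3 = 5 rows.
logprobs = np.arange(5 * 4).reshape(5, 4)

# Correct order: compute offsets over the full mask, THEN discard requests.
cu = [0] + valid_mask.sum(axis=1).cumsum().tolist()  # [0, 2, 5]
discard_req_indices = [0]  # pretend request 0 is discarded
req1_logprobs = logprobs[cu[1]:cu[2]]  # rows 2..4, as intended

# Buggy order: blanking the discarded request's mask row first shifts the
# offsets, so request 1 would be sliced from the wrong rows.
bad_mask = valid_mask.copy()
bad_mask[0] = False
bad_cu = [0] + bad_mask.sum(axis=1).cumsum().tolist()  # [0, 0, 3]
wrong = logprobs[bad_cu[1]:bad_cu[2]]  # rows 0..2 — not request 1's tokens
```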
…ject#29223) Signed-off-by: Nick Hill <nhill@redhat.com>
### What this PR does / why we need it?
Currently, we are using `AscendRejectionSampler`, which extends `RejectionSampler`, in spec decoding. `AscendRejectionSampler` overrides `forward` of `RejectionSampler`, aiming only to replace the `rejection_sample` func. This means much of `RejectionSampler`'s code cannot be reused, for example:
- vllm-project/vllm#19482
- vllm-project/vllm#26060
- vllm-project/vllm#29223

#### Proposed Change:
- Delete `AscendRejectionSampler` and use `RejectionSampler` directly in the model runner.
- Patch `RejectionSampler.expand_batch_to_tokens` and `RejectionSampler.rejection_sample`; a better approach may be to make them custom ops.
- Modify `NPUModelRunner` following vllm-project/vllm#26060

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- [x] test logits processor for spec decoding
- [x] test logprobs for spec decoding
- [x] test logprobs for spec decoding + async scheduling (tested with #4893)

- vLLM version: v0.12.0
- vLLM main: vllm-project/vllm@ad32e3e

Signed-off-by: realliujiaxu <realliujiaxu@163.com>
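The method-patching approach described above could look roughly like this (a generic monkey-patch sketch with a placeholder class; the real vllm-ascend patch targets `vllm.v1.sample.rejection_sampler.RejectionSampler`):

```python
# Placeholder class standing in for vLLM's RejectionSampler; the values and
# logic here are illustrative only.
class RejectionSampler:
    def rejection_sample(self, draft_tokens):
        # Upstream (e.g. CUDA-oriented) implementation.
        return [t for t in draft_tokens if t >= 0]

def npu_rejection_sample(self, draft_tokens):
    # Hypothetical NPU-friendly replacement. forward() and the rest of the
    # class are reused unchanged, which is the point of patching only this
    # one method instead of subclassing and overriding forward().
    return [t for t in draft_tokens if t > 0]

# Patch the single method rather than maintaining a parallel subclass.
RejectionSampler.rejection_sample = npu_rejection_sample

sampler = RejectionSampler()
print(sampler.rejection_sample([0, 1, -2, 3]))  # [1, 3]
```

The PR description itself notes the caveat: patching is fragile against upstream refactors, which is why making these functions custom ops is floated as the better long-term option.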
…ject#29223) Signed-off-by: Nick Hill <nhill@redhat.com> Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>