[Bugfix] Update target probs to target logits in rejection sample#6685

Merged
whx-sjtu merged 1 commit into vllm-project:main from slippersss:bugfix_prob_logit
Feb 11, 2026

Conversation

@slippersss
Contributor

@slippersss slippersss commented Feb 11, 2026

What this PR does / why we need it?

This PR updates `target_probs` to `target_logits` in `rejection_sample`, to catch up with vllm-project/vllm#32852. Otherwise, sampling with temperature incurs an accuracy problem where tokens can be accepted or rejected unreasonably.
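To see why the distinction matters, here is a minimal, self-contained sketch (plain Python, no vLLM code assumed) showing that rescaling already-normalized probabilities by a temperature is not the same as applying the temperature to the logits before the softmax:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.0]
T = 0.5  # temperature < 1 should sharpen the distribution

# Correct: scale the logits, then normalize.
correct = softmax([x / T for x in logits])

# Incorrect: normalize first, then try to rescale the probabilities.
probs = softmax(logits)
wrong = softmax([p / T for p in probs])

# The two distributions disagree noticeably, so accept/reject decisions
# made against the wrong one can flip tokens that should have been kept.
assert abs(correct[0] - wrong[0]) > 0.1
```

The gap only grows as the temperature moves further from 1, which is why the function must receive logits rather than probabilities.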

Does this PR introduce any user-facing change?

N/A

How was this patch tested?

By CI.

Signed-off-by: Zetong Li <slippersss@126.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @slippersss, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the rejection_sample function to correctly handle target distributions as logits rather than probabilities. This change resolves an accuracy issue that could arise during temperature sampling by ensuring that the function's inputs and internal calculations are consistent with the expected logarithmic scale, thereby preventing unreasonable token acceptance or rejection.

Highlights

  • Parameter Renaming: Renamed the target_probs parameter to target_logits in the rejection_sample function signature and updated all corresponding internal references and assertions.
  • Logic Correction: Modified the greedy sampling logic to correctly use target_logits.argmax and introduced an explicit softmax conversion from target_logits to target_probs before further probability-based operations.
  • Accuracy Fix: Addressed an accuracy problem in temperature sampling by ensuring the rejection_sample function operates with logits as intended, aligning with recent upstream changes.


Changelog
  • vllm_ascend/sample/rejection_sampler.py
    • Updated the rejection_sample function signature to accept target_logits instead of target_probs.
    • Adjusted all internal references and assertions within rejection_sample to use target_logits.
    • Modified the greedy sampling path to calculate target_argmax directly from target_logits.
    • Introduced a softmax operation to convert target_logits to target_probs before subsequent probability-dependent steps.
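Taken together, the changelog items amount to a flow like the following sketch (illustrative only: the names follow the PR summary, not the actual vllm_ascend/sample/rejection_sampler.py source, and the probability-based accept/reject step is reduced to a placeholder):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def rejection_sample(target_logits, is_greedy):
    """Illustrative sketch of the reworked flow: accept logits, use them
    directly for the greedy argmax, and only convert to probabilities
    (via softmax) for the probability-dependent path."""
    assert len(target_logits) > 0
    if is_greedy:
        # Greedy path: argmax works on logits directly, no softmax needed.
        return max(range(len(target_logits)), key=target_logits.__getitem__)
    # Random path: convert to probabilities before any probability-based
    # operation (accept/reject, recovery sampling, ...).
    target_probs = softmax(target_logits)
    # Placeholder: the real code samples from target_probs here.
    return max(range(len(target_probs)), key=target_probs.__getitem__)
```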
Activity
  • The pull request was tested by CI.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions
Copy link
Copy Markdown
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request refactors the rejection_sample function to accept target_logits instead of target_probs. This is a crucial bugfix that allows for the correct application of temperature scaling on logits, addressing a potential accuracy issue in sampling. The implementation is sound: it correctly uses argmax on logits for the greedy path and applies softmax to derive probabilities for the random sampling path. The changes are consistent and well-contained. I have added one comment to suggest an improvement for robustness by checking for NaN values in the input logits.
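The reviewer's point that the greedy path can take argmax directly on the logits rests on softmax being strictly monotone, so it never changes which index is largest. A quick property check (plain Python, hypothetical helper names) confirms it:

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

random.seed(0)
for _ in range(100):
    logits = [random.uniform(-5.0, 5.0) for _ in range(10)]
    probs = softmax(logits)
    # softmax preserves ordering, so the greedy choice is identical
    # whether it is computed on logits or on probabilities.
    assert (max(range(10), key=logits.__getitem__)
            == max(range(10), key=probs.__getitem__))
```

This is why only the random-sampling path needs the explicit logits-to-probabilities conversion.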

```diff
 assert draft_probs is None or draft_probs.ndim == 2
 assert cu_num_draft_tokens.ndim == 1
-assert target_probs.ndim == 2
+assert target_logits.ndim == 2
```
Contributor


Severity: high

To improve robustness, it's good practice to check for NaN values in input tensors. Logits can sometimes become NaN due to numerical instability in upstream computations. If target_logits contains NaNs, they will propagate through the softmax operation and subsequent calculations, leading to incorrect sampling results (e.g., argmax on a NaN tensor often returns 0, a silent failure). Adding an assertion here would help catch such issues early.

Suggested change

```diff
 assert target_logits.ndim == 2
+assert not torch.isnan(target_logits).any(), "target_logits should not contain NaNs"
```
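To illustrate the failure mode the reviewer describes, here is a small standalone demonstration (plain Python standing in for the tensor ops) of a single NaN logit poisoning every softmax output, and of an early assertion catching it instead:

```python
import math

logits = [1.0, float("nan"), 0.5]

# A NaN logit survives exponentiation and normalization, poisoning
# every probability downstream of the softmax.
m = max(x for x in logits if not math.isnan(x))
exps = [math.exp(x - m) for x in logits]   # math.exp(nan) is nan
total = sum(exps)                          # finite + nan = nan
probs = [e / total for e in exps]          # every entry is now nan
assert all(math.isnan(p) for p in probs)

# The suggested assertion fails fast instead of sampling from garbage:
try:
    assert not any(math.isnan(x) for x in logits), \
        "target_logits should not contain NaNs"
except AssertionError as e:
    caught = str(e)
assert caught == "target_logits should not contain NaNs"
```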

@whx-sjtu whx-sjtu added the `ready` (read for review) and `ready-for-test` (start test by label for PR) labels on Feb 11, 2026
@whx-sjtu whx-sjtu re-applied the `ready-for-test` (start test by label for PR) label on Feb 11, 2026
@whx-sjtu whx-sjtu merged commit 140fcaf into vllm-project:main Feb 11, 2026
65 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Feb 12, 2026
…to qwen3next_rebase

* 'main' of https://github.com/vllm-project/vllm-ascend:
  [Docs] Fix GLM-5 deploy command (vllm-project#6711)
  [npugraph_ex]enable npugraph_ex by default (vllm-project#6664)
  [doc]add GLM5.md (vllm-project#6709)
  [Model] GLM5 adaptation (vllm-project#6642)
  [Bugfix] Update target probs to target logits in rejection sample (vllm-project#6685)
  [Main][Ops] Make triton rope support index_selecting from cos_sin_cache (vllm-project#5450)
  [CI]fix nightly multi node test error for wait for pod ready (vllm-project#6675)
  [main  to main] upgrade main 0210 (vllm-project#6673)
  [main][Quant] Remove unused rotation functions and parameters from W4A4 LAOS quantization (vllm-project#6648)
  [Test][BugFix] Fix torch.rand usage in triton penalty test (vllm-project#6680)
  Add Worker Interface:check_health (vllm-project#6681)
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
…lm-project#6685)

### What this PR does / why we need it?
This PR aims to update `target_probs` to `target_logits` in
`rejection_sample`, for catching up with
vllm-project/vllm#32852. Otherwise, sampling
with temperature will incur accuracy problem where tokens can be
accepted or rejected unreasonably.

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
by ci

- vLLM version: v0.15.0
- vLLM main:
vllm-project/vllm@1339784

Signed-off-by: Zetong Li <slippersss@126.com>
Signed-off-by: momochenchuw <chenchuw@huawei.com>
@wangxiyuan wangxiyuan mentioned this pull request Feb 24, 2026
banxiaduhuo pushed a commit to banxiaduhuo/vllm-ascend that referenced this pull request Feb 26, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
yangzhe-2026 pushed a commit to yangzhe-2026/vllm-ascend that referenced this pull request May 6, 2026

Labels

`ready` (read for review), `ready-for-test` (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants