[V1][Spec Decode][Feature] Spec decode with probs #20459
Conversation
Summary of Changes
Hello @andylolu2, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the speculative decoding feature, particularly for the Eagle model, by introducing the capability to generate and utilize probability distributions alongside proposed draft tokens. This allows for more sophisticated and accurate sampling during the drafting phase, improving the overall efficiency and quality of the speculative decoding process. Additionally, new tests have been introduced to ensure the stability of speculative decoding under complex scheduling conditions.
Highlights
- Speculative Decoding with Probabilities: The core change enables the Eagle speculative decoding method to not only propose draft tokens but also return their associated probability distributions. This moves beyond simple greedy (argmax) token generation to allow for more nuanced sampling strategies during drafting.
- Integration of Draft Probabilities: The `GPUModelRunner` now stores these draft probabilities per request in the `CachedRequestState` and passes them to the rejection sampler. This is crucial for the main model to efficiently verify and accept/reject the drafted tokens based on their likelihoods.
- Enhanced Testing for Speculative Decoding: New comprehensive tests have been added to ensure the robustness of speculative decoding under various scheduling scenarios, including partial scheduling and request preemption. This validates the stability of the new probability-aware drafting mechanism.
- Refactored Draft Proposal Interface: The `propose_draft_token_ids` method in `GPUModelRunner` has been renamed to `propose_draft` and its signature updated to explicitly return both the drafted token IDs and their probabilities, standardizing the interface for different speculative decoding methods (see the sketch after this list).
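To make the interface shape concrete, here is a minimal sketch of a draft proposer that returns both token ids and the distributions they were drawn from; the tensor shapes and the exact signature are illustrative assumptions, not the PR's actual code.

```python
from typing import Optional

import torch


def propose_draft(
    draft_logits: torch.Tensor,   # [num_reqs, num_spec_tokens, vocab_size]
    temperature: torch.Tensor,    # [num_reqs], assumed > 0 here
) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
    """Return draft token ids plus the probabilities they were sampled from."""
    probs = torch.softmax(draft_logits / temperature.view(-1, 1, 1), dim=-1)
    # Sampling (instead of a plain argmax) keeps the draft distribution
    # available so the rejection sampler can verify the proposed tokens.
    token_ids = torch.distributions.Categorical(probs=probs).sample()
    # A proposer that produces no probabilities (e.g. ngram) could return
    # None for the second element.
    return token_ids, probs
```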
Code Review
This pull request adds support for returning probabilities along with tokens in speculative decoding, which is a key requirement for rejection sampling. The changes are well-implemented across the gpu_model_runner and eagle proposer, with corresponding updates to tests. A new integration test for scheduling with speculative decoding is also a great addition. I have one minor suggestion to improve comment clarity for future maintainability.
@WoosukKwon PR is ready for review
Hi @andylolu2 - thanks for this PR! The AL improvement looks good. One reason for using argmax was to reduce the overhead of sampling from the draft; TRTLLM and TGI also use argmax. Could you also run the online benchmark on MTBench to see how the e2e gains in the TPOT metric due to EAGLE look with and without this PR? Example cmd: #18847
@ekagra-ranjan I've added online benchmark numbers. The difference is not massive at low temperature (the results are a bit noisy in general). I think it is still worth merging this since:
@andylolu2 - thanks for adding the plot with TPOT. Can you also share the absolute numbers and the relative gain/degradation for different temperatures?
The absolute numbers are in the "Raw numbers" collapsible section under the plot.
The drop in gains at low temperatures like 0.3 is ~10%, which is a lot. Cohere, for example, uses 0.3 as the default temperature. Coding and reasoning tasks usually use lower temperatures, which matters even more with thinking/reasoning models. Can we add an if-else to use the argmax method when the engine is using a temperature < 0.75, to preserve the perf in these scenarios?
I think the choice of sampling temperature for the draft model is, in general, highly model-specific. I suggest we make the threshold T configurable with the following heuristic:
Does that sound reasonable to you?
Yes, we can have the threshold T as a parameter. Perhaps the default value should be 0.75 based on your results, instead of making it a required param.
Oh, maybe I missed it, but in your experiment are you using a different temperature for the draft than for the target?
I'm using the same temperature for both.
@ekagra-ranjan I realised it's actually quite difficult to make my proposal work. The problem is that the rejection sampler does not allow both:
Making this work would require a fairly large change to the rejection sampler, and it would be a rabbit hole to make sure I don't introduce unwanted overheads. Instead, I've optimized the draft model sampler a bit; the overhead is now ~4% in the worst case (the sampling trick is sketched below). New numbers are updated in the PR description.
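For context on why sampling (rather than argmax) can stay cheap, here is a minimal sketch of the exponential-race (Gumbel-style) trick, similar in spirit to the rejection-sampler snippet quoted later in this thread; the function name and shapes are illustrative, not the PR's exact kernel.

```python
from typing import Optional

import torch


def sample_draft_with_probs(
    logits: torch.Tensor,              # [num_tokens, vocab_size]
    temperature: torch.Tensor,         # [num_tokens], assumed > 0
    generator: Optional[torch.Generator] = None,
) -> tuple[torch.Tensor, torch.Tensor]:
    """Draw one token per row and also return the full distribution."""
    probs = torch.softmax(logits / temperature.unsqueeze(-1), dim=-1)
    # argmax(probs / Exp(1)) is an exact draw from `probs`, so the extra cost
    # over plain argmax is one elementwise divide plus the RNG fill.
    q = torch.empty_like(probs).exponential_(generator=generator)
    token_ids = probs.div(q).argmax(dim=-1)
    return token_ids, probs
```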
Do you think it's simpler to select the old argmax approach only if all of the requests in the batch have temperature below T, or instead select the new approach only if all requests in the batch have temperature above T?
Nice. Could you please share which line of code/commit does this?
I don't see an easy way. Because drafts are not used immediately (usually the step right after, but in theory arbitrarily later due to preemption), even if we fall back to the argmax approach during drafting, we might still end up with some requests with and some without draft probs during verification. A change to the rejection sampler would be needed.
Forgot to push, haha. It's this commit 20e43fd. I'd also like to add that the sampling overhead will be negligible for larger models (e.g. DeepSeekV3/R1), so they should benefit even more from the increase in AL.
@jmamou, how does rebasing your eagle-frspec changes onto this branch affect eagle-frspec’s acceptance?
@andylolu2 - I was comparing the old and the new bench numbers. Can you share why the new numbers look worse than the older ones?
@ekagra-ranjan I think it's due to the data I used, and for the same reason I switched the online benchmark numbers to the new dataset. I've also implemented the "kill switch" mechanism.
@andylolu2, I believe the acceptance should be monotonically increasing regardless of the data (for exact expected acceptance rates, see sgl-project/sglang#9877). Edit: non-greedy sampling should have acceptance at least as high as greedy sampling.
Thanks for adding that. I think it's better to use the same data as the earlier benchmarks so the numbers are comparable.
Yes, agreed. My numbers agree with that too.
Makes sense, let me re-run the numbers for easier comparison across PRs. |
I've re-run the benchmarks with the same data for easier comparison across PRs.
LiuXiaoxuanPKU left a comment:
Thanks for the PR!! It's a great feature and promising numbers! I just have a concern about the seeding here, see detailed comment below.
```python
# TODO(woosuk): Consider seeds.
q = torch.empty_like(probs)
q.exponential_()
```
I have a concern about this line:
The line q.exponential_() fills the tensor q with samples from an exponential distribution using PyTorch's global random number generator. This affects the global random seed and will advance the state of PyTorch's RNG. If the main model or other parts of code rely on reproducibility through seeding (e.g., torch.manual_seed(seed)), calling q.exponential_() here will consume random numbers from the global RNG, which may change subsequent random samples elsewhere.
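One possible mitigation (an assumption on my part, not necessarily what the PR does) would be to draw the exponential noise from a dedicated `torch.Generator` seeded from the engine seed, so that drafting never advances the global RNG:

```python
from typing import Optional

import torch


def make_draft_generator(seed: Optional[int],
                         device: torch.device) -> torch.Generator:
    # Hypothetical helper: one RNG owned by the draft sampler.
    gen = torch.Generator(device=device)
    if seed is not None:
        gen.manual_seed(seed)
    return gen


def sample_exponential(probs: torch.Tensor,
                       gen: torch.Generator) -> torch.Tensor:
    q = torch.empty_like(probs)
    q.exponential_(generator=gen)  # consumes `gen`, not torch's global RNG
    return q
```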
@LiuXiaoxuanPKU do you have a proposed fix?
And I'm also not exactly sure in what circumstances it can break.
> And I'm also not exactly sure in what circumstances it can break

It will break reproducibility. vLLM users provide a seed to the LLM engine so that they can obtain the same results across reruns. This helps when investigating issues and when putting LLMs into CI testing.
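A tiny illustration of the concern (plain PyTorch, not vLLM code): a single stray draw from the global RNG changes every subsequent "seeded" sample.

```python
import torch

torch.manual_seed(0)
a = torch.rand(3)                  # pretend this is the target model's sampling

torch.manual_seed(0)
_ = torch.empty(8).exponential_()  # a stray q.exponential_()-style draw
b = torch.rand(3)                  # the "same" sampling now sees a different RNG state

print(torch.equal(a, b))           # False: the rerun is no longer bit-identical
```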
It looks like similar code already exists: https://github.com/vllm-project/vllm/blob/main/vllm/v1/sample/rejection_sampler.py#L591. Does this mean this line in rejection_sampler.py is problematic too?
```python
draft_probs = torch.randn(num_tokens,
                          logits.shape[-1],
                          device=self.device,
                          dtype=logits.dtype)
```
For ngram, can we keep passing `draft_probs` as `None` to reduce the overhead?
This is inside `_dummy_sampler_run`; do you mean the warm-up path might be different?
This pull request has merge conflicts that must be resolved before it can be merged.
@ekagra-ranjan @LiuXiaoxuanPKU Any updates? Can we make this PR merge-ready?
@keyboardAnt |
This pull request has merge conflicts that must be resolved before it can be merged.
It's a nice change; it would be nice to get this feature working.
Documentation preview: https://vllm--20459.org.readthedocs.build/en/20459/
Summary
This PR enables speculative decoding with draft probabilities in V1. A partial revert of #16899.
Implementation
The blocker in #16899 was that the draft probabilities aren't used immediately, so we need to keep them around for the next iteration. In this PR, I propose we add the draft probabilities to `CachedRequestState`. On the next iteration, when the draft token ids of a request are used, we fetch the draft probs from the cached state. This greatly simplifies the matter:

- `CachedRequestState` already encapsulates the logic that data related to a request isn't necessarily used immediately. For example, this handles preemption and the case where requests might not be scheduled on every step (see the sketch below).
- There is a kill switch, `enable_draft_probs` in `SpeculativeConfig`, for those cases where the extra sampling overhead matters. I recommend still enabling draft probs by default, since it's robust to a wider range of temperatures (i.e. less likely to regress vs standard sampling).
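To make the caching idea concrete, here is a minimal sketch; the class and field names mirror the description above but are illustrative, not the actual vLLM implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

import torch


@dataclass
class CachedRequestState:
    req_id: str
    # Draft tokens proposed on a previous step, waiting to be verified.
    draft_token_ids: list = field(default_factory=list)
    # [num_draft_tokens, vocab_size]; None when draft probs are disabled.
    draft_probs: Optional[torch.Tensor] = None


def store_draft(state: CachedRequestState, token_ids: list,
                probs: Optional[torch.Tensor]) -> None:
    # Called right after drafting; the proposal may only be consumed on a
    # later step (e.g. after preemption, or if the request isn't scheduled).
    state.draft_token_ids = token_ids
    state.draft_probs = probs


def take_draft(state: CachedRequestState):
    # Called when the request is scheduled again and the target model
    # verifies the draft: fetch and clear the cached proposal.
    token_ids, probs = state.draft_token_ids, state.draft_probs
    state.draft_token_ids, state.draft_probs = [], None
    return token_ids, probs
```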
Numbers obtained by running the following on current main (14601f5) vs this branch:

```bash
VLLM_USE_V1=1 python3 examples/offline_inference/spec_decode.py \
    --method eagle \
    --num_spec_tokens 4 \
    --dataset-name hf \
    --dataset-path philschmid/mt-bench \
    --num_prompts 100 \
    --temp <T>
```

Old numbers w/ https://raw.githubusercontent.com/SafeAILab/EAGLE/49b67c8d2dc6d8154eab6ef6899e4b9cc4cd3e06/eagle/data/mt_bench/question.jsonl

Old old numbers with https://raw.githubusercontent.com/SafeAILab/EAGLE/49b67c8d2dc6d8154eab6ef6899e4b9cc4cd3e06/eagle/data/mt_bench/question.jsonl
Online benchmarks
Ran with:

(I got some request-id-already-running errors when using `--num-prompts 100`.)

Result
Old numbers with https://raw.githubusercontent.com/SafeAILab/EAGLE/49b67c8d2dc6d8154eab6ef6899e4b9cc4cd3e06/eagle/data/mt_bench/question.jsonl
Explanation:
Tests
I've mainly added two tests:

- `test_propose_random`, which checks that the draft model samples truthfully according to the underlying logits. I do so by sampling N times and ensuring the observed distribution doesn't deviate too much, statistically, from the target (a sketch of this kind of check follows below).
- `TestSpecDecodeScheduling`, which checks edge cases like preemption and ensures that the draft probs are still cached properly and that there are no issues such as index-out-of-bounds errors.
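For illustration, a check in the spirit of `test_propose_random` might look like the following sketch; the helper name, the sampler interface, and the tolerance are assumptions, not the PR's actual test code.

```python
import torch


def assert_sampler_matches_logits(sample_fn, logits: torch.Tensor,
                                  n: int = 100_000, tol: float = 0.02) -> None:
    """Draw n tokens and compare the empirical distribution with softmax(logits).

    `sample_fn(logits, n)` is assumed to return a [n] tensor of token ids.
    """
    target = torch.softmax(logits, dim=-1)                  # [vocab_size]
    samples = sample_fn(logits, n)                          # [n]
    counts = torch.bincount(samples, minlength=logits.numel()).float()
    tv_distance = 0.5 * (counts / n - target).abs().sum().item()
    assert tv_distance < tol, (
        f"draft sampler deviates from target distribution (TV={tv_distance:.3f})")
```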