[Bugfix][Speculative Decoding] Extend Eagle quantization config fix to llama_eagle.py #26590
Merged
robertgshaw2-redhat merged 3 commits into vllm-project:main on Oct 13, 2025
Conversation
Signed-off-by: Rahul Tuli <rtuli@redhat.com>
Contributor
Thanks @rahul-tuli, the fix LGTM and solves #26402. I wouldn't expect this to correctly handle cases where the Eagle head is actually quantized: for example, I believe the ignore list would need to be written taking into account that the layers are registered as further layers of the original model. But that is beyond the pressing issue. Thanks!
DarkLight1337 approved these changes on Oct 13, 2025
yewentao256 (Member) approved these changes on Oct 13, 2025 and left a comment:
LGTM, thanks for the work!
Contributor
I tested these changes with a quantized Eagle head. LGTM!
Commits referencing this pull request:
- 1994 pushed a commit to 1994/vllm on Oct 14, 2025
- Dhruvilbhatt pushed a commit to Dhruvilbhatt/vllm on Oct 14, 2025
- bbartels pushed a commit to bbartels/vllm on Oct 16, 2025
- lywa1998 pushed a commit to lywa1998/vllm on Oct 20, 2025
- alhridoy pushed a commit to alhridoy/vllm on Oct 24, 2025
- 0xrushi pushed two commits to 0xrushi/vllm on Oct 26, 2025
- rtourgeman pushed a commit to rtourgeman/vllm on Nov 10, 2025
- devpatelio pushed a commit to SumanthRH/vllm on Nov 29, 2025
This PR extends the quantization config fix from #25883 to `llama_eagle.py`.

Background
PR #25883 fixed an issue where Eagle3 drafter models were incorrectly using the verifier model's quantization config instead of their own. This caused problems when the drafter and verifier models had different quantization configurations.
The fix introduced a `get_quant_config()` method in `LlamaDecoderLayer` that can be overridden by Eagle subclasses to use the draft model's quantization config.
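As a rough illustration of that hook, here is a minimal sketch based on the description above, not the exact vLLM code; the `vllm_config.quant_config` attribute name is an assumption:

```python
# Minimal sketch of the #25883 hook, with simplified types.
from torch import nn


class LlamaDecoderLayer(nn.Module):
    def get_quant_config(self, vllm_config):
        # Default behavior: return the quantization config attached to the
        # top-level config, i.e. the verifier (target) model's config.
        # `vllm_config.quant_config` is an assumed field name.
        return vllm_config.quant_config
```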
Changes

This PR applies the same pattern to an additional Eagle drafter, `llama_eagle.py`:

- Added a `get_quant_config()` method to the `LlamaDecoderLayer` subclass in `llama_eagle.py`
- The override uses `VllmConfig.get_quantization_config()` to properly obtain the draft model's quantization config (see the sketch after this list)
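A minimal sketch of that override, assuming simplified signatures; the real vLLM code may differ, and the shapes of `speculative_config.draft_model_config` and the `VllmConfig.get_quantization_config()` call are assumptions:

```python
# Sketch of the llama_eagle.py override, building on the base hook above.
from vllm.config import VllmConfig
from vllm.model_executor.models import llama


class LlamaDecoderLayer(llama.LlamaDecoderLayer):
    def get_quant_config(self, vllm_config: VllmConfig):
        # Resolve the quantization config of the *draft* model rather than
        # inheriting the verifier's config from the top-level vllm_config.
        draft_model_config = vllm_config.speculative_config.draft_model_config
        return VllmConfig.get_quantization_config(
            draft_model_config, vllm_config.load_config
        )
```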
Impact

The fix ensures that Eagle drafter models correctly use their own quantization configuration, preventing quantization mismatches when used with differently quantized verifier models.
Notes
- `llama_eagle3.py` was already fixed in #25883 ([Bugfix][Speculative Decoding] Fix Eagle3 quantization config issue)
- `llama4_eagle.py` handles this differently by explicitly passing the quantization config as a parameter, so no changes are needed there (a sketch of that pattern follows this list)
- `minicpm_eagle.py` accepts a separate `quant_config` parameter in its decoder layer, so it doesn't need this fix

Fixes: extends the fix from #25883
Related: #25883
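For contrast, the explicit-parameter pattern used by `llama4_eagle.py` and `minicpm_eagle.py` looks roughly like this; an illustrative sketch with a hypothetical class name, not the upstream code:

```python
# Illustrative sketch of the explicit-parameter pattern: the caller passes
# the draft model's quant_config directly, so no override hook is needed.
from typing import Optional

from torch import nn


class EagleDecoderLayer(nn.Module):  # hypothetical name for illustration
    def __init__(self, config, quant_config: Optional[object] = None):
        super().__init__()
        # The layer stores whatever quantization config the caller resolved;
        # the Eagle wrapper passes the draft model's config here.
        self.quant_config = quant_config
```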
Verification
Output:
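As a rough smoke test of this path, one might run something like the following; the model IDs and the exact `speculative_config` keys are assumptions, not taken from this PR:

```python
# Hedged smoke test: load a verifier plus an Eagle draft head and generate.
# Model IDs below are placeholders; substitute checkpoints you actually have,
# including a quantized Eagle head to exercise this fix.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "eagle",
        "model": "yuhuili/EAGLE-LLaMA3.1-Instruct-8B",
        "num_speculative_tokens": 3,
    },
)
outputs = llm.generate(
    ["The capital of France is"],
    SamplingParams(temperature=0.0, max_tokens=32),
)
print(outputs[0].outputs[0].text)
```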