
[Quantization] [Eagle] Add complete quantization support to the draft model in Eagle #28435

Merged
vllm-bot merged 4 commits into vllm-project:main from capitalone-contributions:vllm-eagle-quantize-shreyas269
Nov 17, 2025

Conversation

shreyas269 (Contributor) commented Nov 11, 2025

Purpose

This PR adds comprehensive quantization support for Eagle and Eagle3 draft models in speculative decoding, including full KV cache quantization support. Previously, Eagle draft models could not use quantized weights in the fully connected layer or quantized KV caches.

Recently, #26590 was merged to properly obtain the draft model's quantization config, but it does not address the case where the entire draft model is quantized and we want to read the input and weight scales of the fc layer along with the KV cache quantization scales.

This PR addresses the following:

  • Define get_draft_quant_config in utils to avoid duplicating code between llama_eagle.py and llama_eagle3.py.
  • Replace torch.nn.Linear with ReplicatedLinear for the fc layer in the drafters so it can be quantized and its input and weight quantization scales are handled (in Llama with Eagle/Eagle3); see the sketch after this list.
  • Handle and load KV cache quantization scales, and additionally attempt to remap them to the name format expected by the model.
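
Conceptually, the fc-layer change looks like the sketch below. It is a hedged illustration rather than the literal diff: the wrapper class, constructor arguments, and prefix naming are assumptions; what the PR actually does is swap torch.nn.Linear for ReplicatedLinear and pass the draft model's quant_config.

# Hedged sketch of the fc-layer change; the wrapper class and exact
# constructor arguments are illustrative assumptions, not the PR's diff.
import torch

from vllm.model_executor.layers.linear import ReplicatedLinear


class EagleFcSketch(torch.nn.Module):
    def __init__(self, hidden_size: int, quant_config=None, prefix: str = "model"):
        super().__init__()
        # ReplicatedLinear accepts a quant_config, so a quantized (e.g. ModelOpt
        # fp8) checkpoint can register parameters such as fc.input_scale and
        # fc.weight_scale on this layer instead of failing with a KeyError.
        self.fc = ReplicatedLinear(
            2 * hidden_size,
            hidden_size,
            bias=False,
            quant_config=quant_config,
            prefix=f"{prefix}.fc",
        )

    def forward(self, embeds: torch.Tensor, hidden_states: torch.Tensor) -> torch.Tensor:
        # vLLM linear layers return (output, output_bias); unpack the output.
        out, _ = self.fc(torch.cat([embeds, hidden_states], dim=-1))
        return out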

This PR is a duplicate of #27434 with an additional smoke test.

Test Plan

Tested with a base Llama 3 Instruct model and a quantized Eagle draft model (one decoder layer + one FC layer) using static fp8 quantization. Both the base/verifier and the Eagle draft model were quantized with ModelOpt.

The non-quantized models work exactly the same as before (no changes to behavior).
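
For reference, a run with a quantized drafter can be set up roughly as follows. This is a hedged offline-inference sketch, not the command used for this PR: the model paths are placeholders and the speculative_config values are assumptions based on vLLM's usual Eagle settings.

# Hedged sketch: model paths are placeholders and the speculative_config
# values are assumptions, not taken from this PR's test run.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # base/verifier (example)
    speculative_config={
        "method": "eagle",
        "model": "/path/to/fp8-quantized-eagle-drafter",  # placeholder path
        "num_speculative_tokens": 4,
    },
)

outputs = llm.generate(
    ["Explain speculative decoding in one sentence."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)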

  • Added smoke tests
    • New test file test_eagle_quantization.py with unit tests for draft model quantization
    • Tests cover get_draft_quant_config() with and without a configured draft model (see the sketch after this list)
    • Tests verify FC layer behavior with quantization configs
    • Tests check KV cache scale name handling and remapping
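
A hedged sketch of the style of check in these smoke tests is shown below; the exact signature of get_draft_quant_config and the mock layout are assumptions, not the real test code.

# Hedged sketch of a smoke test; get_draft_quant_config's signature and the
# mock setup are assumptions, not the contents of test_eagle_quantization.py.
from unittest.mock import MagicMock

from vllm.model_executor.models.utils import get_draft_quant_config  # added by this PR


def test_get_draft_quant_config_without_draft_model_sketch():
    vllm_config = MagicMock()
    vllm_config.speculative_config = None  # no drafter configured
    # Without a draft model there is no draft quantization config to return.
    assert get_draft_quant_config(vllm_config) is None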

Test Result

Before:
KeyError: 'fc.input_scale'

After:

(Worker_TP1 pid=1060248) WARNING 10-21 15:19:08 [modelopt.py:103] Detected ModelOpt fp8 checkpoint. Please note that the format is experimental and could change.
(Worker_TP1 pid=1060248) INFO 10-21 15:19:08 [default_loader.py:309] Loading weights took 0.03 seconds
(Worker_TP1 pid=1060248) INFO 10-21 15:19:08 [eagle.py:973] Assuming the EAGLE head shares the same vocab embedding with the target model.
(Worker_TP1 pid=1060248) INFO 10-21 15:19:08 [eagle.py:995] Loading EAGLE LM head weights from the target model.
(Worker_TP0 pid=1060247) WARNING 10-21 15:19:08 [modelopt.py:103] Detected ModelOpt fp8 checkpoint. Please note that the format is experimental and could change.
(Worker_TP0 pid=1060247) WARNING 10-21 15:19:08 [modelopt.py:103] Detected ModelOpt fp8 checkpoint. Please note that the format is experimental and could change.
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 36.17it/s]

Acceptance rate of drafter:

Per-position acceptance rate: 0.830, 0.675, 0.452, 0.309, Avg Draft acceptance rate: 56.6%

The test_eagle_quantization tests and all pre-commit hooks pass successfully.

tests/model_executor/test_eagle_quantization.py::test_get_draft_quant_config_with_draft_model PASSED
tests/model_executor/test_eagle_quantization.py::test_get_draft_quant_config_without_draft_model PASSED
tests/model_executor/test_eagle_quantization.py::test_fc_layer_quant_config_usage[cuda:0] WARNING 10-30 16:20:51 [vllm.py:1005] Current vLLM config is not set.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
WARNING 10-30 16:20:54 [vllm.py:1005] Current vLLM config is not set.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
INFO 10-30 16:20:54 [parallel_state.py:1325] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
WARNING 10-30 16:20:54 [vllm.py:1005] Current vLLM config is not set.
PASSED
tests/model_executor/test_eagle_quantization.py::test_fc_layer_quant_config_usage[cuda:1] WARNING 10-30 16:20:55 [vllm.py:1005] Current vLLM config is not set.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
WARNING 10-30 16:20:55 [vllm.py:1005] Current vLLM config is not set.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
PASSED
tests/model_executor/test_eagle_quantization.py::test_kv_cache_scale_name_handling PASSED
tests/model_executor/test_eagle_quantization.py::test_kv_cache_scale_name_no_scale PASSED
tests/model_executor/test_eagle_quantization.py::test_maybe_remap_kv_scale_name PASSED
tests/model_executor/test_eagle_quantization.py::test_load_weights_kv_scale_handling PASSED
========================================== 8 passed, 2 warnings in 7.70s ==========================================

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

… model in Eagle

Signed-off-by: Shreyas Kulkarni <shreyas.gp269@gmail.com>
mergify bot added the llama (Related to Llama models) and speculative-decoding labels Nov 11, 2025
gemini-code-assist (bot) left a comment


Code Review

This pull request introduces comprehensive quantization support for Eagle and Eagle3 draft models, which is a significant enhancement. The changes are well-structured, including the refactoring of get_draft_quant_config into a shared utility to reduce code duplication. The replacement of torch.nn.Linear with ReplicatedLinear is correctly implemented to enable quantization for the fully connected layers. The addition of a dedicated test file with smoke tests for the new quantization features is also a great contribution that improves the robustness of the codebase.

I have one suggestion regarding code duplication in the load_weights methods of llama_eagle.py and llama_eagle3.py. Extracting the logic for handling KV cache scales into a shared function would further improve maintainability.

Overall, this is a well-executed PR that successfully adds an important feature.

Comment on lines +126 to +143
# Handle kv cache quantization scales
if self.quant_config is not None and (
    scale_name := self.quant_config.get_cache_scale(name)
):
    # Loading kv cache quantization scales
    param = params_dict[scale_name]
    weight_loader = getattr(param, "weight_loader", default_weight_loader)
    loaded_weight = (
        loaded_weight if loaded_weight.dim() == 0 else loaded_weight[0]
    )
    weight_loader(param, loaded_weight)
    loaded_params.add(scale_name)
    continue
# Remapping the name FP8 kv-scale
if "scale" in name:
    name = maybe_remap_kv_scale_name(name, params_dict)
    if name is None:
        continue
gemini-code-assist (bot) commented (severity: high):

This block of code for handling KV cache quantization scales and remapping FP8 kv-scale names is duplicated in vllm/model_executor/models/llama_eagle3.py (lines 220-237). To improve maintainability and avoid potential inconsistencies in the future, consider refactoring this logic into a shared utility function in vllm/model_executor/models/utils.py. You've already done this for get_draft_quant_config, and a similar approach would be beneficial here.
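
A hedged sketch of what such a shared helper could look like, based on the duplicated block quoted above; the function name, signature, and return convention are assumptions and not part of this PR:

# Hedged sketch of the suggested shared utility; name, signature, and return
# convention are assumptions. The body mirrors the duplicated block above.
from vllm.model_executor.model_loader.weight_utils import (
    default_weight_loader,
    maybe_remap_kv_scale_name,
)


def maybe_load_kv_cache_scale(name, loaded_weight, params_dict, loaded_params, quant_config):
    """Return (handled, name): handled=True if the weight was consumed as a KV
    cache scale (or should be skipped); otherwise name is the possibly remapped
    parameter name to load normally."""
    if quant_config is not None and (
        scale_name := quant_config.get_cache_scale(name)
    ):
        param = params_dict[scale_name]
        weight_loader = getattr(param, "weight_loader", default_weight_loader)
        loaded_weight = (
            loaded_weight if loaded_weight.dim() == 0 else loaded_weight[0]
        )
        weight_loader(param, loaded_weight)
        loaded_params.add(scale_name)
        return True, None
    if "scale" in name:
        name = maybe_remap_kv_scale_name(name, params_dict)
        if name is None:
            return True, None  # remapping says this scale is not needed
    return False, name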

heheda12345 (Collaborator) commented:
CC @mgoin @Isotr0py @yewentao256 for quantization and @luccafong @benchislett for eagle

Isotr0py (Member) left a comment


The quantization part looks reasonable to me. See if others are fine with quantization for Eagle. (I don't have much knowledge about spec decode.)

Isotr0py (Member) commented:

Seems that we should put this test under tests/models/quantization or tests/quantization?

shreyas269 (Contributor, author) replied:

@Isotr0py, thanks for the review! I placed the test in tests/model_executor because it tests the model executor's quantization integration for the Eagle model architecture, rather than end-to-end inference with quantized models.

That said, I'm happy to move it if you think tests/models/quantization or tests/quantization would be better for discoverability.

shreyas269 (Contributor, author) commented:

Hey folks, just a quick follow-up on this PR. The reviewers were tagged previously (tagging them again below), and I’m happy to incorporate any changes needed. Whenever someone has time to take a look, I’d appreciate it! :)

CC @mgoin @Isotr0py @yewentao256 @luccafong @benchislett @rahul-tuli

mgoin added the quantization and ready (ONLY add when PR is ready to merge/full CI is needed) labels Nov 14, 2025
mgoin (Member) left a comment


Looks good to me, although I think the test can be improved to use a real model.

mgoin (Member) commented:

I don't love all this mocking and working with low-level classes and configs. I'd rather replace this with a single-model e2e smoke test so we can see that the config parsing, weight loading, and so on all work together with a known model.

shreyas269 (Contributor, author) commented Nov 14, 2025:

I agree that using a single real model would be cleaner. The only reason I didn't do that here is that our internal model isn't publicly shareable, so I couldn't include it in the test. We could get a public drafter model released, but that could take time to go through approval since I'm contributing via Capital One.

Given that constraint, this felt like the most straightforward solution. Happy to update this if you have a suggestion for a better approach.

Signed-off-by: Shreyas Kulkarni <shreyas.gp269@gmail.com>
mergify bot commented Nov 15, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @shreyas269.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label Nov 15, 2025
Signed-off-by: Shreyas Kulkarni <shreyas.gp269@gmail.com>
mergify bot removed the needs-rebase label Nov 15, 2025
rahul-tuli (Contributor) commented:

The PR looks good considering the constraints on using a real model; a follow-up PR with a real model would be nice. Could you also fix the pre-commit checks?


mgoin commented Nov 17, 2025

@shreyas269 please fix the pre-commit as I cannot do it for you due to branch permissions, thank you!

Signed-off-by: Shreyas Kulkarni <shreyas.gp269@gmail.com>
shreyas269 (Contributor, author) commented:

@mgoin, @rahul-tuli, fixed the pre-commit.

vllm-bot merged commit 95ae50b into vllm-project:main Nov 17, 2025
48 of 50 checks passed
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
… model in Eagle (vllm-project#28435)

Signed-off-by: Shreyas Kulkarni <shreyas.gp269@gmail.com>
kitaekatt pushed a commit to kitaekatt/vllm that referenced this pull request Dec 1, 2025
… model in Eagle (vllm-project#28435)

Signed-off-by: Shreyas Kulkarni <shreyas.gp269@gmail.com>

Labels

llama (Related to Llama models), ready (ONLY add when PR is ready to merge/full CI is needed), speculative-decoding
