[Bug] Add e_score_correction_bias to SKIP_TENSORS #38746
Merged
Conversation
robertgshaw2-redhat
approved these changes
Apr 1, 2026
Contributor
Code Review
This pull request updates the layer-wise model loading logic to skip specific tensors during online processing initialization. Specifically, it adds `e_score_correction_bias` to the `SKIP_TENSORS` set and adds a check to bypass these tensors in the initialization loop. There are no review comments to address.
SumanthRH
approved these changes
Apr 1, 2026
kylesayrs
approved these changes
Apr 1, 2026
# Note that nested wrapping will occur for shared tensors
for name, tensor in get_layer_tensors(layer).items():
    if name in SKIP_TENSORS:
        continue
Contributor
This part is technically unnecessary, since wrapping the weight loader is irrelevant if the weight is never loaded, but it is still a good change to make. See the sketch below for why skipping shared tensors here avoids nested wrapping.
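For context, a minimal sketch of the wrapping pattern being discussed; the helper name `wrap_weight_loader` and the callback are hypothetical, not vLLM's actual API. It illustrates why a tensor shared by two modules, and therefore visited twice by the loop above, would end up with stacked wrappers unless it is skipped:

```python
# Hypothetical sketch (illustrative names, not vLLM's actual API).
# Each pass over a layer's tensors replaces the weight loader with a
# wrapped version. A tensor shared by two modules is visited twice, so
# without the SKIP_TENSORS check its loader would be wrapped twice.
def wrap_weight_loader(loader, on_load):
    def wrapped(param, loaded_weight):
        on_load(loaded_weight.numel())  # bookkeeping, e.g. counting loaded elements
        return loader(param, loaded_weight)
    return wrapped
```

Since `e_score_correction_bias` is never loaded through this path anyway, the nested wrap would be harmless in practice, which is the point above.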
HenryTangDev
pushed a commit
to HenryTangMain/vllm
that referenced
this pull request
Apr 6, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
puririshi98
pushed a commit
to puririshi98/vllm
that referenced
this pull request
Apr 7, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Signed-off-by: Rishi Puri <riship@nvidia.com>
mtparet
pushed a commit
to blackfuel-ai/vllm
that referenced
this pull request
Apr 9, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
alankessler
added a commit
to alankessler/vllm
that referenced
this pull request
Apr 13, 2026
…True

The layerwise reload mechanism wraps weight loaders for all tensors not in SKIP_TENSORS. This prevents bias parameters from loading correctly during online FP8 quantization, leaving them as zeros. Qwen2 is the most visible case (bias=True on qkv_proj), but any architecture with biased linear layers is affected.

Fixes: vllm-project#39663
Related: vllm-project#37334, vllm-project#38746

Signed-off-by: Alan Kessler <alankessler@gmail.com>
mystous
pushed a commit
to mystous/vllm_hybrid
that referenced
this pull request
May 10, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Purpose
Weight reloading is failing for `moonshotai/Moonlight-16B-A3B` because `e_score_correction_bias` is counted as part of the layer size twice: `restore_layer_on_meta` creates separate meta copies of `e_score_correction_bias` for the gate and for FusedMoE. FusedMoE's meta copy is never loaded (counted but never reached, a 64-element shortfall), so the layerwise reload never fires because the MoE layer always comes up short.
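As a rough illustration (hypothetical names, not the actual vLLM accounting code), this is how a shared tensor can inflate the expected load total, and how skipping it restores the balance:

```python
import torch

# Hypothetical sketch of the accounting bug (illustrative names only).
# The gate and the FusedMoE module both expose e_score_correction_bias,
# so summing numel over both counts the shared 64-element bias twice,
# while only one copy is ever loaded: the expected total is unreachable.
SKIP_TENSORS = {"e_score_correction_bias"}

gate = {"e_score_correction_bias": torch.empty(64)}
moe = {"w13_weight": torch.empty(8, 16),
       "e_score_correction_bias": torch.empty(64)}  # shared with the gate

def expected_numel(tensors):
    return sum(t.numel() for name, t in tensors.items()
               if name not in SKIP_TENSORS)

# With the skip, the expected total matches what actually loads; without
# it, the reload check falls 64 elements short and never fires.
print(expected_numel(gate) + expected_numel(moe))
```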
Test Plan

Used prints to verify that MoE expert layer loading is complete when `finalize_layerwise_reload` is called; before this fix, loading was not reaching `load_numel_total`.