
[Bug] Add e_score_correction_bias to SKIP_TENSORS #38746

Merged: vllm-bot merged 1 commit into vllm-project:main from hao-aaron:layerwise-fix on Apr 3, 2026

Conversation

@hao-aaron (Contributor) commented on Apr 1, 2026

Purpose

Weight reloading fails for moonshotai/Moonlight-16B-A3B because e_score_correction_bias is counted toward the layer size twice: restore_layer_on_meta creates separate meta copies of e_score_correction_bias for the gate and for FusedMoE. FusedMoE's meta copy is counted but never loaded, leaving a 64-element shortfall, so layerwise reload never fires because the MoE layer always appears short.
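For illustration, here is a minimal toy model of the failure mode described above; the dict layout and counting are illustrative, not vLLM's actual bookkeeping:

```python
# Toy model of the shortfall: the shared e_score_correction_bias
# (64 elements in this case) is counted once for the gate and once
# for FusedMoE, but only the gate's copy is ever loaded.
bias_numel = 64
meta_copies = {
    "gate.e_score_correction_bias": bias_numel,
    "fused_moe.e_score_correction_bias": bias_numel,  # counted, never loaded
}

load_numel_total = sum(meta_copies.values())  # bias counted twice: 128
loaded = bias_numel                           # only the gate copy loads: 64

# loaded never reaches load_numel_total, so the MoE layer always looks
# 64 elements short and layerwise reload never fires.
assert loaded < load_numel_total
```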

Test Plan

Used print statements to verify that loading of the MoE expert layers is complete when finalize_layerwise_reload is called; before this fix, the loaded element count never reached load_numel_total.
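A sketch of the kind of print-based check described; `load_numel_total` is the name from the test plan, while the `load_numel` counter and the helper itself are assumptions, not vLLM's actual API:

```python
# Hypothetical debug print used to confirm MoE expert layers finish
# loading before finalize_layerwise_reload runs.
def print_load_progress(layer) -> None:
    loaded = layer.load_numel        # elements loaded so far (assumed counter)
    total = layer.load_numel_total   # expected total for this layer
    print(f"{layer!r}: {loaded}/{total} elements loaded")
    if loaded < total:
        print("  shortfall -> layerwise reload will never fire for this layer")
```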


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: ahao-anyscale <ahao@anyscale.com>
@hao-aaron requested a review from 22quinn as a code owner on Apr 1, 2026 at 18:56
@mergify (bot) added the bug label on Apr 1, 2026
@robertgshaw2-redhat enabled auto-merge (squash) on Apr 1, 2026 at 18:57
@github-actions (bot) added the ready label on Apr 1, 2026

@gemini-code-assist (bot) left a comment:


Code Review

This pull request updates the layer-wise model loading logic to skip specific tensors during online processing initialization. Specifically, it adds "e_score_correction_bias" to the SKIP_TENSORS set and implements a check to bypass these tensors in the initialization loop. I have no feedback to provide as there are no review comments.

@kylesayrs (Contributor) left a comment:


Thanks!

```python
# Note that nested wrapping will occur for shared tensors
for name, tensor in get_layer_tensors(layer).items():
    if name in SKIP_TENSORS:
        continue
```
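For context, a hedged sketch of how the added skip might fit around the loop quoted above; the set contents, the enclosing function, and the `wrap_weight_loader` helper are assumptions, not vLLM's exact code:

```python
# Sketch: tensors in SKIP_TENSORS are excluded from weight-loader
# wrapping, so the shared bias is only counted/loaded once (via the gate).
SKIP_TENSORS = {"e_score_correction_bias"}  # this PR's addition; the real set may contain more names

def wrap_layer_weight_loaders(layer):
    # Note that nested wrapping will occur for shared tensors
    for name, tensor in get_layer_tensors(layer).items():
        if name in SKIP_TENSORS:
            continue  # shared with the gate; skip to avoid double counting
        wrap_weight_loader(tensor)  # assumed wrapper helper
```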
A contributor replied on the diff:

This part is technically unnecessary (wrapping the weight loader is irrelevant if the weight is never loaded), but it is still a good change to make.

@vllm-bot merged commit 4729b90 into vllm-project:main on Apr 3, 2026
52 of 55 checks passed
HenryTangDev pushed a commit to HenryTangMain/vllm that referenced this pull request Apr 6, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
puririshi98 pushed a commit to puririshi98/vllm that referenced this pull request Apr 7, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
Signed-off-by: Rishi Puri <riship@nvidia.com>
mtparet pushed a commit to blackfuel-ai/vllm that referenced this pull request Apr 9, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>
alankessler added a commit to alankessler/vllm that referenced this pull request Apr 13, 2026
…True

The layerwise reload mechanism wraps weight loaders for all tensors
not in SKIP_TENSORS. This prevents bias parameters from loading
correctly during online FP8 quantization, leaving them as zeros.

Qwen2 is the most visible case (bias=True on qkv_proj), but any
architecture with biased linear layers is affected.

Fixes: vllm-project#39663
Related: vllm-project#37334, vllm-project#38746

Signed-off-by: Alan Kessler <alankessler@gmail.com>
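As a hedged illustration of the symptom that commit describes (bias parameters left as zeros after a layerwise reload), one might scan a loaded model like this; this diagnostic is not part of the commit:

```python
# Hypothetical diagnostic: list bias parameters that are entirely zero
# after a reload, the symptom described in the commit message above.
import torch

def find_zeroed_biases(model: torch.nn.Module) -> list[str]:
    return [
        name
        for name, param in model.named_parameters()
        if name.endswith(".bias") and torch.count_nonzero(param) == 0
    ]
```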
mystous pushed a commit to mystous/vllm_hybrid that referenced this pull request May 10, 2026
Signed-off-by: ahao-anyscale <ahao@anyscale.com>

Labels

bug (Something isn't working), ready (ONLY add when PR is ready to merge/full CI is needed)
