Fix missing restore_weights_before_loading in CompressedTensorsFusedMoEMethod #21795
Merged
yueming-yuan merged 1 commit on Mar 31, 2026
Conversation
The quantization refactor in #17503 introduced `CompressedTensorsFusedMoEMethod` as a unified `quant_method` for all compressed-tensors MoE schemes, delegating to `layer.scheme`. However, `restore_weights_before_loading` was not forwarded.

This causes INT4 weight updates to fail in RL training: `post_process_weights` with `restore_weights_before_load=True` skips FusedMoE modules because `hasattr(quant_method, "restore_weights_before_loading")` returns `False`. The INT4 packed weights (size 768) are never restored to full size, so `load_weights` tries to narrow the full-size HF weights (size 1536) into the packed parameters and crashes with `RuntimeError: start (0) + length (1536) exceeds dimension size (768)`.

Add the missing delegation with a `hasattr` guard, since only `CompressedTensorsWNA16MoE` implements `restore_weights_before_loading`.
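A minimal sketch of the delegation the PR adds. The class and attribute names follow the PR description; the surrounding structure (constructor, layer argument) is an assumption for illustration, not the actual vLLM source:

```python
# Hypothetical sketch: forward restore_weights_before_loading to the scheme.
# Names follow the PR description; the class body is simplified for illustration.
class CompressedTensorsFusedMoEMethod:
    def __init__(self, scheme):
        # The unified MoE method delegates all scheme-specific work here.
        self.scheme = scheme

    def restore_weights_before_loading(self, layer):
        # Only some schemes (e.g. the WNA16 MoE scheme) pack weights and need
        # to restore them to full size before new weights are loaded, so guard
        # with hasattr instead of calling unconditionally.
        if hasattr(self.scheme, "restore_weights_before_loading"):
            self.scheme.restore_weights_before_loading(layer)
```

With this method present, the `hasattr(quant_method, "restore_weights_before_loading")` check in `post_process_weights` passes and the FusedMoE module is no longer skipped.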
Summary

- The quantization refactor in #17503 introduced `CompressedTensorsFusedMoEMethod` as a unified `quant_method` for all compressed-tensors MoE schemes, delegating to `layer.scheme`. However, `restore_weights_before_loading` was not forwarded.
- `post_process_weights` with `restore_weights_before_load=True` skips FusedMoE modules because `hasattr(quant_method, "restore_weights_before_loading")` returns `False`. The INT4 packed weights are never restored to full size before loading new weights.
- Result: `RuntimeError: start (0) + length (1536) exceeds dimension size (768)` in `FusedMoE._load_w13`

Fix

Add `restore_weights_before_loading` to `CompressedTensorsFusedMoEMethod`, delegating to `layer.scheme`, with a `hasattr` guard since only `CompressedTensorsWNA16MoE` implements this method.

Test plan

- `e2e/megatron/test_qwen3_30B_A3B.py` with `MILES_TEST_USE_INT4_ROLLOUT=1`
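To make the failure mode concrete, here is a minimal, self-contained illustration of the caller-side gating described above. All names (`WithoutHook`, `WithHook`, `post_process`) are invented for this sketch and are not the actual vLLM code; the point is only that an absent attribute makes the restore step a silent no-op:

```python
# Hypothetical illustration of why the hasattr gate silently skips modules
# whose quant_method lacks the restore hook (names invented for this sketch).
class WithoutHook:
    """Stands in for the buggy method: no restore hook is defined."""
    pass

class WithHook:
    """Stands in for the fixed method: the hook restores packed weights."""
    def restore_weights_before_loading(self, layer):
        # In the real bug, this is where packed (768-wide) weights would be
        # restored to full size (1536) before load_weights narrows into them.
        layer["restored"] = True

def post_process(layer, quant_method, restore_weights_before_load):
    # Mirrors the gating described in the PR: without the attribute, the
    # module is skipped and its weights stay packed.
    if restore_weights_before_load and hasattr(
        quant_method, "restore_weights_before_loading"
    ):
        quant_method.restore_weights_before_loading(layer)

layer = {"restored": False}
post_process(layer, WithoutHook(), True)  # skipped: weights stay packed
post_process(layer, WithHook(), True)     # hook runs: weights restored
```

When the hook is missing, the subsequent load step then tries to copy full-size tensors into still-packed parameters, which is exactly the `RuntimeError` quoted in the summary.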