fix match_modules_set to work with MoE #524
Merged
Conversation
Summary: `match_modules_set` isn't currently as useful as it could be because it lacks the ability to match multiple results for each target, as in the case of an MoE model where you have 128 experts:

```
[`layers.32.mlp.experts.0.gate_up_proj`, ..., `layers.32.mlp.experts.127.gate_up_proj`]
```

So that this still works for simple cases as well as MoE cases, we use the following approach:

1) match modules until we have at least one match per target
2) when we have one match per target, the set is "full" and we calculate the common parent context
3) continue matching and, for each new match, check whether the parent context would change given that match
4) if a match changes the parent context, it is the first element of the next set: yield the existing matched set, then reset, using the current match as the first element of the new set

To facilitate this algorithm I also added `get_lowest_common_module_name`, which is based on a similar function in llm-compressor, though significantly simpler.

Signed-off-by: HDCharles <charlesdavidhernandez@gmail.com>
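The "common parent context" mentioned above can be illustrated as a lowest-common-ancestor computation over dotted module names. This is only a sketch of the idea, not the library's implementation; the function name and prefix-based approach here are assumptions:

```python
def lowest_common_ancestor_name(names):
    """Longest shared dotted-name prefix of a group of module names;
    an illustrative stand-in for a set's "parent context"."""
    out = []
    for group in zip(*(name.split(".") for name in names)):
        if len(set(group)) != 1:  # components diverge: stop at the fork
            break
        out.append(group[0])
    return ".".join(out)

context = lowest_common_ancestor_name([
    "layers.32.mlp.experts.0.gate_up_proj",
    "layers.32.mlp.experts.127.gate_up_proj",
])
print(context)  # layers.32.mlp.experts
```

All 128 expert projections in a layer share the `layers.32.mlp.experts` prefix, while a match from the next decoder layer would shorten it, which is what signals a set boundary.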
kylesayrs
previously approved these changes
Dec 2, 2025
Collaborator
kylesayrs
left a comment
Consider adding a test for the MoE case. Also remember to change the existing use case after landing.
Really impressive work, thanks
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com> Signed-off-by: HDCharles <39544797+HDCharles@users.noreply.github.com>
Summary Signed-off-by: HDCharles <charlesdavidhernandez@gmail.com>
fynnsu
previously approved these changes
Dec 3, 2025
Collaborator
fynnsu
left a comment
Nice algorithm! Left a couple comments below
kylesayrs
previously approved these changes
Dec 3, 2025
Collaborator
kylesayrs
left a comment
Neat and cool and neat and cool
fynnsu
approved these changes
Dec 3, 2025
Collaborator
fynnsu
left a comment
Approved but with an optional suggestion
kylesayrs
approved these changes
Dec 3, 2025
Collaborator
Author
I would have fixed it if it wouldn't get rid of the review!
fynnsu
added a commit
to vllm-project/llm-compressor
that referenced
this pull request
Dec 10, 2025
Depends on vllm-project/compressed-tensors#524

Summary:
- modified AWQ `_set_resolved_mappings`
  - get smoothing and balance layers at the same time using `match_modules_set`
  - (bugfix) correct logic so that if any balance layers are incompatible, that matching is skipped
  - added warnings
  - got rid of tqdm and skip counting @kylesayrs
  - added a helper for `module_to_name`
  - removed hardcoded handling for a single balance layer by updating `get_lowest_common_module` to handle that
- modified SmoothQuant `_resolve_mappings`
  - brought into alignment with AWQ
  - this is largely a horizontal move, though there is handling for situations that would have been missed before, like:
    - multiple smooth-layer matches in a single set
    - parent contexts further than one layer away
  - updated mapping definitions to always be `tuple(list[str], str)`, which is always the case but wasn't required, unlike in AWQ
- removed `get_lowest_common_parent`
  - now we can use CT's `get_lowest_common_ancestor_name`, so we only need to check for module lists (it has a lot of bugfixes compared to the `get_lowest_common_parent` implementation in LLMC)
- updated `test_base` for AWQ and SmoothQuant
  - added a test case for `_set_resolved_mappings` to check that partially skipped matches are handled correctly
  - added tests for MoE matching being handled correctly
  - added test cases for `get_lowest_non_module_list_ancestor`
  - imported `Linear` and used that instead of `torch.nn.Linear`
- reverted `test_pytorch.py` for logarithmic_equalizations and smoothquant
  - the test was updated in #2084 by @rahul-tuli to ignore some modules, but in general, because of the way the new logic works, you need to ignore the whole set
  - if you only ignore one element, the matching logic would need to determine whether there's a full set or not *somehow*, which it doesn't do. In the previous logic this was possible because it was assumed the whole set had to be siblings of the smooth_layer, but the new util is trying to be more flexible and so relaxes this assumption, which prevents the same approach from working. If this is a common need, perhaps we can add a util that checks for a parent context of size N or something.

TEST PLAN:
```
pytest /home/HDCharles/repos/llm-compressor/tests/llmcompressor/modifiers/awq/test_base.py
pytest /home/HDCharles/repos/llm-compressor/tests/llmcompressor/modifiers/smoothquant/test_base.py
```

---------

Signed-off-by: HDCharles <charlesdavidhernandez@gmail.com>
Signed-off-by: HDCharles <39544797+HDCharles@users.noreply.github.com>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Fynn Schmitt-Ulms <fynnsu@outlook.com>
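The `module_to_name` helper mentioned above isn't shown in this description; one common way to build such a reverse lookup is via `torch.nn.Module.named_modules`. The function name and structure below are assumptions, a minimal sketch rather than the actual llm-compressor code:

```python
import torch

def build_module_to_name(model: torch.nn.Module) -> dict:
    """Reverse lookup from module instance to its dotted name.
    named_modules() yields ("", model) first, then every submodule
    paired with its dotted path, so inverting it gives the map."""
    return {module: name for name, module in model.named_modules()}

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU())
module_to_name = build_module_to_name(model)
print(module_to_name[model[0]])  # 0
```

Modules hash by identity, so the dict lookup is safe even when two submodules are architecturally identical.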
Summary:
`match_modules_set` isn't currently as useful as it could be because it lacks the ability to match multiple results for each target, as in the case of an MoE model where multiple experts all need to be matched when doing something like AWQ or SpinQuant.
So that this still works for simple cases like qkv as well as MoE cases, we use the following approach:
Algorithm
1) match modules until we have at least one match per target
2) when we have one match per target, the set is "full" and we calculate the common parent context
3) continue matching and, for each new match, check whether the parent context would change given that match
4) if a match changes the parent context, it is the first element of the next set: yield the existing matched set, then reset, using the current match as the first element of the new set
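The four steps above can be sketched as follows. This is an illustrative reconstruction under assumptions, not the real `match_modules_set`: module matching is elided, and the input is pre-ordered `(target, name)` pairs with made-up target and module names; the parent context is approximated by a shared dotted-name prefix:

```python
def lca(names):
    """Shared dotted-name prefix, standing in for the parent context."""
    out = []
    for group in zip(*(n.split(".") for n in names)):
        if len(set(group)) != 1:
            break
        out.append(group[0])
    return ".".join(out)

def yield_matched_sets(matches, targets):
    """matches: (target, module_name) pairs in model order."""
    current = {t: [] for t in targets}
    context = None
    for target, name in matches:
        if all(current[t] for t in targets):        # step 1/2: set is "full"
            names = [n for group in current.values() for n in group]
            if context is None:
                context = lca(names)                # step 2: fix the context
            if lca(names + [name]) != context:      # step 3: would it change?
                yield current                       # step 4: emit and reset
                current = {t: [] for t in targets}
                context = None
        current[target].append(name)
    if any(current.values()):
        yield current                               # flush the final set

sets = list(yield_matched_sets(
    [
        ("norm", "layers.0.ln"),
        ("proj", "layers.0.experts.0.gate_up_proj"),
        ("proj", "layers.0.experts.1.gate_up_proj"),
        ("norm", "layers.1.ln"),
        ("proj", "layers.1.experts.0.gate_up_proj"),
        ("proj", "layers.1.experts.1.gate_up_proj"),
    ],
    ["norm", "proj"],
))
```

Both expert projections of layer 0 land in the first set because adding them leaves the context at `layers.0`, while `layers.1.ln` shortens it to `layers` and triggers the reset.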
Requirements
A) one element from each target is enough to define the parent context of a set, i.e. adding more elements from the same set wouldn't change the parent context once it's initially set.
B) each set has a different parent context, or else we'd just put everything into one huge set.
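Requirement B can be seen concretely with hypothetical module names, again approximating the parent context with a shared dotted-name prefix (a sketch, not the library code): consecutive sets from different decoder layers get different contexts, so a match from the next layer changes the current context and ends the set.

```python
def lca(names):
    """Shared dotted-name prefix, standing in for the parent context."""
    out = []
    for group in zip(*(n.split(".") for n in names)):
        if len(set(group)) != 1:
            break
        out.append(group[0])
    return ".".join(out)

# Different layers yield different parent contexts (Requirement B)...
assert lca(["layers.0.ln", "layers.0.experts.0.gate_up_proj"]) == "layers.0"
assert lca(["layers.1.ln", "layers.1.experts.0.gate_up_proj"]) == "layers.1"
# ...so a match from layer 1 changes layer 0's context, ending that set:
assert lca(["layers.0.ln", "layers.0.experts.0.gate_up_proj",
            "layers.1.ln"]) == "layers"
```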
Other
To facilitate this algorithm I also added `get_lowest_common_ancestor_name`, which is based on a similar function in llm-compressor, though significantly simpler.