
fix match_modules_set to work with MoE#524

Merged
HDCharles merged 15 commits into main from 098_fix_match_modules_set
Dec 3, 2025

Conversation

@HDCharles
Collaborator

@HDCharles HDCharles commented Dec 2, 2025

Summary:

match_modules_set isn't currently as useful as it could be because it can't match multiple results per target, as in an MoE model where each layer has multiple experts.

All of the experts need to be matched when doing something like AWQ or SpinQuant.

```python3
targets = [
    "post_attention_layernorm",
    "up_proj",
    "down_proj",
]
match_modules_set(model, targets) == (
    [
        ["layers.0.post_attention_layernorm"],
        [
            "layers.0.mlp.experts.0.up_proj",
            "layers.0.mlp.experts.1.up_proj",
            ...
        ],
        [
            "layers.0.mlp.experts.0.down_proj",
            "layers.0.mlp.experts.1.down_proj",
            ...
        ],
    ],  # <- first yield
    [<same but for layers.1>],  # <- second yield
    ...
)
```

To make this work both for simple cases like qkv and for MoE cases, we use the following approach:

Algorithm

  1. Match modules until we have at least one match per target.
  2. Once we have one match per target, our set is "full" and we calculate
     the common parent context.
  3. Continue matching; for each new match, check whether the parent context
     would change given that match.
  4. If a match changes the parent context, it is the first element of the
     next set: yield the existing matched set, then reset, using the current
     match as the first element of the new set.

Requirements

A) One element from each target is enough to define the parent context of a set, i.e. adding more elements from the same set wouldn't change the parent context once it's initially set.
B) Each set must have a different parent context, or else everything ends up in one huge set.
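The steps above can be sketched in plain Python. This is only an illustration of the algorithm, not the actual compressed-tensors implementation: target matching is reduced to a suffix check on module names, and `_ancestor` is a hypothetical stand-in for the ancestor-name helper.

```python
# Simplified sketch of the set-matching algorithm described above.
# `named_modules` is an iterable of (name, module) pairs in traversal order.

def _ancestor(names):
    """Longest shared dotted prefix of the given module names."""
    common = []
    for parts in zip(*(n.split(".") for n in names)):
        if len(set(parts)) > 1:
            break
        common.append(parts[0])
    return ".".join(common)

def match_modules_set(named_modules, targets):
    matches = {t: [] for t in targets}
    parent = None  # common parent context, computed once the set is "full"
    for name, module in named_modules:
        target = next((t for t in targets if name.endswith(t)), None)
        if target is None:
            continue
        all_names = [n for ms in matches.values() for n, _ in ms]
        if parent is not None and _ancestor(all_names + [name]) != parent:
            # this match changes the parent context: yield the current set
            # and start a new one with this match as its first element
            yield matches
            matches = {t: [] for t in targets}
            parent = None
        matches[target].append((name, module))
        if parent is None and all(matches.values()):
            # set is now "full": one match per target fixes the context
            parent = _ancestor([n for ms in matches.values() for n, _ in ms])
    if all(matches.values()):
        yield matches  # final set
```

Each yield is a dict mapping every target to its matched (name, module) pairs; for the MoE example above, a single yield holds one layernorm plus the projections of all experts in that layer.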

Other

To facilitate this algorithm I also added get_lowest_common_ancestor_name, which is essentially a much simpler version of a similar function in llm-compressor.
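As a rough illustration (the real get_lowest_common_ancestor_name in compressed-tensors may differ), such a helper can be as small as computing the longest shared dotted prefix of the module names:

```python
# Hypothetical sketch of a lowest-common-ancestor-name helper;
# not the actual compressed-tensors implementation.

def get_lowest_common_ancestor_name(names):
    """Return the longest dotted-name prefix shared by all module names."""
    common = []
    for parts in zip(*(name.split(".") for name in names)):
        if len(set(parts)) > 1:
            break
        common.append(parts[0])
    return ".".join(common)

print(get_lowest_common_ancestor_name([
    "layers.0.mlp.experts.0.up_proj",
    "layers.0.mlp.experts.1.up_proj",
]))  # layers.0.mlp.experts
```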

@HDCharles HDCharles requested review from fynnsu and kylesayrs December 2, 2025 22:09
kylesayrs
kylesayrs previously approved these changes Dec 2, 2025
Collaborator

@kylesayrs kylesayrs left a comment


Consider adding a test for the MoE case. Also remember to change the existing use case after landing.

Really impressive work, thanks

Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: HDCharles <39544797+HDCharles@users.noreply.github.com>
HDCharles and others added 6 commits December 2, 2025 22:09
Summary

Signed-off-by: HDCharles <charlesdavidhernandez@gmail.com>
@HDCharles HDCharles requested a review from kylesayrs December 3, 2025 05:12
fynnsu
fynnsu previously approved these changes Dec 3, 2025
Collaborator

@fynnsu fynnsu left a comment


Nice algorithm! Left a couple comments below

@HDCharles HDCharles requested a review from fynnsu December 3, 2025 16:59
kylesayrs
kylesayrs previously approved these changes Dec 3, 2025
Collaborator

@kylesayrs kylesayrs left a comment


Neat and cool and neat and cool

fynnsu
fynnsu previously approved these changes Dec 3, 2025
Collaborator

@fynnsu fynnsu left a comment


Looks good!

@HDCharles HDCharles requested a review from kylesayrs December 3, 2025 18:24
Collaborator

@fynnsu fynnsu left a comment


Approved but with an optional suggestion

@HDCharles HDCharles added the bug Something isn't working label Dec 3, 2025
@kylesayrs kylesayrs removed the bug Something isn't working label Dec 3, 2025
@HDCharles HDCharles merged commit 1ec8bb6 into main Dec 3, 2025
3 checks passed
@HDCharles HDCharles deleted the 098_fix_match_modules_set branch December 3, 2025 21:42
@HDCharles
Collaborator Author

> Approved but with an optional suggestion

I would have fixed it if it wouldn't get rid of the review!

fynnsu added a commit to vllm-project/llm-compressor that referenced this pull request Dec 10, 2025
Depends on vllm-project/compressed-tensors#524

Summary:
- modified AWQ _set_resolved_mappings
  - get smoothing and balance layers at the same time using match_modules_set
  - (bugfix) corrected logic so that if any balance layers are incompatible, that matching is skipped
  - added warnings
  - got rid of tqdm and skip counting @kylesayrs
  - added a helper for module_to_name
  - removed hardcoded handling for a single balance layer by updating get_lowest_common_module to handle that
- modified SmoothQuant _resolve_mappings
  - brought into alignment with AWQ
  - this is largely a horizontal move, though there is now handling for situations that would have been missed before, like:
    - multiple smooth layer matches in a single set
    - parent contexts further than 1 layer away
  - updated mapping definitions to always be tuple(list[str], str), which is always the case but, unlike in AWQ, wasn't required
- removed get_lowest_common_parent
  - now we can use CT's get_lowest_common_ancestor_name, so we only need to check for module_list (it has a lot of bugfixes compared to the get_lowest_common_parent implementation in LLMC)
- updated test_base for AWQ and SmoothQuant
  - added a test case for _set_resolved_mappings to check that partially skipped matches are handled correctly
  - added tests for MoE matching being handled correctly
  - added test cases for get_lowest_non_module_list_ancestor
  - imported Linear and used that instead of torch.nn.Linear
- reverted test_pytorch.py for logarithmic_equalizations and smoothquant
  - the test was updated in #2084 by @rahul-tuli to ignore some modules, but in general, because of the way the new logic works, you need to ignore the whole set
  - if you only ignore one element, the matching logic would need to determine *somehow* whether there's a full set, which it doesn't do. In the previous logic this was possible because the whole set was assumed to be siblings of the smooth_layer, but the new util tries to be more flexible and relaxes this assumption, which prevents the same approach from working. If this is a common need, perhaps we can add a util that checks for a parent context of size N or something.

TEST PLAN:
pytest
/home/HDCharles/repos/llm-compressor/tests/llmcompressor/modifiers/awq/test_base.py
pytest
/home/HDCharles/repos/llm-compressor/tests/llmcompressor/modifiers/smoothquant/test_base.py

---------

Signed-off-by: HDCharles <charlesdavidhernandez@gmail.com>
Signed-off-by: HDCharles <39544797+HDCharles@users.noreply.github.com>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Fynn Schmitt-Ulms <fynnsu@outlook.com>
