[FEAT] [ROCm] Upgrade AITER Fused MoE kernels. #18271
Merged
Commits (12)
a989ce0  use single AITER fmoe module (vllmellm)
da0622c  bugfixes: remove the weight scale expansion and set correct layout size (vllmellm)
350bf88  clean code in aiter fmoe module (vllmellm)
2b40c72  update docker file for new AITER package (vllmellm)
85a7151  update shuffle weights documentation (vllmellm)
fbaa890  Merge remote-tracking branch 'origin/main' into update-aiter-fmoe (vllmellm)
33e36d6  only enable per-tensor quantization for fp8 w8a8 (vllmellm)
aa9e31d  fix precommit error (vllmellm)
ee9506f  fix precommit error (vllmellm)
55e11f7  avoid unnecessary multiplication of hidden_states and topk_weights in… (vllmellm)
730fb77  remove unnecessary layout argument (vllmellm)
11fb95d  Merge remote-tracking branch 'origin/main' into update-aiter-fmoe (vllmellm)
Conversations
Can this be made into something static that clarifies what it is (e.g. AITER_XXX = 16)? This constant in the middle of the function is just confusing.
@robertgshaw2-redhat updated the documentation for shuffle_weights as below, which explains and clarifies the arguments of the function.
https://github.com/EmbeddedLLM/vllm/blob/85a7151d180de411d9593f7831a5f1d8c437685f/vllm/model_executor/layers/fused_moe/rocm_aiter_fused_moe.py#L351-L372
Since the optimal layout size is currently the same, (16, 16), for all kernels, we kept that as the default. In future updates, if the layout needs to change for any individual kernel, we can introduce per-kernel constants in rocm_aiter_fused_moe.py and use them where needed.
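The reviewer's suggestion of naming the layout rather than burying a literal (16, 16) inside the function could be sketched as below. This is an illustrative assumption, not the actual vLLM code: the constant name, the shuffle_weights signature, and the identity "shuffle" standing in for the real AITER kernel call are all hypothetical.

```python
# Hypothetical sketch: a module-level constant documents the magic (16, 16)
# layout in one place instead of mid-function. Names are illustrative.

# Block layout currently optimal for all AITER fused-MoE shuffle kernels
# (per the PR discussion); per-kernel constants could be added later.
AITER_FMOE_SHUFFLE_LAYOUT = (16, 16)


def shuffle_weights(*tensors, layout=AITER_FMOE_SHUFFLE_LAYOUT):
    """Shuffle each weight tensor into the given block layout.

    Stand-in for the real AITER kernel call: here we only validate the
    layout and return the tensors unchanged.
    """
    rows, cols = layout  # unpack block dimensions
    if rows <= 0 or cols <= 0:
        raise ValueError(f"invalid shuffle layout: {layout!r}")
    # Real code would invoke the AITER shuffle kernel with `layout` here.
    return tuple(tensors)
```

Callers then pass nothing for the common case and override `layout=` only for a kernel whose optimal block size diverges, which keeps the default discoverable and greppable.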