[CPU] Optimize Qwen3-next model on CPU #12525
Merged
Kangyan-Zhou merged 50 commits into sgl-project:main on Jan 30, 2026
Conversation
Collaborator
@jianan-gu rebase.
yizhang2077 approved these changes on Jan 21, 2026
Collaborator
yizhang2077 left a comment: As long as CI passes and the minor suggestions are resolved, this can be merged.
Collaborator
/rerun-failed-ci
Contributor (Author)
Checked: the Xeon/XPU CI failures are not related to this PR; they are due to a known issue on the main branch (#17460).
Contributor (Author)
Checked: the CI failures are not related to this PR's changes.
Contributor (Author)
/rerun-failed-ci
(several similar comments)
This PR adds unified CPU optimizations for Qwen3-next models, including:
Add CPU paths to call the optimized kernels, which depend on the sgl-kernel ops below (a dispatch sketch follows the list):
a. chunk_gated_delta_rule ([CPU] Support chunk_gated_delta_rule kernel for Qwen3-Next, #12441)
b. fused_sigmoid_gating_delta_rule_update and fused_gdn_gating ([CPU] add mamba fla kernels for Qwen3-next, #12324)
c. fused_qkvzba_split_reshape_cat ([CPU] add fused_qkvzba_split_reshape_cat kernel for Qwen3-next, #12330)
d. conv1d (fn/update) ([CPU] add support for mamba causal conv1d for qwen3-next, #12309)
e. rmsnorm (Add fused_rmsnorm_gated_cpu kernel for CPU to support Qwen3-Next, #11577)
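For illustration, a CPU dispatch path of this kind usually amounts to a device check that routes to the fused op, as in the minimal sketch below. The op name `torch.ops.sgl_kernel.fused_gdn_gating_cpu` and the wrapper signature are stand-ins, not this PR's exact API; the eager branch is a common reference formulation of GDN gating (also an assumption here).

```python
import torch
import torch.nn.functional as F

def fused_gdn_gating(A_log: torch.Tensor, a: torch.Tensor,
                     dt_bias: torch.Tensor) -> torch.Tensor:
    """Route GDN gating to an optimized CPU kernel when running on CPU.

    The fused op name is hypothetical; the eager branch computes the
    reference form g = -exp(A_log) * softplus(a + dt_bias).
    """
    if a.device.type == "cpu" and hasattr(torch.ops.sgl_kernel,
                                          "fused_gdn_gating_cpu"):
        # Hypothetical optimized op registered by sgl-kernel's CPU build.
        return torch.ops.sgl_kernel.fused_gdn_gating_cpu(A_log, a, dt_bias)
    # Eager fallback (also used when the fused op is unavailable).
    return -A_log.float().exp() * F.softplus(a.float() + dt_bias.float())
```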
Fix the TP odd-size padding issue (e.g., TP=3/6), including padding for: (1) conv1d weights, (2) linear attention QK and V head counts, (3) dt_bias and A_log, (4) shared_expert_intermediate_size. A minimal padding sketch follows.
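The padding idea is to round head counts up to the nearest multiple of the TP size and zero-pad the affected parameters to match. The helper names below are hypothetical, not the PR's actual functions; the arithmetic is the point.

```python
import torch

def pad_to_multiple(n: int, tp_size: int) -> int:
    """Round n up to the next multiple of tp_size.

    E.g., 16 heads with TP=3 pad to 18, so each rank gets 6 heads.
    """
    return (n + tp_size - 1) // tp_size * tp_size

def zero_pad_dim0(t: torch.Tensor, target: int) -> torch.Tensor:
    """Zero-pad dim 0 of a parameter (conv1d weight, dt_bias, A_log, ...)."""
    pad = target - t.shape[0]
    return t if pad == 0 else torch.cat([t, t.new_zeros(pad, *t.shape[1:])])

assert pad_to_multiple(16, 3) == 18  # TP=3: 6 heads per rank
assert pad_to_multiple(16, 6) == 18  # TP=6: 3 heads per rank
```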
Fix issues in the AMX backend (ported from [CPU] Add native support for Qwen3-next #12305); minimal sketches of each fix follow this list:
a. Weight packing dtype check: weight packing did not support torch.float. This PR adds dtype validation before packing weights.
b. HybridLinearKVPool layer ID handling: only full-attention layers can access get_value_buffer, but layer_id = 0 is not always a full-attention layer. This PR updates the logic to handle such cases correctly.
c. Top-k kernel support: the top-k related kernels lacked support for num_experts = 512. This PR adds support for this configuration.
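For (a), the dtype check amounts to a guard before the packing call. `convert_weight_packed` and the supported-dtype set below are assumptions for illustration, not the PR's exact code.

```python
import torch

# dtypes the packing path handles in this sketch; torch.float32 (torch.float)
# is excluded because AMX weight packing did not support it.
_PACKABLE_DTYPES = {torch.bfloat16, torch.float16, torch.int8}

def maybe_pack_weight(weight: torch.Tensor) -> torch.Tensor:
    """Pack a weight for the AMX backend only when its dtype is supported."""
    if weight.dtype not in _PACKABLE_DTYPES:
        return weight  # fall back to the unpacked weight instead of failing
    return torch.ops.sgl_kernel.convert_weight_packed(weight)  # stand-in op
```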
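For (b), the fix boils down to mapping a global layer_id to the pool's local buffer index rather than assuming full-attention layers start at layer 0. An illustrative sketch, with class and attribute names that are assumptions:

```python
class HybridKVPoolSketch:
    """Illustrative only: in a hybrid model, just the full-attention layers
    own KV buffers, so layer_id 0 may be a linear-attention (Mamba) layer."""

    def __init__(self, full_attention_layer_ids, value_buffers):
        # e.g. full_attention_layer_ids = [3, 7, 11, ...]
        self._index = {lid: i for i, lid in enumerate(full_attention_layer_ids)}
        self._value_buffers = value_buffers

    def get_value_buffer(self, layer_id: int):
        # Map the global layer_id to a local index instead of indexing
        # the buffers with layer_id directly.
        if layer_id not in self._index:
            raise KeyError(f"layer {layer_id} is not a full-attention layer")
        return self._value_buffers[self._index[layer_id]]
```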
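For (c), the selection these kernels optimize is equivalent to the eager PyTorch below. num_experts = 512 matches the configuration named in the PR; top_k = 10 and the renormalization step are assumptions about the routing config, used here only to show the shapes involved.

```python
import torch

num_experts, top_k = 512, 10  # top_k assumed for illustration
logits = torch.randn(4, num_experts)  # router logits for 4 tokens

probs = torch.softmax(logits, dim=-1)
topk_weights, topk_ids = torch.topk(probs, k=top_k, dim=-1)
topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)

print(topk_ids.shape)  # torch.Size([4, 10])
```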