
Conversation

@chunyuan-w
Contributor

Motivation

When the CPU has AMX support, replace F.linear and torch.matmul with weight_packed_linear to optimize performance.

#6408, #6614, and #6641 need to land first; this PR will work once they are merged.

Modifications

When the CPU has AMX support:

  • Pack the weights of Linear, MoEGate, and lm_head
  • Add a PackWeightMethod to handle weight packing
  • Use the weight_packed_linear kernel for better performance (a brief sketch of this dispatch follows the list)
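
A minimal sketch of the dispatch this enables, assuming a layer whose weight was packed at load time and which carries a use_intel_amx_backend flag (illustrative only, not the PR diff; the kernel signature follows the hunks reviewed below):

import torch
import torch.nn.functional as F

def linear_apply_sketch(layer: torch.nn.Module, x: torch.Tensor, bias=None):
    # If the weight was packed for AMX at load time, call the packed kernel;
    # otherwise fall back to the regular F.linear path.
    if getattr(layer, "use_intel_amx_backend", False):
        return torch.ops.sgl_kernel.weight_packed_linear(
            x, layer.weight, bias, True  # is_vnni
        )
    return F.linear(x, layer.weight, bias)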

@gemini-code-assist (bot) left a comment

Hello @chunyuan-w, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, gemini-code-assist here to provide a summary of this pull request. This PR aims to optimize performance on CPUs with AMX support by replacing standard F.linear and torch.matmul operations with a weight_packed_linear kernel. This involves pre-packing the weights of Linear layers, MoEGate layers, and the language model head (lm_head) when running on a compatible CPU. A new utility method and class are introduced to handle this weight packing and backend selection process.

Highlights

  • CPU Performance Optimization: Introduces performance optimizations for CPU inference by leveraging Intel AMX instructions via a weight_packed_linear kernel.
  • Conditional Kernel Usage: Modifies Linear, MoEGate, and lm_head layers to conditionally use the optimized weight_packed_linear or fused_experts_cpu kernels when running on a CPU with AMX support.
  • Weight Packing Mechanism: Adds a new utility function _process_weight_after_loading and a PackWeightMethod class to handle the pre-packing of weights for relevant layers during model loading.
  • Layer Integration: Integrates the new weight packing and conditional kernel logic into the Linear, LogitsProcessor (for lm_head), FusedMoETritonLayer (for MoE), VocabParallelEmbedding (likely for lm_head), and DeepseekV2 (for MoEGate) components.

Changelog

  • python/sglang/srt/layers/linear.py
    • Imported utility functions for weight processing and AMX detection (lines 33-38).
    • Added process_weights_after_loading method to pack the 'weight' parameter (lines 173-174).
    • Modified the apply method to use torch.ops.sgl_kernel.weight_packed_linear if AMX backend is enabled, otherwise use F.linear (lines 183-188).
  • python/sglang/srt/layers/logits_processor.py
    • Modified the _get_logits function to use torch.ops.sgl_kernel.weight_packed_linear for the lm_head if AMX backend is enabled, falling back to torch.matmul otherwise (lines 457-467).
    • Added a TODO comment regarding using weight_packed_linear for GGUF models (line 470).
  • python/sglang/srt/layers/moe/fused_moe_triton/layer.py
    • Imported utility function _process_weight_after_loading (line 22).
    • Added a call to _process_weight_after_loading for 'w13_weight' and 'w2_weight' in process_weights_after_loading (line 124).
    • Modified forward_cpu to conditionally use torch.ops.sgl_kernel.fused_experts_cpu if AMX backend is enabled, including expert selection logic, otherwise use moe_forward_native (lines 250-298).
  • python/sglang/srt/layers/vocab_parallel_embedding.py
    • Imported PackWeightMethod (line 553).
    • Initialized self.quant_method with PackWeightMethod for the 'weight' parameter in __init__ (line 555).
  • python/sglang/srt/models/deepseek_v2.py
    • Imported PackWeightMethod (line 93).
    • Initialized self.quant_method with PackWeightMethod for the 'weight' parameter in the MoEGate __init__ (line 205).
    • Modified the MoEGate forward method to use torch.ops.sgl_kernel.weight_packed_linear if AMX backend is enabled, otherwise use F.linear (lines 208-216).
  • python/sglang/srt/utils.py
    • Added the _process_weight_after_loading function to handle weight packing and set the use_intel_amx_backend flag (lines 2174-2192).
    • Added the PackWeightMethod class, which wraps _process_weight_after_loading for use as a quantization method (lines 2195-2200); a hedged sketch of both utilities follows this changelog.
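
The two utilities described for python/sglang/srt/utils.py are not shown in this summary; the following is a hedged sketch of their shape based only on the changelog text above (names carry a _sketch/Sketch suffix, and pack_fn is a placeholder for the actual sgl-kernel packing op, which the summary does not name):

import torch

def _process_weight_after_loading_sketch(module: torch.nn.Module, weight_names, pack_fn):
    # Pack each named weight tensor and mark the module so forward paths
    # can branch on use_intel_amx_backend, as the changelog describes.
    for name in weight_names:
        packed = pack_fn(getattr(module, name).data)
        setattr(module, name, torch.nn.Parameter(packed, requires_grad=False))
    module.use_intel_amx_backend = True

class PackWeightMethodSketch:
    # Thin quant-method-style wrapper whose process_weights_after_loading
    # defers to the utility above, mirroring the described PackWeightMethod.
    def __init__(self, weight_names, pack_fn):
        self.weight_names = weight_names
        self.pack_fn = pack_fn

    def process_weights_after_loading(self, module: torch.nn.Module) -> None:
        _process_weight_after_loading_sketch(module, self.weight_names, self.pack_fn)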

CPU gets a boost,
Packed weights make numbers fly,
AMX speeds the way.

@gemini-code-assist (bot) left a comment


Code Review

This pull request effectively integrates the new weight_packed_linear and fused_experts_cpu kernels for AMX-enabled CPU performance optimization. The changes are well-structured, particularly the use of PackWeightMethod and _process_weight_after_loading to encapsulate the weight packing logic.

The main areas for attention are ensuring compatibility with dependent PRs, especially regarding function signatures, and clarifying a few hardcoded parameters. Overall, this is a good step towards leveraging AMX capabilities.

Summary of Findings

  • Parameter Mismatch in moe_forward_native Call: In python/sglang/srt/layers/moe/fused_moe_triton/layer.py, the else branch of forward_cpu calls moe_forward_native with apply_router_weight_on_input, inplace, and no_combine arguments. These parameters are not present in the current signature of moe_forward_native found in python/sglang/srt/layers/moe/fused_moe_native.py. This needs clarification regarding updates from dependent PRs.
  • Hardcoded is_vnni=True Parameter: Across multiple files (linear.py, logits_processor.py, fused_moe_triton/layer.py, deepseek_v2.py), the is_vnni parameter for AMX-specific kernels (weight_packed_linear, fused_experts_cpu) is hardcoded to True. While likely correct, confirmation or a clarifying comment would improve maintainability.
  • Activation Function Restriction for MoE AMX Path: In python/sglang/srt/layers/moe/fused_moe_triton/layer.py, an assert activation == "silu" limits the AMX-optimized MoE path to SiLU. This is likely intentional due to kernel capabilities but worth noting.
  • Testing Strategy: Consider clarifying if new tests are added or existing tests are updated to specifically cover these new AMX-optimized code paths within the model layers, beyond direct kernel tests.

Merge Readiness

This pull request makes significant strides in enabling AMX optimizations for CPU execution. However, there is a critical issue regarding a potential parameter mismatch in the moe_forward_native function call that needs to be addressed or clarified based on its dependencies. Additionally, a few medium-severity questions about hardcoded parameters and testing would benefit from clarification.

Given the critical issue, I recommend that changes be made to address it before merging. I am not authorized to approve pull requests, so please ensure further review and approval from other maintainers once the concerns are resolved.

Comment on lines +282 to +298
return moe_forward_native(
    layer,
    x,
    use_grouped_topk,
    top_k,
    router_logits,
    renormalize,
    topk_group,
    num_expert_group,
    custom_routing_function,
    correction_bias,
    activation,
    apply_router_weight_on_input,
    inplace,
    no_combine,
    routed_scaling_factor,
)

critical

The call to moe_forward_native here includes apply_router_weight_on_input, inplace, and no_combine as arguments. However, the moe_forward_native function defined in python/sglang/srt/layers/moe/fused_moe_native.py (as per the full file context) does not seem to accept these parameters.

Its signature is:

def moe_forward_native(
    layer: torch.nn.Module,
    x: torch.Tensor,
    use_grouped_topk: bool,
    # ... other params ...
    activation: str = "silu",
    routed_scaling_factor: Optional[float] = None,
) -> torch.Tensor:

Could you clarify if moe_forward_native's signature is expected to be updated in one of the prerequisite PRs (e.g., #6641)? If not, this call would lead to a runtime error.
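
A tiny self-contained reproduction of the concern (hypothetical function f, not the real moe_forward_native): passing positional arguments that a function does not declare fails at call time.

def f(layer, x, activation="silu", routed_scaling_factor=None):
    return x

f("layer", "x", "silu", True, False, False, None)
# TypeError: f() takes from 2 to 4 positional arguments but 7 were given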

Comment on lines +183 to +186
if layer.use_intel_amx_backend:
    return torch.ops.sgl_kernel.weight_packed_linear(
        x, layer.weight, bias, True # is_vnni
    )

medium

The is_vnni parameter for weight_packed_linear is hardcoded to True. Could you confirm if this is always the case when use_intel_amx_backend is true? It's likely correct given that AMX usage often implies VNNI-packed weights, but a confirmation or a brief comment explaining this assumption would be helpful for future maintainability.
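
If the assumption holds (per the hedge above that AMX usage implies VNNI-packed weights), one illustrative way to record it is a named constant instead of a bare True; this is a suggestion sketch mirroring the quoted hunk, not the PR's code:

# Assumption being recorded (see the review question above): weights packed
# for the AMX backend are laid out in VNNI format, so this path passes True.
_PACKED_WEIGHT_IS_VNNI = True

if layer.use_intel_amx_backend:
    return torch.ops.sgl_kernel.weight_packed_linear(
        x, layer.weight, bias, _PACKED_WEIGHT_IS_VNNI
    )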

Comment on lines +457 to +463
if lm_head.use_intel_amx_backend:
    logits = torch.ops.sgl_kernel.weight_packed_linear(
        hidden_states.to(lm_head.weight.dtype),
        lm_head.weight,
        None, # bias
        True, # is_vnni
    )

medium

Similar to other weight_packed_linear calls, is_vnni is hardcoded to True. Is this assumption universally valid when lm_head.use_intel_amx_backend is true? A brief comment clarifying this would be beneficial.

    custom_routing_function,
    correction_bias,
)
assert activation == "silu", f"activation = {activation} is not supported."

medium

The assert activation == "silu" restricts this optimized forward_cpu path (and consequently the AMX path for MoE) to SiLU activation. Is this an intended limitation for the initial AMX support, perhaps due to the fused_experts_cpu kernel's capabilities? If so, it might be worth a comment.
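
If the restriction is indeed a kernel limitation, one illustrative alternative to the bare assert (hypothetical helper name, not the PR's code) keeps the supported set in one place and raises a descriptive error:

SUPPORTED_CPU_MOE_ACTIVATIONS = {"silu"}

def check_cpu_moe_activation(activation: str) -> None:
    # Raise a descriptive error instead of an assert, which also survives
    # running Python with -O (where assertions are stripped).
    if activation not in SUPPORTED_CPU_MOE_ACTIVATIONS:
        raise NotImplementedError(
            f"fused_experts_cpu currently supports only "
            f"{sorted(SUPPORTED_CPU_MOE_ACTIVATIONS)}, got {activation!r}"
        )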

Comment on lines +265 to +279
return torch.ops.sgl_kernel.fused_experts_cpu(
    x,
    layer.w13_weight,
    layer.w2_weight,
    topk_weights,
    topk_ids,
    True, # inplace
    False, # use_int8_w8a8
    False, # use_fp8_w8a16
    None, # w1_scale
    None, # w2_scale
    None, # block_size
    None, # a1_scale
    None, # a2_scale
    True, # is_vnni

medium

The is_vnni parameter for fused_experts_cpu is hardcoded to True. Is this always the correct setting when use_intel_amx_backend is active? This seems consistent with the other AMX kernel calls.

Comment on lines +208 to +214
if self.use_intel_amx_backend:
    return torch.ops.sgl_kernel.weight_packed_linear(
        hidden_states,
        self.weight,
        None, # bias
        True, # is_vnni
    )

medium

The is_vnni parameter is hardcoded to True for the weight_packed_linear call. Is this always the case for DeepSeekV2MoEGate when AMX is used? A clarifying comment could be helpful.

@mingfeima added the intel, cpu backend, and performance optimization labels on May 28, 2025.
@chunyuan-w
Contributor Author

Moved the change to #6641

@chunyuan-w closed this on May 28, 2025.