
Add initial support for gpt-oss #8824

Merged
zhyncs merged 6 commits into main from oss-oai on Aug 5, 2025
Conversation

@Ying1123 (Contributor) commented on Aug 5, 2025

Future progress will be tracked here: #8833

This PR only works with FP8/BF16 checkpoints. The checkpoints have been uploaded to lmsys/gpt-oss-20b-bf16 and lmsys/gpt-oss-120b-bf16.

Install SGLang:

  1. Local build: pip install -e "python[all]"
  2. A normal install should also work (you may see version-conflict warnings, but they can be ignored).

Additional installation steps for gpt-oss:

pip3 install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126
pip3 install sgl-kernel==0.3.2

Launch server examples:

python3 -m sglang.launch_server --model-path lmsys/gpt-oss-20b-bf16
python3 -m sglang.launch_server --model-path lmsys/gpt-oss-120b-bf16 --tp 4
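Once a server is up, a quick way to sanity-check it is to send a chat request. The snippet below is a minimal sketch assuming the default port 30000 and the OpenAI-compatible /v1 endpoint; adjust the base URL and model path to match your launch command.

import openai

client = openai.OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="lmsys/gpt-oss-20b-bf16",  # match the --model-path you launched with
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)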

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @Ying1123, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented initial support for the gpt-oss model architecture. This involved defining the core components of the model, such as its attention and Mixture-of-Experts (MoE) layers, and integrating them into the SGLang framework. A key part of this work was enabling fused MoE operations with bias support and enhancing attention with 'sinks' parameters and refined sliding window behavior. Additionally, I've added capabilities for MXFP4 dequantization, which is essential for loading certain gpt-oss model weights. These changes lay the groundwork for running gpt-oss models efficiently within SGLang.

Highlights

  • Initial gpt-oss Model Support: I've added the complete model definition for gpt-oss, including its unique attention mechanisms and MoE block structure. This new model integrates seamlessly with the existing SGLang framework.
  • Fused MoE with Bias Support: The MoE layers have been significantly enhanced to support fused operations and the inclusion of biases. This leverages new Triton kernels for improved performance and flexibility in handling complex model architectures.
  • Enhanced Attention Mechanisms: I've updated the attention mechanisms to incorporate 'sinks' parameters and refined the sliding window logic. This allows for more precise control over attention patterns and better handling of long sequences. (A toy sketch of the sink mechanism follows after this list.)
  • MXFP4 Quantization and Dequantization: A new MXFP4 quantization utility has been introduced, along with dequantization logic for MLP weights during model loading. This is crucial for supporting models that utilize this specific quantization scheme.
  • Improved Weight Loading: The weight loading process has been updated to correctly handle the new gpt-oss model's specific weight structures, including qkv splitting and the fused MoE weight mappings with biases.
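As a rough illustration of the 'sinks' parameter mentioned above: the sink acts as an extra learned logit that joins the softmax normalization but contributes no value, so a head can place probability mass on "nothing" instead of being forced to spread it over real tokens. The sketch below is a toy, single-head formulation under that assumption; it is not the fused kernel this PR adds.

import torch

def attention_with_sink(q, k, v, sink_logit):
    # q: (Tq, d), k/v: (Tk, d), sink_logit: scalar tensor (one per head in practice)
    scale = q.shape[-1] ** -0.5
    logits = (q @ k.T) * scale                       # (Tq, Tk)
    sink = sink_logit.expand(logits.shape[0], 1)     # broadcast sink logit to (Tq, 1)
    probs = torch.softmax(torch.cat([logits, sink], dim=-1), dim=-1)
    return probs[:, :-1] @ v                         # drop the sink column; it carries no value

q, k, v = (torch.randn(4, 8) for _ in range(3))
out = attention_with_sink(q, k, v, torch.tensor(0.5))
print(out.shape)  # torch.Size([4, 8])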

@gemini-code-assist (bot) left a comment


Code Review

This pull request adds initial support for the gpt-oss model. The changes include a new model definition file, modifications to attention kernels to support sinks and sliding windows, and updates to MoE layers to handle fused weights and biases. The review identified two issues: one related to undefined symbols in a utility function, which could cause problems in the future, and a more critical issue with inconsistent weight name mappings that would likely prevent the model from loading correctly. Suggestions for fixing the weight mapping have been provided.

Comment on lines +588 to +653
    def _get_default_weight_mapping(self):
        """Generate default weight name mapping for GptOss safetensors."""
        weight_mapping = {}

        # Map router weights to gate
        weight_mapping["embedding.weight"] = "model.embed_tokens.weight"
        weight_mapping["unembedding.weight"] = "lm_head.weight"
        weight_mapping["norm.scale"] = "model.norm.weight"
        for layer_id in range(self.config.num_hidden_layers):
            weight_mapping[f"block.{layer_id}.attn.q_proj.weight"] = (
                f"model.layers.{layer_id}.self_attn.q_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.q_proj.bias"] = (
                f"model.layers.{layer_id}.self_attn.q_proj.bias"
            )

            weight_mapping[f"block.{layer_id}.attn.k_proj.weight"] = (
                f"model.layers.{layer_id}.self_attn.k_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.k_proj.bias"] = (
                f"model.layers.{layer_id}.self_attn.k_proj.bias"
            )

            weight_mapping[f"block.{layer_id}.attn.v_proj.weight"] = (
                f"model.layers.{layer_id}.self_attn.v_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.v_proj.bias"] = (
                f"model.layers.{layer_id}.self_attn.v_proj.bias"
            )

            weight_mapping[f"block.{layer_id}.attn.out.weight"] = (
                f"model.layers.{layer_id}.self_attn.o_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.out.bias"] = (
                f"model.layers.{layer_id}.self_attn.o_proj.bias"
            )
            weight_mapping[f"block.{layer_id}.attn.sinks"] = (
                f"model.layers.{layer_id}.self_attn.sinks"
            )
            weight_mapping[f"block.{layer_id}.attn.norm.scale"] = (
                f"model.layers.{layer_id}.input_layernorm.weight"
            )

            weight_mapping[f"block.{layer_id}.mlp.gate.weight"] = (
                f"model.layers.{layer_id}.mlp.router.weight"
            )
            weight_mapping[f"block.{layer_id}.mlp.gate.bias"] = (
                f"model.layers.{layer_id}.mlp.router.bias"
            )
            weight_mapping[f"block.{layer_id}.mlp.norm.scale"] = (
                f"model.layers.{layer_id}.post_attention_layernorm.weight"
            )
            weight_mapping[f"block.{layer_id}.mlp.experts.gate_up_proj"] = (
                f"model.layers.{layer_id}.mlp.experts.gate_up_proj"
            )
            weight_mapping[f"block.{layer_id}.mlp.gate_up_proj_bias"] = (
                f"model.layers.{layer_id}.mlp.experts.gate_up_proj_bias"
            )
            weight_mapping[f"block.{layer_id}.mlp.down_proj"] = (
                f"model.layers.{layer_id}.mlp.experts.mlp2_weight"
            )
            weight_mapping[f"block.{layer_id}.mlp.down_proj_bias"] = (
                f"model.layers.{layer_id}.mlp.experts.mlp2_bias"
            )

        return weight_mapping

Severity: high

There's an inconsistency between the weight names processed in _canonicalize_weights and the names expected by _get_default_weight_mapping. _canonicalize_weights dequantizes weights named mlp1_weight and mlp2_weight, but _get_default_weight_mapping sets up mappings for gate_up_proj and down_proj. This will likely cause weights to fail to load.

To fix this, you should align the names in _get_default_weight_mapping with what _canonicalize_weights produces. The mapping for down_proj also seems incorrect, as it maps to mlp2_weight, which is not a valid parameter name in the expert layer. It should probably map to down_proj, which is then handled by the expert weight loader.

Here is a suggested change to align the mappings:

    def _get_default_weight_mapping(self):
        """Generate default weight name mapping for GptOss safetensors."""
        weight_mapping = {}

        # Map router weights to gate
        weight_mapping["embedding.weight"] = "model.embed_tokens.weight"
        weight_mapping["unembedding.weight"] = "lm_head.weight"
        weight_mapping["norm.scale"] = "model.norm.weight"
        for layer_id in range(self.config.num_hidden_layers):
            weight_mapping[f"block.{layer_id}.attn.q_proj.weight"] = (
                f"model.layers.{layer_id}.self_attn.q_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.q_proj.bias"] = (
                f"model.layers.{layer_id}.self_attn.q_proj.bias"
            )

            weight_mapping[f"block.{layer_id}.attn.k_proj.weight"] = (
                f"model.layers.{layer_id}.self_attn.k_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.k_proj.bias"] = (
                f"model.layers.{layer_id}.self_attn.k_proj.bias"
            )

            weight_mapping[f"block.{layer_id}.attn.v_proj.weight"] = (
                f"model.layers.{layer_id}.self_attn.v_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.v_proj.bias"] = (
                f"model.layers.{layer_id}.self_attn.v_proj.bias"
            )

            weight_mapping[f"block.{layer_id}.attn.out.weight"] = (
                f"model.layers.{layer_id}.self_attn.o_proj.weight"
            )
            weight_mapping[f"block.{layer_id}.attn.out.bias"] = (
                f"model.layers.{layer_id}.self_attn.o_proj.bias"
            )
            weight_mapping[f"block.{layer_id}.attn.sinks"] = (
                f"model.layers.{layer_id}.self_attn.sinks"
            )
            weight_mapping[f"block.{layer_id}.attn.norm.scale"] = (
                f"model.layers.{layer_id}.input_layernorm.weight"
            )

            weight_mapping[f"block.{layer_id}.mlp.gate.weight"] = (
                f"model.layers.{layer_id}.mlp.router.weight"
            )
            weight_mapping[f"block.{layer_id}.mlp.gate.bias"] = (
                f"model.layers.{layer_id}.mlp.router.bias"
            )
            weight_mapping[f"block.{layer_id}.mlp.norm.scale"] = (
                f"model.layers.{layer_id}.post_attention_layernorm.weight"
            )
            weight_mapping[f"block.{layer_id}.mlp.mlp1_weight"] = (
                f"model.layers.{layer_id}.mlp.experts.gate_up_proj"
            )
            weight_mapping[f"block.{layer_id}.mlp.mlp1_bias"] = (
                f"model.layers.{layer_id}.mlp.experts.gate_up_proj_bias"
            )
            weight_mapping[f"block.{layer_id}.mlp.mlp2_weight"] = (
                f"model.layers.{layer_id}.mlp.experts.down_proj"
            )
            weight_mapping[f"block.{layer_id}.mlp.mlp2_bias"] = (
                f"model.layers.{layer_id}.mlp.experts.down_proj_bias"
            )

        return weight_mapping

Comment on lines +24 to +51
def quantize(w, dtype, dev, **opt):
    if dtype == "bf16":
        return w.to(torch.bfloat16), InFlexData()
    elif dtype == "fp8":
        wq = w.to(torch.float8_e4m3fn).transpose(-1, -2).contiguous().transpose(-1, -2)
        return (
            wq,
            InFlexData(dtype=wq.dtype, scale=w.abs().max().unsqueeze(0)),
            MicroscalingCtx(),
        )
    else:
        assert dtype == "mx4", f"{dtype=}"
        swizzle_mx_scale = opt["swizzle_mx_scale"]
        swizzle_axis = 2 if swizzle_mx_scale else None
        w = w.to(torch.bfloat16)
        w, mx_scales, weight_scale_shape = downcast_to_mxfp(
            w, torch.uint8, axis=1, swizzle_axis=swizzle_axis
        )
        return (
            w,
            InFlexData(),
            MicroscalingCtx(
                weight_scale=mx_scales,
                swizzle_mx=swizzle_mx_scale,
                actual_weight_scale_shape=weight_scale_shape,
            ),
        )


Severity: medium

The quantize function references the undefined symbols MicroscalingCtx and downcast_to_mxfp. While this code path is not currently exercised, since only dtype="bf16" is passed, it will raise errors if other dtypes such as fp8 or mx4 are used in the future. Please either define these symbols by importing them or remove the unused code paths to improve maintainability.
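For context on the mx4 branch above and the MXFP4 dequantization mentioned in the highlights: MXFP4 is the OCP microscaling format in which blocks of 32 FP4 (E2M1) values share one E8M0 (power-of-two) scale. Below is a purely illustrative dequantization sketch; the nibble packing order, block layout, and tensor shapes are assumptions, and the PR's actual loader and kernels remain the source of truth.

import torch

# The 16 representable E2M1 magnitudes, indexed by the 4-bit code.
FP4_VALUES = torch.tensor(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0]
)

def dequant_mxfp4(packed: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # packed: uint8, two FP4 (E2M1) codes per byte, shape (rows, cols // 2)
    # scales: uint8 E8M0 exponents, one per block of 32 values, shape (rows, cols // 32)
    lo = packed & 0x0F                               # low nibble first (assumed order)
    hi = packed >> 4
    codes = torch.stack([lo, hi], dim=-1).reshape(packed.shape[0], -1)
    vals = FP4_VALUES[codes.long()]                  # decode E2M1 codes to floats
    scale = torch.pow(2.0, scales.float() - 127.0)   # E8M0: power-of-two scale per block
    blocks = vals.reshape(vals.shape[0], -1, 32)
    out = blocks * scale.unsqueeze(-1)
    return out.reshape(vals.shape).to(torch.bfloat16)

# Example: random packed data for a hypothetical (4, 64) weight tile
packed = torch.randint(0, 256, (4, 32), dtype=torch.uint8)
scales = torch.full((4, 2), 127, dtype=torch.uint8)  # scale of 2**0 == 1.0
print(dequant_mxfp4(packed, scales).shape)           # torch.Size([4, 64])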

@zhyncs merged commit c1d2061 into main on Aug 5, 2025
51 of 113 checks passed
@zhyncs deleted the oss-oai branch on August 5, 2025 at 20:42
pi314ever pushed a commit to pi314ever/sglang that referenced this pull request Aug 6, 2025
@zengqingfu1442 commented on Aug 8, 2025

Does this PR support running the gpt-oss model on L40S GPUs with SGLang?

narutolhy pushed a commit to narutolhy/sglang that referenced this pull request Aug 17, 2025
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025
