
torchao gemlite integration #2498

Closed

HDCharles wants to merge 1 commit into sgl-project:main from HDCharles:test_gemlite

Conversation

@HDCharles

Summary:

adds support for gemlite kernels

Test Plan:

python3 -m sglang.bench_one_batch --model meta-llama/Meta-Llama-3-8B --batch-size 1 --input 1024 --output 512 --json-model-override-args '{"architectures": ["TorchNativeLlamaForCausalLM"]}' --torchao-config gemlite-32-4-64 --dtype float16 --disable-cuda-graph

python3 -m sglang.bench_one_batch --model meta-llama/Meta-Llama-3-8B --batch-size 32 --input 1024 --output 512 --json-model-override-args '{"architectures": ["TorchNativeLlamaForCausalLM"]}' --torchao-config gemlite-32-4-64 --dtype float16 --disable-cuda-graph

python3 -m sglang.bench_one_batch --model meta-llama/Meta-Llama-3-8B --batch-size 1 --input 1024 --output 512 --json-model-override-args '{"architectures": ["TorchNativeLlamaForCausalLM"]}' --enable-torch-compile --torchao-config gemlite-32-4-64 --dtype float16

python3 -m sglang.bench_one_batch --model meta-llama/Meta-Llama-3-8B --batch-size 1 --input 1024 --output 512 --json-model-override-args '{"architectures": ["TorchNativeLlamaForCausalLM"]}' --enable-torch-compile --torchao-config gemlite-8-4-64 --dtype float16

Reviewers:

Subscribers:

Tasks:

Tags:

Motivation

This PR adds support for the torchao gemlite integration in SGLang for int4 weight-only (int4wo) quantization. The motivation is that we expect these kernels to have better TTFT (prefill) performance than the existing int4 integration, which is optimized for decode rather than prefill.

Modifications

Added new options to the torchao utils and a place to store the gemlite cache after warmup; a rough sketch of both pieces follows.
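
For illustration, here is a minimal sketch of roughly what those two pieces could look like: parsing the `gemlite-<packing_bitwidth>-<weight_bits>-<group_size>` string passed to `--torchao-config`, and persisting gemlite's autotuned kernel configs after warmup. The field order is inferred from the test plan above, and the cache path and the `GemLiteLinearTriton.cache_config` call are assumptions, not the exact code in this PR.

```python
# Illustrative sketch only; field meanings, cache path, and the gemlite cache
# API are assumptions inferred from the test plan, not the PR's actual code.
import os
from typing import NamedTuple


class GemliteConfig(NamedTuple):
    packing_bitwidth: int  # 8 or 32 in the test plan above (assumed meaning)
    weight_bits: int       # 4 for int4 weight-only quantization
    group_size: int        # e.g. 64


def parse_gemlite_config(torchao_config: str) -> GemliteConfig:
    """Parse a --torchao-config string such as 'gemlite-32-4-64'."""
    prefix, *fields = torchao_config.split("-")
    assert prefix == "gemlite" and len(fields) == 3, torchao_config
    packing_bitwidth, weight_bits, group_size = map(int, fields)
    return GemliteConfig(packing_bitwidth, weight_bits, group_size)


# Hypothetical location for the persisted gemlite autotune cache.
GEMLITE_CACHE_PATH = os.path.expanduser("~/.cache/sglang/gemlite_cache.json")


def save_gemlite_cache(print_error: bool = False) -> bool:
    """Dump gemlite's tuned kernel configs after warmup so later runs skip retuning."""
    try:
        # Assumed gemlite API for serializing its kernel-config cache.
        from gemlite.core import GemLiteLinearTriton

        os.makedirs(os.path.dirname(GEMLITE_CACHE_PATH), exist_ok=True)
        GemLiteLinearTriton.cache_config(GEMLITE_CACHE_PATH)
    except Exception as e:
        if print_error:
            print(f"Failed to save gemlite cache: {e}")
        return False
    return True


print(parse_gemlite_config("gemlite-32-4-64"))
# -> GemliteConfig(packing_bitwidth=32, weight_bits=4, group_size=64)
```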

Checklist

  • Format your code according to the Contributor Guide.
  • Add unit tests as outlined in the Contributor Guide.
  • Update documentation as needed, including docstrings or example tutorials.

@merrymercy
Contributor

supported by #2528

@merrymercy merrymercy closed this Dec 26, 2024
@zhyncs
Collaborator

zhyncs commented Dec 26, 2024

FYI I have temporarily removed it from the main branch due to some issues. I will add it back in the next version, as I need some time to figure out how dependency management can be more compatible.

@zhyncs zhyncs self-assigned this Dec 26, 2024
@zhyncs
Collaborator

zhyncs commented Dec 26, 2024

Currently, if you want to use it, you can install it separately after installing SGLang.

kaixih added a commit to kaixih/sglang that referenced this pull request Feb 20, 2026
- benchmark_gdn_transpose_vs_flashinfer.py: compares SGLang PR sgl-project#17981
  (cutedsl transpose kernel) vs FlashInfer PR sgl-project#2498 (gdn_kernels)
  - T=1: sigmoid decode kernel vs gated_delta_rule (both bf16 state)
  - T>1: MTP kernel vs gated_delta_rule_mtp (both fp32 state)
  - Correctness verified for T=1 and T>1 (g pre-computed from A_log/a/dt_bias)
- README_GDN_FLASHINFER_VS_SGLANG.md: benchmark results on B200

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
