Releases: sgl-project/sglang
Release v0.4.6
Highlights
- Use FlashAttention3 as the default attention backend for mainstream models (DeepSeek, Qwen, Llama, etc.); see the launch sketch after this list. #4709
- PD disaggregation with mooncake and NIXL transfer backends #4880 #5477 #4655
- DeepSeek performance improvements: turn on DeepGEMM by default and add several kernel fusions. #5580 #5628
- Update torch to 2.6.0. Fix torch.compile cache. #5417 #5213
- Preliminary support for Blackwell #5303
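For reference, here is a minimal offline-engine sketch that selects the new default backend explicitly. It is a sketch only: it assumes an sglang >= 0.4.6 install on a Hopper-class GPU, the model path is illustrative, and the `attention_backend="fa3"` keyword is assumed to mirror the `--attention-backend fa3` server flag.

```python
# Minimal sketch (not an official example): run one generation with the
# FlashAttention3 backend selected explicitly via the offline engine.
import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="meta-llama/Llama-3.1-8B-Instruct",  # illustrative mainstream model
        attention_backend="fa3",  # assumed to mirror --attention-backend fa3
    )
    outputs = llm.generate(
        ["The capital of France is"],
        {"temperature": 0, "max_new_tokens": 16},
    )
    print(outputs[0]["text"])
    llm.shutdown()
```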
Thanks very much to the LinkedIn team, Alibaba Cloud, Mooncake team, NVIDIA Team, AMD Team, PyTorch Team, Ant Group, Baseten Team, Oracle Team, Meituan Team, iFlytek MaaS team, and the open-source community users for their contributions!
We’re thrilled about these advancements and eager to hear your feedback! Join us on our Slack channel at slack.sglang.ai to connect and share your thoughts. Cheers!
Coming Soon
- Large scale expert parallelism + PD disaggregation #4734 #5524
- Pipeline Parallelism #5724
- MLA Cutlass Backend #5390
What's Changed
- [ci] fix llama4 ci error by @BBuf in #5126
- Refactor and Optimize FA3 Code by @hebiao064 in #5090
- Add Llama4 user guide by @ispobock in #5133
- [Misc] Use pytest.mark.skipif in sgl-kernel test by @yinfan98 in #5137
- feat: disable grammar restrictions within reasoning sections by @minleminzui in #4984
- [modelopt] automatically inspect if model is ModelOpt quantized and set quantization method by @yundai424 in #5145
- [AMD] Fix missing per_token_group_quant_fp8 for ROCm by @hubertlu-tw in #5140
- fix multimodal hash feature by @huangtingwei9988 in #5083
- Fix run time error in ROCm platform by @kkHuang-amd in #5147
- [FA3 Feature] Support multi modal Llama-3.2-11B-Vision-Instruct by @zcnrex in #5103
- Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 by @yubofredwang in #4760
- Use public model for FA3 speculative decode testing by @yubofredwang in #5152
- Add dummy grok test to amd CI. by @saienduri in #5115
- fix empty_cache error in pt_weights_iterator by @dangkai4u in #5151
- Fix torch compile errors by @kkHuang-amd in #5158
- Fix loading KV quantization scale; Enable modelopt kv cache by @yundai424 in #4686
- [PD] Fix unclosed prefill connection warning of mini_lb by @ShangmingCai in #5155
- Add optimized native kernels in sgl-kernel by @mingfeima in #5150
- [PD] Simplify mini LB by @ByronHsu in #4911
- Small improvement of native api docs by @simveit in #5139
- [feat&refactor] Enhance multimodal input support with refactor io_struct by @JustinTong0323 in #4938
- Support 2x8xH100 for Llama 4 by @fzyzcjy in #5159
- FP4 weight loading and inference (2/2) by @trevor-m in #3972
- Fix multimodal hashing error by @fzyzcjy in #5174
- Tiny disable model that does not work by @fzyzcjy in #5175
- [Bugfix] Fix index out of bounds in local attention with large sequences by @CatherineSue in #5173
- [Fix] DeepEP Compatibility with Low Latency by @liz-badada in #5068
- docs: remove the use of Downward API for LWS_WORKER_INDEX by @yankay in #5110
- feat: add DeepGEMM build warning by @zhyncs in #5176
- fix: use DeepEPDispatcher on CUDA by @zhyncs in #5180
- [DeepEP] fix: import buffer error by @ch-wan in #5179
- Let `bench_one_batch` support `enable_dp_attention` by @fzyzcjy in #4058
- [Misc] clean up vllm in sgl-kernel test by @yinfan98 in #5189
- Fix ci test "test_eval_fp8_accuracy" failed by @kkHuang-amd in #5185
- Optimize topk operation in llama4 by @fzyzcjy in #5128
- Support Llama4 fp8 inference by @HandH1998 in #5194
- [ci] fix ci test fused_moe op by @BBuf in #5102
- model: support mllama4 by @mickqian in #5144
- Rework grok test. by @saienduri in #5171
- sgl-kernel use cutlass latest version for fp8 blockwise gemm by @yizhang2077 in #5207
- Add H20 dtype fp8_w8a8 fused MoE kernel tuning configs for DeepSeek V3/R1 by @Muuuchen in #5196
- fix: log warning when disable cuda graph by @zhyncs in #5209
- [metrics] Add in queue metrics by @hebiao064 in #4444
- Fix DeepSeek error when using DeepEP mode by @fzyzcjy in #5190
- reduce moe_align_block_size_kernel small batch mode overhead by @BBuf in #5086
- [PD] Support KV transfer with mooncake by @stmatengss in #4880
- [PD] Add get_contiguous_buf_infos interface for MLATokenToKVPool by @stmatengss in #5204
- Update deps for mllama4 by @ispobock in #5215
- Fix deepseek-v3 with torch.compile in PyTorch 2.6. by @zou3519 in #5213
- ROCm sgl-kernel: compatible to later torch by @HaiShaw in #5167
- [Misc] Clean sgl-kernel test by @yinfan98 in #5216
- Update Makefile / build script to avoid installing incompatible torch dependency by @elfiegg in #5245
- Fix torch.compile cacheing by @zou3519 in #5259
- ROCm/AITER CK_MoE: update 2-stage kernels & support both Activations by @HaiShaw in #5228
- Optimize attention in llama4 by @fzyzcjy in #5127
- Optimize GPU memory usage in FlashAttentionBackend's strided indexing by @CatherineSue in #5262
- Support `--enable-llama4-multimodal` by @ch-wan in #5254
- [fix] fix mrope positions not picked up by @mickqian in #5265
- doc: nested loop code for offline engine by @minleminzui in #5244
- fix: examples for token_in_token_out_vlm by @JustinTong0323 in #5193
- Fix a 404 link in send_request.ipynb by @windsonsea in #5280
- fix: enable fp4 compilation on cu128 by @zhyncs in #5286
- feat: add cu128 identifier for sgl-kernel by @zhyncs in #5287
- chore: relax the torch version restriction for sgl-kernel compilation by @zhyncs in #5288
- chore: bump sgl-kernel v0.0.8.post1 by @zhyncs in #5289
- [PD] fix: skip warmup request in disaggregation mode to prevent crash on timeout by @GaoYusong in #5292
- [Docs] Supported Model Docs - Major restructuring by @adarshxs in #5290
- fix: update update_wheel_index for cu128 by @zhyncs in #5300
- [Docs] Remove the older supported docs section by @adarshxs in #5301
- remove moe_align_block_size torch.zeros in small batch/expert mode by @BBuf in #5298
- feat: add blackwell Dockerfile by @zhyncs in #5302
- feat: add blackwell workflow by @zhyncs in #5303
- fix: use fa3 unit test on hopper only by @zhyncs in #5304
- misc: update blackwell Dockerfile by @zhyncs in #5306
- fix: remove cublas_grouped_gemm by @zhyncs in #5307
- fix: update flash attn by @zhyncs in #5308
- fix: use deepgemm only on hopper by @zhyncs in #5310
- [VLM] Adopt fast image processor by default by @mickqian in #5065
- Adjust ci test threshold by @ispobock in #5271
- Blackwell Cutlass MLA kernel by @trevor-m in #5142
- misc: cleanup 3rdparty by @zhyncs in https:/...
Release v0.4.5
Highlights
The SGLang team is excited to announce the release of v0.4.5! This version introduces several significant features, including Llama 4 support, FlashAttention 3 backend, EAGLE3 speculative decoding, DeepEP integration, and disaggregated prefill and decoding.
New Features
- Llama 4 Support: We supported the Llama 4 models with accuracy matching official benchmark numbers, achieving a zero-shot score of 75.2 on the MMLU Pro dataset for the `Llama-4-Scout-17B-16E-Instruct` model and 80.7 for the `Llama-4-Maverick-17B-128E-Instruct` model. #5092
- FlashAttention 3 Backend: Our implementation of the FlashAttention 3 backend delivers significant acceleration for long-context tasks. #4709
- EAGLE3 Speculative Decoding: We're proud to be the first to support EAGLE3 speculative decoding, offering substantial gains in decoding throughput (see the sketch after this list). Learn more in our documentation and the EAGLE3 paper. #4247
- DeepEP Integration: By incorporating DeepEP, we enhanced performance for MoE inference.
- Disaggregated Prefill and Decoding: We introduced a prototype for disaggregated prefill and decoding, with plans for further optimizations.
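As referenced in the EAGLE3 item above, here is a hedged sketch of enabling EAGLE3 speculative decoding through the offline engine. The draft-model path is a placeholder, the `speculative_*` keywords are assumed to mirror the corresponding `--speculative-*` server flags, and the numeric settings are illustrative values to be tuned per model and hardware.

```python
# Sketch only: EAGLE3 speculative decoding via the offline engine.
# The draft model path is a placeholder; tune the speculative settings per setup.
import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="meta-llama/Llama-3.1-8B-Instruct",        # illustrative target model
        speculative_algorithm="EAGLE3",                        # assumed flag value
        speculative_draft_model_path="<eagle3-draft-model>",   # placeholder path
        speculative_num_steps=5,                               # illustrative tuning values
        speculative_eagle_topk=8,
        speculative_num_draft_tokens=32,
    )
    out = llm.generate(["Write a haiku about GPUs."], {"max_new_tokens": 48})
    print(out[0]["text"])
    llm.shutdown()
```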
Thanks very much to the NVIDIA team, LinkedIn team, EAGLE team, Oracle team, Meituan team, and our incredible open-source community for their invaluable contributions!
Coming Soon
- Disaggregated Prefill and Decoding: #4655
- Llama 4 Optimization: #5118
- EP Enhancement: #4734
- FA3 Enhancement: #4709
We’re thrilled about these advancements and eager to hear your feedback! Join us on our Slack channel at slack.sglang.ai to connect and share your thoughts. Cheers!
What's Changed
- Fix a regression introduced by overlapping KV cache writing by @merrymercy in #4375
- Update ci_install_dependency.sh to use accelerate 1.4.0 by @merrymercy in #4392
- Improve DP attention by @merrymercy in #4390
- Fix auto merge & add back get_flat_data_by_layer by @merrymercy in #4393
- Add some fused elementwise kernels for grok-1 by @merrymercy in #4398
- Fix Llama3.3 tool call support by @CatherineSue in #4320
- Fix the output of hidden states after HTTP requests by @Qiaolin-Yu in #4269
- Add a dummy grok test case by @merrymercy in #4399
- Hot fix for hicache with new page aligned radixtree by @xiezhq-hermann in #4397
- bump v0.4.4.post1 by @zhyncs in #4402
- Update CODEOWNERS by @merrymercy in #4403
- Hierarchical Caching supports MLA by @zeroorhero in #4009
- cleanup deps 1/n by @zhyncs in #4400
- feat(remote_model): support variable remote backend for model loader by @DellCurry in #3964
- [bug] fix duplicate variable MAX_PIXELS in qwen_vl.py by @qibaoyuan in #4419
- [Doc] fix wrong flag in deepseek documentation by @lausannel in #4427
- Add moe topk softmax templated from vllm by @qingquansong in #4302
- bump v0.0.5.post1 by @zhyncs in #4437
- Fix maximum recursion depth triggered on exception exit by @merrymercy in #4438
- use topk_softmax with sgl-kernel by @zhyncs in #4439
- docs: hot fix torch compile cache by @zhaochenyang20 in #4442
- ci: update transformers==4.48.3 by @mickqian in #4451
- Fix test_create_kvindices unit test by @sleepcoo in #4452
- [Fix] Fix errors when using the device except cuda. by @cboss6 in #4455
- docs: Add Llama 3.3 to supported models by @JiangJiaWei1103 in #4453
- Update bench_serving.py by @xu-song in #4454
- bugfix: Update sampling_params.py by @WrRan in #4413
- typos: Update sampling_params.md by @WrRan in #4391
- Auto-detect device if not specified in server arguments. by @vshekhawat-hlab in #4423
- Add support for upcoming QwenMoe by @michaelfeil in #4447
- perf: update fused moe config by @mickqian in #4459
- typos by @WrRan in #4368
- Fix minor style by @merrymercy in #4460
- cleanup deps 2/n by @zhyncs in #4464
- feat: Add FlashMLA submodule by @shuaills in #4449
- [Fix] use `torch.cat` instead of `torch.concat` to prevent entering the `Autograd` backends. by @Alcanderian in #4466
- Fix finish step for pr tests and notebook tests by @merrymercy in #4467
- Remove filter for pr-tests by @merrymercy in #4468
- Add greedy verification kernel by @Ying1123 in #4383
- Release sgl-kernel v0.0.5.post2 by @merrymercy in #4469
- Revert "feat: Add FlashMLA submodule (#4449)" by @zhyncs in #4470
- [Eagle] Remove the greedy branch and some redundant code by @Ying1123 in #4363
- Support FlashMLA backend by @sleepcoo in #4472
- fix custom allreduce performance/accuracy problem by @yizhang2077 in #4477
- 400 on empty input_ids by @yinghai in #4481
- Update CODEOWNERS by @merrymercy in #4484
- Statistical Analysis of the Output Stability of the Deepseek Model by @tanzelin430 in #4202
- model: support gemma-3-it by @mickqian in #4424
- Initialize image processor for skip-tokenizer-init codepath by @yinghai in #4479
- Fix: modelscope env comment by @huiwq1990 in #4474
- Fix: Complete int32 to int64 conversion by @xiezhq-hermann in #4465
- [ROCm] enable moe topk softmax in amd by @yiakwy-xpu-ml-framework-team in #4448
- Feat/support code completion by @woodx9 in #3612
- Add endpoint for file support, purely to speed up processing of input_embeds. by @RinRin-32 in #2797
- Set xgrammar as the default grammar backend by @minleminzui in #4386
- Fix router test by @ByronHsu in #4483
- [Fix] use `torch.inference_mode()` instead of `torch.no_grad()` by @Alcanderian in #4372
- [Feature] Support Deepseek-VL2 by @ccw1996 in #2798
- config: Update fused moe config by @mickqian in #4493
- Support serving DeepSeek-R1-Channel-INT8 with 32 L40S. by @solrex in #4418
- Support Online Quantization for W8A8 by @hebiao064 in #4485
- Tool call with text by @xihuai18 in #4067
- Nicer standalone engine inferface by @yinghai in #4480
- [Fix] Resolve GPU Memory Leak in update_weights_from_tensor by @U-rara in #4446
- [Doc] add doc for quantization w8a8_fp8 or w8a8_int8 by @HandH1998 in #4495
- Fix data parallel + tensor parallel by @merrymercy in #4499
- [ROCm] fix dtype by @yiakwy-xpu-ml-framework-team in #4510
- Remove redundant type conversion by @merrymercy in #4513
- Update readme by @merrymercy in #4517
- [sgl-router] improvement to avoid hang by @yinghai in #4482
- Revert "feat: update grouped_topk to support softmax and sigmoid" by @ispobock in #4505
- bump v0.0.5.post3 by @zhyncs in #4520
- upgrade sgl-kernel 0.0.5.post3 by @zhyncs in https://github.com/sgl-project/sg...
Release v0.4.4
Highlights
The SGLang team is excited to announce the release of v0.4.4. We will keep improving DeepSeek V3/R1 performance. With the combination of FlashInfer, MTP, DeepGEMM, and Torch Compile optimizations on H200, it can achieve nearly 100 tokens/s, which is currently the fastest open-source implementation. Look out for new optimizations coming soon!
Thanks very much to xAI Team, NVIDIA Team, AMD Team, LinkedIn team, Baseten Team, Oracle Team, Meituan Team and the open source community users for their contributions!
Regarding the use of SGLang for DeepSeek R1 inference acceleration, in addition to the users mentioned in the announcement, there are also teams such as Tencent and Ant Group. We are very happy to have received recognition and usage from these teams!
Though surely there will be bugs and fixes that we'll be discovering and quickly patching in the coming days, including today :) Let's build and ship. Please feel free to join our Slack channel https://slack.sglang.ai/ Cheers!
Optimizations
- AMD Performance Leadership: SGLang is now the fastest LLM engine for DeepSeek V3/R1 inference on AMD hardware, as confirmed by AMD's technical blog.
- Enhanced FlashInfer MLA Support: Now fully compatible with radix cache, chunked prefill, and MTP optimizations; enable with `--enable-flashinfer-mla` (see the sketch after this list).
- Advanced MTP Capabilities: Both the Triton and FlashInfer backends now offer comprehensive Multi-Token Prediction support, easily tunable via the bench_speculative script and compatible with radix cache and chunked prefill.
- DeepGEMM Integration: Full integration of DeepGEMM for NVIDIA Hopper architectures; enable with `export SGL_ENABLE_JIT_DEEPGEMM=1` (also covered in the sketch below).
- Pioneering INT8 Quantization: First industry implementation of INT8 support for DeepSeek R1 models.
- Other Optimizations:
  - Blackwell architecture Block Scale FP8 GEMM support
  - Support page size greater than 1 #4356
  - Optimized W8A8 FP8 implementation with performance gains across all architectures (sm80, sm89, sm90), featuring a 15%+ improvement specifically on sm89
  - Enhanced distributed parallelism capabilities (e.g., two-node configurations with DP 2, TP 16) #4390
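As noted in the FlashInfer MLA and DeepGEMM items above, both features are opt-in. The sketch below shows one way they might be combined for a DeepSeek-style model through the offline engine; the environment variable comes straight from the bullet above, while the `enable_flashinfer_mla` and `tp_size` keywords are assumptions that mirror the `--enable-flashinfer-mla` and `--tp-size` server flags.

```python
# Sketch: opt into FlashInfer MLA and JIT DeepGEMM for a DeepSeek-style model.
# Set the DeepGEMM switch before the engine starts so the kernels pick it up.
import os

os.environ["SGL_ENABLE_JIT_DEEPGEMM"] = "1"  # from the bullet above

import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="deepseek-ai/DeepSeek-V3",  # illustrative; needs a multi-GPU node
        tp_size=8,                             # illustrative tensor-parallel degree
        trust_remote_code=True,
        enable_flashinfer_mla=True,            # assumed kwarg for --enable-flashinfer-mla
    )
    print(llm.generate(["1 + 1 ="], {"max_new_tokens": 8})[0]["text"])
    llm.shutdown()
```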
Coming soon
- Integrate Flash Attention #4385
- Integrate FlashMLA #4384
- EAGLE 2 optimization #4383
- EAGLE 3 day one support #4247
- Integrate DeepEP #4232
- Prefill and Decoding Disaggregation
What's Changed
- update flashinfer-python by @zhyncs in #3557
- fix doc by @zhyncs in #3558
- Add support for OpenAI API o1 model by @ChuyueSun in #3363
- fix sgl-kernel codestyle by @BBuf in #3563
- docs: update install by @zhyncs in #3581
- Copy config files for MI300X to support in virtualized environments by @yosoyjay in #3505
- ROCm docker: triton update by @HaiShaw in #3584
- [fix] added support for vlm in offline inference by @FrankLeeeee in #3548
- Support NextN (MTP) speculative decoding for DeepSeek-V3/R1 by @ispobock in #3582
- [CI] Improve Docs CI Efficiency by @shuaills in #3587
- doc: emphasize and notify the usage of chat_template by @mickqian in #3589
- fix eagle unit test by @zhyncs in #3591
- fix high qps crash when enable mtp by @zhyncs in #3592
- fix apply_token_bitmask_inplace_cuda by @zhyncs in #3594
- [docs] added favicon to sphinx html by @FrankLeeeee in #3564
- fix lockfile and port_registry file permission error by @Jiadalee in #3598
- feat: Support Qwen 2.5 vl by @mickqian in #3258
- [ROCm] Use `tl.range()` in block GEMM kernels with `num_stages` set by host. by @whchung in #3535
- Update to latest amd image. by @saienduri in #3597
- Benchmark for reasoning models by @simveit in #3532
- Draft of updated doc for sampling params. by @simveit in #3260
- [docs] Update sampling_params.md by @shuaills in #3617
- [docker] added rdma support by @FrankLeeeee in #3619
- Revert "[ROCm] Use `tl.range()` in block GEMM kernels with `num_stage… by @zhyncs in #3632
- add mtp unit test by @zhyncs in #3634
- update unit test by @zhyncs in #3636
- chore: bump v0.4.3.post1 by @zhyncs in #3638
- h800 deepseek r1 config and support multi-gpu block-gemm tuning by @BBuf in #3639
- feat: support flashinfer mla with prefix cache by @zhyncs in #3643
- chore: update flashinfer v0.2.1.post2 by @zhyncs in #3644
- chore: bump v0.4.3.post2 by @zhyncs in #3645
- use transformers 4.48.3 by @zhyncs in #3650
- [ROCm] Add additional block quant GEMM tuning configs for AMD GPUs. by @whchung in #3616
- [ROCm] Optimal MOE Tuning for AMD Radeon Graphics by @BruceXcluding in #3567
- Deploy multi-node inference (LWS method) using sglang in a K8s cluster by @whybeyoung in #3624
- Update amd docker image. by @saienduri in #3654
- [Feature] Apply Cublas Grouped Gemm kernel by @Fridge003 in #3629
- update pr-test by @zhyncs in #3663
- Fix draft decode max batch size by @ispobock in #3676
- fix: remove dependency on latest transformers impl by @mickqian in #3635
- AMD Prefill optimize by @fsx950223 in #3665
- fix: apply cache size limit of attention mask for VisionAttention by @mickqian in #3657
- set NCCL_IB_GID_INDEX=3 for multi node NVIDIA InfiniBand if needed by @zhyncs in #3698
- use warp shuffle style reduce and flashinfer vectorize by @BBuf in #3628
- [Docs] Add SkyPilot DeepSeek example by @Michaelvll in #3706
- [k8s] remove unnecessary hostIPC for security concern by @panpan0000 in #3700
- [moe] optim: reduce memory consumption in fused_moe by @ch-wan in #3692
- [Improve] Fix Multi-User Port Allocation Conflicts by @shuaills in #3601
- Variance measure for reasoning benchmark by @simveit in #3677
- Docs: Fix layout with sub-section by @zhaochenyang20 in #3710
- add control for cutlass fp8 blockwise gemm by @yizhang2077 in #3727
- revert BLOCK and num_warps on HIP by @HaiShaw in #3722
- Optimize triton attention custom mask by @ispobock in #3731
- [Bugfix] Fix scores mask for moe topk by @Chen-XiaoBing in #3705
- [Docs] Modify ep related server args and remove cublas part of deepseek by @Fridge003 in #3732
- [Fix] Fix bugs and refactor codes in lora for better scalability. by @aoshen524 in #3652
- docs: fix 404 link by @trayvonpan in #3588
- [docs] added torch.compile cache to dpsk manual by @FrankLeeeee in #3737
- AMD/ROCm: update AITER repo to ROCm/aiter by @HaiShaw in #3747
- feat: update grouped_topk to support softmax and sigmoid by @zixuanzhang226 in #3680
- feat: Add SageMaker support by @andjsmi in #3740
- Change description of nvidia jetson docs by @shahizat in https://github.com/sgl-proj...
v0.4.3
Highlights
The SGLang team is excited to announce the release of v0.4.3. We will keep improving DeepSeek V3/R1 performance. In the last six weeks, SGLang has been the fastest engine running DeepSeek V3/R1 among all open-source LLM inference engines. We stay ahead by integrating FlashInfer MLA and optimizing further. Look out for new optimizations coming soon! Please feel free to join our Slack channel https://slack.sglang.ai Cheers!
Performance Improvements
DeepSeek V3/R1 Optimizations
- Pioneering integration of FlashInfer MLA Attention delivers 4x performance improvement for long-context scenarios (Special thanks to the FlashInfer team @yzh119 ) #3550
- Added torch.compile support for FP8, achieving 50 tokens/s for online inference #3232
- Implemented CUTLASS block-wise FP8 for enhanced efficiency
Architecture Enhancements
- Upgraded to FlashInfer v0.2
- Enabled Flash Attention 3 by default for prefill
- Extended EAGLE 2 support:
- Enhanced integration with FlashInfer backend
- Added support in Triton backend
New Features
- Introduced Function Calling capabilities
- Added regex pattern support in XGrammar backend (see the sketch after this list)
- Implemented custom sampling processor for flexible inference control
- Integrated LoRA support in Triton backend
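As referenced in the regex item above, here is a hedged sketch of regex-constrained generation through the offline engine. It assumes the per-request `regex` sampling parameter and a `grammar_backend` keyword mirroring `--grammar-backend xgrammar`; check the documentation for the exact names.

```python
# Sketch: constrain output to an IPv4-shaped string with a regex pattern.
import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model
        grammar_backend="xgrammar",  # assumed kwarg for --grammar-backend
    )
    out = llm.generate(
        ["The IPv4 address of localhost is "],
        {
            "temperature": 0,
            "max_new_tokens": 32,
            "regex": r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}",  # per-request constraint
        },
    )
    print(out[0]["text"])
    llm.shutdown()
```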
What's Changed
- docs: add deepseek v3 launch instructions by @zhyncs in #2589
- fix: only enable moe_align_block_size for now by @zhyncs in #2590
- docs: update deepseek v3 example by @zhyncs in #2592
- h100 tuning fused_moe_triton for qwen2 moe by @BBuf in #2560
- Fix cache hit rate when chunked prefill by @hnyls2002 in #2555
- Update README.md by @merrymercy in #2594
- Error occurs when loading the gemma model in bitsandbytes format. by @upskyy in #2557
- [Feature] Support new parameter - EBNF in xgrammar by @adarshxs in #2526
- update readme of DeepSeek V3 by @fsygd in #2596
- Fix logprob_start_len for multi modal models by @merrymercy in #2597
- Fix duplicated handling of GetWeightsByNameReqInput by @fzyzcjy in #2565
- [unittest] add unit test to test quant args of srt engine by @JamesSand in #2574
- Fix test and benchmark scripts by @merrymercy in #2598
- fix: package data missing by @yudian0504 in #2521
- [UTILS] improve makefile a bit by adding help info by @kzhou003 in #2570
- Super tiny typo fix by @fzyzcjy in #2564
- Update contributor_guide.md by @merrymercy in #2603
- Update README.md by @merrymercy in #2605
- Tiny code cleanup in tokenizer_manager.py by @fzyzcjy in #2586
- Regression fix to AMD/ROCm from recent change by @HaiShaw in #2606
- Update CODEOWNERS by @merrymercy in #2608
- Fused moe triton cfg opt for rocm by @kkHuang-amd in #2612
- Fix triton kernel performance regression by @kkHuang-amd in #2611
- Change extend attention kernel launch parameter for ROCm platform to … by @kkHuang-amd in #2610
- fix moe_align_block_size by @HandH1998 in #2615
- update sgl_moe_align_block_size usage by @HandH1998 in #2617
- chore: bump v0.4.1.post1 by @zhyncs in #2616
- docs: update README by @zhyncs in #2618
- [FIX] Update EOS from config by @zhengy001 in #2475
- [minor] clean up docs and eos id by @merrymercy in #2622
- Add more supporting organizations by @merrymercy in #2623
- Update readme by @ispobock in #2625
- avoid fused_moe_triton `padding` circular import by @BBuf in #2624
- [CI] Fix nightly test and raise better error message by @merrymercy in #2626
- Docs: Add constrained decoding tutorial by @shuaills in #2614
- [docs]Refactor constrained decoding tutorial by @shuaills in #2633
- add configs for block fp8 related kernels by @zhyncs in #2628
- Add `update_weights_from_tensor` by @fzyzcjy in #2631
- [Feature] Function Calling by @Tushar-ml in #2544
- [Docs] Add EBNF to sampling params docs by @adarshxs in #2609
- Clean up wrapper in flashinfer backend by @merrymercy in #2638
- minor: add nsys cli for docker dev by @zhyncs in #2639
- Add llama_eagle.py by @merrymercy in #2640
- [Session] Update session control interface by @Ying1123 in #2635
- AMD: set weights and scaling numbers properly for block FP8 by @HaiShaw in #2637
- Update Triton configs for block fp8 kernels by @HandH1998 in #2641
- chore: bump v0.4.1.post2 by @zhyncs in #2643
- docs: update README by @zhyncs in #2644
- docs: add development guide using docker by @zhyncs in #2645
- [Feature] Get Token IDs with Engine.generate() by @shuaills in #2636
- Fix unittest for input tokens by @shuaills in #2646
- skip special token for unit test by @zhaochenyang20 in #2648
- Release 0.4.1.post3 - upload the config.json to PyPI by @merrymercy in #2647
- Update the timeout in nightly-test.yml by @merrymercy in #2649
- add 2*h20 node serving example for deepseek v3 by @Lzhang-hub in #2650
- docs: update README by @zhyncs in #2651
- [feat] Add math eval to CI by @XiaotongJiang in #2652
- Revert "[feat] Add math eval to CI" by @merrymercy in #2656
- fix typo by @HaiShaw in #2655
- [Docs] clean up structured outputs docs by @merrymercy in #2654
- Update structured_outputs.ipynb by @merrymercy in #2666
- Refactor sgl-kernel build by @ispobock in #2642
- Refactor logprob computation to return the real logprob used in sampling by @merrymercy in #2664
- Add GemLite caching after each capture by @mobicham in #2669
- AMD DeepSeek_V3 FP8 Numerical fix by @HaiShaw in #2667
- Minor follow-up fixes for the logprob refactor by @merrymercy in #2670
- Tiny update scripts to fail fast by @fzyzcjy in #2672
- Improve the computation for time_per_output_token Prometheus metrics by @merrymercy in #2674
- Add cutlass submodule for sgl-kernel by @ispobock in #2676
- minor: cleanup sgl-kernel by @zhyncs in #2679
- Eagle speculative decoding part 1: Support target model verification in the attention backend by @merrymercy in #2678
- misc: update CODEOWNERS by @zhyncs in #2680
- feat: use CUDA 12.4 by default (for FA3) by @zhyncs in #2682
- Update README.md by @merrymercy in #2683
- Eagle speculative decoding part 2: Fix cuda graph + DP attention hanging by @merrymercy in #2684
- [Fix] fix openai adapter by @Ying1123 in #2685
- h200 tuning fused_moe_triton config for Mixtral 8x7B/8x22B and Qwen2 57BA14B by @BBuf in #2689
- [Docs] refactor Contribution Guide by @shuaills in #2690
- Doc: Rename contribution_guide.md by @zhaochenyang20 in #2691
- ROCm base image update by @kkHuang-amd in #2692
- [Docs] Add Support for Structured Output Format by @shuaills in #2697
- [feat]...
Release v0.4.1
Highlights
- We're excited to announce SGLang v0.4.1, which now supports DeepSeek V3 - currently the strongest open-source LLM, even surpassing GPT-4o. The SGLang and DeepSeek teams worked together to get DeepSeek V3 FP8 running on NVIDIA and AMD GPUs from day one. We had already supported MLA optimization and DP attention, making SGLang one of the best open-source LLM engines for running DeepSeek models. Special thanks to Meituan's Search & Recommend Platform Team @ispobock @HandH1998 and Baseten's Model Performance Team for implementing the model, and DataCrunch for providing GPU resources.
- Various improvements to the cache-aware sglang router, torchao integration (see the sketch after this list), and server termination.
- Added a standalone package, sgl-kernel, to support more custom kernels in the code base.
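As a pointer for the torchao item above, here is a hedged sketch of weight-only quantization at load time via the offline engine. The `torchao_config` keyword and the `"int4wo-128"` value are assumptions mirroring the `--torchao-config` server flag and its int4 weight-only / group-size-128 convention; verify against the docs before relying on them.

```python
# Sketch: request a torchao weight-only quantization config when loading the model.
import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model
        torchao_config="int4wo-128",  # assumed kwarg/value for --torchao-config
    )
    print(llm.generate(["Paris is"], {"max_new_tokens": 16})[0]["text"])
    llm.shutdown()
```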
What's Changed
- Adding SGLang FP8 Utils by @HaiShaw in #2348
- docs: add SGLang v0.4 blog by @zhyncs in #2341
- MLA prefill w/o weight absorption by @ispobock in #2349
- Check gpu availability at server args creation by @MrAta in #2340
- minor: limit the range of vllm versions by @zhyncs in #2350
- Fix Docs CI When Compile Error by @zhaochenyang20 in #2323
- Add Docs For SGLang Native Router by @zhaochenyang20 in #2308
- Make torch TP composable with torch.compile by @kwen2501 in #2352
- move apply_torchao_config_ to model_runner by @jerryzh168 in #2342
- [Minor] Code style improvements by @merrymercy in #2355
- Fix AWQ with enable MLA by @ispobock in #2364
- MoE Expert Parallel by @xiaobochen123 in #2371
- Move FP8 to SGLang by @zhyncs in #2370
- optimize cuda graph max_bs_settings on low-end gpus by @BBuf in #2360
- Add more support for intel Gaudi accelerators by @YangQun1 in #2357
- [router] support `/add_worker` api by @ByronHsu in #2369
- docs: update adoption (Meituan) by @zhyncs in #2373
- Use proc.join instead of busy waiting by @merrymercy in #2374
- docs: Improve instructions for supporting new models by @vchzls in #2363
- Fix the overlap for xgrammar by @merrymercy in #2377
- Release v0.4.0.post1 by @merrymercy in #2375
- [Router] remove duplicate char count by @ByronHsu in #2378
- [router] add remove tenant method in the radix tree by @ByronHsu in #2379
- [router] Add remove worker api by @ByronHsu in #2380
- fix: resolve fp8 moe issue by @zhyncs in #2387
- fix: update xgrammar v0.1.6 by @zhyncs in #2390
- Fp8 MoE optimizations on AMD by @HaiShaw in #2388
- minor: update killall script by @zhyncs in #2391
- [router] Health check on worker before added to the router by @ByronHsu in #2392
- Fix shape error that occurred when loading lora weight of gemma2 model. by @upskyy in #2330
- nit: Remove busy waiting on scheduler by @rkooo567 in #2382
- Optimize Triton decoding kernel for long context by @ispobock in #2394
- Update killall_sglang.sh by @merrymercy in #2397
- Remove unused vars in the triton backend by @ispobock in #2401
- Fix a bug with logprob streaming + chunked prefill by @merrymercy in #2403
- fix: specify dtype with begin_forward aka plan by @zhyncs in #2404
- Fix recv_requests by @merrymercy in #2405
- minor: update correct measurement unit by @zhyncs in #2406
- feat: support custom task runner by @zhyncs in #2407
- minor: add random use case by @zhyncs in #2408
- minor: add random flashinfer vs triton use case by @zhyncs in #2409
- Simplify stream_output by @merrymercy in #2398
- [router] Improve cleanup logic by @ByronHsu in #2411
- [Router] fix interrupt from terminal by @ByronHsu in #2413
- [router] defer health checking to router init by @ByronHsu in #2393
- reduce watchdog interval to 5s by @ByronHsu in #2410
- Add a unittest for fused_moe by @BBuf in #2416
- [Minor] Improve code style by @merrymercy in #2419
- [Minor] Improve code style by @merrymercy in #2422
- [feat] Enable chunked prefill for llava-onevision by @Ying1123 in #2412
- Typo fix in router.md by @adarshxs in #2424
- feat: support sgl-kernel PyPI by @zhyncs in #2433
- fix: use manylinux2014_x86_64 tag by @zhyncs in #2434
- fix: compatible with PEP 440 by @zhyncs in #2435
- [router] Refactor: decouple select and send stage by @ByronHsu in #2440
- [router] Use borrow if possible to save cost by @ByronHsu in #2441
- Make torch TP composable with torchao by @kwen2501 in #2436
- chore: update ao v0.7.0 by @zhyncs in #2447
- decoding attention kernel benchmark by @bjmsong in #2425
- Fix model loader for more quantization formats by @merrymercy in #2448
- Fix warmup in bench_offline_throughput.py by @merrymercy in #2449
- Add support for IBM Granite 3.x models by @frreiss in #2437
- [router] Add retries based fault tolerance by @ByronHsu in #2452
- [router] remove main.rs because only lib.rs is used for py binding by @ByronHsu in #2453
- [Core] in batch prefix caching by delay scheduling by @rkooo567 in #2442
- [router] Update doc for dynamic scaling and fault tolerance by @ByronHsu in #2454
- [router] Release router 0.1.0 with dynamic scaling and fault tolerance by @ByronHsu in #2455
- Make request payload size configurable by @MrAta in #2444
- Include version info into the router package by @MrAta in #2456
- Bump sglang-router to 0.1.1 by @MrAta in #2459
- chore: bump v0.0.2 for sgl-kernel by @zhyncs in #2462
- minor: update pypi tag by @zhyncs in #2463
- fix: set runtime path by @zhyncs in #2466
- Rename rust folder to sgl-router by @MrAta in #2464
- feat: support dev image by @zhyncs in #2469
- [Minor] Fix grok model loader by @merrymercy in #2473
- Fix correctness issue for triton decoding kernel by @ispobock in #2479
- format: add clang-format for sgl-kernel by @zhyncs in #2483
- Remove cuda graph batch size adjustment for dp attention by @ispobock in #2484
- hotfix: checking for HIP by @zhyncs in #2485
- sgl-kernel adapt tensorrt llm custom allreduce by @yizhang2077 in #2481
- fix typo by @zhyncs in #2487
- [Benchmark] add a benchmark for hf/vllm/sglang rmsnorm by @BBuf in #2486
- fix moe-ep accuracy issue for fp8 by @xiaobochen123 in #2489
- minor: update flashinfer nightly by @zhyncs in #2490
- Small fixes for torchao quant by @jerryzh168 in #2476
- Simplify pytorch sampling kernel and logit processor by @merrymercy in #2491
- Temporarily disable unit test of torch native attention backe...
Release v0.4.0
Highlights
blog: https://lmsys.org/blog/2024-12-04-sglang-v0-4/
We’re excited to release SGLang v0.4, featuring significant performance improvements and new features:
- Zero-overhead batch scheduler: 1.1x increase in throughput.
- Cache-aware load balancer: up to 1.9x increase in throughput with 3.8x higher cache hit rate.
- Data parallelism attention for DeepSeek models: up to 1.9x decoding throughput improvement.
- Fast structured outputs with xgrammar: up to 10x faster.
What's Changed
- fix: add xgrammar dependency by @zhyncs in #2126
- docs: fix module docstrings and copyright headers by @XuehaiPan in #2077
- feat(pre-commit): trim unnecessary notebook metadata from git history by @XuehaiPan in #2127
- Expose max total num tokens from Runtime & Engine API by @henryhmko in #2092
- Only stream output on tp rank 0 by @merrymercy in #2124
- Revert "Only stream output on tp rank 0" by @merrymercy in #2130
- Add initial support for intel Gaudi accelerators by @ankurneog in #2121
- Add simple CPU offloading support. by @janimo in #2081
- Fix grid size in Triton decoding kernel by @ispobock in #2134
- [CI] Fix test cases by @merrymercy in #2137
- Add concurrency option for benchmark by @cermeng in #2136
- Fix dp print message by @merrymercy in #2138
- fix: resolve bench_serving args by @zhyncs in #2139
- [router] cache-aware load-balancing router v1 by @ByronHsu in #2114
- Bump sglang-router to 0.0.5 by @ByronHsu in #2142
- update router doc by @ByronHsu in #2143
- fix dp_rank env by @ByronHsu in #2144
- Add more api routes (completion, health, etc) to the router by @ByronHsu in #2146
- add prefix match for certain tenant by @ByronHsu in #2147
- Improve sglang router by @ByronHsu in #2148
- Merged three native APIs into one: get_server_info by @henryhmko in #2152
- feat: remove the dependency on FusedMoE by @zhyncs in #2153
- feat: update gitignore and add tuning config for FusedMoE by @zhyncs in #2155
- fix: resolve end-of-file-fixer by @zhyncs in #2157
- Simplify `Scheduler.update_running_batch` by @merrymercy in #2154
- feat: update other MoE models deps by @zhyncs in #2156
- Update CI threshold & Improve code style by @merrymercy in #2159
- fix: use torch.sum for compatible by @zhyncs in #2161
- Fix mixed chunked prefill in overlap mode by @merrymercy in #2158
- Balance CI tests by @merrymercy in #2162
- Rename triton_fused_moe -> fused_moe_triton by @merrymercy in #2163
- Fix docs by @merrymercy in #2164
- [Fused moe] add tuning fused configs for qwen2 57b and mixtral 8x7b by @BBuf in #2167
- Allow overwrite flashinfer use_tensorcore by @merrymercy in #2169
- Replace prob based with threshold based load balancing by @ByronHsu in #2170
- feat: fused_moe fp8 monkey patch by @zhyncs in #2174
- [Fix] Avoid calling fill_vocab_mask for terminated requests by @Ubospica in #2175
- [CI] Split test cases in CI for better load balancing by @merrymercy in #2180
- Bump rustls from 0.23.16 to 0.23.18 in /rust by @dependabot in #2182
- [feat] Refactor session control interface and add CI by @Ying1123 in #2173
- [router] Replace print with logger by @ByronHsu in #2183
- Use custom allreduce w/ torch.compile by @merrymercy in #2185
- [Performance]: Process affinity to CPU cores with multiple sockets support by @HaiShaw in #2171
- Update CI threshold by @merrymercy in #2186
- Update XGrammar to the latest API by @Ubospica in #2176
- [router] Rust e2e test by @ByronHsu in #2184
- Input_embeds support by @RinRin-32 in #2052
- [CI] Minor fix for CI by @merrymercy in #2187
- Rename double sparsity config file by @merrymercy in #2188
- Release v0.3.6.post1 by @merrymercy in #2189
- Update sampler.py to skip the success check by @merrymercy in #2197
- remove unused imports by @WrRan in #2195
- Remove unresolved reference 'self' by @apemost in #2198
- using `is not` not `!=` to test `None` by @WrRan in #2196
- fix: add cuda-python for xgrammar by @zhyncs in #2199
- minor: update check_env by @zhyncs in #2201
- add sglang version to get_server_info by @binarycrayon in #2206
- docs: update adoption by @zhyncs in #2204
- Bump router to 0.0.9 with better logging by @ByronHsu in #2207
- Fix rust warning by @ByronHsu in #2208
- Fix flasky tests by @merrymercy in #2212
- [feat] Support session control for vision language models by @Ying1123 in #2210
- Use an env var SGLANG_SET_CPU_AFFINITY to set cpu affinity; turn it off by default by @merrymercy in #2217
- Revert "Use an env var SGLANG_SET_CPU_AFFINITY to set cpu affinity; turn it off by default" by @merrymercy in #2221
- Use an env var SGLANG_SET_CPU_AFFINITY to set cpu affinity; turn it off by default by @merrymercy in #2222
- Release v0.3.6.post2 by @merrymercy in #2214
- Rename DP_RANK to SGLANG_DP_RANK by @merrymercy in #2218
- [3rdparty, document] Updated Documentation that for triton fused_moe kernel tuning for AMD Instinct GPUs by @kkHuang-amd in #2191
- Bump sglang-router to 0.0.10 for env name change by @ByronHsu in #2226
- fix typo prompts by @qibaoyuan in #2224
- Remove fused_moe_grok by @merrymercy in #2223
- add profile in offline benchmark & update doc by @bjmsong in #2123
- Rename tuned MI300X config files for fused_moe_triton by @HaiShaw in #2228
- Update Install Method 2. From source by @HaiShaw in #2232
- Fix chunked prefill size for bench_offline_throughput by @merrymercy in #2234
- Disable overlap scheduler for multimodal models by @merrymercy in #2235
- Add OLMo2 model. by @janimo in #2233
- Crash the server correctly during error by @merrymercy in #2231
- Fix memory leak during abort by @merrymercy in #2238
- fix missing launch server import by @qeternity in #2242
- [fix] Fix prefix caching for multi-image/video by @Ying1123 in #2239
- Update backend.md by @merrymercy in #2250
- Update backend.md by @merrymercy in #2251
- Revert "Add simple CPU offloading support" by @Ying1123 in #2252
- Revert "Revert "Add simple CPU offloading support"" by @Ying1123 in #2253
- Simplify tokenizer manager by @merrymercy in #2254
- Fix hash collision for multi modal models by @merrymercy in #2256
- [Minor] fix the style for multimodal models by @merrymercy in #2257
- chore: bump v0.3.6.post3 by @zhyncs in https://github.com/sgl-project/sglang/pul...
Release v0.3.6
Highlights
- Reduce CPU overhead by enabling overlap scheduler by default. 1.1x higher throughput. (#2105, #2067, #2095)
- Support data parallelism for attention and MLA. 1.5x higher decoding throughput. (#1970, #2061)
- Cache-aware load balancer. 4x higher cache hit rate (#1934)
- Support xgrammar backend for grammar-guided decoding (#2056); see the sketch after this list
- Support Prometheus metrics (#1853, #1981)
- Support torch 2.5.1 (#2069) and torch-native tensor parallelism (#1876)
- Support graceful termination (#1838) and watchdog (#1816)
- Support notebook-style documentation (https://sgl-project.github.io/)
- Add an offline benchmark script (#1968)
- Bug, deadlock, NaN, and OOM fixes (#2083, #1850, #1800, #1779, #1789, #1858)
- New models: Phi3-small (#2062), Gemma-2 reward model (#1954), GPT-2 (#1833)
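As referenced in the xgrammar item above, here is a hedged sketch of grammar-guided decoding against a running server's native `/generate` endpoint. It assumes a server was launched separately (e.g. with `--grammar-backend xgrammar`) and listens on the conventional default port 30000; the schema and prompt are illustrative.

```python
# Sketch: request JSON-schema-constrained output from a running SGLang server.
# Assumes a server is already up and reachable at localhost:30000.
import json
import requests

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "population": {"type": "integer"}},
    "required": ["name", "population"],
}

resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Give me information about the capital of France in JSON: ",
        "sampling_params": {
            "temperature": 0,
            "max_new_tokens": 64,
            "json_schema": json.dumps(schema),  # schema is passed as a JSON string
        },
    },
)
print(resp.json()["text"])
```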
What's Changed
- Fix edge case for truncated by @ByronHsu in #1747
- Fuse more ops & Simplify token mapping by @merrymercy in #1758
- [API] add get memory pool size by @Ying1123 in #1760
- Fix perf regression for set_kv_buffer by @merrymercy in #1765
- [Fix] Fix abort in data parallelism by @merrymercy in #1767
- Fix stop condition for <|eom_id|> by @merrymercy in #1766
- Update docs by @merrymercy in #1768
- Fix missing additional_stop_token_ids by @merrymercy in #1769
- Fix out of memory message. by @hnyls2002 in #1771
- Crash the server on warnings in CI by @merrymercy in #1772
- Fix the perf regression due to additional_stop_token_ids by @merrymercy in #1773
- Fix MockTokenizer in the unit tests by @merrymercy in #1774
- [Bug] Catch any errors caused by parsing json schema by @zolinthecow in #1776
- [Fix] Fix NaN issues by fixing the cuda graph padding values for flashinfer by @merrymercy in #1779
- [Fix] Fix cuda graph padding for triton attention backend by @merrymercy in #1782
- check user-specified model_max_len with hf derived max_model_len by @BBuf in #1778
- Re-introduce `get_cuda_graph_seq_len_fill_value` by @merrymercy in #1783
- Enhance the test case for chunked prefill and check memory leak by @merrymercy in #1785
- Fix seq_lens_sum for cuda graph runner in padded cases by @merrymercy in #1789
- Qwen2vl support cuda graph and disable radix cache by @yizhang2077 in #1780
- Fix log parsing in the chunked prefill unit tests by @merrymercy in #1793
- Fix memory leak when doing chunked prefill by @hnyls2002 in #1787
- [Fix] Fix the log parsing in chunked prefill uni tests by @merrymercy in #1794
- Revert "Fix memory leak when doing chunked prefill" by @merrymercy in #1797
- Fix logprob in the overlapped mode by @merrymercy in #1795
- Release v0.3.4.post2 by @merrymercy in #1796
- [Performance] Support both xgrammar and outlines for constrained decoding by @DarkSharpness in #1752
- [Fix] Fix --skip-tokenizer-init by @merrymercy in #1798
- move max_position_embeddings to the last by @hliuca in #1799
- add support for ipynb by @zhaochenyang20 in #1786
- Fix possible ZMQ hanging by @hnyls2002 in #1800
- Set `ZMQ` buffer size heuristic by @hnyls2002 in #1801
- Allow consecutive ports when launching multiple sglang servers. by @hnyls2002 in #1802
- fix int conversion for `SGLANG_CPU_COUNT` by @ByronHsu in #1803
- Update ci workflows by @merrymercy in #1804
- Update links by @merrymercy in #1805
- Simplify our docs with complicated functions into utils by @zhaochenyang20 in #1807
- Fix docs ci by @zhaochenyang20 in #1808
- Provide an argument to set the maximum batch size for cuda graph by @merrymercy in #1809
- Improve the user control of new_token_ratio by @merrymercy in #1811
- Update hyperparameter_tuning.md by @merrymercy in #1813
- Add a watch dog thread by @merrymercy in #1816
- Fix unit tests by @merrymercy in #1817
- Add openAI compatible API by @zhaochenyang20 in #1810
- Fix Triton decode kernel & ut by @ispobock in #1819
- support token ids in `engine.generate` by @ByronHsu in #1820
- Fix docs deploy ci by @zhaochenyang20 in #1821
- [router] rust-based router by @ByronHsu in #1790
- Fix update_weights deadlock for DP by @ByronHsu in #1825
- fix get_memory_pool_size deadlock for DP by @ByronHsu in #1830
- Support setting `use_thread` in the `run_program` for easier debugging. by @liuyanyi in #1823
- [3rdparty, document] Add 3rdparty/amd, with profiling and tuning instructions to be added by @HaiShaw in #1822
- stop_str of qwen2-vl template should be a tuple not a str by @yizhang2077 in #1834
- [FP8 KV Cache, Mixtral] Avoid KeyError at loading pre-quantized FP8 m… by @HaiShaw in #1835
- Gpt2 by @DanielC12321 in #1833
- Imporve openai api documents by @zhaochenyang20 in #1827
- Update docs by @merrymercy in #1839
- Update README.md by @merrymercy in #1840
- [Production] Drain requests before exit when receive SIGTERM by @Ying1123 in #1838
- [Performance, Hardware] MoE weights padding to AMD MI300x GPUs by @HaiShaw in #1836
- Fix suggest edit by @zhaochenyang20 in #1842
- [Performance, Triton Kernel Args] _decode_grouped_softmax_reducev_fwd… by @HaiShaw in #1845
- Make decode log interval configurable by @ByronHsu in #1847
- Fix mixed chunked prefill by @merrymercy in #1850
- Refactor tokenizer manager by @ByronHsu in #1846
- Simplify documentation by @merrymercy in #1851
- Fix warnings in doc build by @merrymercy in #1852
- delete unused character by @geeker-smallwhite in #1855
- Fix memory leak for chunked prefill 2 by @merrymercy in #1858
- [Build, ROCm] Dockerfile.rocm for Instinct GPUs, with package updates by @HaiShaw in #1861
- Fix retraction + overlap by @hnyls2002 in #1860
- change file tree by @zhaochenyang20 in #1859
- Update vocab embedding deps and add TP switch by @ispobock in #1856
- minor: add human eval by @zhyncs in #1754
- Add vlm document by @zhaochenyang20 in #1866
- minor: update nightly eval by @zhyncs in #1867
- [3rdparty, document] Updated Documentation that covers performance tuning techniques for AMD Instinct GPUs. by @yichiche in #1871
- Improve docs and fix the broken links by @merrymercy in #1875
- Add a FAQ documentation by @merrymercy in #1877
- Update docs title by @merrymercy in #1879
- Update docs and workflow by @merrymercy in #1881
- Fix doc links by @merrymercy in #1882
- Fix incorrect context length for llama3.2-11b by @rchen19 in #1873
- add native api docs by @zhaochenyang20 in #1883
- Update index.rst to improve the order of docs by @merrymercy in #1885
- Native api by...
Release v0.3.4.post1
Highlights
- Hosted the first LMSYS online meetup: Efficient LLM Deployment and Serving.
- Covered CPU overhead hiding, faster constrained decoding, and DeepSeek MLA. Slides
- Added Engine API for offline inference with reduced overhead. Usage. #1614 #1567
- Added an overlap scheduler for reducing CPU overhead #1738
- New models: Llama 3.2 (#1551), QWen-VL2 (#1721), OLMo (#1676), GLM 4 (#1736).
- Added support for reward models #1525.
- Added support for Intel XPU #1480.
- Improved stability for greedy decoding #1589.
- Accelerated multi-LoRA serving #1587.
What's Changed
- [Fix] Ignore model import error by @merrymercy in #1513
- minor: fix config by @hnyls2002 in #1524
- [Event] Update meeting link by @Ying1123 in #1529
- [Feature] Support reward model LxzGordon/URM-LLaMa-3.1-8B by @Ying1123 in #1525
- Add float8 dynamic quant to torchao_utils by @jerryzh168 in #1528
- [FIX] Catch syntax error of Regex Guide to avoid crash by @du00cs in #1521
- [bugfix]Add modelscope package to avoid docker image without modelscope by @KylinMountain in #1520
- Fix RuntimeEndpoint.select method by @jeffrey-fong in #1495
- Multiple minor fixes by @merrymercy in #1530
- Make detokenizer_manager.py not asyncio by @merrymercy in #1532
- Organize image inputs by @hnyls2002 in #1531
- Improve process creation by @merrymercy in #1534
- fix ipv6 url when warm up model by @cauyxy in #1537
- Move scheduler code from tp_worker.py to scheduler.py by @merrymercy in #1538
- Process image in parallel by @hnyls2002 in #1539
- Let ModelRunner take InputMetadata as input, instead of ScheduleBatch by @merrymercy in #1541
- Rename InputMetadata -> ForwardBatch by @merrymercy in #1543
- Clean up batch data structures: Introducing ModelWorkerBatch by @merrymercy in #1544
- [Fix, LoRA] fix LoRA with updates in main by @Ying1123 in #1545
- Organize Attention Backends by @hnyls2002 in #1547
- Fix bugs of `logprobs_nums` by @hnyls2002 in #1548
- Dispatch flashinfer wrappers by @hnyls2002 in #1550
- Simplify flashinfer dispatch by @hnyls2002 in #1552
- [Refactor] Simplify io_struct and tokenizer_manager by @Ying1123 in #1549
- [Performance, Hardware] MoE tuning on AMD MI300x GPUs by @kkHuang-amd in #1554
- [Fix] Fix all the Huggingface paths by @tbarton16 in #1553
- [Fix] do not maintain regex_fsm in SamplingBatchInfo by @merrymercy in #1555
- [Fix] Move ScheduleBatch out of SamplingInfo by @merrymercy in #1556
- Move status check in the memory pool to CPU by @merrymercy in #1557
- [Fix] Fix AttributeError in Qwen2.5 LoRA: 'Qwen2ForCausalLM' object has no attribute 'get_hidden_dim' by @mssongit in #1536
- [FP8 KV Cache] Avoid KeyError at loading pre-quantized FP8 model with kv_scale by @HaiShaw in #1559
- Organize sampling batch info better by @merrymercy in #1562
- Use ipc instead of tcp in zmq by @merrymercy in #1566
- Make input_ids a torch.Tensor by @merrymercy in #1568
- [Minifix] Remove extra space in cot example by @FredericOdermatt in #1569
- [Fix] Fix major performance bug in certain cases by @Ying1123 in #1563
- Refine the add request reasons to avoid corner cases. by @hnyls2002 in #1574
- chore: update README.md by @eltociear in #1580
- [Easy] use .text() instead of .text by @ByronHsu in #1577
- [Event] Update README.md by @Ying1123 in #1572
- Add llama implementation with no tensor parallel linears by @jerryzh168 in #1561
- Backend method not found when SRT Runtime is used by @ByronHsu in #1576
- default sampling param should be deepcopied by @ByronHsu in #1581
- Fix styling by @ByronHsu in #1583
- Fix runtime.generate when sampling param is not passed by @ByronHsu in #1582
- Support min_tokens in sgl.gen by @ByronHsu in #1573
- [Minor] Improve the style and fix flaky tests by @merrymercy in #1584
- [Bug] Fix decode stats error on output_len 1 by @HaiShaw in #1585
- Clean up event loop by @merrymercy in #1586
- [LoRA, Performance] Speedup multi-LoRA serving - Step 1 by @Ying1123 in #1587
- [Minor, Performance] Use torch.argmax for greedy sampling by @Ying1123 in #1589
- Test consistency for single and batch seperately by @ByronHsu in #1590
- Update README.md by @merrymercy in #1591
- Fix modality for image inputs by @merrymercy in #1592
- Provide an offline engine API by @ByronHsu in #1567
- [Fix] Fix the case where prompt_len = 0 by @merrymercy in #1593
- Use `atexit` hook to implicitly shutdown `Runtime` by @ByronHsu in #1595
- Use is_flashinfer_available to replace is_hip for flashinfer check by @merrymercy in #1596
- Fix chunked prefill condition by @ispobock in #1594
- Fix the port_args in bench_latency by @merrymercy in #1597
- Remove references to squeezellm by @janimo in #1603
- [Profile] Add pytorch profiler by @Ying1123 in #1604
- [Engine] Fix generate hanging issue after the first call by @ByronHsu in #1606
- Release v0.3.3 by @merrymercy in #1605
- [Minor] Fix logging typo by @amosyou in #1615
- Fix test_vision_openai_server on CI by @ByronHsu in #1620
- [Performance, hardware] MoE tuning update to AMD MI300x GPUs by @HaiShaw in #1619
- Update README.md by @kushal34712 in #1625
- Update README.md by @merrymercy in #1629
- Add device support by @liangan1 in #1607
- Nit about the decorator of `PortArgs.init_new` by @glen-amd in #1611
- [Bug] Fix the Image Input of Batch Generation by @OBJECT907 in #1579
- Add the ability to enable and disable the Profiler via HTTP API. by @Abatom in #1626
- Fix the correctness test in bench_latency.py when tp > 1 and test_generation_models.py by @merrymercy in #1631
- Add image_token in conversation.py by @merrymercy in #1632
- Added a "Back To Top" Button by @JanumalaAkhilendra in #1633
- Fix constrained decoding by @merrymercy in #1634
- Add back data parallelism by @merrymercy in #1635
- Release v0.3.3.post1 by @merrymercy in #1636
- [engine] support async and streaming by @ByronHsu in #1614
- [Fix] Fix the style of test_large_max_new_tokens.py by @merrymercy in #1638
- fix missing ignore_eos in v1/chat/completions by @learninmou in #1642
- Fix ignore_eos in the OpenAI ChatCompletions API by @merrymercy in #1645
- [Feature, Hardware] Enable SGLang on XPU GPUs via PyTorch by @liangan1 in #1480
- Fix...
Release v0.3.2
Highlights
- Support torch.compile, cuda graph for triton attention backend and DeepSeek MLA (see the sketch after this list) #1442 #1422
- Initial support for multi-LoRA serving #1307
- Integrate torchao for quantization #1341
- Optimize the CPU scheduler overhead
- Multiple critical bug fixes for llama and llava (tokenizer, modality)
- Support AMD backend #1420
- New models: MiniCPM3, OLMoE
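As referenced in the first bullet, here is a rough sketch that turns on both features together using today's offline engine API; `enable_torch_compile` and `attention_backend="triton"` are assumed to mirror the `--enable-torch-compile` and `--attention-backend` server flags introduced around this release.

```python
# Sketch: combine torch.compile with the triton attention backend.
# Kwarg names are assumptions that mirror the server CLI flags.
import sglang as sgl

if __name__ == "__main__":
    llm = sgl.Engine(
        model_path="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model
        attention_backend="triton",     # assumed kwarg for --attention-backend
        enable_torch_compile=True,      # assumed kwarg for --enable-torch-compile
    )
    print(llm.generate(["Hello, my name is"], {"max_new_tokens": 16})[0]["text"])
    llm.shutdown()
```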
What's Changed
- Remove useless fields in global_config.py by @merrymercy in #1328
- docs: update README by @zhyncs in #1336
- docs: highlight ttft itl and throughput by @zhyncs in #1337
- docs: add conclusion by @zhyncs in #1340
- Optimize schedule by @hnyls2002 in #1339
- Fix some online scheduling delay by @hnyls2002 in #1345
- [triton] Support head_dim not 2^n in triton extend and decode attention by @ByronHsu in #1281
- [Feat] Add modalities for vision server when handling pixel values for llava by @kcz358 in #1346
- [server] Passing `model_override_args` to `launch_server` via the CLI. by @kevin85421 in #1298
- [Minor] Many cleanup by @merrymercy in #1357
- Add torchao quant (int4/int8/fp8) to llama models by @jerryzh168 in #1341
- [CI] Return output logprobs in unit test by @Ying1123 in #1361
- Unify forward mode by @hnyls2002 in #1360
- Support OpenAI API json_schema response format by @zifeitong in #1363
- Adding Documentation for installation by @zhaochenyang20 in #1300
- [Docs] Improve documentations by @merrymercy in #1368
- fix bug of `undefined is_single` in meth `create_abort_task` by @wcsjtu in #1370
- Support MiniCPM3 by @Achazwl in #1371
by @wcsjtu in #1370 - Support MiniCPM3 by @Achazwl in #1371
- Fix CORS compatibility with OpenAI, vLLM, TGI, LMDeploy by @josephrocca in #1373
- [Minor] improve kill scripts and torchao import by @merrymercy in #1375
- Fix vocab mask update bug by @hnyls2002 in #1376
- [Minor] move triton attention kernels into a separate folder by @merrymercy in #1379
- Deprecate --disable-flashinfer and introduce --attention-backend by @merrymercy in #1380
- Organize flashinfer indices update by @hnyls2002 in #1378
- remove assertion in triton attention and add an unit test by @ByronHsu in #1385
- BaiChuan2 Model by @blacker521 in #1367
- [Fix] Fix --disable-flashinfer by @merrymercy in #1389
- Improve error reporting during server launch by @merrymercy in #1390
- Refactor attention backend by @merrymercy in #1381
- Add no commit to main rule by @hnyls2002 in #1393
- Fix README format by @Achazwl in #1399
- Support cuda graph in the triton attention backend by @merrymercy in #1401
- kernel: use tensor cores for flashinfer gqa kernels by @yzh119 in #1403
- [Minor Fix] Fix llava modalities issue for single-image by @kcz358 in #1402
- Add Support for XVERSE Models (Dense and MoE) to sglang by @hxer7963 in #1397
- [Feature] Initial support for multi-LoRA serving by @Ying1123 in #1307
- [Minor, CI] remove lora test from minimal suite by @Ying1123 in #1406
- Make stop reason a dict instead of str by @merrymercy in #1407
- [CI] Include triton backend and online serving benchmark into CI by @merrymercy in #1408
- [Minor] Raise exception for wrong import by @Ying1123 in #1409
- Balance test in CI by @merrymercy in #1411
- Update pr-test.yml by @merrymercy in #1412
- ci: fix finish by @zhyncs in #1414
- Optimize conflicts between CUDA graph and vocab mask tensors by @hnyls2002 in #1392
- Add torchao quant for mixtral and qwen_moe by @jerryzh168 in #1418
- Add pytorch sampling backend ut by @ispobock in #1425
- fix: resolve nightly eval by @zhyncs in #1426
- Enable torch.compile for triton backend by @merrymercy in #1422
- Add libibverbs-dev to Dockerfile by @Aphoh in #1427
- Update backend.md by @merrymercy in #1429
- [Fix] Fix logprob and normalized_logprob by @merrymercy in #1428
- Release v0.3.1 by @merrymercy in #1430
- Remove deprecated configs by @merrymercy in #1431
- [Feature] Support LoRA path renaming and add LoRA serving benchmarks by @Ying1123 in #1433
- Revert "[Minor] Raise exception for wrong import (#1409)" by @Ying1123 in #1432
- Add constrained_json_whitespace_pattern to ServerArgs by @zifeitong in #1438
- Clean up model loader by @merrymercy in #1440
- Simplify sampler and its error handling by @merrymercy in #1441
- [Feature, Hardware] Enable SGLang on AMD GPUs via PyTorch for ROCm by @HaiShaw in #1420
- Fix torch compile for deepseek-v2 by @ispobock in #1442
- Add OLMoE model by @janimo in #1444
- Release 0.3.1.post1 by @merrymercy in #1445
- Enable MLA by default by @ispobock in #1447
- Fix attention backend by @ispobock in #1448
- fix schedule bug by @hnyls2002 in #1450
- Fix schedule bug by @hnyls2002 in #1451
- Fixed n>1 causing list index out of range with VLM by @jasonyux in #1449
- Add bench_server_latency.py by @merrymercy in #1452
- [Bugfix] Enable SGLang on AMD GPUs via PyTorch for ROCm (#1419) by @HaiShaw in #1453
- Fix oom issues with fp8 for llama by @merrymercy in #1454
- Fuse top_k and top_p in the sampler by @merrymercy in #1457
- [Event] Add public meeting invite to README by @Ying1123 in #1458
- fix: creat new dict everytime for putting new frame by @Luodian in #1464
- Fix padding in the cuda graph by @merrymercy in #1469
- Release v0.3.1.post2 by @merrymercy in #1470
- Fix env vars in bench_latency by @merrymercy in #1472
- feat: update linear deps 1/N by @zhyncs in #1305
- minor: add quant eval compared with base by @zhyncs in #1475
- Add OLMoE by @Muennighoff in #1476
- Fix triton head num by @ispobock in #1482
- Add MLA gsm8k eval by @ispobock in #1484
- chore: bump v0.3.1.post3 by @zhyncs in #1483
- fix incorrect links in documentation by @rchen19 in #1481
- doc: update backend by @zhyncs in #1486
- Better unit tests for adding a new model by @merrymercy in #1488
- Pr fix max workers by @wellhowtosay in #1456
- Add a unit test for data parallelism by @merrymercy in #1489
- Add AMD tests to CI by @Ying1123 in #1491
- Update dockerfile to include datamodel_code_generator by @merrymercy in #1492
- [API, Feature] Support response prefill for openai API by @Ying1123 in #1490
- minor: add mla fp8 test by @zhyncs in #1494
- Fix the overhead due to penalizer in bench_latency by @merrymercy i...
Release v0.3.0
Highlights
Checkout the release blog post https://lmsys.org/blog/2024-09-04-sglang-v0-3/ to find detailed instructions and descriptions for the items below.
- Up to 7x higher throughput for DeepSeek Multi-head Latent Attention (MLA)
- Up to 1.5x lower latency with torch.compile on small batch sizes
- Support for interleaved text and multi-image/video in LLaVA-OneVision
- Support for interleaved window attention and 2x longer context length in Gemma-2
- Chunked prefill is turned on by default (you can choose to separate or mix prefill and decode).
- Add multi-GPU accuracy, performance test, and nightly accuracy test for more models.
What's Changed
- update hyperparameter guide by @merrymercy in #1114
- ci: compatible with fork repo by @zhyncs in #1115
- fix: resolve Python.h header missing by @zhyncs in #1119
- Fix the deadlock in multi-node tp by @merrymercy in #1122
- Mixed style of chunked prefill by @hnyls2002 in #1013
- Fix port conflicts between local CI and runner CI. by @hnyls2002 in #1131
- Fix CI accuracy && time out limit by @hnyls2002 in #1133
- fix: use fp16 dtype for sm75 by @zhyncs in #1136
- Improve the code style: more comments and remove useless packages by @merrymercy in #1139
- Improve benchmark by @merrymercy in #1140
- Fix duplicated imports in hf_transformers_utils.py by @merrymercy in #1141
- fixed a typo by @min-xu-et in #1143
- [Docs] Add instruction for running on clouds and kubernetes with SkyPilot by @Michaelvll in #1144
- [Feat]Add support for optional start len of logprobs by @yichuan520030910320 in #1035
- Optimize MLA/GQA/MQA Triton decoding by @ispobock in #1138
- feat: allow streaming for multi-prompt and/or parallel sampling by @vhain in #1134
- Improve docs and warnings by @merrymercy in #1164
- [Feature] add disable-custom-all-reduce by @Xu-Chen in #1148
- misc: add hypervisor vendor by @zhyncs in #1165
- support /v1/health using a generation 1 token by @LucienShui in #1154
- fix: resolve README render by @zhyncs in #1166
- [Feat] Support update weights without restart server by @shanyu-sys in #1157
- Improve multi-node stability by @merrymercy in #1171
- fix: custom op fallback forward native when lower sm80 by @zhyncs in #1177
- [Feature] Add a function to convert sampling_params to kwargs by @gryffindor-rr in #1170
- Support min-p sampling by @intervitens in #1167
- [Docs] Fix rendering of details in README by @Michaelvll in #1179
- Improve code style of sampler by @hnyls2002 in #1168
- [Minor] Improve logging and rename the health check endpoint name by @merrymercy in #1180
- Fix broken penalty by @hnyls2002 in #1184
- Fix benchmark script by @Ying1123 in #1185
- [Feat] add llava-onevision, with support for (1) siglip encoder, (2) qwen2 decoder (3) openai api compatible server. by @kcz358 in #1123
- feat: use gelu_tanh_and_mul by @zhyncs in #1193
- Cleanup readme, llava examples, usage examples and nccl init by @merrymercy in #1194
- Update README.md by @merrymercy in #1198
- [CI] Fix the problem of hf runner too slow by @Ying1123 in #1202
- [Fix] the issue of random order when input is a list by @Ying1123 in #1199
- Relax the assert in moe throughput test to fix the flaky CI by @merrymercy in #1207
- [Fix] Fixing the multi-images error for llava-onevision by @kcz358 in #1205
- Support Alibaba-NLP/gte-Qwen2-7B-instruct embedding Model by @zhaochenyang20 in #1186
- [Minor] Improve the function organization in TokenizerManager & improve loggers by @merrymercy in #1208
- [Minor] Temporarily skip flaky test by @Ying1123 in #1209
- [CI] Fix the issue of unit test hanging by @Ying1123 in #1211
- Update CI workflows by @merrymercy in #1210
- Update CI runner docs by @merrymercy in #1213
- [Feature] Support fp8 e5m2 kv cache with flashinfer by @ispobock in #1204
- Update workflow files by @merrymercy in #1214
- improve the threshold and ports in tests by @wisclmy0611 in #1215
- [CI] Fix CI by @wisclmy0611 in #1217
- [Fix] Multi-images loading error by @kcz358 in #1218
- [Minor] improve CI and dependencies by @hnyls2002 in #1212
- [CI] Parallelize unit tests in CI by @wisclmy0611 in #1219
- Move sampler into CUDA graph by @hnyls2002 in #1201
- chore: bump v0.2.14 by @zhyncs in #1155
- [FEAT] JSON constrained support by @havetc in #1125
- Torch compile CI throughput test by @hnyls2002 in #1223
- [FEAT] Support batches cancel by @caiyueliang in #1222
- [Minor] add delete test and delete tmp file on ci server by @yichuan520030910320 in #1227
- [FIX] Wrong logger by @havetc in #1230
- feat: replace get_act_fn for gpt_bigcode by @zhyncs in #1231
- Fix readme by @ArtificialZeng in #1236
- Fix bench latency benchmark by @hnyls2002 in #1225
- [Minor] Add more type annotations by @merrymercy in #1237
- feat: support sm75 with FlashInfer v0.1.6 by @zhyncs in #1233
- Update README.md by @merrymercy in #1239
- hotfix: revert sampler CUDA Graph by @zhyncs in #1242
- Add sglang.bench_latency to CI by @merrymercy in #1243
- fix: increase max_new_tokens when testing generation models by @zhyncs in #1244
- feat: update GemmaRMSNorm by @zhyncs in #1232
- Fix llava on multi images by @merrymercy in #1247
- feat: replace GeluAndMul by @zhyncs in #1234
- fix: resolve qwen2 moe weight loader by @zhyncs in #1252
- chore: bump v0.2.14.post2 by @zhyncs in #1250
- make json_schema usable from gen by @qeternity in #1254
- fix data racing due to mutable reference using deepcopy by @xiezhq-hermann in #1255
- Sampler cudagraph by @hnyls2002 in #1253
- fix: multimodal_config in monkey_patch_vllm_dummy_weight_loader by @lxww302 in #1260
- Transpose mla weight offline by @ispobock in #1261
- EXAONE 3.0 Model Support by @Deepfocused in #1258
- Update README Support Exaone 3.0 by @Deepfocused in #1267
- Report median instead of mean in bench_latency.py by @merrymercy in #1269
- Allow more flexible assistant and system response by @BabyChouSr in #1256
- fix: resolve the fp8 bug introduced by vLLM 0.5.5 by @zhyncs in #1276
- [doc] fix quick start link by @ByronHsu in #1282
- Optimize the update flashinfer indices by @xiaobochen123 in #1262
- [CI] Add more multi-gpu tests by @merrymercy in #1280
- feat: fix fp8 for MLA and support bmm fp8 for DeepSeek V2 by @zhyncs in #1285
- [CI] merge all ci tests into one file by @merrymercy i...