Merged
Ying1123 pushed a commit that referenced this pull request on Sep 13, 2024: …e flashinfer decode kernel (#6)
chunyuan-w added a commit to chunyuan-w/sglang that referenced this pull request on Feb 20, 2025:
sgl-project#6)
* Optimize all_reduce by porting the shm kernel of deepspeed
* Fix rebase: use get_tp_group in sglang.srt.distributed
* Fix rebase: directly modify tensor_model_parallel_all_reduce in sglang
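The commit above ports a shared-memory (shm) all-reduce and wires it in by modifying `tensor_model_parallel_all_reduce`. A minimal sketch of the dispatch idea, assuming a ported `shm_all_reduce` kernel is available; the wrapper and fallback wiring here are illustrative, not sglang's actual code:

```python
# Illustrative sketch only: route tensor-parallel all-reduce through a
# shared-memory kernel when one is available, else fall back to the
# default torch.distributed path. `shm_all_reduce` stands in for the
# kernel ported from DeepSpeed; it is not a real sglang symbol.
import torch
import torch.distributed as dist

def make_tp_all_reduce(shm_all_reduce=None):
    def tensor_model_parallel_all_reduce(t: torch.Tensor) -> torch.Tensor:
        if shm_all_reduce is not None and t.device.type == "cpu":
            # Intra-node shared-memory path (DeepSpeed-style shm kernel).
            return shm_all_reduce(t)
        # Default path; requires an initialized process group.
        dist.all_reduce(t)
        return t
    return tensor_model_parallel_all_reduce
```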
timethink pushed a commit to timethink/sglang that referenced this pull request on Mar 9, 2025.
chunyuan-w added a commit to chunyuan-w/sglang that referenced this pull request on Mar 11, 2025, with the same commit message as the Feb 20 entry above.
chunyuan-w added three commits to chunyuan-w/sglang that referenced this pull request on Mar 14, 2025, again with the same commit message.
This was referenced Apr 16, 2025
Xia-Weiwen pushed a commit to Xia-Weiwen/sglang that referenced this pull request on Sep 5, 2025: Add Deepseek FP8FP8 brgemm kernel
someoneexistsontheinternet pushed a commit to someoneexistsontheinternet/sglang that referenced this pull request on Oct 23, 2025: support for more than one host name
kalyank007 pushed a commit to kalyank007/sglang that referenced this pull request on Nov 7, 2025:
Co-authored-by: svc_repro_tool <svc_repro_tool@habana.ai>
fstandhartinger pushed a commit to fstandhartinger/sglang that referenced this pull request on Nov 11, 2025: Merge upstream changes, 20251022
nithinsubbiah pushed a commit to nithinsubbiah/sglang that referenced this pull request on Nov 21, 2025:
Signed-off-by: Stanley Winata <stanley.winata@amd.com>

[Wave] Add wave extend attention kernel
Signed-off-by: Harsh Menon <harsh@nod-labs.com>

[Wave] Adding logit_cap and layer scaling to API
Also add support for the wave backend to the model runner. And use Triton decode kernels for now.

[Wave] Run chunked prefill for perf comparison on Wave test
Need to rename the non chunked/regular prefill version because otherwise rpd will treat it as the same kernel
Signed-off-by: Stanley Winata <stanley.winata@amd.com>

[Wave] Cache the function that loads the wave kernel
Also maintain a global kernel hash to avoid recomputing the hash on every call.

[Wave] Don't specify block size and enable buffer ops

[Wave] Enable wave runtime and update scheduling API

[Wave] Update API to use wave_compile & WaveCompileOptions

[Wave] Update wave backend and extend attention to latest

[Wave] Add speculative decode kernel
Signed-off-by: nithinsubbiah <nithinsubbiah@gmail.com>

cache kernels using lru_cache

Update WaveBackend to use Wave Decode (sgl-project#6)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Revert "Update WaveBackend to use Wave Decode (sgl-project#6)" (sgl-project#7)
This reverts commit eac4599.

Wave Backend decode (sgl-project#8)
* align shapes
* fix
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Wave backend fixes (sgl-project#10)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

More fixes to Wave decode (sgl-project#12)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

is_causal
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Enable the grok in3 model (sgl-project#14)

Set unique cache dir for each worker (sgl-project#16)

update kernel (sgl-project#18)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

updated spec decode test as per wave
Signed-off-by: xintin <gaurav.verma@amd.com>

fix extend (sgl-project#23)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Refactor paged decode intermediate arrays shapes (sgl-project#24)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

remove dyn symbols (sgl-project#26)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

cleanup shapes (sgl-project#27)
Some fields were removed from `paged_decode_attention_shape`.
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Remove `mha` param from Wave decode attention kernel (sgl-project#28)
Depends on iree-org/iree-turbine#1039
Signed-off-by: Paul Zhang <paul.zhang@amd.com>

nfc: fix problems reported by linting

update references from iree.turbine to wave_lang
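A recurring theme in the log above is kernel caching ("Cache the function that loads the wave kernel", "cache kernels using lru_cache"). A minimal sketch of that pattern, with a stand-in compile step rather than the real wave_compile API:

```python
# Sketch of caching compiled kernels with functools.lru_cache.
# `compile_kernel` is a stand-in for the expensive wave_compile step;
# lru_cache keys on the hashable arguments, so each unique
# (shape, block_size) pair is compiled exactly once.
from functools import lru_cache

def compile_kernel(shape: tuple, block_size: int) -> str:
    return f"kernel<{shape}, block_size={block_size}>"  # pretend this is slow

@lru_cache(maxsize=None)
def get_kernel(shape: tuple, block_size: int) -> str:
    return compile_kernel(shape, block_size)

k1 = get_kernel((128, 64), 32)
k2 = get_kernel((128, 64), 32)  # cache hit: no recompilation
assert k1 is k2
```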
chz34 added a commit to chz34/sglang that referenced this pull request on Dec 4, 2025.
yhyang201 pushed a commit that referenced this pull request on Dec 13, 2025:
* Replace type to isinstance
* Check --encode-urls
* Add async lock for rid
* Move thread logic into mm_receiver
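Two of these items lend themselves to a short sketch: `isinstance` handles subclasses where a `type(...) ==` comparison does not, and a per-`rid` asyncio lock serializes concurrent updates to one request's state. The lock-table shape here is an assumption, not the PR's actual code:

```python
# Sketch: isinstance vs. type, and a per-rid async lock (illustrative).
import asyncio
from collections import defaultdict

class Prompt(str):  # a str subclass
    pass

x = Prompt("hi")
assert type(x) != str          # type() comparison misses subclasses
assert isinstance(x, str)      # isinstance does not

# One lock per request id, created lazily; assumed shape, not sglang's.
_rid_locks: defaultdict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

async def update_request_state(rid: str) -> None:
    async with _rid_locks[rid]:  # serialize mutations for this rid
        await asyncio.sleep(0)   # placeholder for the real update

asyncio.run(update_request_state("req-1"))
```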
triple-mu pushed a commit to triple-mu/sglang that referenced this pull request on Jan 1, 2026. The squashed commit messages:
1. rebase
2. remove duplicated code
3. add type hints
4. add clear cache for benchmark alignment
5. remove unused arg
6. clear cache once
triple-mu pushed a second commit to triple-mu/sglang that referenced this pull request on Jan 1, 2026. The squashed commit messages repeat the six above and add:
7. simplified VAE cache logic for qwenimage and wan
8. remove duplicated code
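The "clear cache for benchmark alignment" items presumably reset accelerator state between timed runs so each run starts from the same allocator state. A hedged sketch of that idea, assuming the cache in question is the CUDA allocator cache:

```python
import torch

def reset_cache_for_benchmark() -> None:
    """Put the allocator in a comparable state before each timed run."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()   # drain pending kernels first
        torch.cuda.empty_cache()   # release cached allocator blocks
```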
Garrybest pushed a commit to Garrybest/sglang that referenced this pull request on Jan 9, 2026:
* add get_default_sampling_params definition
* Merge pull request sgl-project#6 from primatrix/feat/align-sampling-for-tunix: align sampling param ability according to rfc
* add multinomial_with_seed for sampler and test_sampler.py (sgl-project#12)
* update flax: fix duplicate register pytree and use nnx.data to wrap FlashAttentionMetadata
* extract scheduler thread
* add event loop
* fix duplicate params
* use server parameters
* add tree_flatten & tree_unflatten
* with mesh
Co-authored-by: aolemila <aolemilaluo@gmail.com>
Co-authored-by: pathfinder-fp <aaaabbbbbb@163.com>
Co-authored-by: aolemila <aolemila@primatrix.ai>
Co-authored-by: pathfinder-fp <slackexplorer@gmail.com>
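The `multinomial_with_seed` item suggests per-request deterministic sampling. A sketch of the idea in PyTorch (the branch above appears to target flax/nnx, so this illustrates the technique, not the ported code):

```python
import torch

def multinomial_with_seed(probs: torch.Tensor, seed: int) -> torch.Tensor:
    """Draw one sample per row; the same seed yields the same sample."""
    gen = torch.Generator(device=probs.device)
    gen.manual_seed(seed)
    return torch.multinomial(probs, num_samples=1, generator=gen)

probs = torch.tensor([[0.1, 0.2, 0.7], [0.5, 0.3, 0.2]])
a = multinomial_with_seed(probs, seed=42)
b = multinomial_with_seed(probs, seed=42)
assert torch.equal(a, b)  # deterministic for a fixed seed
```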
MatejKosec added a commit to MatejKosec/sglang that referenced this pull request on Feb 25, 2026:
- Validate alloc reply_id matches request_id (sgl-project#3)
- Remove dead variable num_gen_tokens (sgl-project#4)
- Move inline imports to top level (sgl-project#5)
- Replace hasattr guards with proper None checks (sgl-project#6)
- Demote per-request logs to DEBUG, keep milestones at INFO (sgl-project#11)
- Remove unused tree_cache param from start_kv_return_receiver (sgl-project#14)
MatejKosec added a commit to MatejKosec/sglang that referenced this pull request on Feb 26, 2026, with the same message as the Feb 25 entry above.
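"Replace hasattr guards with proper None checks" is a common cleanup: declaring the attribute up front makes absence an explicit None rather than a missing attribute that hasattr papers over. A generic sketch; the class and field names are hypothetical, not the PR's code:

```python
# Sketch: prefer explicit Optional fields over hasattr guards.
from typing import Optional

class Receiver:
    def __init__(self) -> None:
        self.kv_sender: Optional[object] = None  # always defined

    def shutdown(self) -> None:
        # Before: if hasattr(self, "kv_sender"): self.kv_sender.close()
        # hasattr also hides typos and AttributeErrors raised by properties.
        if self.kv_sender is not None:
            self.kv_sender.close()

Receiver().shutdown()  # no-op when kv_sender was never attached
```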
lawrence-harmonic added a commit to lawrence-harmonic/sglang that referenced this pull request on Mar 19, 2026:
…ject#6)
Four fixes for PD disaggregation hangs around weight updates and transient failures:
1. DecodePreallocQueue timeout: requests stuck in the prealloc queue with waiting_for_input=True but insufficient KV cache memory now time out after SGLANG_DISAGGREGATION_TRANSFER_TIMEOUT (default 600s) instead of hanging indefinitely. This closes a gap where no existing timeout covered this state.
2. Pre-aborted bootstrap rooms: if an abort arrives on the prefill side before the corresponding request enters the bootstrap queue, the bootstrap_room is recorded. When the request later arrives, it is immediately aborted instead of entering the queue and potentially hanging.
3. Pause/resume queue draining: in PD disaggregation, prefill and decode event loops now continue advancing already-admitted bootstrap and transfer queues while /pause_generation mode=in_place is active. This prevents in-flight requests from sitting until the 1800s disaggregation timeout fires.
4. Decode PREBUILT preservation on pause: if pause_generation lands after a decode PREBUILT batch has left waiting_queue but before it is merged into running_batch, the batch is now committed before pause state is finalized. This prevents a small number of requests from disappearing and timing out at the client.
Also updates the PD pause/resume regression test documentation to cover both pause-related failure modes.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: chatgpt-codex-connector[bot] <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>
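Fix (1) above bounds how long a request may sit in the prealloc queue. A hedged sketch of such a timeout sweep; only the env-var name comes from the message, while the queue and request fields are hypothetical:

```python
import os
import time
from dataclasses import dataclass, field

TRANSFER_TIMEOUT_S = float(
    os.getenv("SGLANG_DISAGGREGATION_TRANSFER_TIMEOUT", "600"))

@dataclass
class QueuedReq:  # hypothetical stand-in for a prealloc-queue entry
    rid: str
    waiting_for_input: bool
    enqueue_ts: float = field(default_factory=time.monotonic)

def sweep_timed_out(queue: list[QueuedReq]) -> list[QueuedReq]:
    """Remove and return entries that exceeded the transfer timeout."""
    now = time.monotonic()
    expired = [r for r in queue
               if r.waiting_for_input and now - r.enqueue_ts > TRANSFER_TIMEOUT_S]
    for r in expired:
        queue.remove(r)  # caller aborts these instead of letting them hang
    return expired
```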
apinge pushed a commit to apinge/sglang that referenced this pull request on Mar 31, 2026:
* Initial commit
* Apply review comments
* Debug
* Add model test
* add CI script permission to be executable
* Update model path, fix aiter path in original docker image
* Disable cudagraph for debug
* AOT Prebuild aiter gemma rmsnorm fusion kernel
* Comment out curl single test temporarily
* Comment out curl single test temporarily
* Enable cuda graph
* Fix launch server crash issue
* Update GPU_ARCHS and PYTORCH_ROCM_ARCH
* Fix bug
* Fix
* Fix
* Fix
* Fix
* Fix curl test
* Clean up build cache & images
* Clean up build cache & images
* Fix format
(each commit Signed-off-by: Xiake Sun <xiake.sun@amd.com>)