Merged
6 changes: 3 additions & 3 deletions python/sglang/srt/server_args.py
@@ -40,7 +40,6 @@
     get_device,
     get_device_memory_capacity,
     get_device_sm,
-    is_blackwell,
     is_blackwell_supported,
     is_cuda,
     is_fa3_default_architecture,
@@ -1350,7 +1349,8 @@ def _handle_attention_backend_compatibility(self):

     1. Models with MHA Architecture (e.g: Llama, QWen)
     1.1 We will turn on FA3 on hopper unless user use spec decode with topk > 1 or page_size > 1.
-    1.2 Use trtllm_mha for Blackwell excluding spec with topk > 1.
+    1.2 Use trtllm_mha for SM100/SM103 (Blackwell B200/GB200/B300) excluding spec with topk > 1.
+        Note: trtllm_mha does not support SM120, which will fall back to flashinfer.
     1.3 In other cases, we will use flashinfer if available, otherwise use triton.
     2. Models with MLA Architecture and using FA3
     2.1 We will use FA3 backend on hopper.
@@ -1366,7 +1366,7 @@ def _handle_attention_backend_compatibility(self):
                 and is_fa3_default_architecture(self.model_config.hf_config)
             ):
                 self.attention_backend = "fa3"
-            elif is_blackwell() and is_no_spec_infer_or_topk_one(self):
+            elif is_sm100_supported() and is_no_spec_infer_or_topk_one(self):
                 self.attention_backend = "trtllm_mha"
            elif is_hip():
                self.attention_backend = "aiter"
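The MHA selection rules in the updated docstring can be sketched as a small standalone function. This is a hypothetical simplification, not the real implementation: the actual logic lives in `ServerArgs._handle_attention_backend_compatibility` and queries live device and model state (`is_sm100_supported`, `is_no_spec_infer_or_topk_one`, etc.), whereas the helper below takes an SM version and spec/paging parameters directly.

```python
def choose_mha_backend(
    sm_version: int,
    spec_topk: int = 1,
    page_size: int = 1,
    flashinfer_available: bool = True,
) -> str:
    """Pick an attention backend for an MHA model (e.g. Llama, QWen).

    Hypothetical sketch of rules 1.1-1.3 from the docstring in this diff.
    """
    # 1.1: FA3 on Hopper (SM90) unless spec decode with topk > 1 or page_size > 1.
    if sm_version == 90 and spec_topk <= 1 and page_size <= 1:
        return "fa3"
    # 1.2: trtllm_mha on SM100/SM103 (Blackwell B200/GB200/B300), excluding
    #      spec with topk > 1. SM120 is not supported by trtllm_mha, so it
    #      falls through to rule 1.3.
    if sm_version in (100, 103) and spec_topk <= 1:
        return "trtllm_mha"
    # 1.3: otherwise use flashinfer if available, else triton.
    return "flashinfer" if flashinfer_available else "triton"
```

Note how the fallthrough mirrors the new docstring line: an SM120 device never matches rule 1.2 and ends up on flashinfer (or triton), which is the behavior change this PR documents.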