[Kernel] [Quantization] Add MXFP4 and bias support for marlin kernel #22428
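The title references MXFP4, the OCP Microscaling 4-bit float format: blocks of 32 E2M1 elements (1 sign, 2 exponent, 1 mantissa bits) sharing a single power-of-two (E8M0) scale. As background for the commit log below, here is a minimal NumPy reference sketch of MXFP4 block quantize/dequantize; it follows the OCP MX spec's scale rule and is independent of the actual Marlin CUDA kernel implemented in this PR:

```python
import numpy as np

# Representable magnitudes of FP4 E2M1 (max value is 6.0, so emax = 2).
FP4_E2M1_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_quantize_block(block):
    """Quantize a 32-element block: one shared power-of-two (E8M0) scale
    plus round-to-nearest E2M1 elements."""
    amax = np.max(np.abs(block))
    # OCP MX scale rule: 2^(floor(log2(amax)) - emax_elem), emax_elem = 2.
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2) if amax > 0 else 1.0
    scaled = block / scale
    # Round each element to the nearest representable E2M1 magnitude.
    idx = np.argmin(np.abs(np.abs(scaled)[:, None] - FP4_E2M1_VALUES), axis=1)
    q = np.sign(scaled) * FP4_E2M1_VALUES[idx]
    return scale, q

def mxfp4_dequantize_block(scale, q):
    return scale * q

# A block built from exactly representable values round-trips losslessly.
block = 2.0 * np.tile(np.concatenate([FP4_E2M1_VALUES, -FP4_E2M1_VALUES]), 2)
scale, q = mxfp4_quantize_block(block)
x_hat = mxfp4_dequantize_block(scale, q)
```

A real kernel would pack the 4-bit codes and E8M0 scales into contiguous buffers; this sketch only shows the numerics of the format.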
Merged: simon-mo merged 1,081 commits into vllm-project:main from jinzhen-lin:marlin-mxfp4-bias on Aug 14, 2025.
Commits (1,081):
2ea20a1
[Bugfix] fix when skip tokenizer init (#21922)
lengrongfu 5e0a899
security policy: take 1 (#21119)
sidhpurwala-huzaifa 024bae4
[Bugfix] [Performance] DeepEPHighThroughput + DeepSeek : Quant before…
varun-sundar-rabindranath 23e9231
Enable headless models for pooling in the Transformers backend (#21767)
hmellor d939b3f
[Misc] Minor enhancement of benchmark_moe (#22068)
jeejeelee 8d45d88
Fix pre-commit failure for SECURTIY.md (#22102)
mgoin 0361fbc
[compile][startup] Disable C++ compilation of symbolic shapes (#20836)
anijain2305 46b5ada
Introduce RayPPCommunicator for ray-based PP (#21660)
ruisearch42 3c9cf54
Add lora test for tp>1 case for TPU. (#21970)
vanbasten23 cfa5d09
[BugFix] Harden distributed DP startup (#21538)
njhill 61cfee8
[CI] Initial tests for SM100 Blackwell runner (#21877)
mgoin 8854ac4
[Perf] Optimize `reshape_and_cache_flash` CUDA Kernel (#22036)
yewentao256 c30510f
feat: Add Support GPTQ Quantization MOE on ROCM vllm serve (#21733)
JartX fe51612
[V1][CUDA] Full cudagraph support for FlashInfer (#21367)
fhl2000 e01266e
[Model] Qwen2.5 VL SiLU-and-Mul (#22066)
vllmellm 8e1e504
[Misc] `VLLM_TARGET_DEVICE.lower()` (#22101)
NickLucche cf16bc2
[Misc] DeepGemmExperts : Avoid JIT generation in the hot-path (#21955)
varun-sundar-rabindranath 1b1dcc7
[Speculators][Speculative Decoding] Add Qwen Eagle3 Support (#21835)
dsikka a698cfc
[BugFix] Improve internal DP load balancing (#21617)
njhill 5adc4f7
[Test] Add Unit Test for Batched DeepGEMM (#21559)
yewentao256 d857eba
[Attention][DBO] Add support for "splitting" the CommonAttentionMetad…
SageMoore 5e1e33c
[FEAT][ROCm] Enable running Flash Attention as ViT attn backend for Q…
vllmellm 9280075
[Misc] Getting and passing ray runtime_env to workers (#22040)
ruisearch42 aab9737
Fix test_kv_sharing_fast_prefill flakiness (#22038)
sarckk b81b304
[Bugfix] Mamba2 remove bugged initial state condition in chunk scan (…
cyang49 cc3a06f
docs: remove deprecated disable-log-requests flag (#22113)
2d6070c
[PERF] Use faster way of decode in tokenizer: avoid useless list-to-l…
vadiklyutiy 3ebe718
for glm-4.1V update (#22000)
zRzRzRzRzRzRzR ee9e5c1
[Model] Mamba2 preallocate SSM output tensor to avoid d2d copy overhe…
cyang49 852dbd9
[Frontend] Improve error message for too many mm items (#22114)
DarkLight1337 f2f2c1b
[V1] [Hybrid] Validate compatibility of attention backend batch reord…
tdoublep a118857
[xpu]support moe models on XPU platform (#21643)
yma11 8dbd196
Revert "[compile][startup] Disable C++ compilation of symbolic shapes…
xiszishu e9d7c1d
[Misc] Bump ray to 2.48.0 (#22123)
ruisearch42 3294e45
[Fix] Fix llama4 modelopt weight loading error (#22107)
jiahanc c4f893f
[Misc] Add tensor schema test coverage for multimodal models (#21754)
Isotr0py 93610de
[Benchmark] Support ready check timeout in `vllm bench serve` (#21696)
yeqcharlotte 9bb57ec
Support CUTLASS NVFP4 (w4a4) for Blackwell Geforce GPUs (SM120) (#21309)
LopezCastroRoberto 6437727
[Misc] update doc comment for send (#22026)
andyxning 4dcef5b
[executor] feat: add supports_pp attr to executors (#21786)
eric-haibin-lin 6e053b9
[V1] [P/D] Refactor KV Connector Path (#21980)
sdavidbd a383e60
[Responses API] Disable response store by default (#22137)
WoosukKwon 11b59d1
[CI/Build][Bugfix] Fix Qwen2.5 tests in CPU CI via fallback silu_and_…
bigPYJ1151 2f2a07f
Add chat doc in quick start (#21213)
TankNee 2dad4e6
fuse fp32 for GLM-4.5 e_score_correction_bias (#22143)
zRzRzRzRzRzRzR ce5621e
[Bugfix] Fix failing multimodal standard test (#22153)
Isotr0py e76ae81
Use `aiohttp` connection pool for benchmarking (#21981)
eicherseiji 8438063
[fix] fix correct assertion syntax error in attention utils. (#22154)
skyloevil ce75d12
[RLHF] Fix torch.dtype not serializable in example (#22158)
22quinn a2b02eb
[PD] add test for chat completions endpoint (#21925)
Abirdcfly c362f3c
remove duplicate code within cleanup_dist_env_and_memory (#22147)
andyxning 7ba6a65
Add tree attention backend for v1 (part 1) (#20401)
TheEpicDolphin 925ae38
[refactor] improve ConstantList exception specificity (#22156)
skyloevil 6dbbfa2
Remove index_put from MM embeddings merging (#22105)
chenxi-yang 941ff09
[CI Bugfix] Fix wNa16 kernel not found for test_shared_storage_connec…
tlrmchlsmth 0e1c84d
[Misc] Modify the organization of GLM series (#22171)
jeejeelee 65e17c1
[feat] move WEIGHT_SCALE_SUPPORTED into raise block to accelerate RLH…
weixiao-huang 34df04c
[Bugfix] Fix failing GGUF models test (#22174)
Isotr0py 254d1e8
[Sampler] Support returning all logprobs or logits (#21792)
22quinn 2090680
[Doc] Update pooling model docs (#22186)
DarkLight1337 5f73d90
Fix Arcee model weight loading: Add custom load_weights (#21725)
alyosha-swamy 0732912
[Responses API] Ignore `store=True` and process the request by defaul…
WoosukKwon 48e3973
[Bug] Update auto_tune.sh to separate benchmarking and profiling. (#2…
ericehanley 4207e96
[Bugfix][V1][P/D]Fix the uneven polling issue in the toy proxy for P2…
Abatom efca991
[NVIDIA] Auto detect modelopt quant and fix DSR1-FP4 weight loading (…
nvpohanh 4344398
[Bugfix] V1 Fix the cursor leakage issue during request scheduling. (…
CLFutureX 87b30bc
Revert "[Bugfix] V1 Fix the cursor leakage issue during request sched…
WoosukKwon 99eea67
[V1] reduce block size for tree attention correctness test to fix 'ou…
TheEpicDolphin 8e3248b
[V0 deprecation][P/D] Deprecate v0 `KVConnectorBase` code (1/2) (#21785)
lk-chen 09585eb
[FEAT] Refactor ROPE into module (#22192)
tjtanaa bbeddff
[ROCm][Bugfix] Compilation passes fix (#22202)
gshtras 6bada90
self.gate dtype update for GLM-4.5 (#22203)
zRzRzRzRzRzRzR e88a388
[Log] DeepGEMM Update Log for Unaligned Problem Size (#22208)
yewentao256 1004673
fix: kimi_k2 return empty tool call list (#22149)
tlipoca9 d496544
[Misc] Remove pass_config from CompilationConfig dump_json excluded (…
elvischenv a239863
[Doc] add backend to doc string of initialize_model_parallel (#22142)
andyxning 8ea25cd
[Misc] log more detailed message for ensure_model_parallel_initialize…
andyxning 0f9fda8
Optimize configuration access with LRU cache in custom ops (#22204)
skyloevil 3ef9666
[Bugfix] Misaligned params in TreeAttentionImpl (#22226)
DarkLight1337 1368161
[UX] Fail if an invalid attention backend is specified (#22217)
mgoin 2535ed4
[Core] Factor out common logic for MM budget calculation (#22228)
DarkLight1337 5559f6c
[Model] Pooling model activation supports per request control by Pool…
noooop 7270051
[Docs][TPU] Highlight TPU Software version selection (#22242)
NickLucche 36f9361
Migrate KimiVLImagePixelInputs to TensorSchema (#21769)
bbeckca 9c14e03
[Feature] Non-contiguous Support for FP8 Quantization (#21961)
yewentao256 fb2712f
[NVIDIA] Support Flashinfer TRT-LLM Prefill Attention Kernel (#22095)
elvischenv 690f05c
[Misc] correct static type check for GroupCoordinator (#21946)
andyxning 2ec868e
[V0 Deprecation][TPU] Remove V1 flag check from tests (#22248)
NickLucche 8550273
Use UV_LINK_MODE=copy in Dockerfile to avoid hardlink fail (#22128)
mgoin 43b3fce
[CI/Build] Update flashinfer to 0.2.9 (#22233)
mgoin 2ac177c
[Refactor] Remove Unused Environment Variable `VLLM_NO_DEPRECATION_WA…
yewentao256 41deb30
[V1] port xformers backend to v1 (#21342)
TheEpicDolphin 364baf6
[bugfix] fix blackwell deepep installation (#22255)
youkaichao 3e95f70
[CI][TPU] Fix docker clean up (#22271)
lsy323 7429b46
[Bugfix] Remove faulty test for oot attention backend (#22286)
mgoin 6a621c3
[Bugfix] Fix 3D input passed into cutlass_scaled_mm (#22278)
mgoin ec9612f
[Bugfix] Fix MoE BNB version (#22260)
jeejeelee 226eeea
[Perf] Parallelize fill_bitmask to accelerate high-throughput guided …
benchislett 56763b6
[Bugfix] Skip dead and non-GPU nodes for Ray DP engine allocation (#2…
ruisearch42 7b16b53
[Bugfix][CI/Build][ROCm] Make sure to use the headers from the build …
gshtras a463fb5
Upgrade FA3 for attention sink (#22313)
WoosukKwon f8ac961
Increase openai-python version (#22316)
WoosukKwon 25ad72a
Add attention sink in attention backends (#22320)
WoosukKwon 4dfff78
Update transformers to `v4.55` (#21931)
hmellor cf9ea63
Add GPT-OSS model code and config [1/N] (#22327)
WoosukKwon e326c3d
[ROCm] Add attention sink to use_rocm_custom_paged_attention (#22329)
WoosukKwon e23598a
[GptOss] Add GptOss reasoning parser to support structure output (#22…
heheda12345 de251f9
[gpt-oss] flashinfer attention sink init (#22330)
zyongye b3bd1f2
[gpt-oss] Add openai-harmony as default dependency (#22332)
WoosukKwon bbf3923
[Misc] Clean up duplicated hf overrides (#22311)
Isotr0py 1fa11ea
[gpt-oss] Add Tool/ConversationContext classes and harmony_utils (#22…
WoosukKwon a482ecb
[gpt-oss] add model to supported models doc (#22336)
11be35f
[gpt-oss] Support chat completion api (#22342)
WoosukKwon 1655e4c
[Minor] Fix type (#22347)
WoosukKwon 48d892e
[BugFix] Fix FA2 RuntimeError when sinks is provided (#22365)
LucasWilkinson 8b14e38
add the codes to check AMD Instinct GPU number (#22367)
zhangnju cc074b2
fix
jinzhen-lin cdb6d54
fix
jinzhen-lin ff94983
fix
jinzhen-lin 4694099
fix
jinzhen-lin e2ee111
fix
jinzhen-lin a94893a
fix fp4 layer process
jinzhen-lin a29da80
[BugFix] Fix triton compile error in `kernel_unified_attention_2/3d` …
LucasWilkinson 5912df5
[Bugfix] Make condition in triton kernel constexpr (#22370)
gshtras ceb4a80
[gpt-oss] Add loop for built-in tool call (#22374)
WoosukKwon 1310438
[gpt-oss] Enhance error msg on attention sink init (#22335)
zyongye daaf5c7
[gpt-oss] flashinfer mxfp4 (#22339)
zyongye 3d6ead9
[v1] - Mamba1 Attention Metadata (#21249)
Josephasafg 2c9ea84
[Bug] Fix B200 DeepGEMM E8M0 Accuracy Issue (#22399)
yewentao256 4bf71e3
[gpt-oss] add demo tool server (#22393)
heheda12345 f89f198
[gpt-oss] fix model config with hf_config (#22401)
zyongye a9d7b0b
Fix trtllm-gen attention env and add attention sink (#22378)
IwakuraRein 9051985
Update `flashinfer-python==0.2.10` (#22389)
mgoin f8101c6
[model] Support MiniCPM-V 4.0 (#22166)
tc-mb 11bb5da
Support encoder_only attention for FlexAttention (#22273)
maxdebayser b2c7ff2
[Attention] Support multiple attention metadata builders per kv_cache…
LucasWilkinson 677076f
[XPU]Fix `flash_attn_varlen_func` interface on xpu (#22350)
jikunshang 79dd15f
[Qwen3] Enable dual-chunk-attention support for Qwen3 models. (#21924)
sighingnow a84af5c
[Bugfix] Fix wrong method name in Intern-S1 image processor (#22417)
DarkLight1337 600f0f2
Use float32 for test_completion.py (#22385)
mgoin 220b984
[Bugfix]: Fix the streaming output for function calls in the minimax …
qscqesze f92c018
[Bugfix] Add proper comparison for package versions (#22314)
syedmba 5329d9a
Update `hf_xet` pin to resolve hangs (#22356)
hmellor ceeafed
Optimize logger init performance by using module-level constants (#22…
skyloevil 0405339
preload heavy modules when mp method is forkserver (#22214)
lionelvillard 6002d81
[gpt-oss] Convert user input to harmony format (#22402)
heheda12345 e285926
[Bugfix] EPLB load statistics problem (#22167)
david6666666 8fb5c56
[CI] Skip the pooling models that do not support transformers v4.55 (…
noooop de47ec7
[Bench] Split serve.py:main into async/async versions (#22405)
lk-chen f0e4a8f
[Model] Switch to Fused RMS norm in Qwen2.5_VL model. (#22184)
vllmellm 380a826
[Frontend] Update OpenAI error response to upstream format (#22099)
msanft 1bee7ec
[Misc] Support routing logic simulation (#21990)
minosfuture 2a8e85e
feat: Add --enable-log-outputs flag for logging model generations (#2…
mizadri 5a35780
init frondend
jinzhen-lin e1b2854
fix
jinzhen-lin 64874b1
fix scale
jinzhen-lin 27f67f8
fix interleave
jinzhen-lin e949f61
activation func test
jinzhen-lin 657f9ad
fix activation
jinzhen-lin 1b3037b
fix
jinzhen-lin 2af5b48
Update csrc/moe/marlin_moe_wna16/kernel.h
jinzhen-lin e07f4b6
fix format
jinzhen-lin ff5d7c9
fix
jinzhen-lin f329092
fix format
jinzhen-lin 88a15f6
[Docs] Add missing dependency for docs build (#22435)
hmellor 8987866
Add H20-3e fused MoE kernel tuning configs for GLM-4.5 (#22433)
JaceyShao 8838a14
[Misc] Enhance code formatting in mxfp4.py (#22423)
WoosukKwon bc5711f
[Doc] Fix link to prefix caching design (#22384)
sarckk 7c50c73
[Docs] Factor out troubleshooting to its own guide; add section for R…
crypdick 69ea93f
[Doc] update docs for nightly benchmarks (#12022)
andrewkchan c50d484
[Docs] Update features/disagg_prefill, add v1 examples and developmen…
david6666666 5ba005b
[Core] Store only the keys for multi-modal data in P0 (#22198)
DarkLight1337 8841232
[Bugfix] Add missing `packed_modules_mapping` to `DeepseekV2ForCausal…
fxmarty-amd 9f0edb0
[Tool] Fix auto tool call (#22434)
heheda12345 4b3ac39
[gpt-oss] Generate ResponseOutputItem from Harmony Message (#22410)
heheda12345 f2b502f
Fix pre-commit error in main (#22462)
WoosukKwon b84e781
[Core] Simplify mm processing cache (#22457)
DarkLight1337 7ec1dfa
[Frontend] Use engine argument to control MM cache size (#22441)
DarkLight1337 dc8ffbf
Remove `from_dict` from `SpeculativeConfig` (#22451)
hmellor df5e699
[Misc] normalize multiprocessing Queue usage (#22371)
andyxning c8df47b
[ROCm] [V1] [SpecDec] Enable Speculative Decoding on ROCm V1 Engine (…
tjtanaa c5d21f4
[PERF] Use pybase64 to more quickly decode prompt embeddings (#22469)
qthequartermasterman 4f12acf
Add ModelOpt Qwen3 nvfp4 support (#20101)
Edwardf0t1 a0484d1
Support Tensorrt-LLM MoE fp4 for low-latency (#21331)
wenscarl 21759ac
Fix Flashinfer CUTLASS MOE Allgather (#21963)
wenscarl cc5977f
[Kernel] Add support for block FP8 on SM120 (NVIDIA 5090 and RTX PRO …
0xjunhao 3308d73
[Bugfix] Fix RuntimeError: Index put requires the source and destinat…
chaunceyjiang 9e76de0
not tie_word_embeddings for glm-4.5 and glm-4.5v (#22460)
zRzRzRzRzRzRzR cc12828
Optimize MiniCPMO mask creation with vectorized implementation (#22464)
skyloevil 63c8132
Fix pre-commit (#22487)
DarkLight1337 0e49a59
[bugfix] Fix Llama3/4 issues caused by FlashInfer 0.2.10 (#22426)
nvpohanh b354589
[Doc] Sleep mode documentation (#22310)
iAmir97 784c4e3
[bench] Fix benchmark/serve.py to ignore unavailable results (#22382)
lk-chen 1bd12bc
fix topk
jinzhen-lin 43c3dea
fix topk
jinzhen-lin e3a4420
[CI/Build] Fix multimodal tests (#22491)
DarkLight1337 ed4b805
[Misc] Begin deprecation of `get_tensor_model_*_group` (#22494)
DarkLight1337 60c4022
[Misc] fix openai version (#22485)
lengrongfu 0c882d0
[BugFix] Don't cancel asyncio tasks directly from destructors (#22476)
njhill bb65f55
[Docs] Improve API docs (+small tweaks) (#22459)
hmellor 108ee94
Remove exception for Python 3.8 typing from linter (#22506)
hmellor c219316
[gpt-oss] triton kernel mxfp4 (#22421)
zyongye 6484dad
disable m_block_size_8 temporarily
jinzhen-lin 4b63d40
fix moe_block_size_8
jinzhen-lin 53871fa
fix idx
jinzhen-lin 78710bd
fix
jinzhen-lin 2634144
fix format
jinzhen-lin 481bf21
disable on hopper
jinzhen-lin 5a93660
[Benchmark] Add benchmark tool for multi turn conversations (#20267)
pliops-daniels 20473c8
[gpt-oss] guard import when triton kernel is not installed (#22529)
zyongye 4153b92
[Docs] Rename “Distributed inference and serving” to “Parallelism & S…
crypdick 8f4ee95
[gpt-oss] Support tool call and implement MCP tool server (#22427)
heheda12345 17ab1c1
[BugFix] Fix IMA FlashMLA full cuda-graph and DP + Update FlashMLA (#…
LucasWilkinson 327a7f5
[Misc] DeepGEMM : Avoid JIT generation in the hot-path (#22215)
varun-sundar-rabindranath 9ce2b24
[Bugfix] Update FA commit hash (#22546)
tdoublep 555741f
Skip Qwen 1 in CI because remote code is no longer compatible with Tr…
hmellor ccd6439
[Docs] fix broken links in metrics.md (#22315)
GuyStone d4d6bdf
[Frontend] Add unix domain socket support (#18097)
yyweiss 229d7da
Extract `CompilationConfig` from `config.py` (#22524)
hmellor 11240e0
Drop flaky test_healthcheck_response_time (#22539)
russellb 4af55c2
[XPU] upgrade torch 2.8 on for XPU (#22300)
jikunshang 4e9d93a
[BugFix] [P/D] Handle lookahead token count edge-case with Eagle Spec…
Pradyun92 2eebe6d
update use_marlin condition
jinzhen-lin 234d02b
Merge branch 'main' of github.com:vllm-project/vllm into marlin-mxfp4…
jinzhen-lin 1f48276
fix
jinzhen-lin 59a1e4f
Merge branch 'main' of github.com:vllm-project/vllm into marlin-mxfp4…
jinzhen-lin 1af6f1f
update activation and use_marlin condition
jinzhen-lin 3e918f6
Merge branch 'main' into marlin-mxfp4-bias
mgoin 7b61670
Fix precommit
mgoin b04cdbb
Fix _should_use_marlin
mgoin 71a1246
fix
jinzhen-lin 1dbcc42
Merge branch 'marlin-mxfp4-bias' of github.com:jinzhen-lin/vllm into …
jinzhen-lin 37df2f0
fix _can_support_mxfp4
jinzhen-lin b34ca30
fix nvcc warning
jinzhen-lin 53022e6
fix opcheck args
jinzhen-lin 4b43856
fix shared memory size
jinzhen-lin 3d14a28
fix format
jinzhen-lin da93a71
fix _gptq_marlin_gemm_fake
jinzhen-lin eb3214e
fix nvcc warning
jinzhen-lin f3e34ef
fix format
jinzhen-lin 2293ff1
Merge branch 'main' of github.com:vllm-project/vllm into marlin-mxfp4…
jinzhen-lin b63a6c0
Merge branch 'main' of github.com:vllm-project/vllm into marlin-mxfp4…
jinzhen-lin 2032ded
update CMakeLists.txt
jinzhen-lin 7b5632d
Merge branch 'main' into marlin-mxfp4-bias
mgoin 62357e8
fix gptq marlin bias permute
jinzhen-lin 3143ff4
fix get_kernel_cache_size
jinzhen-lin 590a3b1
Merge branch 'marlin-mxfp4-bias' of github.com:jinzhen-lin/vllm into …
jinzhen-lin b764bbb
add bias permute for fp8 marlin
jinzhen-lin 69fe493
add missing bias permute
jinzhen-lin 1dff08b
Merge branch 'main' into marlin-mxfp4-bias
mgoin