
[Attention] Blackwell FP8 MLA support with CUTLASS_MLA backend#23289

Merged
LucasWilkinson merged 1 commit into vllm-project:main from MatthewBonanni:feature/fp8_mla_cutlass_blackwell on Sep 3, 2025

Conversation

@MatthewBonanni (Collaborator) commented Aug 20, 2025

Purpose

Enable FP8 KV cache support on Blackwell in the CUTLASS_MLA backend.

Test Plan

Correctness

VLLM_ATTENTION_BACKEND=CUTLASS_MLA lm_eval --model vllm --model_args '{"pretrained": "deepseek-ai/DeepSeek-V2-Lite-Chat", "trust_remote_code": true, "kv_cache_dtype": "fp8"}' --tasks gsm8k --batch_size auto

Performance

V2 Lite
VLLM_ATTENTION_BACKEND=CUTLASS_MLA vllm bench throughput --model=deepseek-ai/DeepSeek-V2-Lite-Chat --dataset-name=random --input-len=8192 --output-len=1024 --num-prompts=1000 --kv-cache-dtype=fp8

V2 (with EP4)
VLLM_ATTENTION_BACKEND=CUTLASS_MLA vllm bench throughput --model=deepseek-ai/DeepSeek-V2 --dataset-name=random --input-len=8192 --output-len=1024 --num-prompts=1000 --kv-cache-dtype=fp8 --tensor-parallel-size 4 --enable-expert-parallel

Test Result

Correctness

With kv_cache_dtype=auto:

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6664|±  | 0.013|
|     |       |strict-match    |     5|exact_match|↑  |0.6619|±  | 0.013|

With kv_cache_dtype=fp8:

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6664|±  | 0.013|
|     |       |strict-match    |     5|exact_match|↑  |0.6611|±  | 0.013|
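As a sanity check, the strict-match difference between the two runs is well within one standard error; a quick arithmetic sketch (values copied from the tables above):

```python
# gsm8k strict-match scores from the tables above
baseline = 0.6619   # kv_cache_dtype=auto
fp8 = 0.6611        # kv_cache_dtype=fp8
stderr = 0.013

delta = abs(baseline - fp8)
print(delta < stderr)  # True: the fp8 score is within one stderr of baseline
```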

Performance

V2 Lite:
With --kv-cache-dtype=auto: Throughput: 4.20 requests/s, 38668.98 total tokens/s, 4296.91 output tokens/s
With --kv-cache-dtype=fp8: Throughput: 4.74 requests/s, 43678.48 total tokens/s, 4853.57 output tokens/s

V2:
With --kv-cache-dtype=auto: Throughput: 0.81 requests/s, 7509.07 total tokens/s, 834.41 output tokens/s
With --kv-cache-dtype=fp8: Throughput: 1.08 requests/s, 9971.05 total tokens/s, 1107.99 output tokens/s
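The implied FP8 speedups can be computed directly from the request throughputs above (a quick sketch; numbers copied from this comment):

```python
# Request throughput (requests/s) with --kv-cache-dtype auto vs fp8
v2_lite_speedup = 4.74 / 4.20   # V2 Lite: ~1.13x
v2_speedup = 1.08 / 0.81        # V2 (EP4): ~1.33x
print(f"V2 Lite: {v2_lite_speedup:.2f}x, V2: {v2_speedup:.2f}x")
```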

(Optional) Documentation Update


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

@github-actions (bot) commented
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added labels: documentation, ci/build, deepseek, frontend, llama, multi-modality (#4194), new-model, performance, qwen, gpt-oss, rocm, speculative-decoding, v1, tpu (Aug 20, 2025)
@mergify (bot) commented Aug 20, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @MatthewBonanni.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Aug 20, 2025
@MatthewBonanni MatthewBonanni force-pushed the feature/fp8_mla_cutlass_blackwell branch from 28d207d to 69fd772 Compare August 25, 2025 14:09
@mergify mergify bot removed labels: tpu, needs-rebase (Aug 25, 2025)
@MatthewBonanni (Collaborator, Author) commented

@Mergifyio refresh

@mergify (bot) commented Aug 25, 2025

refresh

✅ Pull request refreshed

@LucasWilkinson removed labels: documentation, new-model, rocm, frontend, speculative-decoding, multi-modality (#4194), llama (Aug 25, 2025)
@mergify mergify bot added labels: documentation, frontend, llama, multi-modality (#4194), new-model, performance, qwen (Sep 3, 2025)
@mergify (bot) commented Sep 3, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @MatthewBonanni.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

1 similar comment

Signed-off-by: Matthew Bonanni <mbonanni@redhat.com>
@celsowm commented Sep 4, 2025

Is it faster than FA3?

@elvischenv (Contributor) commented

tests/kernels/test_cutlass_mla_decode.py::test_cutlass_mla_decode[torch_dtype1-False-True-64-512-576-1-16-4096-1-128] b=128, s_q=1, mean_sk=4096, h_q=16, h_kv=1, d=576, dv=512, causal=True, varlen=False, torch_dtype=torch.float8_e4m3fn
FAILED



        cos_diff = 1 - 2 * (x * y).sum().item() / max(
            (x * x + y * y).sum().item(), 1e-12)
        if (use_fp8):
>           assert cos_diff < 1e-4
E           assert 1.0 < 0.0001
tests/kernels/test_cutlass_mla_decode.py:22: AssertionError
======================================================================= warnings summary =======================================================================
../usr/local/lib/python3.12/dist-packages/schemathesis/generation/coverage.py:305
  /usr/local/lib/python3.12/dist-packages/schemathesis/generation/coverage.py:305: DeprecationWarning: jsonschema.exceptions.RefResolutionError is deprecated as of version 4.18.0. If you wish to catch potential reference resolution errors, directly catch referencing.exceptions.Unresolvable.
    ref_error: type[Exception] = jsonschema.RefResolutionError,
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=================================================================== short test summary info ====================================================================
FAILED tests/kernels/test_cutlass_mla_decode.py::test_cutlass_mla_decode[torch_dtype1-False-True-64-512-576-1-16-4096-1-128] - assert 1.0 < 0.0001
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================================== 1 failed, 24 passed, 1 warning in 13.10s ===========================================================

@MatthewBonanni This test is failing on main. I'm seeing it in at least 2 recent PRs:
https://buildkite.com/vllm/ci/builds/29559/steps/canvas?sid=01991a6b-6773-4022-8ab6-32198efb2ff7
https://buildkite.com/vllm/ci/builds/29486/steps/canvas?jid=0199166b-7879-4618-adbb-75b54cb44bae
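For reference, the failing assertion compares the kernel output against the reference using a cosine-style difference metric; a minimal self-contained sketch of that metric, in plain Python rather than torch for illustration:

```python
def cos_diff(x, y):
    """Cosine-style difference: 0.0 for identical vectors, larger as they diverge.

    Mirrors the formula in the pytest snippet above:
    1 - 2 * sum(x*y) / max(sum(x*x) + sum(y*y), 1e-12)
    """
    num = 2 * sum(a * b for a, b in zip(x, y))
    den = max(sum(a * a for a in x) + sum(b * b for b in y), 1e-12)
    return 1 - num / den

print(cos_diff([1.0, 2.0], [1.0, 2.0]))  # 0.0 -- identical outputs pass the < 1e-4 check
print(cos_diff([1.0, 0.0], [0.0, 1.0]))  # 1.0 -- orthogonal outputs, as in the failing assert
```

A cos_diff of exactly 1.0, as in the failure above, suggests the outputs are essentially uncorrelated rather than slightly off in precision.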

@MatthewBonanni (Collaborator, Author) commented

@elvischenv hmm, thanks for bringing this up. It looks like it's passing on the most recent nightly:
https://buildkite.com/vllm/ci/builds/29529#
will investigate though

@MatthewBonanni (Collaborator, Author) commented Sep 5, 2025

Is it faster than FA3?

@celsowm Thanks for your question! This backend is Blackwell-specific, whereas the recently merged FA3 backend (#14258) targets Hopper, so a direct comparison is difficult.


Labels

deepseek, ready, v1

6 participants