Support Flashinfer rope+quant+cache update fusion kernel for TRTLLM attention #36858
Open
elvischenv wants to merge 3 commits into vllm-project:main from
Conversation
Contributor
Code Review
This PR introduces support for Flashinfer's fused RoPE, quantization, and KV cache update kernel, which is a great performance optimization for FP8 models on CUDA. The changes are well-structured, adding a new RopeQuantReshapeKVCachePattern to handle the fusion and updating related components to support it.
However, I've found a critical issue in vllm/v1/attention/backends/flashinfer.py where a check for KV cache sharing was removed, which could lead to incorrect behavior for models that use this feature. Please see my comment for details.
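To make the pattern concrete: a fusion pass of this kind matches a chain of unfused nodes in the compiled graph and splices in a single fused call. Below is a minimal, self-contained sketch of that idea using torch.fx on a toy quantization chain; `fused_quant` and the matching logic are illustrative stand-ins, not vLLM's actual RopeQuantReshapeKVCachePattern (which builds on torch.compile's pattern-matching infrastructure).

```python
import operator

import torch
import torch.fx as fx

# Toy stand-in for a fused kernel: divide-and-quantize in one call.
def fused_quant(x: torch.Tensor, scale: float) -> torch.Tensor:
    return (x / scale).to(torch.float8_e4m3fn)

# The unfused chain a pattern would match: a div node feeding a .to(fp8) node.
def chain(x: torch.Tensor, scale: float) -> torch.Tensor:
    y = x / scale
    return y.to(torch.float8_e4m3fn)

gm = fx.symbolic_trace(chain)
for node in list(gm.graph.nodes):
    # Match: a call_method "to" whose input comes from operator.truediv.
    if node.op == "call_method" and node.target == "to":
        src = node.args[0]
        if isinstance(src, fx.Node) and src.op == "call_function" and src.target == operator.truediv:
            # Replace the matched chain with one call to the fused op.
            with gm.graph.inserting_before(src):
                fused = gm.graph.call_function(fused_quant, args=src.args)
            node.replace_all_uses_with(fused)
            gm.graph.erase_node(node)   # erase the .to(...) node first
            gm.graph.erase_node(src)    # then the now-unused div node
gm.recompile()

x = torch.randn(8)
assert gm(x, 2.0).dtype == torch.float8_e4m3fn
```

The real pattern matches the full RoPE + FP8 quant + cache-update chain rather than this two-node toy, but the rewrite mechanics are the same: locate the chain, insert one fused call, erase the dead nodes.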
elvischenv force-pushed from 76992c4 to ed31eaa
elvischenv force-pushed from ed31eaa to cb4d5e7
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: elvischenv <219235043+elvischenv@users.noreply.github.com>
elvischenv force-pushed from cb4d5e7 to dd6afc1
Purpose
Support the Flashinfer RoPE+Quant+KV Cache Update fusion kernel `rope_quantize_fp8_append_paged_kv_cache`.

Depends on flashinfer-ai/flashinfer#2792, which fixed the padding-token issue in the kernel when using full CUDA graph.
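For context, here is a minimal runnable sketch of the three separate steps the kernel fuses, on toy tensors. `apply_rope`, the flat cache layout, and the per-tensor scale are illustrative assumptions; the real fused path is a single Flashinfer call with its own signature.

```python
import torch

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # Standard rotary embedding: rotate channel pairs along the last dim.
    parts = x.chunk(2, dim=-1)
    x1, x2 = parts[0], parts[1]
    return torch.cat([x1 * cos - x2 * sin, x2 * cos + x1 * sin], dim=-1)

def unfused_path(q, k, v, cos, sin, k_cache, v_cache, slots, scale):
    """Three separate steps (three kernel launches plus intermediates)."""
    q = apply_rope(q, cos, sin)                     # 1) RoPE on q and k
    k = apply_rope(k, cos, sin)
    k_fp8 = (k / scale).to(torch.float8_e4m3fn)     # 2) FP8 quantization
    v_fp8 = (v / scale).to(torch.float8_e4m3fn)
    # 3) scatter into the FP8 KV cache; written through a byte view so the
    #    sketch also runs on builds lacking float8 index_put support.
    k_cache.view(torch.uint8)[slots] = k_fp8.view(torch.uint8)
    v_cache.view(torch.uint8)[slots] = v_fp8.view(torch.uint8)
    return q

# Toy shapes: 4 tokens, 2 KV heads, head_dim 8, a 16-slot cache.
T, H, D = 4, 2, 8
q, k, v = torch.randn(3, T, H, D).unbind(0)
cos, sin = torch.rand(2, T, 1, D // 2).unbind(0)
k_cache = torch.zeros(16, H, D, dtype=torch.float8_e4m3fn)
v_cache = torch.zeros_like(k_cache)
slots = torch.tensor([3, 7, 8, 9])
q = unfused_path(q, k, v, cos, sin, k_cache, v_cache, slots, scale=1.0)
# The fusion pass rewrites steps 1-3 into one Flashinfer kernel call,
# rope_quantize_fp8_append_paged_kv_cache, eliminating the intermediates.
```

Fusing the three steps avoids materializing the rotated and quantized intermediates in global memory, which is where the end-to-end gain comes from.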
Test Plan & Test Result
Fusion pass unit test:

```
pytest -v -s tests/compile/passes/test_rope_kvcache_fusion.py::test_rope_quant_kvcache_fusion
```
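Roughly, what such a fusion test asserts is that after the pass runs, the fused op appears in the compiled graph and the unfused chain is gone. A tiny hypothetical helper in that spirit (not the actual test utilities):

```python
import torch.fx as fx

def count_calls(gm: fx.GraphModule, target) -> int:
    """Count graph nodes that call `target` (a function, or a method name).

    A fusion test typically asserts the fused kernel appears exactly once
    after the pass runs and that the original unfused ops drop to zero.
    """
    return sum(
        1
        for node in gm.graph.nodes
        if node.op in ("call_function", "call_method") and node.target == target
    )
```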
Model e2e accuracy

Server cmd:

Fused:

Unfused:
Model e2e perf

Fused: about 5% end-to-end perf gain for GPT-OSS-120b TP8 at concurrency 8

Unfused:
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.