[V1] Partial prefill skip for layers reusing shared KV cache #19719
Conversation
My concern is whether this optimization is too model-specific. It works for models where the first k layers have their own KV cache. Does it work for models where every m layers share the same KV cache, like Hunyuan?
It only works for the case where the first k layers have their own KV cache, as you said. More generally, it also applies when the last N layers reuse the KV cache (i.e. there are no layers afterwards that have their own KV cache). So I agree it will not apply to a majority of models, but I'm not sure there is a better way to implement this kind of functionality.
heheda12345
left a comment
I took a quick pass on this PR.
And I'm curious about your plan to support piecewise cuda graph. We need cuda graph for num_total_tokens in the first few layers, and num_decode_tokens in the following layers.
vllm/envs.py
I prefer to add it as a cli arg.
done
This branch is not true for Hunyuan-style KV sharing.
Added logic to detect which layers are 'eligible' for this prefill skip optimization.
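For illustration, here is a minimal sketch of what such eligibility detection could look like (hypothetical helper and data layout, not the PR's code): only the trailing run of layers that reuse an earlier layer's KV cache can skip prefill, since no later layer needs KV written for the skipped positions.

```python
# Minimal sketch (not the PR's actual code) of trailing-layer eligibility detection,
# assuming each decoder layer either owns its KV cache or names an earlier target
# layer whose KV cache it reuses.
from typing import Optional

def eligible_prefill_skip_layers(
    layer_names: list[str],               # decoder layers in forward order
    kv_target: dict[str, Optional[str]],  # layer -> KV target layer, or None if it owns its KV
) -> list[str]:
    eligible: list[str] = []
    for name in reversed(layer_names):
        if kv_target.get(name) is None:
            break  # first layer (scanning from the end) that owns its KV cache ends the run
        eligible.append(name)
    return list(reversed(eligible))

# First 2 layers own their KV cache, last 2 reuse it: only the trailing run is eligible.
layers = ["l0", "l1", "l2", "l3"]
targets = {"l0": None, "l1": None, "l2": "l0", "l3": "l1"}
assert eligible_prefill_skip_layers(layers, targets) == ["l2", "l3"]
```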
This pull request has merge conflicts that must be resolved before it can be merged.
LucasWilkinson
left a comment
I would really like to try to keep the build signature of the metadata builders as simple as possible, so that hopefully we can create some nice unit testing infrastructure in the future. Do we really need to add decode_only_common_attn_metadata to the build call signature? Can we make the KV sharing layers a different KVSpec and have separate build calls at this level:
vllm/v1/worker/gpu_model_runner.py, lines 691 to 709 in 257ab95:

```python
for kv_cache_group_id, kv_cache_group_spec in enumerate(
        self.kv_cache_config.kv_cache_groups):
    # Prepare for cascade attention if enabled & beneficial.
    common_prefix_len = 0
    builder = self.attn_metadata_builders[kv_cache_group_id]
    if self.cascade_attn_enabled:
        common_prefix_len = self._compute_cascade_attn_prefix_len(
            num_scheduled_tokens,
            scheduler_output.
            num_common_prefix_blocks[kv_cache_group_id],
            kv_cache_group_spec.kv_cache_spec,
            builder,
        )
    attn_metadata_i = (builder.build(
        common_prefix_len=common_prefix_len,
        common_attn_metadata=common_attn_metadata,
    ))
```
We should probably be doing this for local attention too, but that was added before we had the hybrid KV cache (which enabled different build calls for different layer groups). We should migrate local attention to a scheme like this as well.
Is there a reason we need to pass decode_only_common_attn_metadata as a separate arg, rather than just using a different build call at the gpu model runner level? i.e. here-ish:
vllm/v1/worker/gpu_model_runner.py, lines 691 to 709 in 257ab95 (the same snippet quoted above)
yea I initially had a separate build() call at the model runner level, but I needed to set this as a property of attention metadata for all different backends, and they don't share a common schema. So I thought I could pass the info and let each backend decide what to do with it.
But I do agree that your approach is a better abstraction, will follow up on that
vllm/v1/worker/gpu_model_runner.py
Should we move this logic into the metadata builder?
moved this logic to flash attn metadata builder
Sorry, I think I missed this, so I'm not sure what the code looked like at this point, but ideally we would keep this common metadata manipulation outside of the metadata builders so we naturally support all the backends (assuming we can keep a clean build interface). This is important for Blackwell, where FlashInfer has the best perf. I actually want to do something similar for local attention, since that could also be done via pure CommonAttentionMetadata manipulation and would enable iRoPE for FlashInfer.
see: #19719 (comment)
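As a rough illustration of the kind of pure metadata manipulation being suggested here (a sketch using a simplified stand-in dataclass, not vLLM's actual CommonAttentionMetadata fields), the decode-only variant keeps one query token per request while leaving the sequence lengths untouched:

```python
# Sketch only: a simplified stand-in for CommonAttentionMetadata with made-up fields.
# The idea is to derive decode-only metadata (one query token per request) from the
# full-batch metadata outside the per-backend builders.
from dataclasses import dataclass

@dataclass
class SimpleCommonAttnMetadata:
    query_start_loc: list[int]  # cumulative query offsets, len = num_reqs + 1
    seq_lens: list[int]         # total sequence length per request
    max_query_len: int

def to_decode_only(meta: SimpleCommonAttnMetadata) -> SimpleCommonAttnMetadata:
    num_reqs = len(meta.query_start_loc) - 1
    return SimpleCommonAttnMetadata(
        query_start_loc=list(range(num_reqs + 1)),  # one token per request: [0, 1, ..., num_reqs]
        seq_lens=meta.seq_lens,                     # KV / sequence lengths are unchanged
        max_query_len=1,
    )

# Two requests with 4 prompt tokens each -> full batch of 8 tokens, decode-only keeps 1 per request.
full = SimpleCommonAttnMetadata(query_start_loc=[0, 4, 8], seq_lens=[4, 4], max_query_len=4)
print(to_decode_only(full))  # query_start_loc=[0, 1, 2], seq_lens=[4, 4], max_query_len=1
```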
May be unrelated to this PR, but we also need an elegant way to skip preparing KV for layers that don't need it.
@sarckk Here is a PR for the V0 YOCO optimization: #20702. Though it is simplified because it ignores chunked prefill and cuda graph, you can take a look and check whether there is anything you can learn from it.
This pull request has merge conflicts that must be resolved before it can be merged.
This PR is still being worked on; we are going to first decouple KV cache groups and attention metadata builders to allow different layers to have different metadata builders. EDIT: see #21590 for the updated PR.
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!
Redo in #22628.
Motivation
KV cache techniques like SwiftKV reduce the computation required during prefill. This is harder to implement in V1, where the scheduler groups tokens for prefill and decode in the same batch. This PR adds instrumentation to support prefill compute savings in V1 for KV sharing setups where certain tokens can be skipped during prefill, because the KV target layers have already populated the key/value tensors required for decoding.
Example
Let's say we have a 24-layer model where the first 12 layers allocate their own KV caches and the next 12 layers reuse the shared KV caches of their corresponding KV target layers. Then, given an input prompt of N tokens, we can skip prefill for N-1 tokens in the last 12 layers, because the key/value tensors used for decoding are already populated in the KV caches of the first 12 layers. Because the vLLM V1 scheduler does not distinguish prefill from decode and employs continuous batching, we instead run the forward pass for the last 12 layers with a reduced input size.
For example, if we have request 0 and request 1 with 4 prompt tokens each, the batch contains tokens [0, 1, 2, 3] for request 0 and [4, 5, 6, 7] for request 1.
For the first 12 self-attention layers, we can run the forward pass with the full input [0, 1, 2, 3, 4, 5, 6, 7], while for the last 12 cross-attention layers, we can run it with only the last token of each request, [3, 7], as these are the only positions where valid logits are required to sample output tokens.
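A tiny sketch (illustrative only) of how the kept positions fall out of the batched query offsets:

```python
# With 2 requests of 4 prompt tokens each, the batched query offsets are [0, 4, 8].
# Layers reusing shared KV only need the last token of each request, i.e. the
# position just before each request's end offset.
query_start_loc = [0, 4, 8]                    # cumulative token offsets for the batch
last_token_indices = [end - 1 for end in query_start_loc[1:]]
print(last_token_indices)                      # [3, 7]: the only positions needing valid logits
```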
Frontend changes
This PR adds a new --kv-sharing-skip-prefill arg, which is added to the CacheConfig. This causes the FlashAttention backend to compute an extra set of metadata assuming prefill skip, but changes are still required on the model side to take advantage of it.
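As a sketch of the config surface (not the actual diff; the field name is assumed to mirror the CLI arg), the option amounts to a boolean on the cache config:

```python
# Sketch only: a CacheConfig-style field for the new option.
from dataclasses import dataclass

@dataclass
class CacheConfigSketch:
    # When True, layers that reuse a shared KV cache attend only over the decode
    # token(s) of each request instead of the full prefill.
    kv_sharing_skip_prefill: bool = False
```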
Attention metadata
Attention metadata needs to be changed to account for the different query offsets and max lengths in the shared KV layers, for which N-1 tokens are skipped during prefill.
Correctness Test
Unit tests show that outputs are roughly equivalent with and without this optimization (exact numerics will differ, as the batched mm op yields slightly different results depending on batch size).
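A rough sketch of such a comparison (hypothetical harness; the real unit test is in the PR, and the engine kwarg name is assumed to mirror the CLI flag):

```python
# Sketch only: compare greedy generations with and without the prefill-skip path.
# Numerics can differ slightly because batched matmuls vary with batch size, so
# compare generated text rather than exact logprobs.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is", "List three prime numbers:"]
params = SamplingParams(temperature=0.0, max_tokens=32)

def generate(skip_prefill: bool) -> list[str]:
    llm = LLM(
        model="path/to/kv-sharing-model",      # placeholder: a model with cross-layer KV sharing
        kv_sharing_skip_prefill=skip_prefill,  # hypothetical kwarg mirroring --kv-sharing-skip-prefill
    )
    return [out.outputs[0].text for out in llm.generate(prompts, params)]

for with_skip, without_skip in zip(generate(True), generate(False)):
    print(with_skip == without_skip, repr(with_skip), repr(without_skip))
```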
Perf comparison
Setup: single batch with an input length of 8192, using compile + piecewise cuda graph.
TestQwen2ForCausalLM model forward trace with the optimization (enable_kv_sharing_truncated_prefill=True): the second layer group takes 9.7 ms.
Trace without the optimization (enable_kv_sharing_truncated_prefill=False): the second layer group takes 16.6 ms.