Update FAQ on interleaving sliding windows support #29796

heheda12345 merged 2 commits into vllm-project:main
Conversation
Clarify handling of interleaving sliding windows in models. Signed-off-by: Finbarr Timbers <finbarrtimbers@gmail.com>
Documentation preview: https://vllm--29796.org.readthedocs.build/en/29796/
Code Review
This pull request successfully updates the documentation in docs/contributing/model/basic.md by removing outdated information regarding interleaved sliding windows support. The change aligns with the provided context that this functionality was fixed previously. This is a positive update that improves the accuracy of the documentation. No specific review comments are provided as the changes are minor documentation updates and do not introduce any issues of high or critical severity.
Purpose
Removes outdated documentation indicating that interleaved sliding windows are not supported in KV cache block allocation. This was fixed in February (#13296).
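To illustrate why window-aware allocation matters, here is a back-of-the-envelope sketch (not vLLM's actual allocator code): sliding-window layers only keep KV entries up to their window size, while global layers keep entries for the full context. The layer counts and 4,096-token window below are illustrative assumptions, not values taken from this PR.

```python
def kv_tokens_per_request(seq_len: int, swa_layers: int,
                          global_layers: int, window: int) -> int:
    """Token slots of KV cache one request needs when sliding-window
    layers keep at most `window` tokens but global layers keep them all."""
    return swa_layers * min(seq_len, window) + global_layers * seq_len

# Hypothetical shape: a 32-layer model with a 3:1 SWA-to-global pattern
# (like the Olmo 3 7B layout described below) and an assumed 4,096 window.
short = kv_tokens_per_request(6_144, swa_layers=24, global_layers=8, window=4_096)
long = kv_tokens_per_request(34_048, swa_layers=24, global_layers=8, window=4_096)
print(f"{long / short:.2f}x")  # ~2.51x, far below the 5.54x of full attention
```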
Test Plan
I will compare the reported KV cache usage at several context lengths for a model with interleaved attention layers and examine the results (see the sketch below).
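One way to surface the relevant numbers is to instantiate the engine and read its startup logs, which include the "Available KV cache memory" and "Maximum concurrency" lines quoted under Test Result. A minimal sketch, with a placeholder checkpoint id rather than the exact invocation used here:

```python
from vllm import LLM

# Placeholder model id; substitute the actual Olmo 3 7B checkpoint.
# Engine startup logs the KV cache memory and maximum concurrency.
llm = LLM(model="<olmo-3-7b-checkpoint>", max_model_len=34_048)
```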
Test Result
I have verified this manually by inspecting the KV cache reported for Olmo 3. For Olmo 3 7B, which has 3 SWA layers followed by a global attention layer, the KV cache for generating 6,144 tokens is 2.58 GiB per request [1], while for 34,048 tokens it is 5.14 GiB per request [2]. If SWA were not supported, I would expect the KV cache to be ~5.5x bigger; instead it is only ~2x bigger.
[1]

```
[gpu_worker.py:298] Available KV cache memory: 57.18 GiB
[kv_cache_utils.py:1091] Maximum concurrency for 6,144 tokens per request: 23.73x
```

[2]

```
[gpu_worker.py:298] Available KV cache memory: 57.18 GiB
[kv_cache_utils.py:1091] Maximum concurrency for 34,048 tokens per request: 11.12x
```
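As a quick arithmetic check on the two excerpts: since both runs report the same 57.18 GiB of available KV cache, the ratio of the two maximum-concurrency figures is exactly the per-request memory growth, which can be compared against the growth full attention would require.

```python
# Numbers taken verbatim from log excerpts [1] and [2].
conc_6k, conc_34k = 23.73, 11.12    # max concurrency at 6,144 / 34,048 tokens

observed = conc_6k / conc_34k       # ~2.13x more KV memory per request
full_attention = 34_048 / 6_144     # ~5.54x if every layer were global

print(f"observed {observed:.2f}x vs {full_attention:.2f}x without SWA")
```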