
[KVConnector][LMCache] Enable Support for cross-layer Layout#33395

Open
Shaoting-Feng wants to merge 3 commits into vllm-project:main from Shaoting-Feng:lmcache_cross_layers

Conversation

Contributor

@Shaoting-Feng Shaoting-Feng commented Jan 30, 2026

Purpose

Required for compatibility with the new KV cache shape introduced by vLLM PR #27743.
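For context, here is a minimal sketch of the difference between a per-layer and a cross-layer KV cache layout. The dimensions and the exact axis order are illustrative assumptions, not the actual layout from vLLM PR #27743:

```python
import numpy as np

# Hypothetical dimensions for illustration only; the real shape is defined
# by vLLM PR #27743 and varies by attention backend.
num_layers, num_blocks, block_size, num_heads, head_dim = 4, 8, 16, 2, 64

# Per-layer layout: one KV tensor per layer (leading 2 = key and value),
# registered with the connector layer by layer.
per_layer = [
    np.zeros((2, num_blocks, block_size, num_heads, head_dim), dtype=np.float16)
    for _ in range(num_layers)
]

# Cross-layer layout: a single tensor covering every layer, so a connector
# can register and transfer all layers' KV cache in one call.
cross_layers = np.zeros(
    (num_layers, 2, num_blocks, block_size, num_heads, head_dim),
    dtype=np.float16,
)
```

The total element count is identical in both layouts; only the grouping changes, which is what lets a connector hand the whole cache to LMCache at once.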

Test Plan

Note: This change depends on LMCache PR #2498.
The implementation has been validated on both:

  • Non-MLA model: meta-llama/Llama-3.1-8B-Instruct
  • MLA model: deepseek-ai/DeepSeek-V2-Lite-Chat

Command:

LMCACHE_CHUNK_SIZE=8 CUDA_VISIBLE_DEVICES=3 \
vllm serve <MODEL> \
  --kv-transfer-config \
  '{"kv_connector":"LMCacheConnectorV1","kv_role":"kv_both","kv_connector_extra_config":{"enable_cross_layers_blocks":true}}' \
  --port 8177 \
  --no-enable-prefix-caching \
  --enforce-eager
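The `--kv-transfer-config` value above is a JSON string. A small sketch of how the `enable_cross_layers_blocks` flag can be read out of it (field names are taken from the command above; the parsing code itself is illustrative, not vLLM's actual config machinery):

```python
import json

# The exact JSON string passed via --kv-transfer-config in the command above.
raw = (
    '{"kv_connector":"LMCacheConnectorV1","kv_role":"kv_both",'
    '"kv_connector_extra_config":{"enable_cross_layers_blocks":true}}'
)

config = json.loads(raw)
# Connector-specific options live under kv_connector_extra_config;
# default to False when the flag is absent.
extra = config.get("kv_connector_extra_config", {})
enable_cross_layers = bool(extra.get("enable_cross_layers_blocks", False))
```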

Test Result

curl http://localhost:8177/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": <MODEL>,
    "prompt": "Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts",
    "max_tokens": 10,
    "temperature": 0
  }'

Both models work.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Shaoting Feng <shaotingf@uchicago.edu>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for cross-layer KV cache layout in the LMCache connector, introducing a property to enable this feature and a method for registering the cross-layer KV cache. My review identified a critical bug in the boolean logic of the prefer_cross_layer_blocks property that would cause it to almost always be enabled. Additionally, I found two issues in the new register_cross_layers_kv_cache method: an unused parameter that should be passed to the underlying engine and an incorrect log message. I've provided code suggestions to address these findings.
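The review does not quote the buggy code, but a common way an "almost always enabled" boolean bug arises in Python is chaining `or` against a truthy literal. A hypothetical illustration of the pitfall and its fix (the function name mirrors the property under review; the bodies are invented for demonstration):

```python
# Buggy: `flag == "true" or "1"` parses as `(flag == "true") or "1"`, and
# the non-empty string "1" is always truthy, so this returns True for
# any input whatsoever.
def prefer_cross_layer_blocks_buggy(flag: str) -> bool:
    return bool(flag == "true" or "1")

# Fixed: test membership against each accepted value explicitly.
def prefer_cross_layer_blocks_fixed(flag: str) -> bool:
    return flag in ("true", "1")
```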

        cross_layers_kv_cache: kv cache of all layers
    """
    if hasattr(self._lmcache_engine, "register_cross_layers_kv_cache"):
        self._lmcache_engine.register_cross_layers_kv_cache(cross_layers_kv_cache)
Contributor


Severity: high

The cross_layers_attn_backend parameter is unused in this method call. The base class KVConnectorBase_V1 includes this parameter in its register_cross_layers_kv_cache signature, suggesting it's intended to be used. It should be passed to the underlying _lmcache_engine's method to ensure correct functionality, assuming the engine's method expects it.

            self._lmcache_engine.register_cross_layers_kv_cache(cross_layers_kv_cache, cross_layers_attn_backend)

Contributor Author


But the LMCache engine doesn't need it.
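When a base-class signature carries a parameter that some engines don't accept, one hedged way to reconcile the two (illustrative sketch only; `_lmcache_engine` and the method name come from the diff above, the dispatch logic and stub class are assumptions) is to inspect the callee before forwarding the argument:

```python
import inspect


class _EngineWithoutBackendArg:
    """Stand-in for an LMCache engine whose method takes only the cache."""

    def register_cross_layers_kv_cache(self, cross_layers_kv_cache):
        return ("cache-only", cross_layers_kv_cache)


def register(engine, cross_layers_kv_cache, cross_layers_attn_backend):
    method = getattr(engine, "register_cross_layers_kv_cache", None)
    if method is None:
        return None  # engine predates cross-layer support
    params = inspect.signature(method).parameters
    # Forward the backend only when the engine's method actually accepts it.
    if "cross_layers_attn_backend" in params:
        return method(cross_layers_kv_cache, cross_layers_attn_backend)
    return method(cross_layers_kv_cache)


result = register(_EngineWithoutBackendArg(), "kv", "backend")
```

This keeps the connector compatible with both engine signatures without a hard dependency on either; whether that indirection is worth it here is a judgment call for the maintainers.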

Signed-off-by: Shaoting Feng <shaotingf@uchicago.edu>
@mergify

mergify bot commented Jan 30, 2026

Hi @Shaoting-Feng, the pre-commit checks have failed. Please run:

uv pip install pre-commit
pre-commit install
pre-commit run --all-files

Then, commit the changes and push to your branch.

For future commits, pre-commit will run automatically on changed files before each commit.

Tip

Is mypy or markdownlint failing?
mypy and markdownlint are run differently in CI. If the failure is related to either of these checks, please use the following commands to run them locally:
# For mypy (substitute "3.10" with the failing version if needed)
pre-commit run --hook-stage manual mypy-3.10
# For markdownlint
pre-commit run --hook-stage manual markdownlint

Signed-off-by: Shaoting Feng <shaotingf@uchicago.edu>
@robertgshaw2-redhat robertgshaw2-redhat changed the title [KVConnector] Enable LMCache connector support for cross-layer KV cache layout [KVConnector][LMCache] Enable Support for cross-layer Layout Jan 30, 2026
