
Conversation

Collaborator

@ltd0924 ltd0924 commented Nov 8, 2025

Motivation

Standardize how the unified KV cache shape is obtained in the prefix cache and PD disaggregation scenarios, reducing the cost of integrating different attention backends with the prefix cache.

Modifications

  • Modify the return value of the get_kv_cache_shape function so that the shapes of the key cache and the value cache are returned separately (see the sketch below).
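A minimal sketch of the interface change, kept deliberately generic; the attribute names (kv_num_heads, block_size, head_dim) and the exact tuple layout are assumptions, not copied from any specific backend:

# Before: a single shape served both the key cache and the value cache.
def get_kv_cache_shape_old(self, max_num_blocks):
    return (max_num_blocks, self.kv_num_heads, self.block_size, self.head_dim)

# After: the key and value cache shapes are returned separately, so a backend
# such as MLA can report an empty value-cache shape when no value cache is needed.
def get_kv_cache_shape(self, max_num_blocks):
    key_cache_shape = (max_num_blocks, self.kv_num_heads, self.block_size, self.head_dim)
    value_cache_shape = (max_num_blocks, self.kv_num_heads, self.block_size, self.head_dim)
    return key_cache_shape, value_cache_shape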

Usage or Command

No change.

Accuracy Tests

The existing CI already covers this change; no additional tests are needed.

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please explain the reason in this PR.
  • Provide accuracy results.
  • If the current PR targets the release branch, make sure it has already been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


ltd0924 seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@paddle-bot

paddle-bot bot commented Nov 8, 2025

Thanks for your contribution!

@ltd0924 ltd0924 changed the title [Feature] support unified cache backend [KVCache] support unified cache backend Nov 9, 2025
self.mla_cache = envs.FD_ATTENTION_BACKEND == "MLA_ATTN"
for i in range(self.model_config.num_hidden_layers):
    key_cache_name = f"key_caches_{i}_rank{local_rank}.device{self.device_id}"
    if not self.mla_cache:
Collaborator

This is no longer consistent with gpu_model_runner; should the check here also be based on if value_cache_shape?
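A hypothetical sketch of the suggested shape-based check; value_cache_name and the else branch are illustrative, mirroring the key_cache_name pattern above rather than the PR's actual code:

if value_cache_shape:  # a non-empty shape means this backend allocates a value cache
    value_cache_name = f"value_caches_{i}_rank{local_rank}.device{self.device_id}"
else:
    value_cache_name = None  # e.g. MLA backends report an empty value-cache shape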

yuanlehome previously approved these changes Nov 10, 2025
        self.head_dim,
    )
]
return key_cache_shape, value_cache_shape
Collaborator

Returning it this way is risky. Please make key_cache_shape and value_cache_shape separate lists: right now they are the same list, so a caller that modifies one of the returned values will not realize that the other returned value changes as well.
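The concern is plain Python list aliasing; a self-contained illustration, independent of the PR's actual shape values:

shape = [1024, 8, 64, 128]
key_cache_shape = shape
value_cache_shape = shape      # same object, not a copy

key_cache_shape[-1] = 576      # the caller tweaks the key-cache shape
print(value_cache_shape[-1])   # the value-cache shape silently becomes 576 too

# Safer, as the review asks: hand back independent lists (or tuples).
key_cache_shape = list(shape)
value_cache_shape = list(shape)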

Collaborator Author

Fixed.

Contributor

Copilot AI left a comment

Pull Request Overview

This PR refactors the KV cache shape handling across the codebase to support separate key and value cache shapes, particularly for Multi-Head Latent Attention (MLA) architectures where the value cache may not be needed. The refactoring changes the get_kv_cache_shape() method to return a tuple of two shapes instead of a single shape.

Key Changes:

  • Modified all attention backends to return separate key_cache_shape and value_cache_shape from get_kv_cache_shape()
  • Updated model runners and cache managers to handle separate shapes for key and value caches
  • Replaced MLA-specific flags (self.mla_cache) with shape-based logic (checking if value_cache_shape is empty)
  • Updated cache transfer and cache messager modules to pass shape information as command-line arguments (a hedged sketch follows this list)
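For the last point, a hedged sketch of how a cache shape can cross a process boundary as a command-line argument; the flag names and the JSON encoding are assumptions, not the PR's exact interface:

import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("--key_cache_shape", type=json.loads, default=[])
parser.add_argument("--value_cache_shape", type=json.loads, default=[])

# The parent process serializes the shapes it got from get_kv_cache_shape();
# an empty value-cache shape tells the cache process not to allocate value blocks.
args = parser.parse_args(["--key_cache_shape", "[1024, 8, 64, 128]",
                          "--value_cache_shape", "[]"])
print(args.key_cache_shape, args.value_cache_shape)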

Reviewed Changes

Copilot reviewed 18 out of 18 changed files in this pull request and generated 8 comments.

Summary per file:

  • tests/cache_manager/test_cache_transfer_manager.py: updated the test Args to use the new shape-based parameters
  • fastdeploy/model_executor/layers/attention/*.py: modified all attention backends to return a tuple of (key_cache_shape, value_cache_shape)
  • fastdeploy/worker/*_model_runner.py: updated model runners to unpack and use separate cache shapes
  • fastdeploy/cache_manager/*.py: refactored cache managers to compute and pass shapes as command-line arguments
  • fastdeploy/demo/offline_disaggregated_demo.py: changed the demo configuration (model path and port)

@chang-wenbin chang-wenbin self-requested a review November 12, 2025 06:53
@Jiang-Jia-Jun Jiang-Jia-Jun merged commit 5bf48de into PaddlePaddle:develop Nov 12, 2025
13 of 16 checks passed