[KVConnector] Add KV events to KV Connectors#28309

Merged
NickLucche merged 16 commits into vllm-project:main from hickeyma:send-events-worker-to-scheduler
Dec 11, 2025
Conversation

@hickeyma
Contributor

@hickeyma hickeyma commented Nov 7, 2025

Purpose

Enable KV connectors to pass KV Events to vLLM. vLLM can then publish the events using its existing KV Events publisher in the engine.

The motivation to emit KV Events by connectors is to help KV-cache-aware routing make more efficient decisions, based on not only KV caches in GPU, but also KV caches in CPU and local disk.

This PR includes updates to the LMCache connector to implement the KVConnectorKVEvents interface, which enables KV events to be passed to vLLM from LMCache.
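To make the shape of the interface concrete, here is a minimal sketch of what a KVConnectorKVEvents-style mixin could look like. The class and method names (`KVConnectorKVEvents`, `take_events`, `KVCacheEvent`) mirror the PR description but are assumptions for illustration, not the exact vLLM definitions:

```python
# Illustrative sketch only: names below are assumptions, not vLLM source.
from abc import ABC, abstractmethod


class KVCacheEvent:
    """Placeholder for a single KV cache event (e.g. blocks stored/removed)."""

    def __init__(self, kind: str, block_hashes: list[int]):
        self.kind = kind
        self.block_hashes = block_hashes


class KVConnectorKVEvents(ABC):
    """Mixin a connector implements to hand its KV events to vLLM."""

    @abstractmethod
    def take_events(self) -> list[KVCacheEvent]:
        """Return and clear events accumulated since the last call."""


class ToyConnector(KVConnectorKVEvents):
    """A trivial connector that buffers events until vLLM drains them."""

    def __init__(self) -> None:
        self._pending: list[KVCacheEvent] = []

    def record(self, event: KVCacheEvent) -> None:
        self._pending.append(event)

    def take_events(self) -> list[KVCacheEvent]:
        events, self._pending = self._pending, []
        return events
```

The drain-on-read pattern (`take_events` clears the buffer) matches how the engine-side publisher would periodically pull events without double-publishing.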

This is part of the solution for:

Note: This is related to #28252 and from feedback by @markmc in #28252 (comment)

Test Plan

pytest tests/v1/kv_connector/unit/test_lmcache_connector.py

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@gemini-code-assist gemini-code-assist bot left a comment
Code Review

This pull request introduces a mechanism to send KV cache events from the worker side to the scheduler side, which is a crucial feature for connectors that generate these events. The changes are logical, but I've identified a critical bug in the lmcache_connector.py that could lead to event loss, and an incorrect type hint in kv_connector_model_runner_mixin.py. Addressing these issues will improve the correctness and maintainability of the new functionality.

@hickeyma hickeyma force-pushed the send-events-worker-to-scheduler branch from fc375f8 to 499a425 on November 7, 2025 17:27
hickeyma added a commit to hickeyma/vllm that referenced this pull request Nov 7, 2025
This commit follows recommendation from @markmc to use
a specific event property instead of piggybacking on
stats. PR vllm-project#28309 adds the events property to KVConnectorOutput and
this commit picks up the new property and uses it to pass the events
from worker side to scheduler side.

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
@njhill
Member

njhill commented Nov 7, 2025

@orozery

@orozery
Copy link
Collaborator

orozery commented Nov 9, 2025

@hickeyma I think you're missing aggregation code in KVOutputAggregator.
This means adding more state to KVOutputAggregator to aggregate per-block KVEvents.

@njhill Let's think how this change fits to the longer-term future.
Right now, scheduler can send arbitrary metadata to the workers (KVConnectorMetadata).
On the other hand, workers are limited to an explicit KVConnectorOutput, which includes:

  • request-level completions (finished_sending, finished_recving)
  • abstract stats
  • block-level errors
  • (just noticed this) expected_finished_count - looks like a niche bootstrap configuration param

At the time when I introduced worker->scheduler output aggregation (#19555), my initial suggestion was to allow abstract metadata to flow from worker connectors to the scheduler connector.
I think that the current approach where we limit the worker output has 2 drawbacks:

  1. It makes it harder to implement connector logic (e.g. the current PR is needed).
  2. For those connectors that do succeed in pushing their logic into KVConnectorOutput, the result is a KVConnectorOutput that grows over time into a scattered list of practically connector-specific fields.

My view is that we should rename KVConnectorStats to KVConnectorWorkerMeta (which could support get_stats() -> KVConnectorStats).
This would free us from maintaining the list of connector-specific worker output fields, and delegate it entirely to the connector implementations themselves.

cc @sdavidbd

@markmc
Member

markmc commented Nov 10, 2025

@njhill Let's think how this change fits to the longer-term future. Right now, scheduler can send arbitrary metadata to the workers (KVConnectorMetadata). On the other hand, workers are limited to an explicit KVConnectorOutput,

I agree with this in general - e.g. in an earlier iteration of #26172 I found myself adding NIXL-specific semantics to the contents of finished_recving, which was only possible because I happened to be able to use update_connector_output() to extract the NIXL-specific data and translate the contents into what the scheduler expected. An obscure example, but I did think it was a weird contrast with the complete flexibility of connector-specific KVConnectorMetadata subclasses.

My view is that we should rename KVConnectorStats to KVConnectorWorkerMeta (which could support get_stats() -> KVConnectorStats).

I'm not sure I agree with this, though:

  1. Although KVConnectorStats is a recent addition, we should by default assume that we need to maintain compatibility and avoid breaking any external connectors that might have already adopted it
  2. KVConnectorStats seems like a useful abstraction that should probably be adopted by all connectors over time
  3. KVConnectorStats is very specifically about metrics and integration with Prometheus/console stat logging - for example the aggregate() and reduce() methods - so not easily repurposed as a generic worker->scheduler channel
  4. KVEventsBatch also seems like a useful, connector-independent abstraction, that ties in nicely with the existing public KV events publishing API, so if connectors have to use a connector-specific metadata channel for them, we'd probably see a lot of duplication

But I do think it could be positive to add KVConnectorWorkerMetadata anyway ...

@NickLucche
Collaborator

Thanks for the great breakdown @orozery.
Still, I agree with @markmc in that I think KVConnectorStats is a connector-agnostic abstraction with a specific purpose.
I also like that the design direction followed for worker->scheduler comm was a clearer interface instead of a loose container.

the result is a growing-over-time KVConnectorOutput

I understand your point here, but I believe that adding more fields to KVConnectorOutput was kinda part of the original design where it was expected that it would naturally grow to contain more stuff (I think this PR is a good example of that).
Naturally nobody would want the struct to grow indefinitely if we sense that this is the direction in which we're heading.

@orozery
Collaborator

orozery commented Nov 10, 2025

I agree with the usefulness of KVConnectorStats for multiple connectors.
And I think that any field which may be useful for multiple connectors has a good reason to stay explicit (including KVConnectorStats).

My main point is actually on the other hand, for fields that are useful only for a single connector (and I think this may be the case here?), we should have an abstract field that each connector can use freely (similar to KVConnectorMetadata).

@markmc
Member

markmc commented Nov 10, 2025

for fields that are useful only for a single connector, we should have an abstract field that each connector can use freely (similar to KVConnectorMetadata).

Agree 👍

(and I think this may be the case here?),

Honestly, I don't yet fully understand the motivation for the lmcache connector to emit its own events (as per #28252 and LMCache/LMCache#1846) in addition to the KV events already emitted by vLLM ... so I don't have a strong sense either way whether this need is highly-specific to lmcache

@hickeyma
Contributor Author

hickeyma commented Nov 12, 2025

Thanks @njhill @orozery @markmc @NickLucche for the feedback. I will get to it in due course, as I'm a bit delayed this week because I'm at KubeCon.

Honestly, I don't yet fully understand the motivation for the lmcache connector to emit its own events (as per #28252 and LMCache/LMCache#1846) in addition to the KV events already emitted by vLLM ... so I don't have a strong sense either way whether this need is highly-specific to lmcache

@markmc The reason a connector like LMCache would need to generate its own events comes down to connector-specific details. How LMCache stores the tokens/hashes (e.g. LMCache stores tokens with a block granularity of 256, while vLLM's block granularity is 16) and where it stores them (e.g. LMCache might store them to disk) differ from vLLM. Therefore, a consumer of the events would be missing the full context of the cache if it only consumed the vLLM events.
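The granularity mismatch above can be shown with simple arithmetic. The numbers (256 vs 16 tokens per block) come from the comment; the helper function is purely illustrative, not vLLM or LMCache code:

```python
# Arithmetic illustration of the block-granularity mismatch described above;
# the constants come from the comment, the helper is hypothetical.
LMCACHE_BLOCK_TOKENS = 256
VLLM_BLOCK_TOKENS = 16


def lmcache_block_for(vllm_block_idx: int) -> int:
    """Which LMCache block a given vLLM block's tokens fall into."""
    return vllm_block_idx * VLLM_BLOCK_TOKENS // LMCACHE_BLOCK_TOKENS


# One LMCache block spans 256 // 16 = 16 vLLM blocks, so a single
# LMCache-level KV event implicitly covers 16 vLLM-granularity blocks.
print(LMCACHE_BLOCK_TOKENS // VLLM_BLOCK_TOKENS)  # 16
```

This is why vLLM's own GPU-side events cannot stand in for the connector's: they describe cache state at a different granularity and in a different tier (GPU vs CPU/disk).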

@hickeyma
Contributor Author

hickeyma commented Nov 12, 2025

I understand your point here, but I believe that adding more fields to KVConnectorOutput was kinda part of the original design where it was expected that it would naturally grow to contain more stuff (I think this PR is a good example of that).
Naturally nobody would want the struct to grow indefinitely if we sense that this is the direction in which we're heading.

I agree with @NickLucche about having a separate field. @orozery I understand your reservation about only one connector possibly using it, but I still think the separation is best to clearly identify it.

@KuntaiDu
Collaborator

Honestly, I don't yet fully understand the motivation for the lmcache connector to emit its own events (as per #28252 and LMCache/LMCache#1846) in addition to the KV events already emitted by vLLM ... so I don't have a strong sense either way whether this need is highly-specific to lmcache

On the LMCache side, the motivation for LMCache to emit KVEvents is to help KV-cache-aware routing logic make the best decisions, based not only on KV caches in GPU (which current vLLM covers), but also on KV caches in CPU and local disk (where only LMCache has visibility).

@KuntaiDu
Collaborator

@ApostaC Can you also take a look?

@ApostaC
Collaborator

ApostaC commented Nov 12, 2025

I like the idea of separating the KVEvents. My understanding is that KVEvents is becoming a standard format of fine-grained KV cache monitoring, and there will be other components using it in the future. Therefore, leaving a standard interface for it makes a lot of sense to me.

Regarding KVConnectorWorkerMeta, I would say it's more like the runtime statistics generated during the inference rather than "metadata" (which sounds more like static stuff). Thus, I would prefer to use output or stats in the name.

cc @orozery @hickeyma @markmc

@hickeyma hickeyma force-pushed the send-events-worker-to-scheduler branch from 1763194 to d6a23a6 on November 17, 2025 10:42
@ApostaC
Collaborator

ApostaC commented Nov 17, 2025

Quick follow-up regarding the KV event aggregator:

@hickeyma There is a class that combines the worker processes' connector outputs into a single output and sends it back to the scheduler:

```python
class KVOutputAggregator:
    """Utility class to aggregate the output of all workers into a single
    output corresponding to Rank 0 for scheduler."""
```

IIUC, since KV events should work for all the connectors, it's better to add such aggregation logic for KV events in KVOutputAggregator as well. It should be almost the same as how it deals with kv_connector_stats now.
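A toy sketch of what that aggregation could look like. The class and field names (`ToyOutput`, `kv_cache_events`) are assumptions for illustration, not the actual KVOutputAggregator code; the deduplication step reflects the concern raised later in the thread that several workers may emit the same event:

```python
# Hedged sketch of per-worker event aggregation; names are hypothetical.
class ToyOutput:
    """Stand-in for a worker's connector output carrying KV events."""

    def __init__(self, kv_cache_events=None):
        self.kv_cache_events = kv_cache_events or []


class ToyOutputAggregator:
    """Stand-in for a KVOutputAggregator-style class."""

    def aggregate(self, worker_outputs: list[ToyOutput]) -> ToyOutput:
        """Fold per-worker outputs into a single rank-0 output."""
        merged = ToyOutput()
        seen = set()
        for out in worker_outputs:
            for ev in out.kv_cache_events:
                # Drop duplicate events emitted by several workers.
                if ev not in seen:
                    seen.add(ev)
                    merged.kv_cache_events.append(ev)
        return merged
```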

cc @orozery @NickLucche @njhill (please correct me if my understanding is wrong).

@hickeyma hickeyma force-pushed the send-events-worker-to-scheduler branch from 7bcaa13 to 7ad9600 on November 18, 2025 10:15
@hickeyma hickeyma requested a review from markmc as a code owner November 18, 2025 12:13
@hickeyma hickeyma force-pushed the send-events-worker-to-scheduler branch 2 times, most recently from 7a2e362 to 6487bda on November 18, 2025 14:48
@hickeyma hickeyma requested a review from NickLucche December 9, 2025 13:35
@hickeyma
Contributor Author

hickeyma commented Dec 9, 2025

Thanks for the review @NickLucche. Unit tests now added.

@hickeyma hickeyma force-pushed the send-events-worker-to-scheduler branch from dc3c961 to f10c419 on December 9, 2025 14:08
@KuntaiDu
Collaborator

Thanks @NickLucche for the review!

@ApostaC
Collaborator

ApostaC commented Dec 10, 2025

Hey @hickeyma , can you do "update branch" to include the fix of LMCache unit tests? Otherwise, this looks good to me!

@ApostaC ApostaC left a comment

Thanks for the contribution and thanks @KuntaiDu @NickLucche for reviewing!

@NickLucche NickLucche left a comment

I think the only missing part here is to provide a generic implementation of get_kv_connector_kv_cache_events for the MultiConnector, such that it gathers these events from multiple connectors and handles the case where only a subset of the underlying connectors implement the get_kv_connector_kv_cache_events method (+ a quick test case).

I believe changes to multi_connector.py here #22188 could be a good reference.
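A minimal sketch of the MultiConnector behaviour being requested: iterate the sub-connectors and skip any that do not implement the method. The class name `ToyMultiConnector` and the use of `getattr` probing are assumptions for illustration, not the actual multi_connector.py approach:

```python
# Hypothetical sketch of a generic MultiConnector event gatherer.
class ToyMultiConnector:
    """Fans a single call out to a list of sub-connectors."""

    def __init__(self, connectors: list) -> None:
        self._connectors = connectors

    def get_kv_connector_kv_cache_events(self) -> list:
        events = []
        for connector in self._connectors:
            getter = getattr(connector, "get_kv_connector_kv_cache_events", None)
            if getter is None:
                # Only a subset of connectors may implement this method.
                continue
            events.extend(getter())
        return events
```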
@hickeyma

This is required for when worker-side operations like CPU offloading
generate KV cache events. This commit enables these events to be passed
to the scheduler side so that they can be published by the engine.

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Update comments:
- vllm-project#28309 (review)

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
The changes to the connector are for a separate PR, and this
PR is independent of them for now.

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
LMCache connector changes are not part of this PR, as this PR deals
with adding the ability to pass KV events between the worker and scheduler sides,
not a specific connector's usage of it.

The connector implementation is in a separate PR.

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
It is part of the aggregation of kv_connector_output from
all workers. For KV cache events, this means combining events
from all workers, removing any duplicates.

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Clarify that it is the worker side that calls the method.

Review comment:

- vllm-project#28309 (comment)

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Equality methods are unnecessary as they are provided by the parent class
'msgspec.Struct' which they extend.

Review comment:
https://github.com/vllm-project/vllm/pull/28309/files#r2539044714

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Review comments:
- vllm-project#28309 (comment)
- vllm-project#28309 (comment)

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Workers will generate duplicate KV events in LMCache. This
commit adds the capability to aggregate events from workers by returning
only those that were emitted by all workers.

It also provides an abstract class KVConnectorKVEvents that is implemented
by connectors to handle how they emit events.

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
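The "only those emitted by all workers" rule from the commit message above can be sketched as a set intersection. The function name and the representation of events as hashable keys are assumptions for illustration:

```python
# Hypothetical sketch: keep only KV events reported by every worker,
# treating each event as a hashable key (an assumption for illustration).
def events_emitted_by_all(per_worker_events: list[list[str]]) -> list[str]:
    if not per_worker_events:
        return []
    common = set(per_worker_events[0])
    for events in per_worker_events[1:]:
        common &= set(events)
    # Preserve the first worker's ordering for determinism.
    return [ev for ev in per_worker_events[0] if ev in common]
```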
Review comments:
- vllm-project#28309 (comment)
- vllm-project#28309 (comment)
- vllm-project#28309 (comment)

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Not all workers may be processed by the same instance running
`aggregate()` for the model runner. Therefore, events need to be persisted
across KVConnector output processing until the events have been retrieved
in `take_events()`. This commit persists the events until all workers have been
completed.

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Review comment:
vllm-project#28309 (comment)

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
Comment:
vllm-project#28309 (review)

Signed-off-by: Martin Hickey <martin.hickey@ie.ibm.com>
@hickeyma
Contributor Author

hickeyma commented Dec 11, 2025

I think the only missing part here is to provide a generic implementation of get_kv_connector_kv_cache_events for the MultiConnector, such that it handles getting these events from multiple connectors, handling the case where only a subset of the requested connectors implement the get_kv_connector_kv_cache_events method (+quick test case).

Discussed this with @NickLucche on Slack, and it's agreed that the MultiConnector implementation can be pushed out to another PR. I have added a TODO comment to MultiConnector as a placeholder, as suggested. I'll push a separate PR shortly for this.

@NickLucche NickLucche left a comment
LGTM as per @hickeyma's comment on follow-up work


Labels

ci/build kv-connector ready ONLY add when PR is ready to merge/full CI is needed v1


8 participants