[Core] Use key-only cache for BaseMultiModalProcessor#23018
DarkLight1337 merged 86 commits into vllm-project:main.
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Code Review
This pull request introduces a significant refactoring of the multimodal input caching mechanism. The core change is the introduction of the CachedMultiModalInputExchanger abstraction, which separates caching logic for the frontend (P0) and the core engine (P1). This enables a "key-only" cache in P0, reducing its memory footprint. The changes are extensive, touching on core engine logic, model implementations, and tests. While the overall refactoring appears to be a solid improvement, I have identified two critical bugs in the implementation that could lead to runtime errors and incorrect caching behavior. These issues need to be addressed to ensure the stability and correctness of the new caching system.
Purpose
Currently both P0 and P1 store multi-modal processor outputs. This PR makes it so that only one process needs to store the processor outputs, halving the memory usage overall.
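The split described above can be sketched roughly as follows. This is an illustrative model only, assuming a sender/receiver pair with lockstep LRU eviction; the class and method names here are hypothetical and are not vLLM's actual API.

```python
from collections import OrderedDict


class KeyOnlySenderCache:
    """P0-side sketch: tracks only the hashes the receiver is assumed to
    hold, so the large processor outputs are not stored in this process.
    Hypothetical names -- not vLLM's actual API."""

    def __init__(self, capacity: int = 4):
        self._keys: "OrderedDict[str, None]" = OrderedDict()
        self._capacity = capacity

    def send(self, key: str, value):
        """Return the payload to transmit: the full value on a miss,
        or None on a hit (the receiver already holds it)."""
        if key in self._keys:
            self._keys.move_to_end(key)  # refresh LRU order
            return None
        if len(self._keys) >= self._capacity:
            self._keys.popitem(last=False)  # evict the oldest key
        self._keys[key] = None
        return value


class ReceiverCache:
    """P1-side sketch: stores the actual processor outputs, keyed by hash.
    Must evict in lockstep with the sender (same capacity, same policy)."""

    def __init__(self, capacity: int = 4):
        self._values: "OrderedDict[str, object]" = OrderedDict()
        self._capacity = capacity

    def receive(self, key: str, payload):
        if payload is None:
            self._values.move_to_end(key)  # hit: mirror the sender's LRU update
        else:
            if len(self._values) >= self._capacity:
                self._values.popitem(last=False)
            self._values[key] = payload
        return self._values[key]
```

The point of the design is visible in `send`: on a hit it returns `None`, so the heavy processor output is neither stored on P0 nor re-sent over IPC, which is what halves the memory footprint.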
Two caching configurations are used:

- P0 uses `MultiModalProcessorSenderCache` (stores only hashes and metadata of processor outputs) while P1 uses `MultiModalReceiverCache` (stores both hashes and processor outputs).
- P0 uses `MultiModalProcessorOnlyCache` (stores both hashes and processor outputs) while P1 uses no caching.

Key changes:
- Cache implementations are now defined in `vllm.multimodal.cache`. The old definitions in `v1.engine.mm_input_cache` have been removed.
- Moved `MultiModalRegistry` and `Processor` into the `InputPreprocessor` class.
- Caching is now applied in `BaseMultiModalProcessor` instead of the `Processor` class.
- The processor cache is now required to be explicitly created in the model runner to perform profiling.

Test Plan

Added simple tests to check the interface of `BaseMultiModalCache`.

Test Result
The new tests pass.
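To give a sense of what an interface-level test for a key-only cache can assert, here is a self-contained sketch. The `TinyKeyCache` class and its `touch` method are hypothetical stand-ins, not the actual vLLM cache classes or test code.

```python
from collections import OrderedDict


class TinyKeyCache:
    """Minimal hash-keyed LRU set, standing in for a key-only cache just to
    illustrate the kind of behavior such interface tests can check."""

    def __init__(self, capacity: int):
        self._keys: "OrderedDict[str, None]" = OrderedDict()
        self._capacity = capacity

    def touch(self, key: str) -> bool:
        """Record a lookup; return True on a hit, False on a miss."""
        if key in self._keys:
            self._keys.move_to_end(key)
            return True
        if len(self._keys) >= self._capacity:
            self._keys.popitem(last=False)  # evict least recently used key
        self._keys[key] = None
        return False


def test_key_only_lru():
    cache = TinyKeyCache(capacity=2)
    assert cache.touch("a") is False  # first sight: miss
    assert cache.touch("b") is False
    assert cache.touch("a") is True   # hit; "b" is now least recent
    assert cache.touch("c") is False  # inserts "c", evicts "b"
    assert cache.touch("b") is False  # "b" was evicted
```

Tests in this style exercise only hit/miss behavior and eviction order, never the cached payloads, which matches the key-only design.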
(Optional) Documentation Update
Updated `docs/configuration/optimization.md`.