[Quickfix] update CachedRequestState as NewRequestData changed #2367
wangxiyuan merged 3 commits into vllm-project:main from
Conversation
Code Review
This pull request updates CachedRequestState to rename the mm_inputs attribute to mm_kwargs, aligning with changes in NewRequestData. The change is applied to the class definition and its instantiation. However, the review identified that not all usages of this attribute were updated in vllm_ascend/worker/model_runner_v1.py, which will lead to runtime errors. A critical comment has been added to address this incomplete refactoring.
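The rename described above can be sketched as follows. This is a hypothetical, minimal stand-in for the real `CachedRequestState` in `vllm_ascend/worker/model_runner_v1.py` (which has many more fields); the field types and the `make_state` helper are illustrative only. It shows why an incomplete rename causes runtime errors: any leftover access to the old `mm_inputs` name raises `AttributeError`.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class CachedRequestState:
    """Hypothetical sketch of the cached per-request state."""
    req_id: str
    # Renamed from `mm_inputs` to `mm_kwargs`, matching NewRequestData.
    mm_kwargs: list[dict[str, Any]] = field(default_factory=list)

def make_state(new_req: dict[str, Any]) -> CachedRequestState:
    # Every construction site must use the new attribute name; a stale
    # `state.mm_inputs` access elsewhere in the runner would now fail.
    return CachedRequestState(req_id=new_req["req_id"],
                              mm_kwargs=new_req["mm_kwargs"])
```

A quick check confirms the old attribute is gone: `hasattr(make_state(...), "mm_inputs")` is `False`, so any un-migrated call site fails loudly rather than silently reading stale data.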
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to the Contributing and Testing guides.
Signed-off-by: MengqingCao <cmq0113@163.com>
Signed-off-by: MengqingCao <cmq0113@163.com>
Codecov Report ❌ Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
##             main    #2367      +/-   ##
==========================================
+ Coverage   75.74%   76.18%   +0.44%
==========================================
  Files         118      120       +2
  Lines       13525    13532       +7
==========================================
+ Hits        10245    10310      +65
+ Misses       3280     3222      -58
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Signed-off-by: MengqingCao <cmq0113@163.com>
from vllm_ascend.utils import vllm_version_is

# Import specific patches for different versions
if vllm_version_is("0.10.0"):
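To illustrate how such a version gate behaves, here is a simplified stand-in for `vllm_ascend.utils.vllm_version_is`; the real helper inspects the installed vLLM version, while this sketch takes the version as an explicit parameter, and the patch name is hypothetical.

```python
# Simplified stand-in for vllm_version_is: the real utility reads the
# installed vLLM version; here we pass it in explicitly for illustration.
def vllm_version_is(target: str, installed: str = "0.10.0") -> bool:
    return installed == target

patches: list[str] = []
if vllm_version_is("0.10.0"):
    # Patches needed only for vLLM v0.10.0; this branch disappears once
    # main drops maintenance of that version, as this PR does.
    patches.append("patch_for_0_10_0")
```

Dropping v0.10.0 maintenance means the `if` branch (and its patch imports) can simply be deleted, which is the cleanup discussed below.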
This can be left for now; we can add it back later.
OK, will add it back in the next PR then.
…project#2367)

### What this PR does / why we need it?
1. update `CachedRequestState` as `NewRequestData` changed in vllm-project/vllm#22570
2. drop maintenance of vllm v0.10.0 in the branch main

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
CI passed with existing test.

- vLLM version: v0.10.0
- vLLM main: vllm-project/vllm@92ff41a

Signed-off-by: MengqingCao <cmq0113@163.com>
What this PR does / why we need it?
Update `CachedRequestState` as `NewRequestData` changed in [Core] Use individual MM items in P0/P1 cache and model runner vllm#22570

Does this PR introduce any user-facing change?
N/A
How was this patch tested?
CI passed with existing test.