[Refactor] Separate _prepare_inputs to _prepare_inputs and _preprocess#6191
wangxiyuan merged 5 commits into vllm-project:main
Conversation
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
|
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing. |
Code Review
This pull request refactors the `_prepare_inputs` method by splitting it into `_prepare_inputs` and `_preprocess`, aligning it with the upstream vLLM implementation. This is a good step toward improving code readability and maintainability.
My review focuses on ensuring the refactoring is clean and doesn't introduce new issues. I've identified a couple of minor cleanup opportunities: an unused parameter in the refactored `_prepare_inputs` method and a confusing TODO comment that appears to be a typo. Addressing these will further improve the code quality.
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
|
@wangxiyuan CI has passed. |
|
This pull request has conflicts, please resolve those before we can evaluate the pull request. |
|
CI failed |
|
I think it's related to #6041 |
Yes. It doesn't seem to be caused by this refactor PR. |
|
you can rebase now. |
|
@wangxiyuan Ready now. |
|
This pull request has conflicts, please resolve those before we can evaluate the pull request. |
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
…rocess` (vllm-project#6191)

### What this PR does / why we need it?
Align with upstream vLLM. This PR helps downstream vLLM-Omni reduce the cost of maintaining `_prepare_inputs`, and it also makes the vLLM-Ascend code more readable. In the future, we can follow vLLM more closely. The `preprocess` logic is the same as in GPUModelRunner, so we no longer need to maintain it ourselves.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI.

- vLLM version: v0.14.0
- vLLM main: vllm-project/vllm@d682094

---------

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
What this PR does / why we need it?
Align with upstream vLLM. This PR helps downstream vLLM-Omni reduce the cost of maintaining `_prepare_inputs`, and it also makes the vLLM-Ascend code more readable. In the future, we can follow vLLM more closely. The `preprocess` logic is the same as in GPUModelRunner, so we no longer need to maintain it ourselves.
Does this PR introduce any user-facing change?
No
How was this patch tested?
CI.
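To illustrate the shape of this refactor, here is a minimal, hypothetical sketch of splitting a monolithic `_prepare_inputs` so that the model-agnostic preprocessing lives in its own `_preprocess` method, which downstream projects can reuse without carrying the whole method. All class, field, and helper names below are illustrative assumptions, not the actual vLLM-Ascend implementation.

```python
# Hypothetical sketch of the _prepare_inputs / _preprocess split.
# None of these names come from the real vLLM-Ascend code; they only
# demonstrate the pattern of delegating shared logic to _preprocess.
from dataclasses import dataclass


@dataclass
class SchedulerOutput:
    """Toy stand-in for vLLM's scheduler output."""
    token_ids: list
    start_position: int


class ToyModelRunner:
    def _preprocess(self, scheduler_output: SchedulerOutput):
        """Model-agnostic input preparation, kept aligned with upstream.

        Computes the position for each scheduled token; because this
        logic matches GPUModelRunner, it needs no separate maintenance.
        """
        positions = [scheduler_output.start_position + i
                     for i in range(len(scheduler_output.token_ids))]
        return scheduler_output.token_ids, positions

    def _prepare_inputs(self, scheduler_output: SchedulerOutput):
        """Backend-specific preparation; shared work goes to _preprocess."""
        token_ids, positions = self._preprocess(scheduler_output)
        # Backend-specific steps (e.g. building attention metadata)
        # would remain here, isolated from the shared preprocessing.
        attn_metadata = {"num_tokens": len(token_ids)}
        return token_ids, positions, attn_metadata
```

With this split, a downstream project that only needs the shared preprocessing can call (or override) `_preprocess` alone, while `_prepare_inputs` stays the single entry point for backend-specific work.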