
Conversation

@yiz-liu (Collaborator) commented May 7, 2025

What this PR does / why we need it?

Fix output tensor shape in vanilla_chunked_prefill function.
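
For readers skimming the diff, here is a minimal sketch of the kind of shape fix this refers to; the tensor names and the exact reshape are illustrative assumptions, not the actual patch to `vanilla_chunked_prefill`:

```python
import torch

def reshape_attn_output(attn_output: torch.Tensor,
                        num_tokens: int,
                        num_heads: int,
                        head_dim: int) -> torch.Tensor:
    # Hypothetical fix: collapse the per-head dimension so the caller
    # receives the (num_tokens, num_heads * head_dim) layout it expects
    # instead of a (num_heads, num_tokens, head_dim)-style tensor.
    return attn_output.transpose(0, 1).reshape(num_tokens, num_heads * head_dim)
```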

Does this PR introduce any user-facing change?

None.

How was this patch tested?

Run offline inference on DeepSeek models.
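
For reference, an offline-inference smoke test along these lines can be reproduced with vLLM's Python API; the model name and sampling settings below are placeholders, not the exact setup used here:

```python
from vllm import LLM, SamplingParams

# Placeholder model; substitute the DeepSeek checkpoint you actually test.
llm = LLM(model="deepseek-ai/DeepSeek-V2-Lite", trust_remote_code=True)

outputs = llm.generate(
    ["Explain chunked prefill in one sentence."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```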

…oader in NPUModelRunnerBase

Signed-off-by: Yizhou Liu <[email protected]>
@yiz-liu force-pushed the fix-output-shape branch from e96f48f to 368a39e on May 7, 2025 at 07:58
@yiz-liu changed the title from "[Bugfix] Fix output tensor shape in vanilla_chunked_prefill function" to "[Bugfix] Fix output tensor shape in vanilla_chunked_prefill and update import paths for model_loader" on May 7, 2025
@jianzs (Collaborator) left a comment

We should make sure the vllm-ascend main branch works with vllm versions 0.8.5 and 0.8.5.post1.

@yiz-liu force-pushed the fix-output-shape branch 2 times, most recently from 3e53cb9 to 87badbf on May 7, 2025 at 09:56
@ganyi1996ppo (Collaborator) commented

Looks like the CI run is incomplete; can you trigger it again with a new commit?

@yiz-liu force-pushed the fix-output-shape branch 4 times, most recently from 10a64bc to ef7fb88 on May 7, 2025 at 12:24
@yiz-liu (Collaborator, Author) commented May 7, 2025

@jianzs @Yikun I don't think vllm_version_is is working properly; any ideas? If it isn't working, then #753 might be broken too.
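
For context, a version guard like `vllm_version_is` is usually just a comparison against the installed package version; the sketch below is an assumption about its general shape, not the actual vllm-ascend implementation:

```python
import vllm

def vllm_version_is(target: str) -> bool:
    # Naive sketch: exact string match against the installed vLLM version.
    # A check like this silently fails for dev or locally patched builds
    # whose version string carries a suffix (e.g. "0.8.5+<sha>"), which is
    # one way such a helper can look like it "isn't working".
    return vllm.__version__ == target
```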

@yiz-liu force-pushed the fix-output-shape branch 10 times, most recently from 046deab to eb8ccd2 on May 8, 2025 at 04:00
@yiz-liu force-pushed the fix-output-shape branch from eb8ccd2 to 6903a00 on May 8, 2025 at 04:00
@yiz-liu (Collaborator, Author) commented May 8, 2025

> @jianzs @Yikun I don't think vllm_version_is is working properly; any ideas? If it isn't working, then #753 might be broken too.

NVM, I found a workaround.

```diff
 ) -> None:
-    from vllm.model_executor.model_loader.loader import ShardedStateLoader
+    if vllm_version_is("0.8.5") or vllm_version_is("0.8.5.post1"):
+        from vllm.model_executor.model_loader.loader import ShardedStateLoader  # type: ignore[import]  # isort: skip  # noqa
```
A collaborator commented:
I think this is a reasonable approach to skip the static code check. We cannot install two versions of vLLM in the same Python env, so I agree with skipping this check for v0.8.5.

A collaborator replied:
Should we ensure the vllm-ascend main branch is compatible with both vLLM 0.8.5 and 0.8.5.post1?

A collaborator replied:
Yeah, but this just skips the static code-format check for 0.8.5 in the main branch's CI. It has no impact on the features.

A collaborator replied:
Using vLLM v0.8.5 without this check for v0.8.5 would fall through to the else branch and cause a problem.
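
Putting the thread together, the gated import ends up with roughly the shape below. The 0.8.5 branch mirrors the hunk above; the else-branch import path is an assumption about the newer vLLM layout (verify against the vLLM version you target), not a quote of the merged patch:

```python
from vllm_ascend.utils import vllm_version_is

if vllm_version_is("0.8.5") or vllm_version_is("0.8.5.post1"):
    # Import path that exists in vLLM 0.8.5 / 0.8.5.post1; the trailing
    # markers skip mypy, isort, and flake8 so CI against a newer vLLM
    # does not fail static checks on an unresolvable module.
    from vllm.model_executor.model_loader.loader import ShardedStateLoader  # type: ignore[import]  # isort: skip  # noqa
else:
    # Assumed newer location after the model_loader refactor.
    from vllm.model_executor.model_loader import ShardedStateLoader
```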

@jianzs (Collaborator) commented May 8, 2025

Merge this pull request ASAP; many others are blocked. @MengqingCao @Yikun

@ganyi1996ppo merged commit 2e3520e into vllm-project:main on May 8, 2025
14 checks passed
@yiz-liu deleted the fix-output-shape branch on May 8, 2025 at 06:20
chopper0126 pushed a commit to chopper0126/vllm-ascend that referenced this pull request Oct 16, 2025
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025