Conversation
Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: wxsIcey <1790571317@qq.com>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Summary of Changes

Hello @wxsIcey, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request includes several updates and fixes for the vLLM-Ascend project: it updates version compatibility in documentation, adjusts tolerance ranges in tests, updates dependencies, and adds version checks for conditional module imports and behavior adjustments. The changes also include refactoring and enhancements to unit tests.
Code Review
This pull request updates the codebase to be compatible with recent changes in the upstream vLLM repository. The changes primarily involve adapting to new or moved APIs, especially around compilation passes and global configuration contexts in tests. While most of the adaptations are correct, I've found a recurring issue with how vLLM versions are being checked. The current implementation uses an exact equality check, which is brittle and will likely break with future vLLM updates. I've provided suggestions to make these version checks more robust. The rest of the changes, including test updates and documentation, look good.
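To illustrate the reviewer's point, here is a small standalone sketch (not code from this PR; the helper functions below are hypothetical stand-ins) contrasting an exact-equality version check with a packaging-based range comparison:

```python
# A standalone sketch: contrasting an exact-equality version check with a
# packaging-based range comparison. The helper names are hypothetical.
from packaging.version import Version


def exact_check(version: str) -> bool:
    # Mirrors an exact-equality helper such as vllm_version_is("v0.15.0"):
    # only one specific version string ever matches.
    return version == "0.15.0"


def range_check(version: str) -> bool:
    # Treats every release up to and including 0.15.0 as the "old layout".
    return Version(version) <= Version("0.15.0")


# 0.14.0 predates the upstream change, but the exact check sends it down
# the "new layout" branch, which would raise ImportError in practice.
print(exact_check("0.14.0"))  # False (misclassified as new layout)
print(range_check("0.14.0"))  # True  (correctly treated as old layout)
print(range_check("0.15.1"))  # False (correctly treated as new layout)
```

The range check keeps older releases and future patch releases on the correct branch, at the cost of having to pin down exactly which release introduced the upstream change.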
from vllm_ascend.utils import vllm_version_is

# isort: off
if vllm_version_is("v0.15.0"):
    from vllm.compilation.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.vllm_inductor_pass import VllmInductorPass  # type: ignore
else:
    from vllm.compilation.passes.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.passes.vllm_inductor_pass import VllmInductorPass  # type: ignore
# isort: on
Using vllm_version_is for an exact equality check against 'v0.15.0' is brittle. This will cause incorrect import paths for any other vLLM version, including older versions like 0.14.0, future patch releases like 0.15.1, or development versions. This can lead to ImportError and makes maintenance harder.
A more robust approach is to use version range comparisons based on when the upstream API change occurred. For example:
from packaging.version import Version

import vllm

# isort: off
# The import paths for compilation passes changed in vLLM.
# Use version checking to maintain compatibility.
# Please verify the exact vLLM version where this change occurred.
if Version(vllm.__version__) <= Version("0.15.0"):
    from vllm.compilation.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.vllm_inductor_pass import VllmInductorPass  # type: ignore
else:
    from vllm.compilation.passes.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.passes.vllm_inductor_pass import VllmInductorPass  # type: ignore
# isort: on

from vllm_ascend.utils import vllm_version_is

# isort: off
if vllm_version_is("v0.15.0"):
    from vllm.compilation.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.vllm_inductor_pass import VllmInductorPass  # type: ignore
else:
    from vllm.compilation.passes.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.passes.vllm_inductor_pass import VllmInductorPass  # type: ignore
# isort: on
Similar to my other comments, using vllm_version_is for an exact equality check is brittle. This will cause incorrect import paths for any vLLM version other than 'v0.15.0', leading to ImportError and making maintenance difficult. A more robust approach is to use version range comparisons based on when the upstream API change occurred.
from packaging.version import Version

import vllm

# isort: off
# The import paths for compilation passes changed in vLLM.
# Use version checking to maintain compatibility.
# Please verify the exact vLLM version where this change occurred.
if Version(vllm.__version__) <= Version("0.15.0"):
    from vllm.compilation.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.vllm_inductor_pass import VllmInductorPass  # type: ignore
else:
    from vllm.compilation.passes.inductor_pass import get_pass_context  # type: ignore
    from vllm.compilation.passes.vllm_inductor_pass import VllmInductorPass  # type: ignore
# isort: on

from vllm_ascend.utils import vllm_version_is

# isort: off
if vllm_version_is("v0.15.0"):
    from vllm.compilation.inductor_pass import get_pass_context  # type: ignore
else:
    from vllm.compilation.passes.inductor_pass import get_pass_context  # type: ignore
# isort: on
The use of vllm_version_is for an exact equality check is brittle and will likely cause this to fail on vLLM versions other than 'v0.15.0'. To ensure forward and backward compatibility, please use a version range comparison with packaging.version.Version based on when the get_pass_context path changed upstream.
Suggested change, before:

from vllm_ascend.utils import vllm_version_is
# isort: off
if vllm_version_is("v0.15.0"):
    from vllm.compilation.inductor_pass import get_pass_context  # type: ignore
else:
    from vllm.compilation.passes.inductor_pass import get_pass_context  # type: ignore
# isort: on

After:

from vllm_ascend.utils import vllm_version_is
from packaging.version import Version
import vllm
# isort: off
if Version(vllm.__version__) <= Version("0.15.0"):
    from vllm.compilation.inductor_pass import get_pass_context  # type: ignore
else:
    from vllm.compilation.passes.inductor_pass import get_pass_context  # type: ignore
# isort: on
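As a side note on the suggested range check: packaging orders pre-releases, final releases, patch releases, and dev builds in a well-defined way. The sketch below (using only the packaging library, nothing from this PR) shows that ordering; note that a release candidate such as 0.15.0rc1 sorts before 0.15.0 and would therefore fall into the "old layout" branch of the suggested check.

```python
from packaging.version import Version

# packaging orders: pre-release < final < patch, and a dev build of the
# next minor release sorts after all earlier releases.
versions = ["0.16.0.dev0", "0.15.0", "0.14.0", "0.15.1", "0.15.0rc1"]
ordered = sorted(versions, key=Version)
print(ordered)
# ['0.14.0', '0.15.0rc1', '0.15.0', '0.15.1', '0.16.0.dev0']
```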
This pull request has conflicts, please resolve those before we can evaluate the pull request.
What this PR does / why we need it?
Does this PR introduce any user-facing change?
How was this patch tested?