[Lint]Style: Convert vllm-ascend/ to ruff format (Batch #3) #5978

wangxiyuan merged 6 commits into vllm-project:main
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request continues the effort of applying ruff format to the vllm-ascend/ directory, targeting a new batch of files. The changes are purely stylistic and include reformatting long lines, updating type hints to modern syntax (e.g., using | for Optional), and removing unused imports. These modifications enhance code readability and consistency. The changes are correct and align with the PR's objective. I have no further comments.
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Force-pushed 1869a6b to 193584f
Force-pushed e64801d to 4b6e1a4
[Lint]Style: Convert vllm-ascend/ to ruff format (Batch #3) (vllm-project#5978)

### What this PR does / why we need it?

**Scope of Changes**:

| File Path |
| :--- |
| `vllm_ascend/attention/mla_v1.py` |
| `vllm_ascend/attention/sfa_v1.py` |
| `vllm_ascend/core/recompute_scheduler.py` |
| `vllm_ascend/core/scheduler_dynamic_batch.py` |
| `vllm_ascend/distributed/device_communicators/npu_communicator.py` |
| `vllm_ascend/distributed/device_communicators/pyhccl.py` |
| `vllm_ascend/distributed/device_communicators/pyhccl_wrapper.py` |

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.13.0
- vLLM main: vllm-project/vllm@2c24bc6

---------

Signed-off-by: MrZ20 <2609716663@qq.com>
Co-authored-by: Soren <user@SorendeMac-mini.local>
What this PR does / why we need it?

Scope of Changes:

- `vllm_ascend/attention/mla_v1.py`
- `vllm_ascend/attention/sfa_v1.py`
- `vllm_ascend/core/recompute_scheduler.py`
- `vllm_ascend/core/scheduler_dynamic_batch.py`
- `vllm_ascend/distributed/device_communicators/npu_communicator.py`
- `vllm_ascend/distributed/device_communicators/pyhccl.py`
- `vllm_ascend/distributed/device_communicators/pyhccl_wrapper.py`

Does this PR introduce any user-facing change?

How was this patch tested?