
[Chore] Migrate V0 attention utils#31891

Merged
DarkLight1337 merged 2 commits into vllm-project:main from DarkLight1337:rm-v0-attn-utils
Jan 7, 2026
Conversation


@DarkLight1337 DarkLight1337 commented Jan 7, 2026

Purpose

Refactoring to remove V0 code by migrating the following attention utilities:

  • vllm.attention.backends.utils.PAD_SLOT_ID -> vllm.v1.attention.backends.utils.PAD_SLOT_ID (existing)
  • vllm.attention.backends.utils.get_mla_dims -> vllm.v1.attention.backends.mla.common.get_mla_dims
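A hedged sketch of the updated import sites after this change (assuming a vLLM build at or after this commit). The `except` branch exists only so the snippet runs without vLLM installed; its values are illustrative placeholders, not vLLM's actual definitions:

```python
# New import locations after this PR; the fallbacks below are placeholders
# for illustration only (assumption: -1 mirrors the usual padding sentinel).
try:
    from vllm.v1.attention.backends.utils import PAD_SLOT_ID
    from vllm.v1.attention.backends.mla.common import get_mla_dims
except ImportError:
    PAD_SLOT_ID = -1      # illustrative placeholder, not vLLM's source of truth
    get_mla_dims = None   # illustrative placeholder

print(PAD_SLOT_ID)
```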

Test Plan

Test Result



Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@DarkLight1337 DarkLight1337 requested a review from Isotr0py January 7, 2026 10:35
@DarkLight1337 DarkLight1337 added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 7, 2026
@mergify mergify bot added rocm Related to AMD ROCm v1 labels Jan 7, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the location of several attention utilities. PAD_SLOT_ID is now sourced from vllm.v1.attention.backends.utils, while get_mla_dims and the MLADims dataclass have been moved to vllm.v1.attention.backends.mla.common. The original vllm.attention.backends.utils file is removed as part of this change. All corresponding import paths have been updated correctly throughout the codebase. The refactoring is clean and improves code organization. I have no further comments.

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
@Isotr0py Isotr0py enabled auto-merge (squash) January 7, 2026 12:14
@DarkLight1337 DarkLight1337 disabled auto-merge January 7, 2026 13:17
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) January 7, 2026 13:17
@DarkLight1337 DarkLight1337 merged commit b665bbc into vllm-project:main Jan 7, 2026
57 checks passed
@DarkLight1337 DarkLight1337 deleted the rm-v0-attn-utils branch January 7, 2026 13:44
therealnaveenkamal added a commit to therealnaveenkamal/vllm that referenced this pull request Jan 7, 2026
Signed-off-by: Naveenraj Kamalakannan <therealnaveenkamal@gmail.com>
yugong333 pushed a commit to yugong333/vllm that referenced this pull request Jan 9, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request Jan 13, 2026
### What this PR does / why we need it?
Upgrade vllm commit to 0109 (bde38c11df0ea066a740efe9b77fff5418be45df)

1. remove `init_cached_hf_modules` due to
vllm-project/vllm#31786
2. fix spec_decode e2e test broken by
vllm-project/vllm#29821
3. fix `vllm.v1.attention.backends.utils` imports due to
vllm-project/vllm#31891
4. fix `self.seq_lens - query_lens` device mismatch due to
vllm-project/vllm#31773
5. skip model_runner_v2 e2e test due to `'_OpNamespace' '_C' object has
no attribute 'get_cuda_view_from_cpu_tensor'`

- vLLM version: v0.13.0
- vLLM main:
vllm-project/vllm@2f4e654

Signed-off-by: hfadzxy <starmoon_zhang@163.com>
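Downstream projects such as vllm-ascend, which track a moving vLLM main, often bridge an import move like this one by resolving a symbol from the first path that still exists. A generic sketch of that pattern; the helper name `resolve_first` is hypothetical, and it is demonstrated with stdlib paths so it runs even without vLLM installed:

```python
import importlib

def resolve_first(candidates):
    """Return the first attribute that resolves from a list of
    'module.path:attr' candidates, bridging an import relocation."""
    for spec in candidates:
        module_path, _, attr = spec.partition(":")
        try:
            module = importlib.import_module(module_path)
            return getattr(module, attr)
        except (ImportError, AttributeError):
            continue  # try the next candidate path
    raise ImportError(f"none of {candidates!r} resolved")

# Downstream code could try the new vLLM location first, then the old one:
#   resolve_first(["vllm.v1.attention.backends.utils:PAD_SLOT_ID",
#                  "vllm.attention.backends.utils:PAD_SLOT_ID"])
# Demonstrated here with stdlib paths so the sketch is self-contained:
pi = resolve_first(["nonexistent.module:pi", "math:pi"])
print(round(pi, 2))
```

This keeps a single call site working across vLLM versions at the cost of a small indirection; once the minimum supported vLLM includes the move, the fallback path can be dropped.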
aipaes pushed a commit to aipaes/vllm-ascend that referenced this pull request Jan 15, 2026
akh64bit pushed a commit to akh64bit/vllm that referenced this pull request Jan 16, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request Jan 21, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Jan 31, 2026
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Jan 31, 2026
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request Feb 19, 2026
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026

Labels

ready (ONLY add when PR is ready to merge/full CI is needed), rocm (Related to AMD ROCm), v1


3 participants