
[Bugfix][ROCm] Strip block_size before attention backend validation#36274

Merged
houseroad merged 3 commits into vllm-project:main from jennyyyyzhen:fix-rocm-block-size-544
Mar 11, 2026

Conversation

@jennyyyyzhen
Contributor

@jennyyyyzhen jennyyyyzhen commented Mar 6, 2026

Purpose

The ROCm attention backend refactor (#35246) introduced validate_configuration calls that reject irregular block_size values (e.g. 544) because they are not in the BlockSize type (1, 8, 16, 32, 64, 128, 256). The CUDA platform avoids this by stripping block_size before validation. This PR applies the same fix to the ROCm platform.
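A minimal sketch of the idea, with hypothetical names (`AttentionSelectorConfig`, `select_backend_rocm`, `SUPPORTED_BLOCK_SIZES`) that do not match vLLM's actual code: the selector clears block_size on the config it passes to the validator, mirroring the CUDA path, so an irregular size like 544 no longer trips the membership check.

```python
# Hedged sketch of the fix described above; all names here are
# illustrative, not vLLM's real API.
from dataclasses import dataclass, replace
from typing import Optional

# Mirrors the BlockSize literal type mentioned in the PR description.
SUPPORTED_BLOCK_SIZES = {1, 8, 16, 32, 64, 128, 256}

@dataclass(frozen=True)
class AttentionSelectorConfig:
    head_size: int
    dtype: str
    block_size: Optional[int] = None

def validate_configuration(cfg: AttentionSelectorConfig) -> None:
    """Reject block sizes outside the supported set (the failing check)."""
    if cfg.block_size is not None and cfg.block_size not in SUPPORTED_BLOCK_SIZES:
        raise ValueError(f"Unsupported block_size: {cfg.block_size}")

def select_backend_rocm(cfg: AttentionSelectorConfig) -> str:
    # The fix: strip block_size before validation, as the CUDA
    # platform already does, so irregular sizes pass through.
    validate_configuration(replace(cfg, block_size=None))
    return "ROCM_ATTN"  # placeholder backend name

# Validating the raw config would raise for 544; after stripping,
# backend selection succeeds.
cfg = AttentionSelectorConfig(head_size=128, dtype="bf16", block_size=544)
print(select_backend_rocm(cfg))
```

The design choice is that block_size constrains KV-cache layout rather than backend choice, so it need not participate in backend validation.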

Test Plan

Start Qwen3 Next and confirm it loads correctly with this change.

Test Result




Signed-off-by: jennyyyyzhen <yzhen@hmc.edu>
@mergify mergify bot added the rocm (Related to AMD ROCm) and bug (Something isn't working) labels Mar 6, 2026
@github-project-automation github-project-automation bot moved this to Todo in AMD Mar 6, 2026
@jennyyyyzhen jennyyyyzhen marked this pull request as ready for review March 6, 2026 18:37
@jennyyyyzhen jennyyyyzhen requested a review from tjtanaa as a code owner March 6, 2026 18:37
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a bug on the ROCm platform where attention backend validation failed for irregular block_size values. The fix, which mirrors the existing logic for the CUDA platform, strips block_size from the attention selector configuration before the validation step. This is a clean, targeted change that allows models with non-standard block sizes to run correctly on ROCm.

@jennyyyyzhen
Contributor Author

@houseroad can you help review?

Collaborator

@houseroad houseroad left a comment


Looks good.

@houseroad houseroad added the ready ONLY add when PR is ready to merge/full CI is needed label Mar 10, 2026
@houseroad houseroad merged commit 428bc71 into vllm-project:main Mar 11, 2026
43 checks passed
@github-project-automation github-project-automation bot moved this from Todo to Done in AMD Mar 11, 2026
wendyliu235 pushed a commit to wendyliu235/vllm-public that referenced this pull request Mar 18, 2026
…llm-project#36274)

Signed-off-by: jennyyyyzhen <yzhen@hmc.edu>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
fxdawnn pushed a commit to fxdawnn/vllm that referenced this pull request Mar 19, 2026
…llm-project#36274)

Signed-off-by: jennyyyyzhen <yzhen@hmc.edu>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>

Labels

bug (Something isn't working) · ready (ONLY add when PR is ready to merge/full CI is needed) · rocm (Related to AMD ROCm)

Projects

Status: Done


2 participants