
[ROCm][CI] fix get_valid_backends #32787

Merged
tjtanaa merged 1 commit into vllm-project:main from ROCm:fix_attn_backend_import
Jan 22, 2026

Conversation

Contributor

@divakar-amd divakar-amd commented Jan 21, 2026

This PR fixes the "get_valid_backends" check

Details: __getattr__ is overridden in interface.py to return None when an attribute is not found, instead of raising an AttributeError, so instance-level attribute checks (hasattr/getattr) always appear to succeed.
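The pitfall can be reproduced in isolation. The toy class below is illustrative (it is not vLLM's actual platform class in interface.py); it shows how a __getattr__ that returns None fools instance-level checks, while looking the attribute up on the class still behaves correctly:

```python
class Platform:
    """Toy stand-in for a platform class whose __getattr__
    swallows missing attributes instead of raising AttributeError."""

    def __getattr__(self, name):
        # Any unknown attribute resolves to None, so hasattr() on an
        # instance always reports True.
        return None


p = Platform()

# Instance-level checks are fooled by the __getattr__ override:
print(hasattr(p, "get_valid_backends"))             # True (misleading)
print(getattr(p, "get_valid_backends", "missing"))  # None, not "missing"

# Looking the attribute up on the class bypasses instance __getattr__:
print(getattr(type(p), "get_valid_backends", None))  # None -> truly absent
print(hasattr(type(p), "get_valid_backends"))        # False
```

This is why the fix checks current_platform.__class__ rather than the instance.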

pytest -s -v tests/v1/spec_decode/test_acceptance_length.py

Note: There is more debugging ongoing, might need another PR to fully fix this test. Preliminary analysis hints towards the refactored cudagraph PR causing issue for this test.

Signed-off-by: Divakar Verma <divakar.verma@amd.com>
@mergify mergify bot added rocm Related to AMD ROCm speculative-decoding v1 labels Jan 21, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly addresses an issue where hasattr could incorrectly evaluate to True due to the custom __getattr__ implementation in vllm/platforms/interface.py. By using getattr(current_platform.__class__, "get_valid_backends", None), the code now reliably checks for the presence of the get_valid_backends method on the class itself. The platform-specific fallbacks for ROCm and other platforms are also appropriate. This is a good fix that improves the robustness of the backend selection logic.

@divakar-amd divakar-amd marked this pull request as draft January 21, 2026 17:03
@divakar-amd divakar-amd marked this pull request as ready for review January 21, 2026 18:53
Collaborator

@tjtanaa tjtanaa left a comment


We will go with this first; however, we should start supporting the new abstraction on ROCm as well and define get_valid_backends in platform/rocm.py.
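As a rough illustration of that suggestion, here is a hypothetical sketch of defining get_valid_backends on the ROCm platform class. The class name, signature, and backend list are illustrative assumptions, not the actual vLLM code:

```python
# Hypothetical sketch only -- not the real vllm/platforms/rocm.py.
# The idea is that the platform class itself advertises its valid
# attention backends, so callers no longer need a fallback branch.

class RocmPlatform:
    @classmethod
    def get_valid_backends(cls) -> list[str]:
        # TRITON_ATTN is AMD's default attention backend.
        return ["TRITON_ATTN"]


print(RocmPlatform.get_valid_backends())  # ['TRITON_ATTN']
```

With the method defined on the class, the class-level getattr check in the fix would find it and the ROCm special case could eventually be removed.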

@tjtanaa tjtanaa enabled auto-merge (squash) January 22, 2026 02:38
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 22, 2026
get_valid_backends = getattr(current_platform.__class__, "get_valid_backends", None)
if get_valid_backends is None:
    if current_platform.is_rocm():
        return ["TRITON_ATTN"]
Member


Can you explain why TRITON_ATTN is used here with a code comment?

Collaborator


@DarkLight1337 Triton is AMD's default attention backend, and we are not compatible with the flash attn backend that is used for the CUDA equivalent tests. Let us know if you'd like us to put a comment there still.

Member


Yes, the purpose of the comment is to explain this to other developers who are less familiar with ROCm.

@tjtanaa tjtanaa merged commit 49d9653 into vllm-project:main Jan 22, 2026
28 of 30 checks passed
monajafi-amd pushed a commit to monajafi-amd/vllm that referenced this pull request Jan 23, 2026
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
Signed-off-by: mohammad najafi <mohammad.najafi@amd.com>
cwazai pushed a commit to cwazai/vllm that referenced this pull request Jan 25, 2026
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
Signed-off-by: 陈建华 <1647430658@qq.com>
lapy pushed a commit to lapy/vllm that referenced this pull request Jan 27, 2026
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request Feb 19, 2026
Signed-off-by: Divakar Verma <divakar.verma@amd.com>

Labels

ready ONLY add when PR is ready to merge/full CI is needed rocm Related to AMD ROCm speculative-decoding v1
