[FEAT] [Multi-modal] Reorganizing ViT Attention Backend and decoupling it from Text Attention Backend #27919

tjtanaa wants to merge 34 commits into vllm-project:main
Conversation
@codex review
Codex Review: Didn't find any major issues. 👍
ywang96 left a comment:

Thanks for working on this amazing effort! I will review this PR tonight!
@ywang96 @DarkLight1337 I have rebased it with upstream again. It contains the enablement of […]. I would like to suggest that the further enablement of

```python
self.flash_attn_varlen_func = maybe_get_vit_flash_attn_backend(
    self.attn_backend,
)
```

be done in another PR after this one.
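(As a minimal sketch of what that wiring might look like inside a vision attention module: the class name and import path below are assumptions for illustration; only `maybe_get_vit_flash_attn_backend` and the two attribute names come from the snippet above.)

```python
# Hypothetical usage sketch; the class name and import path are assumptions.
from vllm.attention.layer import maybe_get_vit_flash_attn_backend


class VisionAttentionSketch:
    def __init__(self, attn_backend):
        self.attn_backend = attn_backend
        # Resolve the varlen flash-attention callable once at init time;
        # a non-flash-attention backend would yield no callable here.
        self.flash_attn_varlen_func = maybe_get_vit_flash_attn_backend(
            self.attn_backend,
        )
```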
This pull request has merge conflicts that must be resolved before it can be merged.
Any progress on this PR?
Purpose

This is an implementation of RFC #27821.
1. It decouples the ViT Attention Backend selection logic from the Text Attention Backend, redefining `_MHA_Backend` following the discussion in https://github.com/vllm-project/vllm/pull/27061/files#r2443909604.

2. Make ViT Backend selection platform-specific.
Make sure that the ViT attention backend is platform-specific. We should determine the supported ViT backends through the platform interface. Any overriding logic should be performed in the platform interface, as only the platform interface knows which other ViT backends it can fall back to. We should avoid adding overriding logic to the `model.py` files.

So, we allow `get_vit_attn_backend` in the platform interface to access `--mm-encoder-attn-backend`.

In the platform interface, we should only return a `_MHA_Backend`; we should not return the functions. The flash attention functions should only be returned through `maybe_get_vit_flash_attn_backend`. If a model only supports a specific type of attention, the ViT overriding logic will be implemented explicitly in the model definition file (`model.py`). A minimal sketch of this selection flow is shown below.
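For concreteness, here is a minimal sketch of the selection flow described above, assuming hypothetical enum members and a stand-in platform class; only the names `_MHA_Backend`, `get_vit_attn_backend`, `get_supported_vit_attn_backends`, and `maybe_get_vit_flash_attn_backend` are taken from this PR.

```python
# Illustrative sketch only; enum members, the platform class, and the
# function bodies are assumptions, not the actual vLLM implementation.
from enum import Enum, auto


class _MHA_Backend(Enum):
    FLASH_ATTN = auto()  # hypothetical member names
    TORCH_SDPA = auto()
    XFORMERS = auto()


class PlatformSketch:
    """Stand-in for the per-platform interface; not the real vLLM class."""

    def get_supported_vit_attn_backends(self) -> list[_MHA_Backend]:
        return [_MHA_Backend.FLASH_ATTN, _MHA_Backend.TORCH_SDPA]

    def get_vit_attn_backend(
        self, mm_encoder_attn_backend: _MHA_Backend | None = None
    ) -> _MHA_Backend:
        supported = self.get_supported_vit_attn_backends()
        # Honor --mm-encoder-attn-backend when this platform supports it;
        # all fallback/override logic lives here, never in model.py.
        if mm_encoder_attn_backend in supported:
            return mm_encoder_attn_backend
        return supported[0]


def maybe_get_vit_flash_attn_backend(attn_backend: _MHA_Backend):
    """Return a varlen flash-attention callable for FA-style backends,
    or None otherwise. The flash_attn import is an assumption."""
    if attn_backend == _MHA_Backend.FLASH_ATTN:
        from flash_attn import flash_attn_varlen_func
        return flash_attn_varlen_func
    return None


# Example: an unsupported request falls back to a supported backend.
platform = PlatformSketch()
backend = platform.get_vit_attn_backend(_MHA_Backend.XFORMERS)
assert backend == _MHA_Backend.FLASH_ATTN
```

The key design point is that the platform returns only an enum value; callers resolve the actual attention callable in a separate, explicit step.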
Test Plan

Add unit tests for `current_platform.get_supported_vit_attn_backends()` (a hedged example test is sketched after the file list below).

Set up unit tests for all affected models:

- tests/models/multimodal/generation/test_ovis2_5.py
- tests/models/multimodal/generation/test_maverick.py
- tests/models/multimodal/generation/test_dots_ocr.py
- tests/models/multimodal/generation/test_keye.py
- tests/models/multimodal/generation/test_qwen2_5_vl.py
- tests/models/multimodal/generation/test_qwen2_vl.py
- tests/models/multimodal/generation/test_qwen3_omni_moe_thinker.py
- tests/models/multimodal/generation/test_ernie45_vl.py
- tests/models/multimodal/generation/test_glm4_1v.py
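As an illustration of the platform-level test mentioned above, a hedged sketch of what such a unit test might look like; the test name and assertion are assumptions, not the PR's actual tests:

```python
# Hedged sketch of a unit test; only current_platform and
# get_supported_vit_attn_backends() are names taken from this PR.
from vllm.platforms import current_platform


def test_get_supported_vit_attn_backends_is_nonempty():
    backends = current_platform.get_supported_vit_attn_backends()
    # Every platform should advertise at least one ViT attention backend
    # so the selection logic always has a valid fallback.
    assert len(backends) > 0
```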
Test Result

All unit tests passed.
Essential Elements of an Effective PR Description Checklist
- (Optional) The necessary documentation update, such as updating `supported_models.md` and `examples` for a new model.