[CI/Build] Reorganize models tests #17459
Conversation
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Isotr0py left a comment:
Overall LGTM. Let's add the missing mixed-modality tests for the other omni models in a follow-up PR!
Are you referring to an MM ViT encoder + text decoder? Because I missed the news about good old encoder-decoder support on V1 🤔
From my understanding (correct me if I'm wrong @ywang96), the V1 model runner is designed to accommodate models that use cross-attention without having to define a separate encoder-decoder model runner.
Instead of organizing the tests by `<arch/task> -> <modality>`, we now nest the directories by `<modality> -> <task>` to better align with our test grouping in CI. Quantization tests have been moved to `tests/models/quantization`, removing the need for the `quant_model` pytest mark since they can now be conveniently selected by path.
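As a rough sketch of the idea (the directory names below are illustrative, not the exact layout from this PR): nesting by modality first means each CI group maps to a single path, and quantization tests become selectable without a marker.

```shell
# Hypothetical sketch of a modality-first layout (names are illustrative):
mkdir -p tests/models/language/generation
mkdir -p tests/models/multimodal/generation
mkdir -p tests/models/quantization

# Quantization tests can now be selected by path instead of a pytest mark:
#   old: pytest tests/models -m quant_model
#   new: pytest tests/models/quantization
ls tests/models
```

Selecting by directory also keeps CI configuration simpler, since each test group is just a path argument rather than a marker expression.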