Support conditional torch.compile per module #22269
Conversation
Code Review
This pull request introduces a compile_cond argument to the support_torch_compile decorator, allowing for conditional compilation of models. The implementation is clean and well-tested. I've found one minor issue in the new test file that should be addressed. Overall, this is a great addition that increases the flexibility of the compilation framework.
zou3519 left a comment
minor nit around the name but otherwise lgtm
Can you merge from main to fix the CI failures?
@sarckk failures appear related, could you take a look?
Fixed tests that were broken due to changes in #20059.
@ProExpertProg thanks for the rebases. CI seems quite noisy, should we keep rebasing until tests pass?
Yeah, check if these failures are happening on main and restart CI.
Purpose
Required for #22628
#21044 shows an example of compiling multiple submodules within a model. We also want to give users the flexibility to toggle this on or off.
This PR adds a new arg, enable_if, to the support_torch_compile decorator. It is a function that takes in a VllmConfig and returns a bool indicating whether to enable compilation.
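To make the new argument concrete, here is a minimal sketch of how a per-module toggle might look. Only support_torch_compile, enable_if, and VllmConfig come from this change; the VisionTower module, the _compile_vision_tower helper, and the enforce_eager check are illustrative assumptions invented for the example.

```python
# Illustrative sketch only: VisionTower, _compile_vision_tower, and the
# enforce_eager check are invented for this example. The real pieces are
# the support_torch_compile decorator and its new enable_if argument.
import torch
from torch import nn

from vllm.compilation.decorators import support_torch_compile
from vllm.config import VllmConfig


def _compile_vision_tower(vllm_config: VllmConfig) -> bool:
    # Return True to compile this submodule, False to leave it eager
    # (the exact condition here is just an example).
    return not vllm_config.model_config.enforce_eager


@support_torch_compile(enable_if=_compile_vision_tower)
class VisionTower(nn.Module):
    def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
        # Decorated modules take vllm_config/prefix keyword args, matching
        # the convention of existing vLLM model modules.
        super().__init__()
        self.proj = nn.Linear(1024, 1024)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)
```

Because the condition only receives the VllmConfig, the compile/no-compile decision is driven purely by configuration rather than by per-request runtime state.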
Test Plan
Added new unit tests.
Test Result
Tests pass
cc: @zou3519 @BoyuanFeng