chore: disable enable_cpp_symbolic_shape_guards #23048
Conversation
Signed-off-by: Xiao Liu <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Code Review
This pull request disables the enable_cpp_symbolic_shape_guards configuration in PyTorch Dynamo to improve compilation time. The change is implemented in a backward-compatible way, using a try-except block to handle different PyTorch versions gracefully. This ensures the code works correctly whether the configuration flag is present or not. The implementation is sound and I have no concerns.
# Note: this config is not available in torch 2.6, we can skip
# if the config doesn't exist
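The review above describes a backward-compatible way of disabling the flag: only touch the config attribute when it exists, so torch 2.6 (which lacks it) is unaffected. A minimal sketch of that pattern, using stand-in objects rather than the real `torch._dynamo.config` (the helper name `disable_flag_if_present` is hypothetical, not from the PR):

```python
from types import SimpleNamespace

def disable_flag_if_present(config, name):
    """Set config.<name> to False only when the attribute exists,
    so library versions that lack the flag are left untouched."""
    if hasattr(config, name):
        setattr(config, name, False)
        return True
    return False

# Stand-ins for torch._dynamo.config on a newer vs. an older torch:
new_cfg = SimpleNamespace(enable_cpp_symbolic_shape_guards=True)
old_cfg = SimpleNamespace()  # torch 2.6: flag absent

disable_flag_if_present(new_cfg, "enable_cpp_symbolic_shape_guards")
disable_flag_if_present(old_cfg, "enable_cpp_symbolic_shape_guards")
```

The same effect can be had with a try/except around the assignment; an explicit `hasattr` check makes the version-dependent intent more obvious.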
Why are you using PyTorch 2.6?
I guess my higher-level question is, why does vLLM CPU use PyTorch 2.6 instead of PyTorch 2.7?
This is due to an issue with 2.7.0 on x86 CPU: #19258
@xiszishu are you able to merge main into this branch?
Signed-off-by: Xiao Liu <[email protected]>
Signed-off-by: Xiao Liu <[email protected]> Signed-off-by: Duncan Moss <[email protected]>
Signed-off-by: Xiao Liu <[email protected]> Signed-off-by: Xiao Yu <[email protected]>
What was the compile time win amount?
Reimplementation of #20836, now compatible with torch 2.6.0.
Purpose
Disable enable_cpp_symbolic_shape_guards to reduce compile time while remaining compatible with older PyTorch versions (required by the CPU-only setup).
Test Plan
Test Result
(Optional) Documentation Update
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.