Introduce new precision layout in PL #16783
Conversation
Codecov Report

@@           Coverage Diff            @@
##           master   #16783    +/-   ##
=========================================
- Coverage      82%      65%      -16%
=========================================
  Files         441      421       -20
  Lines       31692    31472      -220
=========================================
- Hits        25879    20563     -5316
- Misses       5813    10909     +5096
@@ -839,6 +841,7 @@ def get_defaults(cls):

@RunIf(min_cuda_gpus=1)  # trigger this test on our GPU pipeline, because we don't install the package on the CPU suite
@pytest.mark.skipif(not package_available("lightning_colossalai"), reason="Requires Colossal AI Strategy")
@pytest.mark.skip
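For readers outside the Lightning test suite, here is a self-contained sketch of the conditional-skip pattern used in the diff above. It assumes `package_available` is importable from `lightning_utilities.core.imports` (the test suite may import an equivalent helper from elsewhere); the test name and skip reason are hypothetical, and the Lightning-internal `RunIf` GPU marker is omitted.

```python
import pytest

# Assumption: lightning_utilities exposes package_available; the Lightning
# test suite may use its own equivalent helper.
from lightning_utilities.core.imports import package_available


# Skip when the external package is missing (e.g. on the CPU test suite) ...
@pytest.mark.skipif(not package_available("lightning_colossalai"), reason="Requires Colossal AI Strategy")
# ... and skip unconditionally until lightning-colossalai accepts the new precision values.
@pytest.mark.skip(reason="Precision validation is still hardcoded in lightning-colossalai")
def test_colossalai_precision_inputs():  # hypothetical test name
    # Hypothetical body: would assert that the Colossal-AI strategy accepts
    # the new precision values once the external package has been updated.
    ...
```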
This is skipped here, since the validation for the precision argument is hardcoded in https://github.com/Lightning-AI/lightning-colossalai. I tried to change it there first, but those changes won't take effect until the strategy is released to PyPI again.
After this is merged, I will open a follow-up PR in https://github.com/Lightning-AI/lightning-colossalai, get that released, and then open another follow-up PR here to re-add this test.
Discussed offline with @awaelchli
Awesome!
What does this PR do?
Adds the new precision inputs from #16767 to PL.
From now on, the plugins and strategies internally only care about the new values ('64-true', '32-true', '16-mixed', 'bf16-mixed'), while the connector translates all other supported values (64, '64', 32, '32', 16, '16', 'bf16') to these and raises a warning for the mixed-precision ones accordingly.

Fixes #9956
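To make the translation concrete, here is a minimal sketch of the described behaviour, assuming a plain dict-based mapping; the helper name `_convert_precision_input` is hypothetical and the actual connector code in PL may be structured differently.

```python
import warnings

# Legacy values accepted by the connector, mapped to the new canonical strings.
_PRECISION_INPUT_TRANSLATION = {
    64: "64-true",
    "64": "64-true",
    32: "32-true",
    "32": "32-true",
    16: "16-mixed",
    "16": "16-mixed",
    "bf16": "bf16-mixed",
}

_NEW_PRECISION_VALUES = ("64-true", "32-true", "16-mixed", "bf16-mixed")


def _convert_precision_input(precision):
    """Translate a user-supplied precision value to the new canonical form."""
    if precision in _NEW_PRECISION_VALUES:
        return precision  # already canonical, nothing to translate
    if precision not in _PRECISION_INPUT_TRANSLATION:
        raise ValueError(f"Unsupported precision value: {precision!r}")
    converted = _PRECISION_INPUT_TRANSLATION[precision]
    if converted.endswith("-mixed"):
        # The PR description says a warning is raised for the mixed-precision inputs.
        warnings.warn(f"precision={precision!r} is interpreted as {converted!r}.", UserWarning)
    return converted


# For example, Trainer(precision=16) would internally be handled as '16-mixed',
# while Trainer(precision='64') maps silently to '64-true'.
assert _convert_precision_input(16) == "16-mixed"
assert _convert_precision_input("64") == "64-true"
```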
There will be a follow-up PR to re-enable testing of the external Colossal-AI strategy once these changes are reflected in and released with that strategy.
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines.
Reviewer checklist
cc @Borda @carmocca @justusschock @awaelchli