Remove `_call_accelerator_hook` Trainer method #10999
Conversation
LGTM !
@four4fish why is this a breaking change? It doesn't affect users.
It affects users with a customized accelerator who may have overridden this function; see the sketch below.
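For illustration, a custom accelerator along these lines would be affected. This is a hypothetical sketch assuming the accelerator API of the time; `MyGPUAccelerator` is an invented name, not code from this repo:

```python
# Hypothetical user code, not from this PR: a custom accelerator that
# overrides a hook previously dispatched via `_call_accelerator_hook`.
import torch

from pytorch_lightning.accelerators import GPUAccelerator


class MyGPUAccelerator(GPUAccelerator):
    def on_train_start(self) -> None:
        # Before this PR the Trainer reached this override through
        # `_call_accelerator_hook`; after it, that dispatch is gone,
        # so logic like this would need to move into `setup` instead.
        torch.cuda.empty_cache()
```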
Please be sure to fill out the checklist (especially the changelog entry) before this is merged.
What does this PR do?
After #10890 we only have one remaining usage of `_call_accelerator_hook`. We can remove this call by moving the line `torch.cuda.empty_cache()` from `on_train_start` to `setup` in `accelerators/gpu.py`. Then we can remove the `_call_accelerator_hook` Trainer method entirely.

Fixes #10905
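As a rough sketch of the resulting hook (not the actual diff; the `setup` signature and import path are assumptions based on the accelerator API of the time):

```python
# Sketch of the change in accelerators/gpu.py (signatures assumed):
# `torch.cuda.empty_cache()` moves from `on_train_start` into `setup`,
# which the Trainer already calls directly, so the Trainer's
# `_call_accelerator_hook` method can be deleted.
import torch

from pytorch_lightning.accelerators.accelerator import Accelerator


class GPUAccelerator(Accelerator):
    def setup(self, trainer) -> None:
        # Clear the CUDA cache during setup instead of at train start;
        # previously this line ran in `on_train_start`, the last hook
        # reached through the Trainer's `_call_accelerator_hook`.
        torch.cuda.empty_cache()
        return super().setup(trainer)
```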
Does your PR introduce any breaking changes? If yes, please list them.
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the Review guidelines.
Did you have fun?
Make sure you had fun coding 🙃