feat(trainer): add enable_benchmarking option #803
Conversation
@Ir1d I see that Will made some changes, could you check it and push so it triggers the new CI
@Borda you can re-trigger the build on the Travis page since you are a member of the org. I don't have anything else to push in this PR
well, the point is that we moved to GitHub CI and we need to add a commit to update all these things lol
almost there, keep rolling ;]
Hello @Ir1d! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-02-25 13:52:46 UTC
The checks have passed and this is now ready to land.
@Ir1d this needs a test. Just run a simple model for a few steps with this flag enabled. Tests are not just for functionality but also for test coverage. |
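A minimal sketch of the smoke test being requested here — enable the flag (as a `Trainer(benchmark=True)` option would) and run a tiny model for a few steps. The Trainer wiring is simulated; only `torch.backends.cudnn.benchmark` is the real API involved.

```python
import torch
from torch import nn

# Simulate what the proposed trainer option would do when benchmark=True.
torch.backends.cudnn.benchmark = True

# "Just run a simple model for a few steps with this flag enabled."
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 2)
for _ in range(3):  # a few steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# The flag should survive the training loop, and training should not blow up.
assert torch.backends.cudnn.benchmark is True
assert torch.isfinite(loss)
```

This exercises the code path for coverage without needing a GPU, since the cudnn benchmark flag can be set and read on CPU-only builds.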
LGTM 🚀 but the test shall be added lol
LGTM 🚀
@Ir1d mind doing a rebase?
@Ir1d What's the purpose of the test you wrote? It has nothing to do with the benchmarking option and just tests whether the trainer completes. Or am I wrong? Maybe assert the value of
@awaelchli Feel free to enlighten me on how to test this benchmarking option. The tests were added for test coverage, as suggested above by William.
@Ir1d Ok, I see. Maybe just assert the value of torch.backends.cudnn.benchmark before and after fit? If the trainer option is true, we would expect it to stay true during fit.
Thank you for your suggestions @awaelchli , I've added the assert statements. |
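The reviewer's suggestion can be sketched directly against the real flag. This is a hedged illustration of the assertion pattern, not the test as merged — the `fit()` call is replaced by a placeholder since no model is involved here.

```python
import torch

# Save the current state so the sketch is side-effect free.
original = torch.backends.cudnn.benchmark

# What Trainer(benchmark=True) is expected to do before fit:
torch.backends.cudnn.benchmark = True
assert torch.backends.cudnn.benchmark is True

# ... fit() would run here; the flag should stay true during and after it ...
assert torch.backends.cudnn.benchmark is True

# Restore the previous state.
torch.backends.cudnn.benchmark = original
```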
@Ir1d @awaelchli I love this spirit! ⚡ |
nice, great addition! GPU tests pass on my machine. |
nvm, I think merge of #889 fixed it. |
* feat(trainer): add enable_benchmarking option, closes Lightning-AI#370
* Update trainer.py (×8)
* add test
* try to make the lint work
* fix typo
* add test, verify torch.backends.cudnn.benchmark
* make lint happy
* make lint happy

Co-authored-by: William Falcon <[email protected]>
closes #370
I don't think we need tests for this option for now. Tests can be added when this flag includes more options.
Before submitting
What does this PR do?
Fixes #370 (issue).
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃