auto_scale_batch_size doesn't use 'binsearch' #3780
@Borda any idea?

@edenlightning it seems that this was changed during one of the refactors.

@SkafteNicki mind fixing it?
@Borda I am not sure if there is anything to fix, as I think the intention with the refactors was that the user should call
I see, then just update the docs... :]

It actually is described correctly in the docs
I don't see anything wrong here. It seems like you are running on MNIST. MNIST has 60000 samples, and it seems like you can fit them all in GPU memory. The batch size finder never goes higher than the length of the train dataloader. In this case there will be no difference between the modes (power and binsearch), because the binary search only kicks in after the power scaling fails for the first time.
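The behavior described above can be sketched in plain Python. This is a hypothetical simplification, not Lightning's actual implementation: the `fits` callback stands in for attempting a training batch and catching an OOM error, and `scale_batch_size` is an illustrative name.

```python
def scale_batch_size(fits, dataset_len, init_size=2, mode="binsearch"):
    """Find the largest batch size <= dataset_len for which fits(size) is True.

    Sketch of the two modes discussed in the thread: 'power' doubles the batch
    size until a batch fails (or the whole dataset fits); 'binsearch' then
    refines between the last success and the first failure. If everything fits,
    the search stops at dataset_len and the two modes give the same answer.
    """
    low, size = 0, init_size
    # Power phase: keep doubling while the batch still fits in memory.
    while fits(size):
        low = size
        if size >= dataset_len:
            # Whole dataset fits in one batch: binsearch never kicks in,
            # so both modes return the same result.
            return dataset_len
        size = min(size * 2, dataset_len)
    if mode == "power":
        return low  # last size that worked
    # Binsearch phase: narrow down between last success (low) and first failure.
    high = size
    while high - low > 1:
        mid = (low + high) // 2
        if fits(mid):
            low = mid
        else:
            high = mid
    return low
```

For example, if at most 100 samples fit in memory, 'power' stops at 64 (the last successful doubling) while 'binsearch' refines that to 100; if everything fits, both modes return the dataset length, which is the MNIST situation above.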
OK, so I guess this is just a matter of documentation. Can you help clarify this behaviour in the docs?

Yes, will send a PR :]
I tried the following and it's still using power:
Did we remove support? or is that a bug?