
auto_scale_batch_size doesnt use 'binsearch' #3780

Closed
edenlightning opened this issue Oct 2, 2020 · 10 comments · Fixed by #3894
Labels: bug (Something isn't working), docs (Documentation related)

Comments

@edenlightning
Contributor

I tried the following and it's still using power:

#####################
# 1. Init Model
##################### 

model = LitAutoEncoder()

#####################
# 2. Init Trainer
##################### 
trainer = pl.Trainer(auto_scale_batch_size='binsearch')

#####################
# 3. Tune
#####################
trainer.fit(model)

Did we remove support, or is this a bug?

@edenlightning edenlightning added bug Something isn't working help wanted Open to be worked on labels Oct 2, 2020
@edenlightning edenlightning added this to the 0.9.x milestone Oct 2, 2020
@edenlightning edenlightning added the docs Documentation related label Oct 2, 2020
@edenlightning
Contributor Author

@Borda any idea?

@SkafteNicki
Member

@edenlightning it seems that this was changed during one of the refactors.
No matter what the auto_scale_batch_size argument is set to, the tuning no longer runs as part of fit.
Instead, the user should call trainer.tune().
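A minimal sketch of the suggested usage, assuming LitAutoEncoder is defined as in the snippet above (this mirrors the original example, with the tuning step made explicit):

```python
import pytorch_lightning as pl

model = LitAutoEncoder()  # the LightningModule from the example above

trainer = pl.Trainer(auto_scale_batch_size='binsearch')

# tune() runs the batch size finder before training;
# fit() alone no longer triggers it after the refactor.
trainer.tune(model)
trainer.fit(model)
```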

@Borda
Member

Borda commented Oct 2, 2020

@SkafteNicki mind fixing it?
@edenlightning where is this example from? We should edit it there as well and turn it into a tested example...

@SkafteNicki
Member

@Borda I am not sure there is anything to fix, as I think the intention with the refactors was that the user should call trainer.tune().
cc: @williamFalcon

@Borda
Member

Borda commented Oct 2, 2020

I see, then just update the docs... :]

@SkafteNicki
Member

It actually is described correctly in the docs.
@edenlightning is it an old example you have the code from?

@edenlightning edenlightning modified the milestones: 0.9.x, 1.0 Oct 4, 2020
@edenlightning
Contributor Author

Sorry, wrong screenshot.

Screenshot - 2020-10-04T121904.655.png

It's just using power instead of binsearch.

@SkafteNicki
Member

I don't see anything wrong here.

It seems like you are running on MNIST. MNIST has 60,000 samples, and apparently all of them fit in GPU memory at once. The batch size finder never goes higher than the length of the train dataloader.

In that case there is no difference between the modes (power and binsearch), because the binary search only kicks in after power scaling fails for the first time.
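That interaction can be illustrated with a small standalone simulation (a toy sketch, not Lightning's actual implementation): both modes double the batch size until either an OOM-like failure or the dataloader-length cap, and binsearch only differs by refining between the last good and first failing size.

```python
def scale_batch_size(fits, max_size, mode="power", init=2, max_trials=25):
    """Toy batch-size finder.

    fits(n)  -> True if batch size n fits in memory (stand-in for an OOM check)
    max_size -> length of the train dataloader; the finder never exceeds it
    """
    size = init
    last_good = init
    for _ in range(max_trials):
        if size >= max_size:
            # Whole dataset fits: both modes stop here, so they are identical.
            return max_size
        if fits(size):
            last_good = size
            size *= 2  # power scaling: keep doubling
        elif mode == "power":
            return last_good
        else:
            # binsearch: refine between the last good and first failing size
            low, high = last_good, size
            while high - low > 1:
                mid = (low + high) // 2
                if fits(mid):
                    low = mid
                else:
                    high = mid
            return low
    return last_good

# The MNIST case above: everything fits, so the modes are indistinguishable.
print(scale_batch_size(lambda n: True, 60000, "power"))      # 60000
print(scale_batch_size(lambda n: True, 60000, "binsearch"))  # 60000

# The modes only diverge once some size actually fails.
print(scale_batch_size(lambda n: n <= 300, 60000, "power"))      # 256
print(scale_batch_size(lambda n: n <= 300, 60000, "binsearch"))  # 300
```

With a 60,000-sample dataloader that fully fits in memory, both modes walk the same doubling sequence and return the cap, which is why the screenshot shows no visible difference.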

@edenlightning edenlightning removed the help wanted Open to be worked on label Oct 5, 2020
@edenlightning
Contributor Author

OK, so I guess this is just a matter of documentation. Can you help clarify this behaviour in the docs?

@SkafteNicki
Member

Yes, will send a PR :]
