
Support of optim.lr_scheduler.ReduceLROnPlateau #298

Closed
vikmary opened this issue Oct 4, 2019 · 3 comments · Fixed by #320
Labels
feature (Is an improvement or enhancement), help wanted (Open to be worked on)

Comments


vikmary commented Oct 4, 2019

Hi,
I'm using pytorch-lightning as a base for a side project. My configure_optimizers looks like this:

def configure_optimizers(self):
    """Return the optimizers and LR schedulers to use."""
    optimizer = optim.Adam(self.parameters(), lr=self.params.learning_rate)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer,
                                                     mode='min',
                                                     factor=0.1,
                                                     patience=10,
                                                     min_lr=1e-6,
                                                     verbose=True)
    return [optimizer], [scheduler]

But that doesn't work out of the box, because the ReduceLROnPlateau scheduler needs a scheduler.step(val_loss) call rather than scheduler.step(num_epochs).
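Outside Lightning, the required call pattern looks like this (a minimal plain-PyTorch sketch; the model and hyperparameters are illustrative):

```python
from torch import nn, optim

model = nn.Linear(4, 1)
optimizer = optim.Adam(model.parameters(), lr=0.1)

# ReduceLROnPlateau adapts the LR from a monitored metric, so
# step() must receive that metric, not the epoch index.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=0)

# Two "epochs" with no improvement in the monitored loss:
# with patience=0, the second stagnant epoch triggers a reduction.
for val_loss in (1.0, 1.0):
    scheduler.step(val_loss)  # pass the metric, not an epoch count
```

After the second stagnant step, the learning rate in optimizer.param_groups drops by the given factor.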

The scheduler greatly improves NN convergence; it would be nice to support it.

Thank you

vikmary added the feature and help wanted labels Oct 4, 2019
@vikmary vikmary changed the title Support of torch.optim.lr_scheduler.ReduceLROnPlateau Support of optim.lr_scheduler.ReduceLROnPlateau Oct 4, 2019

williamFalcon commented Oct 4, 2019

Install from master and it should work. We pushed a fix for this a few weeks ago.


vikmary commented Oct 5, 2019

Are you talking about this fix?
e0c5406

It doesn't solve the issue.

It should be scheduler.step(val_loss, epoch=nb_epochs), not scheduler.step(nb_epochs). Right now the scheduler is changing the learning rate based on the epoch number, not the validation loss.
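A minimal sketch of the dispatch such a fix would need (not Lightning's actual code; step_schedulers is a hypothetical helper name):

```python
from torch import optim
from torch.optim.lr_scheduler import ReduceLROnPlateau

def step_schedulers(schedulers, val_loss, epoch):
    # Hypothetical helper: metric-driven schedulers receive the
    # monitored validation loss, epoch-driven schedulers keep the
    # existing epoch-based call.
    for scheduler in schedulers:
        if isinstance(scheduler, ReduceLROnPlateau):
            scheduler.step(val_loss)
        else:
            scheduler.step(epoch)
```

The isinstance check is enough because ReduceLROnPlateau is the only scheduler in torch.optim.lr_scheduler whose step() expects a metric.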

@williamFalcon

Got it, thanks for pointing that out. Want to submit a PR?
