
Learning rate stepping option #941

Merged: 34 commits, merged Mar 5, 2020

Commits
20f32b4  remove deprecated args to learning rate step function (Feb 18, 2020)
ef237f1  step based scheduler (Feb 25, 2020)
67ae533  mixing models for testing (Feb 25, 2020)
efe19e0  merge (Feb 25, 2020)
e640403  fix styling (Feb 25, 2020)
2e674e8  tests (Feb 25, 2020)
4b96634  update documentation (Feb 25, 2020)
b3a8d09  smaller fix (Feb 25, 2020)
5c876c2  merge (Feb 26, 2020)
8fa7a03  update to dict structure (Feb 26, 2020)
12a526c  updated test (Feb 26, 2020)
a3ac63f  update documentation (Feb 26, 2020)
fc4847b  update CHANGELOG.md (Feb 26, 2020)
4167975  fix styling (Feb 26, 2020)
6e2d712  fix problems with trainer io (Feb 26, 2020)
7d18fab  fix tests (Feb 26, 2020)
215b85f  rebase (Feb 27, 2020)
f01597d  simplification of code (Feb 27, 2020)
55d9661  fix styling (Feb 27, 2020)
2766910  change from batch to step (Feb 28, 2020)
2c848b9  update to tests (Feb 28, 2020)
1906239  fix styling (Feb 28, 2020)
fc0ae09  fixed some logic (Feb 28, 2020)
44207bc  Update pytorch_lightning/core/lightning.py (Borda, Feb 28, 2020)
1bcbf11  Merge branch 'master' into lr_stepping_option (williamFalcon, Mar 3, 2020)
ec15729  duplicated test (Borda, Mar 3, 2020)
2e5e9ba  fix test on amp (Mar 4, 2020)
8dc4c31  small update to tests (Mar 4, 2020)
284afe5  added monitor key for ReduceLROnPlateau (Mar 4, 2020)
436ac59  Merge branch 'master' into lr_stepping_option (williamFalcon, Mar 4, 2020)
167886f  Merge branch 'master' into lr_stepping_option (williamFalcon, Mar 5, 2020)
bf4c2bb  Update trainer.py (williamFalcon, Mar 5, 2020)
383ed9a  Update training_loop.py (williamFalcon, Mar 5, 2020)
1f42822  fix test after introducing monitor keyword (Mar 5, 2020)
fix problems with trainer io
Nicki Skafte committed Feb 26, 2020
commit 6e2d712315f7e630e0624663e30fdd15043bbaf7
4 changes: 2 additions & 2 deletions — pytorch_lightning/trainer/trainer.py

@@ -1034,7 +1034,7 @@ def init_optimizers(

         # single output, single optimizer
         if isinstance(optimizers, Optimizer):
-            return [optimizers], None
+            return [optimizers], []

         # two lists, optimizer + lr schedulers
         elif len(optimizers) == 2 and isinstance(optimizers[0], list):

@@ -1044,7 +1044,7 @@ def init_optimizers(

         # single list or tuple, multiple optimizer
         elif isinstance(optimizers, (list, tuple)):
-            return optimizers, None
+            return optimizers, []

         # unknown configuration
         else:
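The hunk above normalizes the "no schedulers" case from None to an empty list, so downstream code can iterate schedulers unconditionally. A minimal self-contained sketch of that normalization logic; Opt and Sched are hypothetical stand-ins for torch.optim classes, not the real Lightning implementation:

```python
# Stand-ins for torch.optim.Optimizer and a lr scheduler (assumptions,
# used only so this sketch runs without torch installed).
class Opt:
    pass

class Sched:
    pass

def init_optimizers(optimizers):
    """Normalize configure_optimizers output to (list_of_optimizers, list_of_schedulers)."""
    # single optimizer object -> wrap it; no schedulers
    if isinstance(optimizers, Opt):
        return [optimizers], []
    # two lists: ([optimizers], [schedulers])
    if len(optimizers) == 2 and isinstance(optimizers[0], list):
        return optimizers[0], optimizers[1]
    # flat list or tuple of optimizers; no schedulers
    if isinstance(optimizers, (list, tuple)):
        return list(optimizers), []
    raise ValueError('unknown optimizer configuration')

opts, scheds = init_optimizers(Opt())
assert scheds == []   # an empty list, never None
for s in scheds:      # safe to iterate without an `is not None` guard
    pass
```

Returning [] instead of None is the whole point of the change: every later loop over self.lr_schedulers (stepping, checkpointing) works without a special case.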
6 changes: 3 additions & 3 deletions — pytorch_lightning/trainer/training_io.py

@@ -82,7 +82,7 @@

         # restore the lr schedulers
         lr_schedulers = checkpoint['lr_schedulers']
         for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers):
-            scheduler.load_state_dict(lrs_state)
+            scheduler['scheduler'].load_state_dict(lrs_state)

         # uses the model you passed into trainer
         model.load_state_dict(checkpoint['state_dict'])

@@ -344,7 +344,7 @@ def dump_checkpoint(self):

         # save lr schedulers
         lr_schedulers = []
         for i, scheduler in enumerate(self.lr_schedulers):
-            lr_schedulers.append(scheduler.state_dict())
+            lr_schedulers.append(scheduler['scheduler'].state_dict())

         checkpoint['lr_schedulers'] = lr_schedulers

@@ -431,7 +431,7 @@ def restore_training_state(self, checkpoint):

         # restore the lr schedulers
         lr_schedulers = checkpoint['lr_schedulers']
         for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers):
-            scheduler.load_state_dict(lrs_state)
+            scheduler['scheduler'].load_state_dict(lrs_state)
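The three hunks above all make the same fix: once each entry in self.lr_schedulers is a dict wrapping the scheduler, checkpoint code must reach through the 'scheduler' key. A runnable sketch of the save/restore round trip; FakeScheduler is a hypothetical stand-in exposing the torch-style state_dict/load_state_dict API:

```python
# Stand-in for a torch lr scheduler (assumption: only the state_dict
# interface matters for this sketch).
class FakeScheduler:
    def __init__(self, last_epoch=0):
        self.last_epoch = last_epoch

    def state_dict(self):
        return {'last_epoch': self.last_epoch}

    def load_state_dict(self, state):
        self.last_epoch = state['last_epoch']

# After this PR, trainer state is a list of scheduler *dicts*, not bare schedulers.
lr_schedulers = [{'scheduler': FakeScheduler(last_epoch=5), 'interval': 'epoch'}]

# dump_checkpoint: persist only the inner scheduler's state
checkpoint = {'lr_schedulers': [s['scheduler'].state_dict() for s in lr_schedulers]}

# restore_training_state: load state back into the inner scheduler objects
fresh = [{'scheduler': FakeScheduler(), 'interval': 'epoch'}]
for scheduler, lrs_state in zip(fresh, checkpoint['lr_schedulers']):
    scheduler['scheduler'].load_state_dict(lrs_state)

assert fresh[0]['scheduler'].last_epoch == 5
```

Without the ['scheduler'] indexing, the old code would call state_dict() on the wrapper dict itself and raise an AttributeError, which is the "problems with trainer io" this commit fixes.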

# ----------------------------------
# PRIVATE OPS