[TTS] Fix TTS recipes with PTL 2.0 (#7188)
Signed-off-by: Ryan <[email protected]>
rlangman committed Aug 9, 2023
1 parent 5cffd9a commit ada4fe5
Showing 7 changed files with 7 additions and 7 deletions.
examples/tts/conf/audio_codec/encodec.yaml (2 changes: 1 addition & 1 deletion)
@@ -138,7 +138,7 @@ trainer:
   num_nodes: 1
   devices: 1
   accelerator: gpu
-  strategy: ddp
+  strategy: ddp_find_unused_parameters_true
   precision: 32 # Vector quantization only works with 32-bit precision.
   max_epochs: ${max_epochs}
   accumulate_grad_batches: 1
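The recurring YAML change in this commit swaps `strategy: ddp` for the PyTorch Lightning 2.x shorthand `ddp_find_unused_parameters_true`. In PTL 2.0, plain `ddp` defaults to `find_unused_parameters=False`, which fails for models whose training step leaves some parameters without gradients (common in GAN-style TTS vocoders that alternate generator and discriminator updates). The shorthand string is equivalent to passing `DDPStrategy(find_unused_parameters=True)` to the `Trainer`. The helper below is an illustrative re-creation of how such a shorthand decomposes, not Lightning's actual code:

```python
# Illustrative only: PTL 2.x registers shorthand strategy names such as
# "ddp_find_unused_parameters_true", equivalent to constructing
# DDPStrategy(find_unused_parameters=True). parse_strategy is a
# hypothetical sketch of that decomposition, not Lightning's implementation.
def parse_strategy(name: str):
    """Split a shorthand strategy string into (base_strategy, kwargs)."""
    suffix_map = {
        "_find_unused_parameters_true": {"find_unused_parameters": True},
        "_find_unused_parameters_false": {"find_unused_parameters": False},
    }
    for suffix, kwargs in suffix_map.items():
        if name.endswith(suffix):
            return name[: -len(suffix)], dict(kwargs)
    # No recognized suffix: the name is the base strategy with default kwargs.
    return name, {}
```

Under this reading, `ddp_find_unused_parameters_true` resolves to the `ddp` strategy with unused-parameter detection re-enabled, restoring the pre-2.0 DDP default that these recipes relied on.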
examples/tts/conf/hifigan/hifigan.yaml (2 changes: 1 addition & 1 deletion)
@@ -72,7 +72,7 @@ trainer:
   num_nodes: 1
   devices: 1
   accelerator: gpu
-  strategy: ddp
+  strategy: ddp_find_unused_parameters_true
   precision: 32
   max_steps: ${model.max_steps}
   accumulate_grad_batches: 1
examples/tts/conf/hifigan/hifigan_44100.yaml (2 changes: 1 addition & 1 deletion)
@@ -72,7 +72,7 @@ trainer:
   num_nodes: 1
   devices: -1
   accelerator: gpu
-  strategy: ddp
+  strategy: ddp_find_unused_parameters_true
   precision: 16
   max_steps: ${model.max_steps}
   accumulate_grad_batches: 1
examples/tts/conf/hifigan_dataset/hifigan_22050.yaml (2 changes: 1 addition & 1 deletion)
@@ -129,7 +129,7 @@ trainer:
   num_nodes: 1
   devices: 1
   accelerator: gpu
-  strategy: ddp
+  strategy: ddp_find_unused_parameters_true
   precision: 16
   max_epochs: ${max_epochs}
   accumulate_grad_batches: 1
examples/tts/conf/hifigan_dataset/hifigan_44100.yaml (2 changes: 1 addition & 1 deletion)
@@ -129,7 +129,7 @@ trainer:
   num_nodes: 1
   devices: 1
   accelerator: gpu
-  strategy: ddp
+  strategy: ddp_find_unused_parameters_true
   precision: 16
   max_epochs: ${max_epochs}
   accumulate_grad_batches: 1
examples/tts/conf/univnet/univnet.yaml (2 changes: 1 addition & 1 deletion)
@@ -78,7 +78,7 @@ trainer:
   num_nodes: 1
   devices: 1
   accelerator: gpu
-  strategy: ddp
+  strategy: ddp_find_unused_parameters_true
   precision: 32
   max_steps: ${model.max_steps}
   accumulate_grad_batches: 1
nemo/collections/tts/models/audio_codec.py (2 changes: 1 addition & 1 deletion)
@@ -267,7 +267,7 @@ def training_step(self, batch, batch_idx):
         self.log_dict(metrics, on_step=True, sync_dist=True)
         self.log("t_loss", train_loss_mel, prog_bar=True, logger=False, sync_dist=True)
 
-    def training_epoch_end(self, outputs):
+    def on_train_epoch_end(self):
         self.update_lr("epoch")
 
     def validation_step(self, batch, batch_idx):
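The Python change reflects the PTL 2.0 hook rename: `training_epoch_end(self, outputs)` was removed, and modules now override `on_train_epoch_end(self)`, which receives no collected outputs; a model that needs per-step results must cache them itself. A minimal self-contained sketch of that migration pattern (a plain class and a placeholder loss stand in for `LightningModule` and the real codec loss, so it runs without pytorch_lightning installed):

```python
# Sketch of the PTL 2.0 epoch-end migration shown in the diff above.
# AudioCodecSketch is a hypothetical stand-in for a LightningModule.
class AudioCodecSketch:
    def __init__(self):
        self.step_losses = []  # manual cache; PTL 2.x no longer collects step outputs

    def training_step(self, batch, batch_idx):
        loss = float(batch)  # placeholder for the real codec training loss
        self.step_losses.append(loss)
        return loss

    def on_train_epoch_end(self):  # replaces training_epoch_end(self, outputs)
        # Aggregate the manually cached outputs, then reset for the next epoch.
        avg = sum(self.step_losses) / len(self.step_losses)
        self.step_losses.clear()
        return avg  # audio_codec.py instead calls self.update_lr("epoch") here
```

Since `audio_codec.py` only updates the learning-rate schedule at epoch end and never used the `outputs` argument, the fix in this commit is just the one-line signature change.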
