
Commit e7f2210: Fix fp16 (NVIDIA#6543) (NVIDIA#6544)
Signed-off-by: MaximumEntropy <[email protected]>
Co-authored-by: Sandeep Subramanian <[email protected]>
Signed-off-by: hsiehjackson <[email protected]>
2 people authored and hsiehjackson committed Jun 2, 2023
1 parent 42691c3 commit e7f2210
Showing 1 changed file with 2 additions and 0 deletions.
examples/nlp/language_modeling/megatron_gpt_eval.py
@@ -196,6 +196,8 @@ def main(cfg) -> None:
     pretrained_cfg.activations_checkpoint_granularity = None
     pretrained_cfg.activations_checkpoint_method = None
     pretrained_cfg.precision = trainer.precision
+    if trainer.precision == "16":
+        pretrained_cfg.megatron_amp_O2 = False
     model = MegatronGPTModel.restore_from(
         restore_path=cfg.gpt_model_file,
         trainer=trainer,
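The two added lines are the whole fix: when the trainer runs at 16-bit precision, O2-level Megatron AMP is switched off in the restored model's config before MegatronGPTModel.restore_from loads the checkpoint, presumably because the O2 mixed-precision path conflicted with plain fp16 evaluation. Below is a minimal sketch of how the override sits in an eval flow; the trainer construction, the checkpoint path, and the return_config/override_config_path round trip are illustrative assumptions, and only the two guarded lines come from this commit.

# Minimal sketch, not the full megatron_gpt_eval.py script.
from omegaconf import open_dict
from pytorch_lightning import Trainer

from nemo.collections.nlp.models.language_modeling.megatron_gpt_model import MegatronGPTModel

# Illustrative trainer; the real script builds it from its Hydra config.
# In the Lightning versions this script targeted, trainer.precision is
# the string "16" for an fp16 run.
trainer = Trainer(devices=1, accelerator="gpu", precision=16)

# Pull the config stored inside the .nemo file so it can be patched.
pretrained_cfg = MegatronGPTModel.restore_from(
    restore_path="gpt_model.nemo",  # hypothetical path
    trainer=trainer,
    return_config=True,
)
with open_dict(pretrained_cfg):
    pretrained_cfg.activations_checkpoint_granularity = None
    pretrained_cfg.activations_checkpoint_method = None
    pretrained_cfg.precision = trainer.precision
    # The fix from this commit: disable O2-level Megatron AMP under fp16.
    if trainer.precision == "16":
        pretrained_cfg.megatron_amp_O2 = False

# Restore the model with the patched config.
model = MegatronGPTModel.restore_from(
    restore_path="gpt_model.nemo",  # hypothetical path
    trainer=trainer,
    override_config_path=pretrained_cfg,
)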

