First of all, thank you for your work on the book.

In Chapter 4 (Distributed Training), the example code for LLAMA distributed training in section 4.4.2 (page 115 of the book) appears to contain an indentation error in the part that saves the model after training, shown below:

```python
if args.output_dir is not None:
print_rank_0('saving the final model ...', args.global_rank)
model = convert_lora_to_linear_layer(model)
if args.global_rank == 0:
save_hf_format(model, tokenizer, args)
if args.zero_stage == 3:
# For zero stage 3, each gpu only has a part of the model, so we need a special save function
save_zero_three_model(model,
args.global_rank,
args.output_dir,
zero_stage=args.zero_stage)
```

The code following the `if args.output_dir is not None:` check should be indented into the body of that `if` statement.
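For reference, a sketch of what the corrected control flow would presumably look like, with the helper functions stubbed out (in the book they come from the DeepSpeed-Chat training utilities, so the stub bodies here are placeholders, not the real implementations). The key point is that everything after the `if args.output_dir is not None:` check belongs inside that `if` block:

```python
from types import SimpleNamespace

# Placeholder stubs for the helpers used in the book's example
# (the real ones live in the DeepSpeed-Chat utils module).
def print_rank_0(msg, rank):
    # Only rank 0 (or single-process runs) prints.
    if rank <= 0:
        print(msg)

def convert_lora_to_linear_layer(model):
    return model  # stub: would merge LoRA weights back into linear layers

def save_hf_format(model, tokenizer, args):
    print("saved HF format")  # stub

def save_zero_three_model(model, rank, output_dir, zero_stage):
    print("saved zero-3 model")  # stub

def save_final_model(model, tokenizer, args):
    # Everything below is indented into the if-block --
    # this is the indentation fix the issue describes.
    if args.output_dir is not None:
        print_rank_0('saving the final model ...', args.global_rank)
        model = convert_lora_to_linear_layer(model)
        if args.global_rank == 0:
            save_hf_format(model, tokenizer, args)
        if args.zero_stage == 3:
            # For ZeRO stage 3, each GPU holds only a shard of the
            # model, so a dedicated save function is needed.
            save_zero_three_model(model,
                                  args.global_rank,
                                  args.output_dir,
                                  zero_stage=args.zero_stage)

args = SimpleNamespace(output_dir='out', global_rank=0, zero_stage=3)
save_final_model(model=None, tokenizer=None, args=args)
```

With the original (unindented) version, the saving code would run even when `args.output_dir` is `None`, and the `if` statement itself would be a syntax error in Python, since an `if` header must be followed by an indented body.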