
Indentation problem in a code example #32

Open
YFDing0208 opened this issue Oct 30, 2023 · 1 comment

@YFDing0208

First of all, thank you for your work.
In the example code for Section 4.4.2 (LLaMA distributed training) in Chapter 4, Distributed Training (page 115 of the book), the part that saves the model after training appears to have an indentation error, as shown below:

if args.output_dir is not None:
print_rank_0('saving the final model ...', args.global_rank)
model = convert_lora_to_linear_layer(model)

if args.global_rank == 0:
    save_hf_format(model, tokenizer, args)

if args.zero_stage == 3:
    # For zero stage 3, each gpu only has a part of the model, so we need a special save function
    save_zero_three_model(model,
                          args.global_rank,
                          args.output_dir,
                          zero_stage=args.zero_stage)

The code following the `if args.output_dir is not None:` check should be indented.

@qzhangFDU
Contributor

Yes, that is a layout artifact of how the code is displayed; all of these lines should be indented to follow the `model = ...` line (i.e., nested inside the `if args.output_dir is not None:` block).
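
For reference, a sketch of how the block presumably should read per the reply above, assuming the helpers (`print_rank_0`, `convert_lora_to_linear_layer`, `save_hf_format`, `save_zero_three_model`) are those defined in the book's LLaMA training script, with every statement nested inside the `if args.output_dir is not None:` branch:

if args.output_dir is not None:
    print_rank_0('saving the final model ...', args.global_rank)
    # Merge the LoRA adapters back into plain linear layers before saving
    model = convert_lora_to_linear_layer(model)

    # Only rank 0 writes the Hugging Face-format checkpoint
    if args.global_rank == 0:
        save_hf_format(model, tokenizer, args)

    if args.zero_stage == 3:
        # For ZeRO stage 3, each GPU holds only a shard of the model,
        # so a special save function is needed
        save_zero_three_model(model,
                              args.global_rank,
                              args.output_dir,
                              zero_stage=args.zero_stage)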
