
Example for how I may continue fine tuning the peft model #44

Open
therealadityashankar opened this issue Mar 17, 2023 · 6 comments
@therealadityashankar commented Mar 17, 2023

Hey! I apologize if this is a rather generic question.

I can't find good examples in the peft repository of how to continue training from a stored peft checkpoint, and the fine-tuning code here only shows how to fine-tune from scratch. I'd be grateful for an example of how to fine-tune alpaca from a stored peft checkpoint instead of from scratch.

Thanks, and I really appreciate all the work put into this project!

@tloen (Owner) commented Mar 18, 2023

To load from a saved LoRA adapter, replace

model = get_peft_model(model, config)

with

from peft import PeftModel
model = PeftModel.from_pretrained(model, <PATH>)
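
In context, a minimal sketch (the base model id and adapter path are illustrative, not taken from this thread; is_trainable exists in recent peft versions and keeps the loaded adapter trainable, since from_pretrained otherwise loads it frozen for inference):

import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # illustrative base model id
    torch_dtype=torch.float16,
)

# Load previously trained LoRA weights on top of the base model instead of
# creating a fresh adapter with get_peft_model(model, config).
model = PeftModel.from_pretrained(
    base_model,
    "./lora-alpaca",    # illustrative: directory with adapter_config.json + adapter weights
    is_trainable=True,  # keep the LoRA weights trainable for further tuning
)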

To load from a training checkpoint, replace

trainer.train()

with

trainer.train(resume_from_checkpoint=<PATH>)
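
For example (the path is illustrative; a Trainer checkpoint directory looks like output_dir/checkpoint-200):

# Resume optimizer, scheduler, and step count from a specific checkpoint.
trainer.train(resume_from_checkpoint="lora-alpaca/checkpoint-200")

# Or pass True to let Trainer pick the latest checkpoint found under
# training_args.output_dir.
trainer.train(resume_from_checkpoint=True)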

@zachNA2 commented Mar 20, 2023

Which path does this point to? Is it the hf_ckpt converted folder, or the previously fine-tuned alpaca-lora folder?

@roguh commented Mar 20, 2023

@bui-thanh-lam

> To load from a training checkpoint, replace trainer.train() with trainer.train(resume_from_checkpoint=<PATH>)

I tried resuming training with resume_from_checkpoint, but all the keys in the saved model are mismatched.
Can you confirm that it works properly?
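
One hedged workaround sketch, assuming the mismatch comes from the Trainer checkpoint containing only the LoRA weights: load the adapter state dict from the checkpoint by hand with peft's set_peft_model_state_dict and start a fresh training run. The checkpoint directory and file names below follow the usual transformers/peft conventions and may differ in your setup.

import os
import torch
from peft import set_peft_model_state_dict

checkpoint_dir = "lora-alpaca/checkpoint-200"  # illustrative path

# Plain Trainer checkpoints store weights in pytorch_model.bin;
# peft-aware saves use adapter_model.bin.
adapter_path = os.path.join(checkpoint_dir, "pytorch_model.bin")
if not os.path.exists(adapter_path):
    adapter_path = os.path.join(checkpoint_dir, "adapter_model.bin")

adapters_weights = torch.load(adapter_path, map_location="cpu")
set_peft_model_state_dict(model, adapters_weights)  # load only the LoRA weights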

@edwardelric1202

@bui-thanh-lam I ran into the same problem. Have you solved it?

@edwardelric1202

@tloen Hi, if I use trainer.train(resume_from_checkpoint=<PATH>), do I need to make any changes to training_args or to the initialization of Trainer?
