Example for how I may continue fine-tuning the peft model #44
Comments
To load from a saved LoRA adapter, replace model = get_peft_model(model, config) with:

from peft import PeftModel
model = PeftModel.from_pretrained(model, <PATH>)

To load from a training checkpoint, replace trainer.train() with trainer.train(resume_from_checkpoint=<PATH>).
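A minimal sketch of the adapter route, assuming the base model is loaded the same way the fine-tuning script loads it; the adapter directory name "lora-alpaca" is a hypothetical placeholder, and the is_trainable flag is an assumption that depends on your peft version, neither confirmed in this thread:

```python
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the base model first, the same way the fine-tuning script does.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # assumption: the base model used here
    load_in_8bit=True,
    device_map="auto",
)

# Instead of get_peft_model(model, config), attach the saved adapter.
# "lora-alpaca" stands in for the directory containing the
# adapter_config.json and adapter weights from a previous run.
# is_trainable=True (available in recent peft versions) keeps the LoRA
# weights unfrozen so training can continue instead of loading them
# in inference mode.
model = PeftModel.from_pretrained(model, "lora-alpaca", is_trainable=True)
```

The resume_from_checkpoint route is different: there <PATH> should point at one of the checkpoint-* directories the Trainer writes inside its output_dir, not at the adapter folder.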
What does <PATH> point to? Is it the folder converted from hf_ckpt, or the previously fine-tuned alpaca-lora folder?
It could be a peft_config object: https://github.com/huggingface/peft/blob/main/src/peft/peft_model.py
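For reference, the saved adapter directory also carries its config; a short sketch, again using the hypothetical "lora-alpaca" directory, showing how PeftConfig.from_pretrained recovers the config the adapter was trained with:

```python
from peft import PeftConfig

# Reads adapter_config.json from the adapter directory and returns
# the concrete config subclass (LoraConfig for a LoRA adapter).
config = PeftConfig.from_pretrained("lora-alpaca")

print(config.base_model_name_or_path)  # base model the adapter expects
print(config.r, config.lora_alpha)     # LoRA rank and scaling factor
```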
I tried resuming training using "resume_from_checkpoint", but all the keys from the saved model are mismatched.
@bui-thanh-lam I ran into the same issue. Have you solved it?
Hi @tloen, if I use trainer.train(resume_from_checkpoint=<PATH>), do I need to make any changes to training_args or to the initialization of the Trainer?
Hey! I apologize if this is a rather generic question.
I haven't been able to find good examples in the peft repository of how to continue training from a stored peft checkpoint,
and since the fine-tuning code here only shows how to fine-tune from scratch, I'd be grateful for an example of how to fine-tune alpaca from a stored peft checkpoint instead of from scratch.
Thanks, and I really appreciate all the work put into this project!