Question about saving peft checkpoint #565
Comments
Hello @nhanph! No, the removal is not useless. If you check the contents of `trlx/trlx/trainer/accelerate_base_trainer.py` (line 309 at commit `bcd237f`): when `model.save_pretrained` is called with `heads_only=True`, only the value heads are kept there.
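A minimal, dependency-free sketch of what "keep only the value heads" saving could look like. The key prefix `v_head`, the function name, and the use of `pickle` in place of `torch.save` are all illustrative assumptions, not trlx's actual implementation:

```python
import os
import pickle
import tempfile

def save_heads_only(state_dict, save_dir, head_prefix="v_head"):
    """Filter a state dict down to value-head weights and save only those.

    `head_prefix` and this function name are illustrative assumptions,
    not trlx's real code. pickle stands in for torch.save so the sketch
    stays dependency-free.
    """
    heads = {k: v for k, v in state_dict.items() if k.startswith(head_prefix)}
    os.makedirs(save_dir, exist_ok=True)
    with open(os.path.join(save_dir, "pytorch_model.bin"), "wb") as f:
        pickle.dump(heads, f)
    return heads

# Demo: only the value-head entry survives filtering
full_sd = {
    "transformer.wte.weight": [0.0, 0.0],
    "v_head.summary.weight": [1.0, 1.0],
}
with tempfile.TemporaryDirectory() as tmp:
    kept = save_heads_only(full_sd, tmp)
```

The point is that the saved `pytorch_model.bin` then contains only the small value-head tensors, while the frozen base-model weights are not duplicated into every checkpoint.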
Thank you @maxreciprocate, I get the point about saving the model's value head now. My original question comes from an observation when running the ILQL training script, where I see a
🐛 Describe the bug
From my understanding, when saving checkpoints for peft models (see here), trlx removes `pytorch_model.bin` before calling `save_pretrained`, which makes the removal useless in my opinion. Is this intentional, or should we move the removal code after `save_pretrained` is called? Here is an example of a directory resulting from `save_pretrained`:
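The reordering suggested above can be sketched as follows. `save_peft_checkpoint` and `FakePeftModel` are hypothetical names for illustration; the only assumption about the model is that it exposes a Hugging Face-style `save_pretrained`:

```python
import os
import tempfile

def save_peft_checkpoint(model, directory):
    """Delete the full-weights file only *after* save_pretrained runs,
    so the deletion cannot be undone by files save_pretrained writes.

    This is a sketch of the ordering proposed in the issue, not trlx code.
    """
    model.save_pretrained(directory)
    stale = os.path.join(directory, "pytorch_model.bin")
    if os.path.exists(stale):
        os.remove(stale)

class FakePeftModel:
    """Stand-in model whose save_pretrained writes both a full-weights
    file and a peft adapter file, mimicking the directory in the report."""
    def save_pretrained(self, directory):
        os.makedirs(directory, exist_ok=True)
        for name in ("pytorch_model.bin", "adapter_model.bin"):
            with open(os.path.join(directory, name), "wb") as f:
                f.write(b"\x00")

tmp = tempfile.mkdtemp()
save_peft_checkpoint(FakePeftModel(), tmp)
remaining = sorted(os.listdir(tmp))
```

With the removal done last, only the adapter file is left in the checkpoint directory, which is the outcome the reporter seems to expect.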
Which trlX version are you using?
0.7.0
Additional system and package information
3.10.12