finetune_lora upgrades #2086
Conversation
for more information, see https://pre-commit.ci
…; fixed test_cli and other errors
…s/litgpt into finetune_lora_upgrade divergent
UPDATE: tested it in a 4xA100 environment; works well for the multi-GPU setup.
Borda
left a comment
LGTM, just confused about the legacy stuff
Essentially it's the exact LoRA implementation from before this PR. The changes are quite breaking, so my rationale is that it may be apt to still give users the option to run the previous version of lora.py, hence the name.
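(A hypothetical sketch of how such a legacy fallback could be exposed; the flag and function names below are illustrative stand-ins, not litgpt's actual API:)

```python
# Hypothetical sketch of a legacy fallback switch. The names here
# (use_legacy, legacy_finetune, upgraded_finetune) are illustrative
# only; litgpt's real entry points differ.

def legacy_finetune() -> str:
    # Stand-in for the pre-PR lora.py implementation.
    return "legacy lora.py"

def upgraded_finetune() -> str:
    # Stand-in for the upgraded implementation in this PR.
    return "upgraded lora.py"

def finetune_lora(use_legacy: bool = False) -> str:
    """Dispatch to the legacy or the upgraded LoRA entry point."""
    return legacy_finetune() if use_legacy else upgraded_finetune()

print(finetune_lora(use_legacy=True))  # legacy lora.py
print(finetune_lora())                 # upgraded lora.py
```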
KaelanDt
left a comment
Looks great! I think we should add some tests for the new functions, though; left some comments.
Hi, any updates on this PR?
Let's check the failing tests, please.
Just checked, it's an error with authorization that is unrelated to the code. Also, I noticed that the pytests for …
Hi @Borda, there are still HF authorization-related issues in the …. Would like your feedback on this, thanks!
Yes, it is still there, so in such a case I can open a "shadow" PR for you that will run your contribution as …
Just failing on GPU; all the rest is fine.
… into finetune_lora_upgrade need to merge
… into finetune_lora_upgrade new remote changes
… into finetune_lora_upgrade changes in remote
@ysjprojects nice push! 💟
Hi @Borda, the test cases are fixed! The PR merged by itself; not sure if that's the intended effect.
Integrated some improvements to finetune_lora based on @lantiga's reference implementation.
Most of the optimizations are there, but I did not include the model registry feature because litgpt does not currently support litmodels, and adding litmodels support is a PR in itself IMO.
Still WIP, as I need to run some experiments (and potentially modify some test cases).
closes #2104
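For context, the low-rank update that LoRA finetuning applies to a frozen weight matrix can be sketched in plain Python. The shapes and helper names below are illustrative only; litgpt's actual implementation lives in lora.py and uses PyTorch.

```python
# Minimal pure-Python sketch of the LoRA idea behind finetune_lora:
# the frozen weight W (out x in) gains a trainable low-rank update
# (alpha / r) * B @ A, with B (out x r) and A (r x in).
# Names and shapes are illustrative, not litgpt's API.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_merge(W, A, B, alpha, r):
    """Return the merged weight W + (alpha / r) * B @ A."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Tiny example: 2x2 identity W, rank-1 update with B (2x1), A (1x2).
W = [[1, 0], [0, 1]]
B = [[1], [1]]
A = [[2, 3]]
print(lora_merge(W, A, B, alpha=1, r=1))  # [[3.0, 3.0], [2.0, 4.0]]
```

At inference time the update can be merged into W once, so the finetuned model incurs no extra per-step cost, which is one reason LoRA is attractive for finetuning pipelines like this one.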