
Conversation

@lun-4 (Contributor) commented on May 17, 2023

I noticed that the finetuning scripts assume the dataset will be instructional, but what I plan to do isn't, so I drafted an implementation of dataset preparation and the relevant changes to LLaMA-Adapter to support non-instructional tuning.

The dataset preparation was copied from prepare_alpaca.py, with the instruction-specific setup stripped out.
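
For context, here is a rough sketch of what a non-instructional prepare script can look like, modeled on the structure of prepare_alpaca.py but with the prompt template removed. The data layout (a JSON list of `{"text": ...}` records), paths, and function names are illustrative and not the exact contents of this PR; it assumes the `Tokenizer.encode` interface used elsewhere in the repo.

```python
# Hypothetical sketch of non-instructional dataset preparation (not the PR's
# exact code). Each sample is plain text rather than an instruction/input/
# output triple, so no prompt template is applied.
import json
from pathlib import Path

import torch
from lit_llama.tokenizer import Tokenizer


def prepare(
    data_file: Path = Path("data/mydata/samples.json"),   # assumed: [{"text": ...}, ...]
    tokenizer_path: Path = Path("checkpoints/lit-llama/tokenizer.model"),
    destination: Path = Path("data/mydata/train.pt"),
    max_seq_length: int = 256,
) -> None:
    tokenizer = Tokenizer(tokenizer_path)
    with open(data_file) as f:
        samples = json.load(f)

    dataset = []
    for sample in samples:
        # Tokenize the raw text; labels equal the inputs, so the loss covers
        # the whole sequence instead of only a response section.
        encoded = tokenizer.encode(
            sample["text"], bos=True, eos=True, max_length=max_seq_length
        )
        dataset.append({"input_ids": encoded, "labels": encoded.clone()})

    torch.save(dataset, destination)


if __name__ == "__main__":
    from jsonargparse import CLI

    CLI(prepare)
```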

I'm currently testing this code by training a model and it seems to be working well, but I'm opening it up for discussion and review beforehand.

@lantiga (Collaborator) left a comment

Thank you @lun-4, looks good!

Could you:

  • add the same instruction_tuning parameter (defaulting to True) to lora.py and full.py? (see the sketch at the end of this comment)
  • also add the corresponding CLI option to the finetune_ scripts?

Thanks a lot!
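
For illustration only, here is a minimal sketch of how such a flag could gate the Alpaca-style prompt wrapping. The function names, defaults, and template text approximate the real scripts but are not the PR's actual diff.

```python
# Hypothetical sketch: an instruction_tuning flag (default True) decides
# whether the raw prompt is wrapped in the Alpaca-style template before
# being handed to the model.
def generate_prompt(instruction: str) -> str:
    # No-input variant of the Alpaca prompt template used by prepare_alpaca.py.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:"
    )


def main(prompt: str = "Hello, my name is", instruction_tuning: bool = True) -> None:
    if instruction_tuning:
        final_prompt = generate_prompt(prompt)
    else:
        # A model tuned on plain text gets the prompt unchanged.
        final_prompt = prompt
    # In the real scripts this string would be tokenized and fed to the model;
    # printing stands in for that here.
    print(final_prompt)


if __name__ == "__main__":
    from jsonargparse import CLI

    CLI(main)  # exposes --instruction_tuning on the command line
```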

@lantiga (Collaborator) commented on May 29, 2023

Thank you @lun-4, merging! A quick howto would be super appreciated :-)

@lantiga merged commit ffba202 into Lightning-AI:main on May 29, 2023