

Contributor

@RohitMidha23 commented on Aug 12, 2024

For #254

  • Added quantized_model.yaml and peft_model.yaml, which showcase the usage of quantized and PEFT models.

  • Added a short note on authentication with HF_TOKEN.

@NathanHB
Member

Hi! Thanks for the PR, it's really helpful :) You only need to revert the indentation of the lists in the README and it should be good to go.

@RohitMidha23
Contributor Author

@NathanHB thanks for the review. Reverted the formatting, which seems to have been auto-applied, haha!

@RohitMidha23 requested a review from NathanHB on August 16, 2024 at 08:24

If you want to evaluate a model trained with `peft`, check out [examples/model_configs/peft_model.yaml](./examples/model_configs/peft_model.yaml).

Currently, `lighteval` supports applying `adapter` and `delta` weights to the base model.
Member

Adding a code snippet to show how to use those configs would be great :)
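
To make the README excerpt above more concrete, here is a minimal sketch of what an adapter-style model config of this kind might contain. All key names and placeholder values below are illustrative assumptions, not the actual contents of `examples/model_configs/peft_model.yaml` added in this PR:

```yaml
# Illustrative sketch only: the key names and values below are assumptions,
# not the actual schema of examples/model_configs/peft_model.yaml.
model:
  type: "adapter"            # hypothetical switch between adapter and delta weights
  base_params:
    # adapter weights to load on top of the base model (placeholder repo id)
    model_args: "pretrained=<org>/<peft-adapter-repo>"
    dtype: "float16"
  merged_weights:
    # base model the adapter/delta weights are applied to (placeholder repo id)
    base_model: "<org>/<base-model-repo>"
```

The example files added under `examples/model_configs/` in this PR are the authoritative reference for the exact schema.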

@RohitMidha23 requested a review from NathanHB on August 23, 2024 at 23:12
@NathanHB merged commit 8883a77 into huggingface:main on Aug 27, 2024
2 checks passed
hynky1999 pushed a commit that referenced this pull request on May 22, 2025
- Added quantized_model.yaml and peft_model.yaml which showcase the usage of Quantization and PEFT models.
- Added a short note on Authentication with HF_TOKEN.

---------

Co-authored-by: Nathan Habib <[email protected]>
NathanHB added a commit that referenced this pull request on Sep 19, 2025
- Added quantized_model.yaml and peft_model.yaml which showcase the usage of Quantization and PEFT models.
- Added a short note on Authentication with HF_TOKEN.

---------

Co-authored-by: Nathan Habib <[email protected]>