Hello,

I am fairly new to LLMs in general (I only started studying two weeks ago), so please excuse me if I say or ask something silly.

I stumbled upon this blog post from HuggingFace: https://huggingface.co/blog/trl-peft

After a quick check, it seems that training.py currently does not support load_in_8bit. Is there a specific reason not to do so?

(I also want to try adding such support to flan-alpaca.)
Hi, it should be possible to support load_in_8bit for training since we already support LoRA. Contributions are welcome!
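For reference, the pattern described in the TRL-PEFT blog post is roughly the following. This is only a minimal sketch, assuming a Flan-T5 checkpoint and the usual transformers + peft + bitsandbytes stack; the model name, hyperparameters, and target_modules are illustrative assumptions, not taken from this repo's training.py.

```python
# Minimal sketch: load a Flan-T5 checkpoint in 8-bit via bitsandbytes and
# attach LoRA adapters so that only the adapter weights are trained.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "google/flan-t5-xl"  # assumption: any Flan-T5 checkpoint should work the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)

# load_in_8bit quantizes the frozen base weights with bitsandbytes;
# device_map="auto" lets accelerate place the layers across available devices.
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto",
)

# Prepares the quantized base model for training (e.g. casts norms/output head
# to fp32) so gradients flow only through the trainable adapter parameters.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="SEQ_2_SEQ_LM",
    target_modules=["q", "v"],  # assumption: T5 attention projection module names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters require gradients
```

From here the model can be passed to the existing training loop as usual; the 8-bit base weights stay frozen and only the LoRA parameters are updated.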