Describe the bug

I have enabled 4-bit quantization for fine-tuning mistralai/Mistral-7B-v0.1. Ludwig 0.10.1 seems to pin `bitsandbytes < 0.41.0`, but when I run the trainer I get the following warning:

```
You are calling `save_pretrained` to a 4-bit converted model, but your `bitsandbytes` version doesn't support it. If you want to save 4-bit models, make sure to have `bitsandbytes>=0.41.3` installed.
```
To Reproduce
Steps to reproduce the behavior:
1. Create a `model.yaml` with 4-bit quantization enabled.
2. Run `ludwig train --config model.yaml --dataset "ludwig://alpaca"`
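For reference, a minimal config of this general shape exercises the code path in question. The field names follow Ludwig's LLM fine-tuning schema, but the exact values here are an illustrative sketch, not the reporter's actual `model.yaml`:

```yaml
# Sketch of a 4-bit QLoRA fine-tuning config for Ludwig (illustrative only)
model_type: llm
base_model: mistralai/Mistral-7B-v0.1

quantization:
  bits: 4          # enables 4-bit (bitsandbytes) quantization

adapter:
  type: lora

input_features:
  - name: instruction
    type: text

output_features:
  - name: output
    type: text

trainer:
  type: finetune
```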
Expected behavior
Should not show the warning about the `bitsandbytes` version not supporting `save_pretrained` for 4-bit quantization.

Environment (please complete the following information):
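The relevant package versions for this report can be collected with a short snippet; the package list below is an assumption about which versions matter here (adjust as needed):

```python
# Print installed versions of the packages relevant to this issue.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("ludwig", "bitsandbytes", "transformers", "peft", "torch"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```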
@alexsherstinsky