IkhlasAlhussien changed the title from "[Usage]: Loading a model with BitsandBytes quantization with 8bit" to "[Usage]: Loading a model with bitsandbytes quantization with 8bit" on Sep 23, 2024.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
How can I load a model using bitsandbytes quantization in 8-bit format? I'm currently loading the model with the following code:
This loads the model in 4-bit format, but I can't figure out how to load it in 8-bit. What should I change to load the model in 8-bit instead?
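The code snippet referenced above was not captured in the issue text. As context, a minimal sketch of the kind of loading call the question describes is shown below, assuming vLLM's `LLM` entry point with bitsandbytes quantization; the model name is an illustrative placeholder, and which bit widths are supported depends on the vLLM version installed.

```python
# Hypothetical sketch (the original snippet was not captured).
# Assumes vLLM with bitsandbytes support; model name is illustrative.
from vllm import LLM

llm = LLM(
    model="huggyllama/llama-7b",   # placeholder model, not from the issue
    quantization="bitsandbytes",   # enables bitsandbytes quantization
    load_format="bitsandbytes",    # quantize weights at load time
)
```

With this configuration, vLLM applies bitsandbytes 4-bit quantization by default, which matches the behavior the question describes; whether an 8-bit variant can be selected here is exactly what the issue is asking.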