Enable 4-bit quantization on AMD Radeon GPUs #3748

danielhanchen merged 9 commits into unslothai:main
Conversation
Summary of Changes

Hello @sstamenk, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request enhances Unsloth's compatibility with AMD GPUs by enabling 4-bit quantization for the Radeon (Navi) series. The core change updates device capability detection for HIP environments, allowing the system to correctly identify and utilize AMD GPUs that meet the block size requirements for efficient 4-bit model loading. This broadens the range of hardware that can benefit from memory-optimized models, particularly for users on AMD's ROCm platform.

Highlights
Code Review
This pull request enables 4-bit quantization on AMD Radeon GPUs by updating the detection logic for bitsandbytes capabilities. The change replaces a fragile source code inspection with a much more robust check against the ROCM_WARP_SIZE_64 flag from bitsandbytes. This is a significant improvement. I've added one comment regarding the error handling logic within the try...except block, which could lead to incorrect flag values if an import fails. Addressing this would make the implementation even more robust.
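To make the detection approach described above concrete, here is a minimal, hypothetical sketch of checking a build-time flag instead of inspecting library source code. The `ROCM_WARP_SIZE_64` flag name comes from the review text; the helper function and its fallback behavior are illustrative assumptions, not the PR's actual code.

```python
# Hypothetical sketch: detect ROCm warp-size-64 support by reading a flag
# exported by the bitsandbytes module, rather than parsing its source code.
# The helper name and the getattr fallback are assumptions for illustration.

def supports_rocm_block64(bnb_module) -> bool:
    """Return True if the bitsandbytes build reports warp size 64 (ROCm)."""
    # getattr with a default avoids AttributeError on builds that
    # predate the flag (e.g. CUDA-only installs).
    return bool(getattr(bnb_module, "ROCM_WARP_SIZE_64", False))


if __name__ == "__main__":
    import types

    rocm_build = types.SimpleNamespace(ROCM_WARP_SIZE_64=True)
    cuda_build = types.SimpleNamespace()  # flag absent on non-ROCm builds
    print(supports_rocm_block64(rocm_build))  # -> True
    print(supports_rocm_block64(cuda_build))  # -> False
```

Reading an explicit flag like this keeps the check stable across bitsandbytes releases, whereas source-code inspection breaks whenever the upstream implementation is refactored.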
/gemini review
Code Review
This pull request is a great improvement for enabling 4-bit quantization on AMD Radeon GPUs. The change from inspecting source code to checking the ROCM_WARP_SIZE_64 flag from bitsandbytes is much more robust and maintainable. The improved error handling and logging for when bitsandbytes is not found are also valuable additions. I have one minor suggestion to make the exception handling even more specific, but overall, this is a solid contribution.
unsloth/device_type.py
Outdated
```python
except Exception:
    print(
        "Unsloth: `bitsandbytes` is not installed - 4bit QLoRA unallowed, but 16bit and full finetuning works!"
    )
    ALLOW_PREQUANTIZED_MODELS = False
    ALLOW_BITSANDBYTES = False
```
While catching Exception is an improvement over a bare except, it's good practice to be more specific. The most likely error here is an ImportError if bitsandbytes isn't installed or if the required components are missing. Catching ImportError specifically makes the error handling more precise and the log message more accurate for the user.
I've also slightly rephrased the message to be more general, as the problem could be a missing installation, an incorrect version, or a broken setup.
Suggested change:

```diff
-except Exception:
-    print(
-        "Unsloth: `bitsandbytes` is not installed - 4bit QLoRA unallowed, but 16bit and full finetuning works!"
-    )
-    ALLOW_PREQUANTIZED_MODELS = False
-    ALLOW_BITSANDBYTES = False
+except ImportError:
+    print(
+        "Unsloth: `bitsandbytes` is not installed or is not set up correctly. 4bit QLoRA unallowed, but 16bit and full finetuning works!"
+    )
+    ALLOW_PREQUANTIZED_MODELS = False
+    ALLOW_BITSANDBYTES = False
```
Oh fantastic! Thank you! Great that pre-quantized models work!
for more information, see https://pre-commit.ci
@danielhanchen Thanks for the review! Could this page also be updated to reflect these changes? https://docs.unsloth.ai/get-started/install-and-update/amd#troubleshooting
Block size 64, which 4-bit quantized models require, was enabled for Radeon GPUs in bitsandbytes with #1748.