
Enable 4-bit quantization on AMD Radeon GPUs#3748

Merged
danielhanchen merged 9 commits into unslothai:main from sstamenk:amd_4bit on Dec 20, 2025

Conversation

@sstamenk
Contributor

Block size 64, which is required for 4-bit quantized models, was enabled for Radeon GPUs in Bitsandbytes with #1748.

@gemini-code-assist
Contributor

Summary of Changes

Hello @sstamenk, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Unsloth's compatibility with AMD GPUs by enabling 4-bit quantization for Radeon (Navi) series. The core change involves updating the device capability detection for HIP environments, allowing the system to correctly identify and utilize AMD GPUs that meet the block size requirements for efficient 4-bit model loading. This improvement broadens the range of hardware that can benefit from memory-optimized models, particularly for users leveraging AMD's ROCm platform.

Highlights

  • AMD Radeon 4-bit Quantization: Enabled 4-bit quantization support for AMD Radeon (Navi) GPUs by ensuring compatibility with the required block size of 64, leveraging recent updates in Bitsandbytes.
  • Device Compatibility Logic Refinement: Refactored the logic for ALLOW_PREQUANTIZED_MODELS on HIP devices, switching from inspecting source code to directly using ROCM_WARP_SIZE_64 from bitsandbytes.cextension for more accurate and robust detection of compatible block sizes.
  • Code Clean-up: Removed an unnecessary inspect import from unsloth/device_type.py.
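The refactored check described in the highlights can be sketched roughly as follows. This is a hypothetical reconstruction of the logic, not the actual diff from unsloth/device_type.py; in particular, the flag polarity (wave32 Navi devices supporting block size 64, wave64 devices not) is an assumption:

```python
# Hypothetical sketch of the refactored detection logic -- not the actual
# diff. Assumption: ROCM_WARP_SIZE_64 is True on wave64 GPUs (e.g.
# CDNA/Instinct) and False on wave32 RDNA/Navi GPUs, which support the
# block size 64 that pre-quantized 4-bit models require.
try:
    from bitsandbytes.cextension import ROCM_WARP_SIZE_64

    # Pre-quantized 4-bit checkpoints need block size 64, which is
    # available on wave32 (Navi) devices under ROCm.
    ALLOW_PREQUANTIZED_MODELS = not ROCM_WARP_SIZE_64
    ALLOW_BITSANDBYTES = True
except Exception:
    print(
        "Unsloth: `bitsandbytes` is not installed - 4bit QLoRA unallowed, "
        "but 16bit and full finetuning works!"
    )
    ALLOW_PREQUANTIZED_MODELS = False
    ALLOW_BITSANDBYTES = False
```

Checking the flag directly avoids the fragility of the previous approach, which inspected the bitsandbytes source code at runtime.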


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request enables 4-bit quantization on AMD Radeon GPUs by updating the detection logic for bitsandbytes capabilities. The change replaces a fragile source code inspection with a much more robust check against the ROCM_WARP_SIZE_64 flag from bitsandbytes. This is a significant improvement. I've added one comment regarding the error handling logic within the try...except block, which could lead to incorrect flag values if an import fails. Addressing this would make the implementation even more robust.

@sstamenk
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a great improvement for enabling 4-bit quantization on AMD Radeon GPUs. The change from inspecting source code to checking the ROCM_WARP_SIZE_64 flag from bitsandbytes is much more robust and maintainable. The improved error handling and logging for when bitsandbytes is not found are also valuable additions. I have one minor suggestion to make the exception handling even more specific, but overall, this is a solid contribution.

Comment on lines +99 to +104
except Exception:
    print(
        "Unsloth: `bitsandbytes` is not installed - 4bit QLoRA unallowed, but 16bit and full finetuning works!"
    )
    ALLOW_PREQUANTIZED_MODELS = False
    ALLOW_BITSANDBYTES = False
Contributor


Severity: medium

While catching Exception is an improvement over a bare except, it's good practice to be more specific. The most likely error here is an ImportError if bitsandbytes isn't installed or if the required components are missing. Catching ImportError specifically makes the error handling more precise and the log message more accurate for the user.

I've also slightly rephrased the message to be more general, as the problem could be a missing installation, an incorrect version, or a broken setup.

Suggested change

-except Exception:
-    print(
-        "Unsloth: `bitsandbytes` is not installed - 4bit QLoRA unallowed, but 16bit and full finetuning works!"
-    )
+except ImportError:
+    print(
+        "Unsloth: `bitsandbytes` is not installed or is not set up correctly. 4bit QLoRA unallowed, but 16bit and full finetuning works!"
+    )
     ALLOW_PREQUANTIZED_MODELS = False
     ALLOW_BITSANDBYTES = False
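The reviewer's point can be illustrated in isolation: a broad `except Exception` also swallows unrelated bugs, while `except ImportError` only fires for a missing or broken install. A minimal, self-contained sketch (the helper name is illustrative, not project code):

```python
import importlib


def probe_optional_dependency(module_name):
    """Return True if the module imports cleanly, False if it is absent."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        # Only a genuinely missing or broken install lands here; other
        # errors (bugs in our own code) still surface to the caller.
        return False


present = probe_optional_dependency("json")  # stdlib module -> True
missing = probe_optional_dependency("definitely_not_a_real_module")  # -> False
```

Note that `ModuleNotFoundError` is a subclass of `ImportError`, so catching `ImportError` covers both the missing-package and broken-setup cases the suggested message describes.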

@danielhanchen
Contributor

Oh fantastic! Thank you! Great that pre-quantized models work!

@danielhanchen danielhanchen merged commit 191a951 into unslothai:main Dec 20, 2025
1 check passed
@sstamenk sstamenk deleted the amd_4bit branch December 20, 2025 04:28
@sstamenk
Contributor Author

@danielhanchen Thanks for the review! Could this page also be updated to reflect these changes? https://docs.unsloth.ai/get-started/install-and-update/amd#troubleshooting
