
Do online fp8 quantization while loading weights instead of in process_weights_after_loading, reducing memory overhead #17945

Open
fxmarty-amd wants to merge 17 commits into sgl-project:main from fxmarty-amd:online-fp8-quantization-loader

Conversation

Contributor

@fxmarty-amd fxmarty-amd commented Jan 29, 2026

As per title.

The current implementation of fp8 online quantization first initializes and loads all weights in bf16, and only afterwards quantizes them in process_weights_after_loading. This is inefficient in terms of GPU memory and may lead to OOM during loading, even though the quantized FP8 model itself would fit in memory.

This PR moves to doing online quantization in the weight loader, similar to #7392.
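
For illustration, here is a minimal sketch of the idea (the helper names and the shard-counting logic are assumptions for this sketch, not the PR's exact code): quantize a parameter to fp8 inside the weight loader as soon as its last shard has been copied in, so the bf16 staging buffer is freed immediately.

import torch

FP8_DTYPE = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8_DTYPE).max

def quantize_fp8_per_tensor(weight: torch.Tensor):
    # Dynamic per-tensor quantization: map the weight's absolute maximum
    # to the largest representable fp8 value.
    scale = weight.abs().amax().float().clamp(min=1e-12) / FP8_MAX
    qweight = (weight / scale).clamp(-FP8_MAX, FP8_MAX).to(FP8_DTYPE)
    return qweight, scale

def wrap_with_online_fp8(original_loader, total_numel, state):
    # Wrap a module's existing weight_loader so the bf16 buffer is quantized
    # (and released) once every element of the parameter has been written.
    def loader(param, loaded_weight, *args, **kwargs):
        original_loader(param, loaded_weight, *args, **kwargs)
        # Naive progress tracking; the PR instead tracks actual copies with a
        # TorchDispatchMode counter, since loaders may write narrow()'d views.
        state["copied"] += loaded_weight.numel()
        if state["copied"] == total_numel:
            qweight, scale = quantize_fp8_per_tensor(param.data)
            param.data = qweight            # fp8 replaces the bf16 buffer
            param.weight_scale = scale      # consumed later by the fp8 GEMM
    return loader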

Fixes #2895
Fixes #8337

Left to do before merge:

  • do the same for Fp8MoEMethod
  • find out why online fp8 accuracy is so low (even on current main) for the Qwen/Qwen2.5-1.5B-Instruct test model (and also Qwen3 8B): is this expected, or a bug?

@github-actions github-actions bot added the quant (LLM Quantization) label Jan 29, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @fxmarty-amd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant optimization for FP8 quantization by implementing an "online" quantization strategy. Instead of loading all model weights in a higher precision (bf16) and then quantizing them in a separate post-loading step, the weights are now quantized directly as they are loaded. This change is crucial for reducing peak GPU memory usage, preventing potential Out-Of-Memory errors, and improving the efficiency of loading large language models.

Highlights

  • Online FP8 Quantization: The primary change is shifting FP8 quantization from a post-loading step to an "online" process that occurs concurrently with weight loading.
  • Memory Optimization: This refactoring aims to significantly reduce GPU memory consumption by avoiding the intermediate storage of weights in bf16 before quantization, mitigating Out-Of-Memory (OOM) issues.
  • New CopyNumelCounter: A TorchDispatchMode-based utility, CopyNumelCounter, is introduced to accurately track the number of elements copied during the weight loading process, especially when narrow operations are involved (see the sketch after this list).
  • Refactored Weight Loading Logic: The Fp8LinearMethod.create_weights method is updated to dynamically wrap the original weight loader with an online_fp8_weight_loader when online quantization is active.
  • Simplified Post-Loading Process: The Fp8LinearMethod.process_weights_after_loading method is streamlined, as the core quantization logic is now handled earlier during the online loading phase.
  • Memory Verification Test: A new test, test_online_quantization.py, is added to specifically measure and assert the peak GPU memory usage during online FP8 quantization, ensuring the memory optimization goal is met (a hypothetical sketch of such a check follows below).
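
The counter is needed because stock weight loaders often copy shards into narrow()'d views of the destination parameter, so the number of elements actually written cannot be inferred from the loader's arguments alone. A minimal sketch of such a utility (the PR's actual class may differ):

import torch
from torch.utils._python_dispatch import TorchDispatchMode

class CopyNumelCounter(TorchDispatchMode):
    """Counts elements written by in-place aten.copy_ calls executed under
    this dispatch mode, including copies into narrow()'d views."""

    def __init__(self):
        super().__init__()
        self.copied_numel = 0

    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        out = func(*args, **(kwargs or {}))
        if func is torch.ops.aten.copy_.default:
            # args[0] is the destination of the in-place copy.
            self.copied_numel += args[0].numel()
        return out

Accumulated over all shards of a parameter, copied_numel == param.numel() signals that the parameter is fully loaded and safe to quantize.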

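A hypothetical shape for such a memory check (the PR's actual test is test_online_quantization.py and may differ; load_model and expected_fp8_bytes below are placeholders): reset CUDA's peak-memory statistics, load the model, and assert the peak stays near the fp8 footprint rather than the bf16 one.

import torch

def peak_loading_memory_bytes(load_fn) -> int:
    # Return the peak CUDA memory allocated while load_fn runs.
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    load_fn()
    return torch.cuda.max_memory_allocated()

# With online quantization, a model with N parameters should peak near
# N bytes (fp8 weights) plus scales, not 2 * N bytes (bf16 staging).
# peak = peak_loading_memory_bytes(lambda: load_model(quantization="fp8"))
# assert peak < expected_fp8_bytes * 1.2   # slack for non-weight buffers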

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request refactors the fp8 online quantization logic to perform quantization during weight loading, which is a great memory optimization. The implementation looks mostly correct, but I've found a couple of issues. There's a redundant line in the new weight loader that should be removed for clarity. More importantly, I've identified a potential bug in process_weights_after_loading where the weight tensor is transposed, which seems to lead to a shape mismatch in the subsequent matrix multiplication. Please see my detailed comments.

Comment on lines +757 to +760
if not self.quant_config.is_checkpoint_fp8_serialized and _use_hip_int4:
    raise NotImplementedError(
        "Online MOE FP8 quantization (is_checkpoint_fp8_serialized="
        f"{self.quant_config.is_checkpoint_fp8_serialized}) along with "
        "SGLANG_INT4_WEIGHT=1 is not supported at the moment. Please open an issue."
    )
Contributor Author

Not supported on main branch either.

@fxmarty-amd fxmarty-amd changed the title "Do online fp8 quantization while loading weights instead of in process_weights_after_loading" → "Do online fp8 quantization while loading weights instead of in process_weights_after_loading," Jan 29, 2026
@fxmarty-amd fxmarty-amd changed the title "Do online fp8 quantization while loading weights instead of in process_weights_after_loading," → "Do online fp8 quantization while loading weights instead of in process_weights_after_loading, reducing memory overhead" Jan 29, 2026
@fxmarty-amd
Contributor Author

cc @HaiShaw, can you have a look?

@zianglih zianglih mentioned this pull request Feb 9, 2026
@fxmarty-amd
Contributor Author

Hi @kkHuang-amd, @HaiShaw, what do you think? Happy to address comments and fix conflicts accordingly.

Labels

quant LLM Quantization

Development

Successfully merging this pull request may close these issues.

  • [Bug] Unexpected memory usage when using --quantization fp8
  • [Bug] DeepSeek V3 OOM when quantizing bf16 to fp8 (8xh200)
