Conversation
Code Review
This pull request introduces a new example for Qwen3OmniMoeForConditionalGeneration, including a patch to support accelerated offloading. The example script demonstrates how to perform one-shot quantization with GPTQ and generate sample outputs. The changes are well-structured and the example is clear. My review includes a suggestion to improve the performance of the patch file by using more efficient tensor operations, and a comment on improving the clarity of the example script's save directory naming.
Related to #1673 as well
Hi, I passed `model.thinker` to quantize, but `model.save_pretrained` saved a full bf16 model
@Sekri0 Thanks, it worked |
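For anyone hitting the same bf16-save issue: the pattern that worked in this thread is to quantize and save the *same* `thinker` submodule, rather than saving the wrapper `model`. A minimal sketch, using stdlib stubs in place of the real model and `llmcompressor.oneshot` (the class names `Thinker`/`OmniWrapper` and the stub behavior are illustrative assumptions, not the repo's actual code):

```python
# Stub classes stand in for the real Qwen3-Omni model and llmcompressor;
# the point is only which object gets quantized and which gets saved.
class Thinker:
    """Stand-in for model.thinker (a PreTrainedModel with the weights)."""
    def __init__(self):
        self.quantized = False

    def save_pretrained(self, path):
        # Real save_pretrained writes weights to disk; here we just report.
        return {"path": path, "quantized": self.quantized}

class OmniWrapper:
    """Stand-in for Qwen3OmniMoeForConditionalGeneration (no forward())."""
    def __init__(self):
        self.thinker = Thinker()

def oneshot(model):
    # Stand-in for llmcompressor.oneshot: quantizes the module it is given.
    model.quantized = True

model = OmniWrapper()
oneshot(model.thinker)  # quantize the submodule that has a forward method
result = model.thinker.save_pretrained("Qwen3-Omni-thinker-W4A16")
print(result["quantized"])  # True — saving the thinker keeps quantization
```

Saving the outer wrapper instead of `model.thinker` is what produces a full-precision checkpoint, since the quantization was applied to the submodule.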
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Documentation update
brian-dellabetta
left a comment
One comment regarding patch file placement
dsikka
left a comment
Small nits. LGTM. Thanks!
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Hi! Is there any way to quantize Qwen3 Omni using AWQ?
After quantizing with the script, I encounter the following error
@kylesayrs adding the ignore layers resolves this error: #2125 (comment)
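For reference, ignore entries live in the quantization recipe. A hypothetical fragment is sketched below as a plain dict (the `re:` prefix is llm-compressor's regex convention, but these specific layer names are illustrative assumptions, not the entries from #2125):

```python
# Illustrative recipe fragment: keep problem layers out of quantization.
# Layer names here are made-up examples, not the actual fix from #2125.
recipe = {
    "targets": "Linear",
    "scheme": "W4A16",
    "ignore": [
        "lm_head",              # output head is usually left unquantized
        "re:.*visual.*",        # regex entry: skip vision-tower modules
        "re:.*audio_tower.*",   # regex entry: skip audio-tower modules
    ],
}
print(len(recipe["ignore"]))  # 3
```

The same list would be passed as the `ignore` argument when constructing the quantization modifier.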


Purpose
* Resolve "`get_input_embeddings` not auto‑handled for `Qwen3OmniMoeForConditionalGeneration`" #1872
* Add an example for `Qwen3OmniMoeForConditionalGeneration`

Changes
* Pass `model.thinker` to `oneshot`, since `model` does not implement a forward method (the thinker module is a `PreTrainedModel` that contains all of the parameters worth quantizing)
* Patch `fast_pos_embed_interpolate` to support accelerate offloading
* Squeeze `image_grid_thw`, but leave `pixel_values` and other inputs unsqueezed

Testing
* output.wav
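The `image_grid_thw` batching tweak can be illustrated with a small stand-in (pure Python in place of torch tensors; the key names follow the Qwen processor outputs, but the values and the helper name are made up):

```python
# Pure-Python stand-in for the batching tweak: drop the leading batch
# dimension from `image_grid_thw` only, while `pixel_values` and all
# other inputs keep theirs.
def squeeze_image_grid_thw(batch):
    out = dict(batch)
    grid = out.get("image_grid_thw")
    if grid is not None and len(grid) == 1:
        out["image_grid_thw"] = grid[0]  # squeeze the leading batch dim
    return out

batch = {
    "image_grid_thw": [[1, 8, 8]],      # batched: shape (1, 3)
    "pixel_values": [[0.1, 0.2, 0.3]],  # left unsqueezed
}
processed = squeeze_image_grid_thw(batch)
print(processed["image_grid_thw"])  # [1, 8, 8]
print(processed["pixel_values"])    # [[0.1, 0.2, 0.3]]
```

In the real example the inputs are torch tensors, but the shape handling is the same: only `image_grid_thw` loses its batch dimension before being passed to the model.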