Enhance Autoround to support multiple cards tuning #2157
brian-dellabetta merged 34 commits into vllm-project:main
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
gemini-code-assist
left a comment
Code Review
This pull request enhances AutoRoundModifier to support multi-GPU tuning by integrating auto_round's device_map functionality. This is primarily achieved by adding a device_map parameter to the modifier and introducing a new context manager, suspend_accelerate_hooks, to correctly handle models with Hugging Face Accelerate hooks. The changes are well-supported by a new example for a large model and a new test case for multi-GPU execution. The implementation is solid, but I've identified a potential edge case in the new suspend_accelerate_hooks function that could lead to a crash if a model has no parameters, for which I've provided a suggestion.
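The PR's suspend_accelerate_hooks isn't reproduced here, but the general pattern it refers to, temporarily detaching a module's Accelerate hooks and restoring them afterwards, looks roughly like the sketch below. This is a simplified illustration using standard Accelerate APIs, not the PR's implementation.

```python
from contextlib import contextmanager

from accelerate.hooks import add_hook_to_module, remove_hook_from_module


@contextmanager
def suspend_hooks_sketch(model):
    # Save each submodule's Accelerate hook (if any), detach it, and
    # reattach it when the context exits. Illustrative only.
    saved = {}
    for name, module in model.named_modules():
        hook = getattr(module, "_hf_hook", None)
        if hook is not None:
            saved[name] = hook
            remove_hook_from_module(module)
    try:
        yield model
    finally:
        modules = dict(model.named_modules())
        for name, hook in saved.items():
            add_hook_to_module(modules[name], hook)
```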
brian-dellabetta
left a comment
Hi AutoRound team, I think these changes make sense, though we are refactoring some things that overlap with these changes. Please see comments.
Can you point me to the logic in the auto round repo that handles the multi-gpu parallelization work? I'd like to see how you're handling it.
Hi @brian-dellabetta, here is the logic for multi-gpu devices: https://github.com/intel/auto-round/blob/b53ead7d77746385d700152c7f00960f18fb9d85/auto_round/compressors/base.py#L1560-L1562. We take a block, its input, and the list of available devices, then assign each submodule to one of those devices. Inside set_auto_device_map_for_block_with_tuning, we estimate the block's memory requirements based on its parameters, input, batch size, and a few heuristic factors. Using this estimate, we assign devices to the submodules so that memory usage stays as balanced as possible across all GPUs. The final mapping is then attached to each module.
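To make the balancing idea above concrete, here is a hypothetical greedy sketch; the helper name (estimate_module_memory) and the exact strategy are assumptions for illustration, not auto_round's actual code.

```python
def assign_submodules_to_devices(block, device_ids, estimate_module_memory):
    # Greedy balancing sketch: place each submodule on the device with the
    # smallest estimated memory load so far. Illustrative only; auto_round's
    # real heuristic also accounts for the block input, batch size, etc.
    loads = {device: 0.0 for device in device_ids}
    mapping = {}
    for name, module in block.named_children():
        mem = estimate_module_memory(module)  # hypothetical estimator
        target = min(loads, key=loads.get)    # least-loaded device so far
        mapping[name] = target
        loads[target] += mem
    return mapping
```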
kylesayrs
left a comment
Related to #2180. I've done some basic modeling, and it seems like AutoRound could see improved performance from deeper integration and use of the DP strategy detailed in the RFC.
In the meantime, this PR looks great, thanks!
Hi @dsikka, could you help retrigger the CI? Thanks!
brian-dellabetta
left a comment
If you guys are in agreement to keep suspend_accelerate_hooks separate from our other implementation and remove it in the future, I'm good with these changes.
Thanks for adding this!
AutoRound uses a block-level reconstruction loss to fine-tune quantization parameters, which requires running backward passes on each block. For large models, like Qwen3-235B, a single GPU often doesn't have enough memory to hold an entire block during backward computation. To address this, we use HF Accelerate to dispatch the module across multiple devices.
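For context, dispatching a block across several GPUs with Hugging Face Accelerate generally looks like the sketch below; this is generic Accelerate usage shown for illustration, not this PR's code path, and the memory budget values are placeholders.

```python
from accelerate import dispatch_model, infer_auto_device_map


def dispatch_block_across_gpus(block, num_gpus=2, max_memory_per_gpu="20GiB"):
    # Let Accelerate split the block's submodules across the available GPUs
    # under a per-device memory budget, then attach the dispatch hooks.
    max_memory = {i: max_memory_per_gpu for i in range(num_gpus)}
    device_map = infer_auto_device_map(block, max_memory=max_memory)
    return dispatch_model(block, device_map=device_map)
```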
In this PR, we enable this feature on the LLMC side:

- Add device_ids for tuning with multiple cards
- Pass ignore to AutoRound for skipping layers
- Add Qwen/Qwen3-235B-A22B as an example for multiple cards

Test plan
Example results
cc @hshen14 @thuang6 @wenhuach21