[Offload] Switch to torch offloader from accelerate #2148

Merged
kylesayrs merged 17 commits into main from kylesayrs/required-torch-offloader-changes on Jan 28, 2026

Conversation

@kylesayrs
Collaborator

@kylesayrs kylesayrs commented Dec 18, 2025

Co-requisites

  • Compressed Tensors: [TorchOffloader] Remove Accelerate (vllm-project/compressed-tensors#530)

Changes

  • Perform tracing in the disable_onloading context (any tensors touched during tracing are referenced as meta tensors to avoid excess onloading); see the sketch after this list
  • Dispatch the model after tracing (not strictly necessary)
  • dispatch_for_sequential is now an alias for offload_model
  • dispatch_for_generation is now an alias for dispatch_model
  • Add a get_main_device utility to centralize device resolution
  • untie_word_embeddings is now simpler
  • Fix the fusing test by initializing tensors without gradients
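
A minimal sketch of how these pieces fit together in the new flow. The import paths for disable_onloading, offload_model, dispatch_model, get_main_device, and remove_dispatch are assumptions inferred from this PR's description rather than a verified API, and the tracing step is elided rather than reproducing the sequential tracer:

```python
# Sketch only: the import paths and signatures below are assumptions based on
# this PR's description, not a verified API.
from transformers import AutoModelForCausalLM

from compressed_tensors.offload import disable_onloading, dispatch_model, offload_model
from llmcompressor.utils.dev import get_main_device, remove_dispatch

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# dispatch_for_sequential(model) is now just offload_model(model): weights live on
# CPU and are onloaded per-module during sequential calibration
offload_model(model)

# Trace without onloading: weights touched while capturing the graph show up as
# meta tensors instead of being moved onto the execution device
with disable_onloading():
    ...  # run the sequential tracer here (omitted)

# dispatch_for_generation(model) is now just dispatch_model(model): place modules
# on the execution device(s) for generation
dispatch_model(model)
device = get_main_device(model)  # centralized lookup of the main execution device

# remove_dispatch strips both accelerate and compressed-tensors dispatch hooks
remove_dispatch(model)
```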

Testing

  • https://github.com/neuralmagic/llm-compressor-testing/actions/runs/21348973738
  • Optional TODO: add more tracing tests

@gemini-code-assist
Contributor

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request focuses on a significant refactoring of the model offloading and modifier initialization processes within the llmcompressor library. The primary goal is to enhance the clarity and efficiency of modifier lifecycle management by moving certain setup tasks to the on_start hook. Concurrently, the underlying offloading implementation has been modernized to more directly utilize the compressed-tensors library, thereby reducing reliance on accelerate for specific offloading functionalities. These changes contribute to a cleaner codebase, improved modularity, and better integration with updated external library practices.

Highlights

  • Modifier Lifecycle Hook Refactoring: The invocation of initialization-related methods like _set_resolved_mappings (AWQ), _infer_sequential_targets and get_layers (SparseGPT), _module_names preparation (GPTQ), and _resolve_mappings (SmoothQuant) has been moved from the on_initialize method to the on_start method across several modifiers. This ensures these operations are performed when the modifier is actively starting its process, rather than during its initial setup. (See the first sketch after this list.)
  • Offloading Mechanism Update: The model offloading logic has been updated to reduce direct dependencies on the accelerate library. Imports related to accelerate.hooks and compressed_tensors.utils.offloaded_dispatch have been replaced with compressed_tensors.offload.dispatch_model and a new remove_dispatch utility in llmcompressor.utils.dev. This change aligns the offloading implementation more closely with the compressed-tensors library.
  • Sequential Tracer Simplification: The SequentialTracer in llmcompressor/pipelines/sequential/helpers.py has been simplified. The offloaded parameter and its associated logic, which previously tracked modules with offloaded parameters, have been removed. This streamlines the tracing process by no longer needing to explicitly manage offloaded modules during graph capture.
  • Untying Word Embeddings Refinement: The untie_word_embeddings function in llmcompressor/utils/transformers.py has been refactored. It now directly uses module.register_parameter to untie weights, eliminating the need for has_offloaded_params and register_offload_parameter from compressed_tensors. (See the second sketch after this list.)
  • Dispatch Function Relocation and Refinement: The dispatch_for_sequential utility has been relocated from llmcompressor/pipelines/sequential/helpers to llmcompressor/utils/helpers. Additionally, a new remove_dispatch function has been introduced in llmcompressor/utils/dev to provide a unified way to remove both accelerate and compressed_tensors dispatch hooks from a module.
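
The hook move in the first highlight can be pictured with a short, hedged sketch. The Modifier import path, the on_initialize/on_start signatures, and the _resolve_mappings helper below are assumptions modeled on the modifiers named above, not the exact llmcompressor API:

```python
# Illustrative sketch (assumed API): model-dependent setup moves from
# on_initialize to on_start, so it runs when the modifier actually starts
# rather than during its initial setup.
from llmcompressor.modifiers import Modifier  # assumed import path


class ExampleSmoothingModifier(Modifier):
    def on_initialize(self, state, **kwargs) -> bool:
        # lightweight configuration only; no model traversal yet
        self.resolved_mappings_ = None
        return True

    def on_start(self, state, event, **kwargs):
        # heavier, model-dependent setup happens once the modifier starts,
        # mirroring AWQ's _set_resolved_mappings and SmoothQuant's _resolve_mappings
        self.resolved_mappings_ = self._resolve_mappings(state.model)

    def _resolve_mappings(self, model):
        # hypothetical helper: collect the layer names this modifier will act on
        return [name for name, _ in model.named_modules() if name.endswith("_proj")]
```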
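
The untie_word_embeddings refinement can likewise be sketched. The function below is a simplified stand-in that assumes the standard transformers get_input_embeddings/get_output_embeddings accessors; the real helper in llmcompressor.utils.transformers may handle more cases:

```python
# Simplified sketch: untying shared word embeddings with plain
# module.register_parameter, with no offload-aware helpers involved.
from torch import nn


def untie_word_embeddings(model: nn.Module) -> None:
    for module in (model.get_input_embeddings(), model.get_output_embeddings()):
        if module is None or getattr(module, "weight", None) is None:
            continue
        # cloning breaks the shared storage; register_parameter then replaces
        # the tied tensor with an independent one
        untied = nn.Parameter(
            module.weight.data.clone(), requires_grad=module.weight.requires_grad
        )
        module.register_parameter("weight", untied)

    # reflect the change in the config so future saves/loads do not re-tie
    if hasattr(model, "config"):
        model.config.tie_word_embeddings = False
```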

@kylesayrs kylesayrs changed the title from [TorchOffloader] to [TorchOffloader] Prepare for torch offloader compatibility on Dec 18, 2025
@kylesayrs kylesayrs changed the title from [TorchOffloader] Prepare for torch offloader compatibility to [TorchOffloader] Switch to torch offloader from accelerate on Dec 18, 2025
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the model offloading mechanism, replacing the accelerate-based implementation with a new TorchOffloader from the compressed-tensors library. The changes are consistently applied across various components, including modifiers, pipelines, and utility functions. A notable improvement is the shift of setup logic in several modifiers from on_initialize to on_start, enhancing modularity and flexibility. The tests have also been updated to align with these changes. While the refactoring is well-executed, I've identified a critical import issue that will cause a runtime error.

@kylesayrs kylesayrs changed the title from [TorchOffloader] Switch to torch offloader from accelerate to [Offload] Switch to torch offloader from accelerate on Dec 31, 2025
@mergify

mergify bot commented Jan 14, 2026

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @kylesayrs.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jan 14, 2026
@mergify mergify bot removed the needs-rebase label Jan 22, 2026
@kylesayrs kylesayrs marked this pull request as ready for review January 26, 2026 06:57
@kylesayrs kylesayrs requested a review from dsikka as a code owner January 26, 2026 06:57

@brian-dellabetta brian-dellabetta left a comment


no new comments 🚢

HDCharles previously approved these changes Jan 27, 2026
@kylesayrs kylesayrs added the ready When a PR is ready for review label Jan 27, 2026
@kylesayrs kylesayrs enabled auto-merge (squash) January 27, 2026 03:29
@kylesayrs kylesayrs dismissed stale reviews from HDCharles and brian-dellabetta via a59ba78 January 27, 2026 21:06
@kylesayrs kylesayrs force-pushed the kylesayrs/required-torch-offloader-changes branch from 6677753 to a59ba78 on January 27, 2026 21:06
@mergify

mergify bot commented Jan 27, 2026

The quality checks have failed. Please run make style and make quality under the root directory to address the lint failures. You will need to install the dev optional dependencies to get the required linting packages: https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@kylesayrs kylesayrs merged commit 37de437 into main Jan 28, 2026
13 checks passed
@kylesayrs kylesayrs deleted the kylesayrs/required-torch-offloader-changes branch January 28, 2026 01:05
dsikka pushed a commit that referenced this pull request Jan 28, 2026
## Purpose ##
* Fixes #2068
* Offloading issue was fixed by #2148

---------

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Feb 10, 2026
cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Feb 10, 2026
## Purpose ##
* Fixes vllm-project#2068
* Offloading issue was fixed by vllm-project#2148

---------

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>