Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
HDCharles previously approved these changes on Jan 27, 2026
brian-dellabetta previously approved these changes on Jan 27, 2026
HDCharles approved these changes on Jan 27, 2026
brian-dellabetta approved these changes on Jan 27, 2026
kylesayrs added a commit to vllm-project/llm-compressor that referenced this pull request on Jan 28, 2026:
## Co-requisites ##
* Compressed Tensors: [[TorchOffloader] Remove Accelerate](vllm-project/compressed-tensors#530)

## Changes ##
* Perform tracing in the `disable_onloading` context (any tensors that are referenced during tracing are treated as meta tensors to avoid excess onloading)
* Dispatch the model after tracing (not strictly necessary)
* `dispatch_for_sequential` is now an alias for `offload_model`
* `dispatch_for_generation` is now an alias for `dispatch_model`
* Add a `get_main_device` utility to centralize device lookup
* `untie_word_embeddings` is now simpler
* Fix the fusing test by initializing tensors without gradients

## Testing ##
* https://github.com/neuralmagic/llm-compressor-testing/actions/runs/21348973738
* Optional TODO: add more tracing tests

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request on Feb 10, 2026 (same commit message as above).
## Purpose ##
See purpose section of #529

## Co-requisites ##

## Prerequisites ##

## Testing ##