Issues: vllm-project/llm-compressor


Issues list

Too much CPU memory usage leads to Killed. [bug]
#1119 opened Feb 3, 2025 by HelloCard

[Clarification] Regarding KV Cache quantization and FP8 Scales [question]
#1104 opened Jan 27, 2025 by nelaturuharsha

Performance (reproducible) Issue [question]
#1058 opened Jan 11, 2025 by Thunderbeee

Does llmcompressor support hybrid sparsity? [enhancement]
#1037 opened Jan 6, 2025 by jiangjiadi

quant method about kv cache [bug]
#1024 opened Jan 2, 2025 by sitabulaixizawaluduo

Wandb logging cannot be disabled [bug]
#976 opened Dec 13, 2024 by rmakarovv

Error when quantizing LLama 3.3 70b to FP8 [bug]
#963 opened Dec 6, 2024 by Syst3m1cAn0maly

Qwen2VL FP8_DYNAMIC Failed [bug]
#951 opened Dec 4, 2024 by LugerW-A

Several wandb init [bug]
#934 opened Nov 26, 2024 by fzyzcjy

[USAGE] FP8 W8A8 (+KV) with LORA Adapters [enhancement]
#164 opened Sep 11, 2024 by paulliwog