Issues: pytorch/ao
[Feature Request] Support of int8_dynamic_activation_int8_weight with asymmetrically quantized weights
#1320 · opened Nov 20, 2024 by sanchitintel
int8_dynamic_activation_int8_weight uses zero-points for weight when activation is asymmetrically quantized
#1317 · opened Nov 20, 2024 by sanchitintel
attempting to run aten.abs.default, this is not supported with latest torchtitan + torchao
#1313 · opened Nov 19, 2024 by lchu-ibm
torch.compile(sync_float8_amax_and_scale_history) not working with triton latest main
#1311 · opened Nov 19, 2024 by goldhuang
[NF4] Various bugs in how NF4 handles .to() to move to a different device (label: bug)
#1310 · opened Nov 19, 2024 by gau-nernst
[AQT] Failed to move compiled module with AQT to a different device (label: bug)
#1309 · opened Nov 19, 2024 by gau-nernst
pip install torchao cannot get latest versions (only 0.1 and 2 other version in the same level)
#1300 · opened Nov 17, 2024 by moreAImore
SEO not helping ao (label: topic: documentation)
#1298 · opened Nov 16, 2024 by msaroufim
[QAT] Low-bit FSDP all-gather for QAT (labels: distributed, good first issue, qat)
#1224 · opened Nov 5, 2024 by gau-nernst
CPUoffloadOptimizer issues (labels: bug, optimizer)
#1209 · opened Nov 1, 2024 by felipemello1
Does Float8Linear support Tensor Parallelism and Sequence Parallelism?
#1198 · opened Oct 30, 2024 by zigzagcai
Add codebook (look up table based) quantization flow in torchao (label: good first issue)
#1195 · opened Oct 29, 2024 by jerryzh168
[low-bit optim] Add COAT optimizer (labels: good first issue, optimizer)
#1190 · opened Oct 29, 2024 by gau-nernst
[FLOAT8] Add Hardware Compatibility Check for FP8 Quantization (labels: float8, good first issue)
#1188 · opened Oct 29, 2024 by drisspg
torchao.float8 + torch.compile does not work on HuggingFace's Mixtral model
#1200 · opened Oct 28, 2024 by vkuzo
Unable to save checkpoints when Use low bit optimizers with FSDP1 or FSDP2
#1185 · opened Oct 28, 2024 by nighting0le01