Enable `model.to(device)` for int8 weight only quantized model #486
Merged
jerryzh168 merged 1 commit into pytorch:main from jerryzh168:to-device on Jul 8, 2024
+22 −3
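A minimal sketch of the usage this PR enables, assuming torchao's int8 weight-only quantization API (`quantize_` / `int8_weight_only`; exact names may differ in the torchao version this PR targeted):

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

# Small example model, quantized in place on CPU with int8 weight-only quantization.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 64),
).eval()
quantize_(model, int8_weight_only())

# With this PR, the quantized model can be moved to another device after
# quantization, just like an unquantized model.
if torch.cuda.is_available():
    model = model.to("cuda")
    x = torch.randn(1, 128, device="cuda")
    with torch.no_grad():
        out = model(x)
```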