
Enable model.to(device) for int8 weight only quantized model #1630

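For context on what the PR title refers to: int8 weight-only quantization stores each weight row as int8 integers plus a per-row float scale, and the full-precision value is recovered as `q * scale`. The sketch below is a minimal pure-Python illustration of that round-trip; it is not torchao's actual tensor-subclass implementation (which is what this PR makes movable with `model.to(device)`), and all names in it are illustrative.

```python
def quantize_int8(weights):
    """Symmetric per-row int8 quantization: w ~= q * scale.

    Each row is mapped to int8 values in [-128, 127] with one
    float scale per row, so the max absolute error per element
    is at most one quantization step (scale).
    """
    quantized = []
    for row in weights:
        # Scale so the largest-magnitude weight maps to 127;
        # fall back to 1.0 for an all-zero row to avoid div-by-zero.
        scale = max(abs(w) for w in row) / 127 or 1.0
        q = [max(-128, min(127, round(w / scale))) for w in row]
        quantized.append((q, scale))
    return quantized


def dequantize_int8(quantized):
    """Reconstruct approximate float weights from (int8 row, scale) pairs."""
    return [[q * scale for q in row] for row, scale in quantized]


weights = [[0.5, -1.0, 0.25], [2.0, 0.0, -2.0]]
packed = quantize_int8(weights)
restored = dequantize_int8(packed)
```

Because the scale is chosen per row, elements at the row's maximum magnitude (like `-1.0` and `2.0` above) round-trip exactly, while the rest land within one quantization step of their original value.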

Annotations: 2 warnings

This job succeeded