🐞Describing the bug
Using coremltools.optimize.torch.quantization to apply w8a8 quantization to a model containing Conv3d layers does not work: converting the quantized model to Core ML fails with a ValueError about a rank mismatch between the Conv3d weight and its quantization scale (see the stack trace below).
Stack Trace
Traceback (most recent call last):
File "/Users/silveryu/Developer/lightsvd/diffuserskit/torch_quant.py", line 72, in <module>
ct.convert(traced_model,
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
mlmodel = mil_convert(
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 188, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 212, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 288, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 108, in __call__
return load(*args, **kwargs)
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 88, in load
return _perform_torch_convert(converter, debug)
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 151, in _perform_torch_convert
prog = converter.convert()
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 1383, in convert
self.convert_const()
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 1251, in convert_const
self._add_const(name, val)
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 1206, in _add_const
compression_op = self._construct_compression_op(val.detach().numpy(), name)
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 1189, in _construct_compression_op
result = self._construct_quantization_op(val, compression_info, param_name, result)
File "/opt/homebrew/anaconda3/envs/lightsvd/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 953, in _construct_quantization_op
raise ValueError(
ValueError: In conv1.weight, the `weight` should have same rank as `scale`, but got (1, 1, 3, 3, 3) vs (1, 1)
To Reproduce
Quantize a model containing a Conv3d layer using w8a8 quantization, then convert the traced model with coremltools, as in the sketch below.
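A minimal sketch of this pipeline (the model definition, input shape, and calibration step here are assumptions, since the original torch_quant.py is not included; the weight shape (1, 1, 3, 3, 3) in the trace suggests a 1→1 Conv3d with a 3×3×3 kernel):

```python
import torch
import torch.nn as nn
import coremltools as ct
from coremltools.optimize.torch.quantization import (
    LinearQuantizer,
    LinearQuantizerConfig,
    ModuleLinearQuantizerConfig,
)

# A single Conv3d whose weight shape (1, 1, 3, 3, 3) matches the trace.
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv3d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv1(x)

model = Model()
example_input = torch.rand(1, 1, 8, 8, 8)  # (N, C, D, H, W), assumed shape

# w8a8: 8-bit weights and 8-bit activations.
config = LinearQuantizerConfig(
    global_config=ModuleLinearQuantizerConfig(
        weight_dtype=torch.qint8,
        activation_dtype=torch.quint8,
        quantization_scheme="symmetric",
    )
)
quantizer = LinearQuantizer(model, config)
prepared = quantizer.prepare(example_inputs=(example_input,))
quantizer.step()
prepared(example_input)  # abbreviated calibration; a real run loops over data
quantized_model = quantizer.finalize()
quantized_model.eval()

traced_model = torch.jit.trace(quantized_model, example_input)
# Raises: ValueError: In conv1.weight, the `weight` should have same rank
# as `scale`, but got (1, 1, 3, 3, 3) vs (1, 1)
ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS17,
)
```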
System environment (please complete the following information):
Additional context
The issue seems specific to Conv3d. Other layers, like Conv2d, seem to work fine with the same w8a8 setup.