
Commit fd65df8

Michael Maitland authored and facebook-github-bot committed
Fix incorrect kernel mappings in Cadence HiFi functions.yaml (pytorch#15216)
Summary:

Fixed two bugs in fbcode/executorch/backends/cadence/aot/functions_hifi.yaml:

1. cadence::quantize_per_tensor_asym16u was incorrectly mapped to impl::HiFi::quantize_per_tensor_asym16s_out instead of asym16u_out.
2. cadence::dequantize_per_tensor_asym32s was incorrectly mapped to impl::HiFi::dequantize_per_tensor_asym16s_out instead of asym32s_out.

The second bug caused runtime failures when dequantizing 32-bit signed integer tensors: the wrong function, which expects 16-bit integers, was called, resulting in "Unhandled input dtype" assertion failures.

Differential Revision: D84865463
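To illustrate why the second mis-mapping only surfaced at runtime, here is a minimal, hypothetical C++ sketch. It is not the actual ExecuTorch/Cadence kernel code; the function name is taken from the diff, but the simplified ScalarType enum and the dispatch shape are stand-ins. It shows how a dtype-specialized kernel rejects any input type it was not built for, so binding the asym32s op to the asym16s kernel routes int32 inputs into the error path.

// Hypothetical sketch, not the real ExecuTorch/Cadence source: shows how a
// dtype-specialized kernel fails when the registry binds it to the wrong op.
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Stand-in for the runtime's scalar-type tag (simplified assumption).
enum class ScalarType { Short /* int16 */, Int /* int32 */ };

// Sketch of a kernel specialized for 16-bit signed asymmetric inputs.
// Any other dtype falls through to the error path, mirroring the
// "Unhandled input dtype" assertion described in the summary.
void dequantize_per_tensor_asym16s_out(ScalarType input_dtype) {
  switch (input_dtype) {
    case ScalarType::Short:
      std::printf("dequantizing int16 input\n");
      break;
    default:
      std::fprintf(stderr, "Unhandled input dtype\n");
      std::abort(); // the runtime failure seen before this fix
  }
}

int main() {
  // Before the fix, cadence::dequantize_per_tensor_asym32s dispatched here,
  // so an int32 input hit the default branch and aborted:
  dequantize_per_tensor_asym16s_out(ScalarType::Int);
}

With the corrected mapping, int32 inputs dispatch to an asym32s implementation that handles the 32-bit case, so the assertion never fires.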
1 parent 06ea3d6 commit fd65df8

File tree

1 file changed: +2 −2 lines


backends/cadence/aot/functions_hifi.yaml

Lines changed: 2 additions & 2 deletions
@@ -306,7 +306,7 @@
   variants: function
   kernels:
     - arg_meta: null
-      kernel_name: impl::HiFi::quantize_per_tensor_asym16s_out
+      kernel_name: impl::HiFi::quantize_per_tensor_asym16u_out

 - func: cadence::quantize_per_tensor_asym32s.out(Tensor input, float scale, int zero_point, int quant_min, int quant_max, ScalarType dtype, *, Tensor(a!) out) -> Tensor(a!)
   variants: function
@@ -348,7 +348,7 @@
   variants: function
   kernels:
     - arg_meta: null
-      kernel_name: impl::HiFi::dequantize_per_tensor_asym16s_out
+      kernel_name: impl::HiFi::dequantize_per_tensor_asym32s_out

 - func: cadence::quantized_conv2d_nchw.out(Tensor input, Tensor weight, Tensor bias, int[] stride, SymInt[] padding, int[] dilation, int groups, int input_zero_point, Tensor weight_zero_point, Tensor bias_scale, float out_scale, int out_zero_point, Tensor out_multiplier, Tensor out_shift, *, Tensor(a!) out) -> Tensor(a!)
   kernels:
