Commit 032108d
Update base for Update on "Arm backend: Add 16A8W support for view and transpose operations"
Add 16A8W quantization support for view and transpose operations in ExecutorTorch ARM backend.
This follows the pattern established for the linear, mul, sigmoid, tanh, and slice operations, extending int16 support to view and transpose.
Changes:
- Add INT16 dtype validation support in op_transpose.py
- Add test_view_tensor_16a8w_tosa_INT test function
- Enable test_view.py in test targets configuration
The 16A8W configuration uses 16-bit activations with 8-bit weights, enabling higher precision for activations while maintaining weight efficiency.
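As a numerical illustration of what the 16A8W scheme above means, the sketch below applies symmetric quantization with a 16-bit range for activations and an 8-bit range for weights. The function names and scale choices are illustrative only, not the ExecutorTorch/TOSA quantizer API:

```python
# Hypothetical sketch of 16A8W symmetric quantization.
# Not the ExecutorTorch API; names and scales are illustrative.

def quantize(values, scale, qmin, qmax):
    """Symmetric affine quantization: q = clamp(round(v / scale), qmin, qmax)."""
    return [max(qmin, min(qmax, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    """Inverse mapping back to real values."""
    return [q * scale for q in qvalues]

# 16-bit activations: integer range [-32768, 32767].
act = [0.5, -1.25, 3.0]
act_scale = 3.0 / 32767  # map the observed max |activation| near qmax
q_act = quantize(act, act_scale, -32768, 32767)

# 8-bit weights: integer range [-128, 127].
w = [0.1, -0.05]
w_scale = 0.1 / 127
q_w = quantize(w, w_scale, -128, 127)
```

The wider activation range shrinks the activation quantization step (roughly 256x finer than int8 at the same scale range), which is where the extra precision comes from, while weights stay at 8 bits and keep their storage cost unchanged.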
Differential Revision: [D80511313](https://our.internmc.facebook.com/intern/diff/D80511313/)
cc digantdesai freddan80 per zingo oscarandersson8218
[ghstack-poisoned]
1 parent 1892dad