Hi! As part of investigating the deployment of a graph neural network (GNN) built with the TF-GNN library on mobile (Android), I found that the TFLite operator UNSORTED_SEGMENT_SUM does not support int8. This prevented us from fully quantizing the GNN, which relies on the unsorted_segment_* operators for its core message-passing steps. As a result, the quantized model performed worse than the non-quantized one because of the extra dequantize/quantize layers inserted around these operators.
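For context, here is a minimal sketch of how these operators typically appear in GNN message passing; the tensor names, shapes, and toy graph below are illustrative, not taken from our actual model:

```python
import tensorflow as tf

# Toy graph: 4 nodes, 5 directed edges (illustrative shapes only).
node_states = tf.random.normal([4, 8])           # [num_nodes, hidden_dim]
edge_sources = tf.constant([0, 1, 1, 2, 3])      # sender node per edge
edge_targets = tf.constant([1, 0, 2, 3, 0])      # receiver node per edge

# Messages are computed from the sender states...
messages = tf.gather(node_states, edge_sources)  # [num_edges, hidden_dim]

# ...and aggregated per receiver node with unsorted_segment_sum.
# It is this reduction that currently has no int8 TFLite kernel.
aggregated = tf.math.unsorted_segment_sum(
    data=messages,
    segment_ids=edge_targets,
    num_segments=tf.shape(node_states)[0],
)  # [num_nodes, hidden_dim]
```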
I'd like to request int8 support for this operator, since it is central to the TF-GNN library itself, and demand for quantized GNNs is likely to grow.
Thanks in advance!
(I also raised this request in the old tensorflow repo: duplicate of tensorflow/tensorflow#81348)
System information
Provide the text output from tflite_convert: Used a custom converter that performs post-training full-integer quantization using representative datasets.
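For reference, our custom converter follows the standard TFLite post-training full-integer quantization flow, roughly like the sketch below; the saved-model path, input shape, and representative dataset are placeholders, not our actual pipeline:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield a few batches shaped like the model's real inputs.
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# With this setup the converter quantizes every op that has an int8 kernel;
# ops without one (currently UNSORTED_SEGMENT_SUM) stay in float32, and
# dequantize/quantize pairs are inserted around them, which causes the
# overhead described above.
tflite_model = converter.convert()
```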