Quantized TANH operator support in TF Lite Frontend #8024
Conversation
Please, can you review? @mbaret @manupa-arm
LGTM.
Legitimate
Could you re-trigger the CI? It's been very flaky lately.
Change-Id: I70df765e1562fa586ed0ffd0e07b8858f7fbb831
Thanks @NicolaLancellotti @manupa-arm @d-smirnov, this is now merged.
…pache#8024) Change-Id: I70df765e1562fa586ed0ffd0e07b8858f7fbb831
Currently, the TANH operator with quantized input and output tensors is lowered to tanh without any prior dequantization and posterior quantization. This PR adds the dequantization and quantization operators to the lowering of TANH.
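The lowering described above can be sketched in NumPy. This is a minimal illustration of the dequantize → tanh → quantize pattern, not TVM's actual implementation; the affine quantization parameters (scale, zero point) and the int8 dtype are assumptions for the example.

```python
import numpy as np

def dequantize(q, scale, zero_point):
    # Affine dequantization: real = scale * (q - zero_point)
    return scale * (q.astype(np.float32) - zero_point)

def quantize(r, scale, zero_point, dtype=np.int8):
    # Affine quantization, rounding and clamping to the dtype's range
    info = np.iinfo(dtype)
    q = np.round(r / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def quantized_tanh(q_in, in_scale, in_zp, out_scale, out_zp):
    # The lowering this PR adds: dequantize the input, apply the
    # float tanh, then quantize the result back to the output params.
    return quantize(np.tanh(dequantize(q_in, in_scale, in_zp)),
                    out_scale, out_zp)
```

For example, with an input zero point of 0, a quantized input of 0 dequantizes to 0.0, tanh(0.0) is 0.0, and quantizing back yields the output zero point, so the end-to-end result stays at zero as expected.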