Conversation

@Johnson9009
Contributor

Currently, Relay's qnn.dequantize lacks support for "float16" output; it can only dequantize a quantized value to "float32". This PR adds the missing functionality, and likewise adds "uint16" output support to qnn.quantize.
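For context, the affine (de)quantization semantics being extended can be sketched in plain NumPy. This is an illustrative sketch only, not TVM's actual implementation or API: the function names and signatures below are hypothetical, and only mirror the idea that dequantize may now emit float16 and quantize may target uint16.

```python
import numpy as np

def dequantize(q, scale, zero_point, out_dtype="float32"):
    # Affine dequantization: real_value = (quantized_value - zero_point) * scale.
    # out_dtype may now be "float16" as well as "float32" (hypothetical sketch).
    return ((q.astype("int32") - zero_point) * scale).astype(out_dtype)

def quantize(x, scale, zero_point, out_dtype="uint8"):
    # Affine quantization: round(real_value / scale) + zero_point,
    # clamped to the output dtype's representable range.
    info = np.iinfo(out_dtype)
    q = np.round(np.asarray(x, dtype="float64") / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(out_dtype)

q = np.array([0, 128, 255], dtype="uint8")
fp16 = dequantize(q, scale=0.1, zero_point=128, out_dtype="float16")
u16 = quantize([3.2, -1.0, 9999.9], scale=0.1, zero_point=0, out_dtype="uint16")
```

Here `fp16` has dtype float16 (values near -12.8, 0.0, 12.7), and `u16` shows uint16 quantization with clamping of negative inputs to 0.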

@tvm-bot
Collaborator

tvm-bot commented Jul 5, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

Contributor

@leandron leandron left a comment


LGTM, thanks @Johnson9009 for the PR and for fixing documentation issues along the way.

@Johnson9009
Contributor Author

@leandron @ibsidorenko @junrushao @tqchen @Hzfengsy
Is there anything that needs to change? If not, could you please check whether we can merge it? Thanks.

@leandron
Contributor

leandron commented Jul 7, 2023

Thanks @Johnson9009 @ibsidorenko, this is merged now.

@leandron leandron merged commit d9d6a88 into apache:main Jul 7, 2023

4 participants