Conversation

@comaniac (Contributor)

A quick patch to fix rsub conversion in PyTorch. The original implementation used `float(inputs[2])` for alpha, which implies that both `data0` and `data1` must be float32. As a result, I got a type error when converting an FP16 model.

cc @t-vi @masahi
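
For context, here is a minimal sketch of the failure mode and the fix, assuming a converter shaped like the rsub handler in TVM's Relay PyTorch frontend; `_expr.const` is Relay's constant constructor, and the float16 usage at the end is a hypothetical illustration rather than part of the PR:

```python
# A minimal sketch of the fix, assuming a converter shaped like the rsub
# handler in TVM's Relay PyTorch frontend; the exact change is in the diff.
from tvm import relay
from tvm.relay import expr as _expr


def rsub(inputs, input_types):
    # inputs[0]/inputs[1] are the two operands (assumed already promoted
    # to a common dtype by the frontend); inputs[2] is the scalar alpha.
    data0, data1 = inputs[0], inputs[1]

    # Before: alpha was always a float32 constant, so the arithmetic below
    # failed dtype checking whenever the operands were float16.
    #   alpha = _expr.const(float(inputs[2]))
    # After: build the constant with the operands' dtype instead.
    alpha = _expr.const(float(inputs[2]), dtype=input_types[0])

    # rsub(a, b, alpha) computes b - alpha * a; note the operands swap.
    return data1 - alpha * data0


# Hypothetical usage: with float16 operands, alpha is now float16 as well,
# so the expression type-checks instead of raising a dtype mismatch.
a = relay.var("a", shape=(2, 2), dtype="float16")
b = relay.var("b", shape=(2, 2), dtype="float16")
out = rsub([a, b, 2.0], ["float16", "float16"])
```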

@masahi masahi merged commit e6af874 into apache:main Jan 28, 2022
@comaniac comaniac deleted the fix_pt_rsub branch January 28, 2022 17:54
sunggg pushed a commit to sunggg/tvm that referenced this pull request Jan 29, 2022
* [PyTorch] Fix rsub type

* fix
ylc pushed a commit to ylc/tvm that referenced this pull request Feb 16, 2022
* [PyTorch] Fix rsub type

* fix