[Relay][Frontend][Onnx] If Operator Support #6730
Conversation
@mbrookhart @masahi @tmoreau89 @soiferj can you take a look at this PR?
LGTM
LGTM, thanks @jwfromm !
@jwfromm please run CI again
Force-pushed from f1b154e to 65454b7
I'm pretty confused about why this is failing CI. On the GPU test, onnxruntime produces the wrong value (-2 instead of the correct value of 1) while TVM produces the proper result; on the CPU node the test passes. I'm not able to replicate the error on my local GPU machine. Does anyone have any ideas (@masahi) why that might happen?
Maybe the old version of onnxruntime had a bug? ORT on CI is fairly old (v1.00) while the latest one is v1.5, so you probably have a newer ORT locally. We'll update our PyTorch version sometime next month (#6594); maybe we can also update ORT together with it if desired. @jwfromm The new ONNX version v1.8 should also be out soon (onnx/onnx#3072)
Good point, I'll try testing against v1.00. Thanks!
@jwfromm something is wrong with CI
Sorry for the delay. It seems like the only path forward on this for now is to disable GPU tests until onnxruntime is updated to a newer version. |
@masahi, I ended up removing onnxruntime from the If test for now. Since the correct result in this case is very clear, there's not much need to generate the reference result with onnxruntime. Once we update to a newer ORT version I'll switch back.
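The workaround described above can be sketched as follows: compare the compiled output against a hardcoded expected value instead of a value produced by an onnxruntime reference run. This is a minimal illustration, not the PR's actual test code; the helper name `verify_against_expected` is hypothetical.

```python
import math

def verify_against_expected(produced, expected, tol=1e-5):
    """Compare a compiled model's output to a known-correct constant.

    Hypothetical helper: when the correct result is known a priori,
    checking against a constant sidesteps a buggy or outdated
    onnxruntime build on CI.
    """
    assert len(produced) == len(expected)
    for p, e in zip(produced, expected):
        assert math.isclose(p, e, rel_tol=tol), f"{p} != {e}"

# Per the discussion, the If test's correct output is 1, while the old
# ORT on CI incorrectly produced -2; comparing to the constant avoids ORT.
verify_against_expected([1.0], [1.0])
print("verified")
```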
thanks @jwfromm @mbrookhart @tmoreau89 |
* If operator support in ONNX.
* Small tweak.
* Added uses_gpu tag.
* Disable test on GPU until onnxruntime version is updated.
* Use parametrize_target to specify CPU only.
* Just dont use onnxruntime for now i guess.
This surprisingly small PR adds support for the If operator in the ONNX frontend. It turned out to be a fairly direct conversion to Relay!
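To make "a direct conversion to Relay" concrete, here is a minimal sketch of the idea: an ONNX `If` node carries a boolean condition plus `then` and `else` subgraphs, and maps naturally onto an expression-level conditional like `relay.If(cond, then_expr, else_expr)`. The names below (`OnnxIfNode`, `convert_if`) are illustrative stand-ins, not TVM's actual converter API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class OnnxIfNode:
    """Toy model of an ONNX If node: a condition plus two subgraphs."""
    cond: bool
    then_branch: Callable[[], Any]  # subgraph producing the then-value
    else_branch: Callable[[], Any]  # subgraph producing the else-value

def convert_if(node: OnnxIfNode) -> Any:
    # Relay's If is expression-based: both branches become expressions
    # and the node lowers to a single conditional expression, which is
    # why the conversion is so direct.
    then_expr = node.then_branch()
    else_expr = node.else_branch()
    return then_expr if node.cond else else_expr

# Mirrors the values from the CI discussion: the correct result is 1.
print(convert_if(OnnxIfNode(True, lambda: 1, lambda: -2)))  # → 1
```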