Fix SAME_UPPER/SAME_LOWER (auto_pad attribute) in ConvTranspose #5368
Conversation
#4271 - is it good to include that as well, or is that a bit out of sync?
@hariharans29 Thank you for bringing this up. I have seen that PR before. By the way, is there any reason why that PR has not been merged? Did it pass all the CIs?
Thanks Jacky. I brought it up because that was also trying to address some aspect of the
I couldn't find a reviewer and it sort of slipped through. Yes, I think it passed all CIs. If you could review it and provide feedback, that would be great.
```diff
  // total padding size
  int64_t paddings = std::max<int64_t>(0, (in_size - 1) * stride + adj + (kernel - 1) * dilation + 1 - *out_size);
- if (pad_type == AutoPadType::SAME_UPPER) {  // pad more on head when paddings are odd.
+ if (pad_type == AutoPadType::SAME_LOWER) {  // pad more on head when paddings are odd.
```
Could you please share some context on this toggle? As far as I can see, the way it currently is in master seems all right per the spec: https://github.com/onnx/onnx/blob/master/docs/Operators.md#ConvTranspose.
I think this change is causing tests to break.
Here is more context: onnx/onnx#3019
Actually, the ONNX documentation needs to be updated as well. I think that's the reason why ORT has the incorrect behavior here in the first place...
I remember one deprecated model test failed. Let me rerun the CIs. Thank you.
/azp run Windows GPU CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, MacOS NoContribops CI Pipeline, Windows CPU CI Pipeline
Azure Pipelines successfully started running 4 pipeline(s).
/azp run Linux CPU CI Pipeline |
Force-pushed from b6b518d to 3d2d542
Force-pushed from 617895d to 8a4845f
winml/test/model/model_tests.cpp (Outdated)
```cpp
};
allDisabledTests.insert(std::begin(x86DisabledTests), std::end(x86DisabledTests));
#endif
allDisabledTests.insert(ORT_TSTR("cntk_simple_seg"));
```
I have prepared the updated cntk_simple_seg model generated with the corrected ORT. I plan to filter it out in this PR. Then, after the merge, I can update this model directly and propose another PR to add it back; that way it won't cause other PRs to fail.
Force-pushed from 807bef4 to 1d6a45d
Force-pushed from 1d6a45d to 871eeb4
Description:
To sync the definition of SAME_UPPER/SAME_LOWER across all operators and match the ONNX definition, swap the logic of SAME_UPPER and SAME_LOWER in ConvTranspose.
The definitions of SAME_UPPER and SAME_LOWER should be as follows: when the total padding amount along an axis is odd, SAME_UPPER places the extra padding at the end of the axis, and SAME_LOWER places it at the beginning.
Motivation and Context
The `auto_pad` attribute values `SAME_UPPER` and `SAME_LOWER` of `ConvTranspose` behave differently from the `auto_pad` attribute of other operators (pool- and conv-related operators). The behavior of the same attribute should be consistent across all operators. It also does not match the definition in ONNX.

`SAME_UPPER` and `SAME_LOWER` in other operators:
onnxruntime/onnxruntime/core/providers/cpu/nn/pool_attributes.h
Line 149 in c20fcf2
https://github.com/onnx/onnx/blob/b2ed660d0a065b8346816f2c3a95d79ca79b88c9/onnx/defs/nn/defs.cc#L1222
Update spec for Convtranspose to make it sync onnx/onnx#3019
cc @askhade