Conversation

@jcwchen
Contributor

@jcwchen jcwchen commented Oct 4, 2020

Description:
To sync the definition of SAME_UPPER/SAME_LOWER across all operators and make it match the ONNX definition, switch the logic of SAME_UPPER and SAME_LOWER in ConvTranspose.

The definition of SAME_UPPER and SAME_LOWER should be as follows:

if auto_pads == 'SAME_UPPER':
  pad_head = paddings // 2              # smaller half when paddings is odd
  pad_tail = paddings - paddings // 2   # larger half when paddings is odd
elif auto_pads == 'SAME_LOWER':
  pad_head = paddings - paddings // 2   # larger half when paddings is odd
  pad_tail = paddings // 2              # smaller half when paddings is odd
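
As a concrete illustration, here is a minimal Python sketch (the helper name split_pads and the total padding value 5 are hypothetical, not part of this PR) showing how an odd total padding splits between head and tail under each mode:

def split_pads(paddings, auto_pad):
    # integer (floor) division: "small" and "large" differ only when paddings is odd
    small, large = paddings // 2, paddings - paddings // 2
    if auto_pad == 'SAME_UPPER':    # extra padding goes to the tail (end)
        return small, large
    if auto_pad == 'SAME_LOWER':    # extra padding goes to the head (beginning)
        return large, small
    raise ValueError(auto_pad)

print(split_pads(5, 'SAME_UPPER'))  # (2, 3)
print(split_pads(5, 'SAME_LOWER'))  # (3, 2)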

Motivation and Context
The SAME_UPPER and SAME_LOWER behavior of ConvTranspose's auto_pad attribute differs from that of other operators (pooling- and convolution-related operators). The same attribute should behave the same way across all operators. It also does not match the definition in ONNX.

cc @askhade

@jcwchen jcwchen requested a review from a team as a code owner October 4, 2020 00:34
@hariharans29
Member

#4271 - is it good to include that as well, or is that a bit out of sync?

@jcwchen
Contributor Author

jcwchen commented Oct 7, 2020

#4271 - is it good to include that as well, or is that a bit out of sync?

@hariharans29 Thank you for bringing this up. I have seen that PR before.
The current changes in this PR are breaking something in the CIs, and adding more changes might complicate debugging. I am not sure what that PR is for, but I think its purpose is quite different from this one.

BTW, is there any reason why that PR has not been merged? Did it pass all the CIs?

@hariharans29
Member

hariharans29 commented Oct 7, 2020

#4271 - is it good to include that as well, or is that a bit out of sync?

@hariharans29 Thank you for bringing this up. I have seen that PR before.
The current changes in this PR are breaking something in the CIs, and adding more changes might complicate debugging. I am not sure what that PR is for, but I think its purpose is quite different from this one.

Thanks Jacky. I brought it up because that PR was also trying to address some aspect of the auto_pad attribute for the ConvTranspose op.

BTW, is there any reason why that PR has not been merged? Did it pass all the CIs?

I couldn't find a reviewer and it sort of slipped through. Yes, I think it passed all CIs. If you could review it and provide feedback, that would be great.

  // total padding size
  int64_t paddings = std::max<int64_t>(0, (in_size - 1) * stride + adj + (kernel - 1) * dilation + 1 - *out_size);
- if (pad_type == AutoPadType::SAME_UPPER) { // pad more on head when paddings are odd.
+ if (pad_type == AutoPadType::SAME_LOWER) { // pad more on head when paddings are odd.
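
For illustration only, a minimal Python sketch of how the total padding computed above would split after this change (the function name and the sample values are hypothetical, not ORT code):

def conv_transpose_pads(in_size, stride, kernel, dilation, adj, out_size, auto_pad):
    # same total-padding formula as the C++ snippet above
    paddings = max(0, (in_size - 1) * stride + adj + (kernel - 1) * dilation + 1 - out_size)
    small, large = paddings // 2, paddings - paddings // 2
    # after this PR, SAME_LOWER pads more on the head when paddings is odd
    return (large, small) if auto_pad == 'SAME_LOWER' else (small, large)

# e.g. in_size=3, stride=2, kernel=3, dilation=1, adj=0, out_size=6 -> total padding = 1
print(conv_transpose_pads(3, 2, 3, 1, 0, 6, 'SAME_UPPER'))  # (0, 1)
print(conv_transpose_pads(3, 2, 3, 1, 0, 6, 'SAME_LOWER'))  # (1, 0)
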
Member

Could you please share some context on this toggle? As far as I can see, the way it currently is in master seems all right as per the spec: https://github.com/onnx/onnx/blob/master/docs/Operators.md#ConvTranspose.

I think this change is causing tests to break.

Contributor Author

Here is more context: onnx/onnx#3019
Actually, the documentation in ONNX needs to be updated as well. I think that's the reason why ORT has incorrect behavior here in the first place...
I remember one deprecated model test failed. Let me rerun the CIs. Thank you.

@jcwchen
Contributor Author

jcwchen commented Dec 11, 2020

/azp run Windows GPU CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, MacOS NoContribops CI Pipeline, Windows CPU CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@jcwchen
Contributor Author

jcwchen commented Dec 11, 2020

/azp run Linux CPU CI Pipeline

@jcwchen jcwchen closed this Dec 11, 2020
@jcwchen jcwchen reopened this Dec 11, 2020
@jcwchen jcwchen force-pushed the fix-convtrans-same branch 2 times, most recently from b6b518d to 3d2d542 on March 23, 2021 18:18
@jcwchen jcwchen changed the title from "[WIP] Fix SAME_UPPER/SAME_LOWER (auto_pad attribute) in ConvTranspose" to "Fix SAME_UPPER/SAME_LOWER (auto_pad attribute) in ConvTranspose" on Mar 23, 2021
@jcwchen jcwchen force-pushed the fix-convtrans-same branch from 617895d to 8a4845f on June 20, 2021 19:34
};
allDisabledTests.insert(std::begin(x86DisabledTests), std::end(x86DisabledTests));
#endif
allDisabledTests.insert(ORT_TSTR("cntk_simple_seg"));
Contributor Author

I have prepared the updated cntk_simple_seg using the corrected ORT. I plan to filter the test out in this PR; after the merge I can update the model directly and propose another PR to add it back. That way it won't cause other PRs to fail.
