This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-14421] Make global pooling backwards compatible #15026

Closed
wants to merge 5 commits

Conversation

jmerkow
Contributor

@jmerkow jmerkow commented May 21, 2019

Description

There was a backward-compatibility-breaking change (#9730) to pooling in which pad is explicitly overwritten with zeros, causing global pooling to produce different results. This breaks any network trained before the change: layers after a global pooling layer now receive different input and therefore produce incorrect results.
To be clear, padding == 0 is mathematically correct. This PR does not affect any use of the layer without an explicit pad (the default is zero); it only affects the case where global_pool=True and pad is non-zero. It also adds a warning that padding with global_pool=True leads to incorrect results and should be avoided. This PR exists to provide backward compatibility only.

Issue: #14421
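
To illustrate why a non-zero pad changes global pooling output, here is a minimal, self-contained Python sketch (not MXNet code; the function name is invented for illustration): global average pooling over a zero-padded map divides by the padded area, so the padded zeros dilute the average.

```python
# Minimal sketch (not MXNet code) of why a non-zero pad changes the
# result of global average pooling: the padded zeros add nothing to the
# sum but enlarge the area the sum is divided by.

def global_avg_pool(feature, pad=0):
    """Global average pooling over a 2-D map zero-padded by `pad` per side."""
    h, w = len(feature), len(feature[0])
    total = sum(sum(row) for row in feature)  # padded zeros contribute 0
    return total / ((h + 2 * pad) * (w + 2 * pad))

fmap = [[4.0, 4.0],
        [4.0, 4.0]]
print(global_avg_pool(fmap, pad=0))  # 4.0 -- the true global average
print(global_avg_pool(fmap, pad=1))  # 1.0 -- zeros diluted the average
```

Networks trained before #9730 saw the padded (diluted) values; once pad is forced to zero, the same network feeds different activations to every downstream layer.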

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-14421], where #14421 ("Updating mxnet from 1.0.0, networks give different outputs") refers to the relevant issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Re-allow padding values to be passed when global_pool=true in pooling layers instead of forcing them to zero.
  • Added a warning that combining non-zero padding with global pooling is incorrect and should be avoided.
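
A hedged Python sketch of the behavior these two bullets describe (the function name and warning text are assumptions, not MXNet's actual implementation): keep the user-supplied pad for backward compatibility, but warn when it is combined with global pooling.

```python
# Hypothetical sketch of the PR's behavior (not MXNet's actual code):
# honor the user-supplied pad for backward compatibility, but warn that
# combining it with global pooling gives mathematically incorrect results.
import warnings

def init_pooling(global_pool, pad):
    """Mimics the compatibility check; name and message are assumptions."""
    if global_pool and any(pad):
        warnings.warn(
            "Non-zero padding with global_pool=True produces incorrect "
            "results and is honored only for backward compatibility.")
    return pad  # pad is no longer overwritten with zeros

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    kept = init_pooling(True, (1, 1))
print(kept)         # (1, 1) -- padding preserved
print(len(caught))  # 1      -- one warning emitted
```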

Comments

This is to allow backward compatibility only. It should not affect anyone unless they have explicitly set global_pool=True and a non-zero padding. It exists so that networks trained before the change still work, and should not adversely affect any users.

@@ -178,6 +178,12 @@ template<typename xpu, typename DType>
class PoolingOp {
public:
void Init(PoolingParam p) {
if (p.global_pool && (p.pad[0] || p.pad[1])) {
Contributor Author


I don't think this is the best place for this. Can someone suggest a better place? @piiswrong

@karan6181
Contributor

@mxnet-label-bot add [Operator, C++, pr-awaiting-review]

@marcoabreu marcoabreu added C++ Related to C++ Operator pr-awaiting-review PR is waiting for code review labels May 23, 2019
@leleamol
Contributor

leleamol commented Jun 7, 2019

The C++ label is meant for API defined in cpp-package. Removing the label as the issue is not related to those API.
@mxnet-label-bot remove [C++]

@marcoabreu marcoabreu removed the C++ Related to C++ label Jun 7, 2019
@piyushghai
Contributor

@TaoLv Can you take a look at this PR from @jmerkow ?

@jmerkow
Contributor Author

jmerkow commented Jun 17, 2019

I'm not sure this should be merged, but it could help those who trained with the previous behavior and need a workaround.

@larroy
Contributor

larroy commented Jun 20, 2019

@jmerkow could you look into the CI failures?

@sandeep-krishnamurthy
Contributor

Thank you @jmerkow for all the effort in debugging the root cause and putting together this PR. This PR and the issue are a great reference for people facing a similar problem. As you rightly mentioned, I think we cannot merge this PR because that would amount to preserving backward compatibility with a bug. We should instead suggest this workaround to affected users and encourage them to upgrade to the latest version of MXNet.

@jmerkow
Contributor Author

jmerkow commented Jun 27, 2019

I agree it's probably not worth merging since it's an edge case, but hopefully anyone who comes across this issue will find this thread and see the PR as a workaround. We've started the process of retraining or deprecating networks affected by this bug (a long process...), so we'll eventually be able to upgrade.

@sandeep-krishnamurthy
Contributor

Thank you. Closing the PR and the issue.

Sorry, it is unfortunate that you have to retrain so many models due to this bug. Keep us updated and let us know if you find any other issues.
