This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-1215] Allow dynamic shape exists in imperative mode #13283

Merged
merged 10 commits into apache:master on Nov 20, 2018

Conversation

@junrushao (Member) commented Nov 15, 2018

Description

This is the PR I took over from @zheng-da at #12400. All credit goes to Da Zheng.

This PR relaxes the constraint that an NDArray's shape must be pre-determined. For unit testing, it introduces a contrib operator called BooleanMask, a practical use case that actually produces a dynamic shape. Note that the boolean mask support is very experimental: it only allows 2-d inputs and a 1-d mask, because it exists solely to exercise the new functionality. We can improve it in future PRs.

This is an initial step for the roadmap #12732.
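As a quick illustration of the feature, here is a minimal sketch of how the operator can be exercised from Python, assuming it is exposed under contrib as `mx.nd.contrib.boolean_mask` (the exact name and signature may differ):

```python
import mxnet as mx

data = mx.nd.array([[1, 2], [3, 4], [5, 6]])  # 2-d input
mask = mx.nd.array([1, 0, 1])                 # 1-d mask

# Keep only the rows where the mask is nonzero. The output shape depends on
# the runtime values in `mask`, not just on the input shapes, so it cannot
# be inferred before execution -- exactly the dynamic-shape case this PR
# enables in imperative mode.
out = mx.nd.contrib.boolean_mask(data, mask)
print(out.shape)       # (2, 2) -- known only after the operator has run
print(out.asnumpy())   # [[1. 2.]
                       #  [5. 6.]]
```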

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
    • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
    • Nightly tests are added for complicated/long-running ones (e.g. changing the distributed kvstore)
    • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
    • For user-facing API changes, the API doc string has been updated.
    • For new C++ functions in header files, their functionality and arguments are documented.
    • For new examples, a README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable.
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To my best knowledge, examples are either not affected by this change or have been fixed to be compatible with it.

Changes

  • Introduce new methods in NDArray that allow delayed assignment of their shapes.
  • Introduce a boolean mask operator, which we believe will be useful in the future.
  • Add unit tests (see the test sketch below).
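To make the unit-test item concrete, a forward-pass test can use NumPy boolean indexing as the reference implementation. This is only a sketch, under the same assumption about the operator's contrib name:

```python
import numpy as np
import mxnet as mx

def test_boolean_mask_forward():
    data_np = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float32)
    mask_np = np.array([0, 1, 1], dtype=np.float32)

    out = mx.nd.contrib.boolean_mask(mx.nd.array(data_np), mx.nd.array(mask_np))

    # NumPy boolean indexing serves as the reference: keep rows 1 and 2.
    expected = data_np[mask_np.astype(bool)]
    np.testing.assert_allclose(out.asnumpy(), expected)

test_boolean_mask_forward()
```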

Comments

No comments.

TODO

  • Refactor the boolean index operator
  • Write the backward pass of the boolean index operator (see the autograd sketch after this list)
  • Test coverage
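Once the backward pass is in place, a quick gradient check through autograd could look like the sketch below (same naming assumption as above); the expected behavior is that rows selected by the mask receive the upstream gradient while unselected rows receive zeros:

```python
import mxnet as mx
from mxnet import autograd

data = mx.nd.array([[1, 2], [3, 4], [5, 6]])
mask = mx.nd.array([1, 0, 1])
data.attach_grad()

with autograd.record():
    out = mx.nd.contrib.boolean_mask(data, mask)
out.backward()  # upstream gradient defaults to ones

# Selected rows get ones scattered back; unselected rows stay zero.
print(data.grad.asnumpy())  # [[1. 1.], [0. 0.], [1. 1.]]
```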

@ZhennanQin (Contributor) left a comment:
Tests should be added as well. Overall, a good approach!

Inline review threads were opened on src/operator/contrib/boolean_mask.cc, src/operator/contrib/boolean_mask-inl.h, and src/imperative/imperative_utils.h.
The inline thread below concerns the new shape-less NDArray constructor (excerpt from the diff):

```cpp
 * \param dtype data type of this ndarray
 */
explicit NDArray(Context ctx, int dtype = mshadow::default_type_flag) {
  // Eagerly create a Chunk with a size-0 placeholder shape; the actual shape
  // is assigned later, once it becomes known at runtime.
  ptr_ = std::make_shared<Chunk>(TShape(mshadow::Shape1(0)), ctx, true, dtype);
```
Contributor:
Why not lazily create the Chunk when NDArray::Init() is called? Then we don't need to add the Chunk::Init() function.

Member Author:
@zheng-da Do you have any specific consideration about this? I am not sure.

Contributor:
Because TShape(mshadow::Shape1(0)) doesn't mean "no shape". It may confuse other developers, and it also makes the Chunk shape mismatch the NDArray shape, which carries potential risk.

Contributor:
The main reason we create a chunk here is to create the var in the chunk. Originally, I wanted to allow async execution. Now we only allow sync execution in imperative mode, so we probably don't need to create a chunk here.

Contributor:
Why was this PR merged without approval? I guess this comment wasn't addressed. @junrushao1994 @zheng-da @szha

Member Author:
@ZhennanQin Sorry, I was in too much of a hurry. Personally, I think it is okay to leave it 0-d for now. In the long term, we could support 0-d tensors in a more systematic way.

Contributor:
@ZhennanQin We discussed it and think it's OK to use a 0-dim tensor for now. Actually, we already use a 0-dim shape when the shape is unknown in other places, so it should be fine.

Contributor:
It's true that we should have left a comment about it here as well.

@kalyc (Contributor) commented Nov 16, 2018

@mxnet-label-bot add [pr-awaiting-review]
@junrushao1994 please take a look at the CI failure and re-trigger CI

@marcoabreu added the pr-awaiting-review label on Nov 16, 2018
@junrushao (Member Author):
CC: @szha @yidawang @zheng-da

Further inline review threads were opened on src/operator/contrib/boolean_mask-inl.h and src/operator/contrib/boolean_mask.cc.
@junrushao (Member Author):
Updated according to @zheng-da's comments.

@junrushao (Member Author):
I think we are ready to merge this PR. @zheng-da

@szha szha merged commit 779bdc5 into apache:master Nov 20, 2018
@junrushao (Member Author) commented Nov 20, 2018

We merged this PR per our discussions, both online and offline, and we are aware of the 0-d tensor it introduces. We sincerely thank @ZhennanQin for the very helpful and exemplary comments!

@ZhennanQin (Contributor):
The explanation is OK for me, but it would be better to post the result of your offline discussion on this page before merging, so that external reviewers can get involved. Thanks.

@junrushao (Member Author) commented Nov 20, 2018

@ZhennanQin I agree with every word of your comment. It is completely my fault for pushing this PR too hard, mainly because I didn't want to block upcoming work. I will definitely do better next time.
