This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[Numpy] Change semantics of ndim for operators in src/operator/contrib #14409

Merged
merged 2 commits into from
Mar 15, 2019

Conversation

junrushao
Member

@junrushao junrushao commented Mar 13, 2019

(This PR targets the numpy branch, so please be aware it is not intended for production until the numpy branch is merged successfully.)

This PR is part of the work on #14323 and improves the following contrib operators.

  • adamw-inl.h
  • adaptive_avg_pooling-inl.h
  • bilinear_resize-inl.h
  • bounding_box-inl.h
  • count_sketch-inl.h
  • deformable_convolution-inl.h
  • fft-inl.h
  • ifft-inl.h
  • index_copy-inl.h
  • multi_proposal-inl.h
  • proposal-inl.h
  • quadratic_op-inl.h
  • sync_batch_norm-inl.h
  • transformer-inl.h

In addition to the header files assigned to me, I also changed some of the following .cc files that relate to ndim. I am not 100% sure about the coverage, so reviewers, please let me know if I missed anything.

  • boolean_mask.cc
  • dgl_graph.cc
  • nnvm_to_onnx.cc
  • optimizer_op.cc

The contrib operators are mostly irregular workloads contributed by many people, which makes them one of the most difficult parts to migrate (another difficult part is customized/Python operators). I am trying my best not to break anything. However, if there is anything I didn't take into consideration, please don't hesitate to let me know. Many thanks!

CC: @reminisce @szha @eric-haibin-lin @zheng-da @yzhliu @wkcn

@karan6181
Contributor

@mxnet-label-bot add [Numpy, Operator, pr-awaiting-review]

@wkcn
Member

wkcn commented Mar 14, 2019

Could we add a cast to be compatible with nnvm::TShape/Tuple?

@reminisce
Contributor

@wkcn Are you referring to converting nnvm::Tuple to mxnet::Tuple? I wonder why it's needed?

Review threads:
  • include/mxnet/tuple.h
  • src/operator/contrib/adaptive_avg_pooling-inl.h
  • src/operator/contrib/adaptive_avg_pooling-inl.h
  • src/operator/contrib/dgl_graph.cc
  • src/operator/contrib/dgl_graph.cc
  • src/operator/contrib/index_copy-inl.h
  • src/operator/contrib/optimizer_op.cc
  • src/operator/contrib/quadratic_op-inl.h
  • src/operator/contrib/transformer-inl.h
Member

@wkcn wkcn left a comment


@wkcn Are you referring to converting nnvm::Tuple to mxnet::Tuple? I wonder why it's needed?

Yes. I think some custom C++ operators written by users may use nnvm::Tuple, so adding a cast would keep them compatible. But it may not be necessary.

Review threads:
  • include/mxnet/tuple.h
  • include/mxnet/tuple.h
@reminisce
Contributor

@junrushao1994 Could you address the CR comments so that we can merge this PR? Thanks.

@junrushao
Member Author

@reminisce of course! Will do it today!

@junrushao
Member Author

@reminisce Hey, I have updated it accordingly.

@reminisce
Contributor

Thanks @junrushao1994 and @wkcn. This is merged.

@reminisce reminisce merged commit 0731af7 into apache:numpy Mar 15, 2019
reminisce pushed a commit that referenced this pull request Apr 4, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 5, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 5, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 5, 2019
reminisce pushed a commit that referenced this pull request Apr 6, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 10, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 11, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 12, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 13, 2019
reminisce pushed a commit to reminisce/mxnet that referenced this pull request Apr 15, 2019
szha pushed a commit that referenced this pull request Apr 16, 2019
* [numpy] Shape support scalar tensor (#14315)

* Support scalar and zero-size tensors with np.sum

* Add sanity check when ndim is set

* [Numpy] Change semantics of ndim for operators in `src/operator/contrib` (#14409)

* Initial commit

* Address comments

* [WIP] Use new shape definition (#14453)

* Init checkin

* Fix ndarray alloc bug

* Use TShape(0) as default empty tuple params

* Fix bugs

* Fix TShape init value

* Fix infer shape pass shape type and reshape infer shape func

* [numpy] Fix unit tests after introducing numpy compatible shapes (#14487)

* Fix infer shape rnn

* Fix boolean mask and custom op unit tests

* Fix multi proposal

* Fix diag

* Add global switch for backward compatibility and fix infer shape bugs

* Fix slice op infer shape

* Fix rnn infer shape

* Add util funcs for ndim_is_known and dim_size_is_known

* Revert rnn_cell.py

* Fix a bug to pass the test in test_contrib_rnn (#14520)

* fix.

* remove type conversion.

* remove type cast.

* [numpy] Fix test_dynamic_shape.test_dynamic_shape (#14538)

* Initial commit

* Address comments from Jun

* [numpy] Fix numpy import in python2 (#14537)

* Fix several test failures

* Fix subgraph op infer shape

* Fix sparse slice

* Fix deconv infer shape

* Fix numpy import compatibility problem in python2

* fix concat and slice (#14549)

* fix R-package (#14536)

* Fix cpp package build after using new shape definition (#14554)

* Fix pooling_v1 and deformable_convolution param initialization (#14577)

* Fix pooling_v1 param initialization

* Fix deformable_convolution param initialization

* [Numpy] Misc fix (#14612)

* [Numpy] Misc Fix

* fix build

* !shape_is_none => shape_is_known

* Address comments

* Fix

* [Numpy] fix test_operator_gpu.test_upsampling_bilinear_with_type (#14557)

* Fix test_operator_gpu.test_upsampling_bilinear_with_type

* Address comments

* [Numpy] Java/Scala modification (#14625)

* modify jni to support 0 dim/shape

* fix transpose axes default value

* fix shape index bug (#14630)

* fix jni lint (#14634)

* [numpy] Fix numpy branch failing tests in CI (#14639)

* Remove numpy namespaces for operator registration

* Fix bug when shape is completely unknown

* Fix signed/unsigned compare warning

* Fix CI

* Fix pylint

* Avoid launching gpu kernels for zero-size output tensors

* Fix test_ndarray

* Fix binary broadcast with zero-size tensors

* Better error message for infer shape failure in imperative

* Fix TShape constructor ambiguity on certain platforms

* Fix mkldnn build failure

* Fix build failure in gpu and cpp test

* Fix gpu cpp test build with mkldnn

* Fix mkldnn cpp test

* Fix concatenating zero-size tensors

* Avoid letting mkldnn handle zero-size tensors in concat

* Fix quantized_concat infer shape

* Try to fix perl c api

* fix invalid ndarray dispose (#14657)

* swig fixes for the changes in c_api.h (#14655)

* Rename np_comp to np_compat for readability

* Fix import error

* Keep old c apis unchanged

* Fix lint

* Rebase and fix build

* Fix R build failure

* Fix Perl build failure

* Rebase with master

* Address cr comments

* Use just one scope to represent numpy compatibility

* Add code comment to NumpyScope object in Scala

* Add use_np_compat decorator

* Fix pylint
kedarbellare pushed a commit to kedarbellare/incubator-mxnet that referenced this pull request Apr 20, 2019
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
Labels
Numpy Operator pr-awaiting-review PR is waiting for code review
Projects
None yet
Development


5 participants