[numpy] Support zero-dim and zero-size tensors in MXNet (apache#14661)
* [numpy] Shape support scalar tensor (apache#14315)

* Support scalar and zero-size tensors with np.sum

* Add sanity check when ndim is set

* [Numpy] Change semantics of ndim for operators in `src/operator/contrib` (apache#14409)

* Initial commit

* Address comments

* [WIP] Use new shape definition (apache#14453)

* Init checkin

* Fix ndarray alloc bug

* Use TShape(0) as default empty tuple params

* Fix bugs

* Fix TShape init value

* Fix infer shape pass shape type and reshape infer shape func

* [numpy] Fix unit tests after introducing numpy compatible shapes (apache#14487)

* Fix infer shape rnn

* Fix boolean mask and custom op unit tests

* Fix multi proposal

* Fix diag

* Add global switch for backward compatibility and fix infer shape bugs

* Fix slice op infer shape

* Fix rnn infer shape

* Add util funcs for ndim_is_known and dim_size_is_known (see the sketch after this list)

* Revert rnn_cell.py

* Fix a bug to pass the test in test_contrib_rnn (apache#14520)

* fix.

* remove type conversion.

* remove type cast.

* [numpy] Fix test_dynamic_shape.test_dynamic_shape (apache#14538)

* Initial commit

* Address comments from Jun

* [numpy] Fix numpy import in python2 (apache#14537)

* Fix several test failures

* Fix subgraph op infer shape

* Fix sparse slice

* Fix deconv infer shape

* Fix numpy import compatibility problem in python2

* fix concat and slice (apache#14549)

* fix R-package (apache#14536)

* Fix cpp package build after using new shape definition (apache#14554)

* Fix pooling_v1 and deformable_convolution param initialization (apache#14577)

* Fix pooling_v1 param initialization

* Fix deformable_convolution param initialization

* [Numpy] Misc fix (apache#14612)

* [Numpy] Misc Fix

* fix build

* !shape_is_none => shape_is_known

* Address comments

* Fix

* [Numpy] fix test_operator_gpu.test_upsampling_bilinear_with_type (apache#14557)

* Fix test_operator_gpu.test_upsampling_bilinear_with_type

* Address comments

* [Numpy] Java/Scala modification (apache#14625)

* modify jni to support 0 dim/shape

* fix transpose axes default value

* fix shape index bug (apache#14630)

* fix jni lint (apache#14634)

* [numpy] Fix numpy branch failing tests in CI (apache#14639)

* Remove numpy namespaces for operator registration

* Fix bug when shape is completely unknown

* Fix signed/unsigned compare warning

* Fix CI

* Fix pylint

* Avoid launching gpu kernels for zero-size output tensors

* Fix test_ndarray

* Fix binary broadcast with zero-size tensors

* Better error message for infer shape failure in imperative

* Fix TShape constructor ambiguity on certain platforms

* Fix mkldnn build failure

* Fix build failure in gpu and cpp test

* Fix gpu cpp test build with mkldnn

* Fix mkldnn cpp test

* Fix concatenating zero-size tensors

* Avoid letting mkldnn handle zero-size tensors in concat

* Fix quantized_concat infer shape

* Try to fix perl c api

* fix invalid ndarray dispose (apache#14657)

* swig fixes for the changes in c_api.h (apache#14655)

* Rename np_comp to np_compat for readability

* Fix import error

* Keep old c apis unchanged

* Fix lint

* Rebase and fix build

* Fix R build failure

* Fix Perl build failure

* Rebase with master

* Address cr comments

* Use just one scope to represent numpy compatibility

* Add code comment to NumpyScope object in Scala

* Add use_np_compat decorator

* Fix pylint
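
For context on the convention these commits adopt: the previous shape definition used ndim == 0 to mean "shape unknown", which made it impossible to represent NumPy scalars. The new definition reserves -1 for "unknown", so 0 becomes a legal rank (zero-dim tensor) and a legal axis size (zero-size tensor). Below is a minimal sketch of the two helpers referenced in the list above, assuming a signed dim_t as in include/mxnet/tuple.h; it is illustrative, not the exact upstream source:

#include <cstdint>

using dim_t = int64_t;  // signed so an individual axis can also be marked unknown

// ndim == -1 : rank unknown; ndim == 0 : scalar (zero-dim) tensor.
inline bool ndim_is_known(const int ndim) {
  return ndim != -1;
}

// dim_size == -1 : axis size unknown; dim_size == 0 : zero-size axis.
inline bool dim_size_is_known(const dim_t dim_size) {
  return dim_size != -1;
}
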
reminisce authored and haohuw committed Jun 23, 2019
1 parent bf02271 commit 1a8d18d
Showing 158 changed files with 2,838 additions and 1,145 deletions.
6 changes: 3 additions & 3 deletions R-package/src/ndarray.cc
@@ -179,9 +179,9 @@ Rcpp::RObject NDArrayPacker::CreateNDArrayPacker() {
 }
 
 Rcpp::Dimension NDArray::dim() const {
-  mx_uint ndim;
-  const mx_uint *pshape;
-  MX_CALL(MXNDArrayGetShape(
+  int ndim;
+  const int *pshape;
+  MX_CALL(MXNDArrayGetShapeEx(
       ptr_->handle, &ndim, &pshape));
   Rcpp::IntegerVector dat(pshape, pshape + ndim);
   std::reverse(dat.begin(), dat.end());
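
The Ex-suffixed C API is what makes the R binding above work: the rank and the shape data are now signed so that -1 can signal "unknown". A hypothetical standalone caller (PrintShape is illustrative; MXNDArrayGetShapeEx and MXGetLastError are the C API entry points this commit targets):

#include <mxnet/c_api.h>
#include <cstdio>

// Illustrative helper: prints the shape of an existing NDArrayHandle.
// ndim == -1 means the shape is unknown; ndim == 0 is a valid scalar;
// an axis of size 0 yields a zero-size tensor.
void PrintShape(NDArrayHandle handle) {
  int ndim;
  const int *pshape;
  if (MXNDArrayGetShapeEx(handle, &ndim, &pshape) != 0) {
    std::printf("error: %s\n", MXGetLastError());
    return;
  }
  if (ndim == -1) {
    std::printf("shape unknown\n");
    return;
  }
  std::printf("ndim=%d:", ndim);
  for (int i = 0; i < ndim; ++i) {
    std::printf(" %d", pshape[i]);
  }
  std::printf("\n");
}
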
20 changes: 10 additions & 10 deletions R-package/src/symbol.cc
@@ -167,8 +167,8 @@ Symbol::RObjectType Symbol::GetOutput(mx_uint index) const {
 
 // helper function to convert shape into Rcpp vector
 inline Rcpp::List BuildShapeData(mx_uint shape_size,
-                                 const mx_uint *shape_ndim,
-                                 const mx_uint **shape_data,
+                                 const int *shape_ndim,
+                                 const int **shape_data,
                                  const std::vector<std::string> &names) {
   Rcpp::List ret(shape_size);
   for (mx_uint i = 0; i < shape_size; ++i) {
@@ -185,7 +185,7 @@ SEXP Symbol::InferShape(const Rcpp::List& kwargs) const {
       << "Need to pass parameters in key=value style.\n";
   std::vector<std::string> keys = kwargs.names();
   std::vector<mx_uint> arg_ind_ptr(1, 0);
-  std::vector<mx_uint> arg_shape_data;
+  std::vector<int> arg_shape_data;
 
   for (size_t i = 0; i < kwargs.size(); ++i) {
     RCHECK(keys[i].length() != 0)
@@ -197,17 +197,17 @@ SEXP Symbol::InferShape(const Rcpp::List& kwargs) const {
   std::vector<const char*> c_keys = CKeys(keys);
 
   mx_uint in_shape_size;
-  const mx_uint *in_shape_ndim;
-  const mx_uint **in_shape_data;
+  const int *in_shape_ndim;
+  const int **in_shape_data;
   mx_uint out_shape_size;
-  const mx_uint *out_shape_ndim;
-  const mx_uint **out_shape_data;
+  const int *out_shape_ndim;
+  const int **out_shape_data;
   mx_uint aux_shape_size;
-  const mx_uint *aux_shape_ndim;
-  const mx_uint **aux_shape_data;
+  const int *aux_shape_ndim;
+  const int **aux_shape_data;
   int complete;
 
-  MX_CALL(MXSymbolInferShape(
+  MX_CALL(MXSymbolInferShapeEx(
      handle_, static_cast<mx_uint>(kwargs.size()), dmlc::BeginPtr(c_keys),
      dmlc::BeginPtr(arg_ind_ptr), dmlc::BeginPtr(arg_shape_data),
      &in_shape_size, &in_shape_ndim, &in_shape_data,
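
As in the old API, the keyword shapes are flattened into one buffer with CSR-style offsets; only the element type changes from mx_uint to int so unknown axes can be encoded. A sketch of the encoding fed to MXSymbolInferShapeEx (the shape values here are made up for illustration):

#include <cstdint>
#include <vector>

// Shapes {"data": (32, 0, 28), "weight": (10, 784)} flattened for the call:
// shape i occupies arg_shape_data[arg_ind_ptr[i] .. arg_ind_ptr[i+1]).
// 0 is now a legal axis size; -1 would mark an axis as unknown.
std::vector<int> arg_shape_data = {32, 0, 28, 10, 784};
std::vector<uint32_t> arg_ind_ptr = {0, 3, 5};
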
8 changes: 4 additions & 4 deletions cpp-package/include/mxnet-cpp/ndarray.hpp
@@ -397,11 +397,11 @@ inline size_t NDArray::Size() const {
 }
 
 inline std::vector<mx_uint> NDArray::GetShape() const {
-  const mx_uint *out_pdata;
-  mx_uint out_dim;
-  MXNDArrayGetShape(blob_ptr_->handle_, &out_dim, &out_pdata);
+  const int *out_pdata;
+  int out_dim;
+  MXNDArrayGetShapeEx(blob_ptr_->handle_, &out_dim, &out_pdata);
   std::vector<mx_uint> ret;
-  for (mx_uint i = 0; i < out_dim; ++i) {
+  for (int i = 0; i < out_dim; ++i) {
     ret.push_back(out_pdata[i]);
   }
   return ret;
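
One consequence for cpp-package users: a zero-dim (scalar) array now round-trips through GetShape() as an empty vector rather than being conflated with "shape unknown". The signed loop index also matches the "Fix signed/unsigned compare warning" item in the commit message. A hedged usage sketch (IsScalar is hypothetical, not part of the package):

#include "mxnet-cpp/MxNetCpp.h"

// Hypothetical check: with the new semantics, an empty shape vector means a
// scalar tensor holding exactly one element, not a missing shape.
bool IsScalar(const mxnet::cpp::NDArray &arr) {
  return arr.GetShape().empty();
}
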
32 changes: 16 additions & 16 deletions cpp-package/include/mxnet-cpp/symbol.hpp
@@ -188,7 +188,7 @@ inline void Symbol::InferShape(
 
   std::vector<const char *> keys;
   std::vector<mx_uint> arg_ind_ptr;
-  std::vector<mx_uint> arg_shape_data;
+  std::vector<int> arg_shape_data;
 
   for (const auto &arg : arg_shapes) {
     keys.push_back(arg.first.c_str());
@@ -200,40 +200,40 @@ inline void Symbol::InferShape(
   arg_ind_ptr.push_back(arg_shape_data.size());
 
   mx_uint in_shape_size;
-  const mx_uint *in_shape_ndim;
-  const mx_uint **in_shape_data;
+  const int *in_shape_ndim;
+  const int **in_shape_data;
   mx_uint out_shape_size;
-  const mx_uint *out_shape_ndim;
-  const mx_uint **out_shape_data;
+  const int *out_shape_ndim;
+  const int **out_shape_data;
   mx_uint aux_shape_size;
-  const mx_uint *aux_shape_ndim;
-  const mx_uint **aux_shape_data;
+  const int *aux_shape_ndim;
+  const int **aux_shape_data;
   int complete;
 
-  CHECK_EQ(MXSymbolInferShape(GetHandle(), keys.size(), keys.data(),
-                              arg_ind_ptr.data(), arg_shape_data.data(),
-                              &in_shape_size, &in_shape_ndim, &in_shape_data,
-                              &out_shape_size, &out_shape_ndim, &out_shape_data,
-                              &aux_shape_size, &aux_shape_ndim, &aux_shape_data,
-                              &complete),
+  CHECK_EQ(MXSymbolInferShapeEx(GetHandle(), keys.size(), keys.data(),
+                                arg_ind_ptr.data(), arg_shape_data.data(),
+                                &in_shape_size, &in_shape_ndim, &in_shape_data,
+                                &out_shape_size, &out_shape_ndim, &out_shape_data,
+                                &aux_shape_size, &aux_shape_ndim, &aux_shape_data,
+                                &complete),
           0);
 
   if (complete) {
    for (mx_uint i = 0; i < in_shape_size; ++i) {
      in_shape->push_back(std::vector<mx_uint>());
-     for (mx_uint j = 0; j < in_shape_ndim[i]; ++j) {
+     for (int j = 0; j < in_shape_ndim[i]; ++j) {
       (*in_shape)[i].push_back(in_shape_data[i][j]);
      }
    }
    for (mx_uint i = 0; i < aux_shape_size; ++i) {
      aux_shape->push_back(std::vector<mx_uint>());
-     for (mx_uint j = 0; j < aux_shape_ndim[i]; ++j) {
+     for (int j = 0; j < aux_shape_ndim[i]; ++j) {
       (*aux_shape)[i].push_back(aux_shape_data[i][j]);
      }
    }
    for (mx_uint i = 0; i < out_shape_size; ++i) {
      out_shape->push_back(std::vector<mx_uint>());
-     for (mx_uint j = 0; j < out_shape_ndim[i]; ++j) {
+     for (int j = 0; j < out_shape_ndim[i]; ++j) {
       (*out_shape)[i].push_back(out_shape_data[i][j]);
      }
    }
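
On output, MXSymbolInferShapeEx reports per-array ranks in the *_shape_ndim arrays and axis sizes in the *_shape_data arrays, using the same -1 convention as the inputs. A small illustrative decoder (FormatShape is hypothetical) showing how a caller might render one inferred shape:

#include <string>

// Illustrative: render one shape returned by MXSymbolInferShapeEx.
// ndim == -1 : rank unknown; data[j] == -1 : axis j unknown; 0 is a real size.
std::string FormatShape(int ndim, const int *data) {
  if (ndim == -1) {
    return "(unknown)";
  }
  std::string out = "(";
  for (int j = 0; j < ndim; ++j) {
    out += (data[j] == -1) ? std::string("?") : std::to_string(data[j]);
    if (j + 1 < ndim) {
      out += ", ";
    }
  }
  return out + ")";
}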