
Two C++ unit tests failing: ACTIVATION_PERF.ExecuteBidirectional, ACTIVATION_PERF.TimingCPU #13333

Closed
larroy opened this issue Nov 20, 2018 · 7 comments · Fixed by #13409
Labels: Bug, C++, Test

Comments


larroy commented Nov 20, 2018

Description

C++ unit tests failing on a CPU build without MKL:

ACTIVATION_PERF.ExecuteBidirectional
ACTIVATION_PERF.TimingCPU

Environment info (Required)

----------Python Info----------
Version      : 3.5.2
Compiler     : GCC 5.4.0 20160609
Build        : ('default', 'Nov 23 2017 16:37:01')
Arch         : ('32bit', 'ELF')
------------Pip Info-----------
Version      : 18.1
Directory    : /home/piotr/devel/mxnet/mxnet_py3/lib/python3.5/site-packages/pip
----------MXNet Info-----------
Version      : 1.3.1
Directory    : /home/piotr/devel/mxnet/python/mxnet
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform     : Linux-3.4.113-sun7i-armv7l-with-Ubuntu-16.04-xenial
system       : Linux
node         : bananapipro
release      : 3.4.113-sun7i
version      : #16 SMP PREEMPT Wed Jan 24 19:20:59 CET 2018
----------Hardware Info----------
machine      : armv7l
processor    : armv7l
Architecture:          armv7l
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
CPU max MHz:           1008.0000
CPU min MHz:           60.0000
Hypervisor vendor:     (null)
Virtualization type:   full
----------Network Test----------
Setting timeout: 10
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.2887 sec, LOAD: 1.0031 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0704 sec, LOAD: 1.3277 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0747 sec, LOAD: 1.0667 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0291 sec, LOAD: 0.8247 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0304 sec, LOAD: 0.8775 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0290 sec, LOAD: 0.5141 sec.

Package used (Python/R/Scala/Julia):
self-compiled

Build info (Required if built from source)

#!/bin/bash
set -ex
mkdir -p build && cd build
cmake \
    -DUSE_SSE=OFF \
    -DUSE_CUDA=OFF \
    -DUSE_OPENCV=ON \
    -DUSE_OPENMP=ON \
    -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
    -DCMAKE_C_COMPILER_LAUNCHER=ccache \
    -DCMAKE_C_COMPILER=gcc-5 \
    -DCMAKE_CXX_COMPILER=g++-5 \
    -DUSE_MKL_IF_AVAILABLE=OFF \
    -DUSE_SIGNAL_HANDLER=ON \
    -DCMAKE_BUILD_TYPE=Debug \
    -GNinja ..
ninja -j1

MXNet commit hash:
64657c2

Error Message:

[  PASSED  ] 80 tests.
[  FAILED  ] 2 tests, listed below:
[  FAILED  ] ACTIVATION_PERF.ExecuteBidirectional
[  FAILED  ] ACTIVATION_PERF.TimingCPU

[ RUN      ] ACTIVATION_PERF.ExecuteBidirectional
unknown file: Failure
C++ exception with description "[00:14:32] ../src/operator/nn/./activation-inl.h:207: Check failed: inputs.size() == softsign ? 3U : 2U (3 vs. 2) 

Stack trace returned 10 entries:
[bt] (0) tests/mxnet_unit_tests(dmlc::StackTrace[abi:cxx11](unsigned int)+0x63) [0x13939f8]
[bt] (1) tests/mxnet_unit_tests(dmlc::LogMessageFatal::~LogMessageFatal()+0x2f) [0x1393c60]
[bt] (2) tests/mxnet_unit_tests(void mxnet::op::ActivationGradCompute<mshadow::cpu>(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0xcb) [0x4665ec4]
[bt] (3) tests/mxnet_unit_tests(std::_Function_handler<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&), void (*)(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)>::_M_invoke(std::_Any_data const&, nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x4d) [0x1757386]
[bt] (4) tests/mxnet_unit_tests(std::function<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)>::operator()(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&) const+0x5d) [0x13a016a]
[bt] (5) tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::Execute()+0x14f) [0x13a47a0]
[bt] (6) tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::ExecuteBackward()+0x14f) [0x13a4b7c]
[bt] (7) tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::backward(unsigned int)+0x99) [0x13bac0a]
[bt] (8) tests/mxnet_unit_tests(mxnet::test::OperatorRunner<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >::RunGenericOperatorBackward(mxnet::test::op::OpInfo<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >*, unsigned int)+0xa9) [0x13ad7ae]
[bt] (9) tests/mxnet_unit_tests(mxnet::test::OperatorRunner<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >::RunBidirectional(bool, std::vector<nnvm::TShape, std::allocator<nnvm::TShape> > const&, std::vector<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, unsigned int)+0x5b) [0x13a4db8]
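
For reference, the failing check at activation-inl.h:207 can be reconstructed from the message above, since dmlc's CHECK_EQ(a, b) reports "Check failed: a == b (a_value vs. b_value)" on failure. Roughly (a reconstruction, not the verbatim source):

// Reconstructed from the log above; the exact source at line 207 may differ.
// For relu the test harness supplies 3 inputs while the operator expects
// only 2, hence the "(3 vs. 2)" in the failure message.
const bool softsign = param.act_type == activation::kSoftSign;
CHECK_EQ(inputs.size(), softsign ? 3U : 2U);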

Minimal reproducible example

(mxnet_py3) piotr@bananapipro:0:~/devel/mxnet (master)+$ build/tests/mxnet_unit_tests --gtest_filter="ACTIVATION_PERF.ExecuteBidirectional"

Note: Google Test filter = ACTIVATION_PERF.ExecuteBidirectional
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from ACTIVATION_PERF
[ RUN      ] ACTIVATION_PERF.ExecuteBidirectional
unknown file: Failure
C++ exception with description "[09:30:57] ../src/operator/nn/./activation-inl.h:207: Check failed: inputs.size() == softsign ? 3U : 2U (3 vs. 2) 

Stack trace returned 10 entries:
[bt] (0) build/tests/mxnet_unit_tests(dmlc::StackTrace[abi:cxx11](unsigned int)+0x63) [0x13939f8]
[bt] (1) build/tests/mxnet_unit_tests(dmlc::LogMessageFatal::~LogMessageFatal()+0x2f) [0x1393c60]
[bt] (2) build/tests/mxnet_unit_tests(void mxnet::op::ActivationGradCompute<mshadow::cpu>(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0xcb) [0x4665ec4]
[bt] (3) build/tests/mxnet_unit_tests(std::_Function_handler<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&), void (*)(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)>::_M_invoke(std::_Any_data const&, nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x4d) [0x1757386]
[bt] (4) build/tests/mxnet_unit_tests(std::function<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)>::operator()(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&) const+0x5d) [0x13a016a]
[bt] (5) build/tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::Execute()+0x14f) [0x13a47a0]
[bt] (6) build/tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::ExecuteBackward()+0x14f) [0x13a4b7c]
[bt] (7) build/tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::backward(unsigned int)+0x99) [0x13bac0a]
[bt] (8) build/tests/mxnet_unit_tests(mxnet::test::OperatorRunner<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >::RunGenericOperatorBackward(mxnet::test::op::OpInfo<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >*, unsigned int)+0xa9) [0x13ad7ae]
[bt] (9) build/tests/mxnet_unit_tests(mxnet::test::OperatorRunner<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >::RunBidirectional(bool, std::vector<nnvm::TShape, std::allocator<nnvm::TShape> > const&, std::vector<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, unsigned int)+0x5b) [0x13a4db8]

" thrown in the test body.
[  FAILED  ] ACTIVATION_PERF.ExecuteBidirectional (294 ms)
[----------] 1 test from ACTIVATION_PERF (294 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (296 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] ACTIVATION_PERF.ExecuteBidirectional

 1 FAILED TEST

(mxnet_py3) piotr@bananapipro:1:~/devel/mxnet (master)+$ build/tests/mxnet_unit_tests --gtest_filter="ACTIVATION_PERF.TimingCPU"

Note: Google Test filter = ACTIVATION_PERF.TimingCPU
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from ACTIVATION_PERF
[ RUN      ] ACTIVATION_PERF.TimingCPU
unknown file: Failure
C++ exception with description "[09:31:32] ../src/operator/nn/./activation-inl.h:207: Check failed: inputs.size() == softsign ? 3U : 2U (3 vs. 2) 

Stack trace returned 10 entries:
[bt] (0) build/tests/mxnet_unit_tests(dmlc::StackTrace[abi:cxx11](unsigned int)+0x63) [0x13939f8]
[bt] (1) build/tests/mxnet_unit_tests(dmlc::LogMessageFatal::~LogMessageFatal()+0x2f) [0x1393c60]
[bt] (2) build/tests/mxnet_unit_tests(void mxnet::op::ActivationGradCompute<mshadow::cpu>(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0xcb) [0x4665ec4]
[bt] (3) build/tests/mxnet_unit_tests(std::_Function_handler<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&), void (*)(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)>::_M_invoke(std::_Any_data const&, nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x4d) [0x1757386]
[bt] (4) build/tests/mxnet_unit_tests(std::function<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)>::operator()(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&) const+0x5d) [0x13a016a]
[bt] (5) build/tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::Execute()+0x14f) [0x13a47a0]
[bt] (6) build/tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::ExecuteBackward()+0x14f) [0x13a4b7c]
[bt] (7) build/tests/mxnet_unit_tests(mxnet::test::op::CoreOpExecutor<float, float>::backward(unsigned int)+0x99) [0x13bac0a]
[bt] (8) build/tests/mxnet_unit_tests(mxnet::test::OperatorRunner<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >::RunGenericOperatorBackward(mxnet::test::op::OpInfo<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >*, unsigned int)+0xa9) [0x13ad7ae]
[bt] (9) build/tests/mxnet_unit_tests(mxnet::test::OperatorRunner<mxnet::test::op::CoreOpProp, mxnet::test::op::CoreOpExecutor<float, float> >::RunBidirectional(bool, std::vector<nnvm::TShape, std::allocator<nnvm::TShape> > const&, std::vector<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, unsigned int)+0x5b) [0x13a4db8]

" thrown in the test body.
[  FAILED  ] ACTIVATION_PERF.TimingCPU (385 ms)
[----------] 1 test from ACTIVATION_PERF (386 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (388 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] ACTIVATION_PERF.TimingCPU

 1 FAILED TEST

Steps to reproduce

Can be reproduced in QEMU:

ci/build.py -p armv7
ci/build.py -p test.arm_qemu -b && docker run -p2222:2222 -ti mxnetci/build.test.arm_qemu

# In another terminal
rsync -e 'ssh -p2222' -vaP build/mxnet-1.3.1-py2.py3-none-any.whl build/tests/mxnet_unit_tests qemu@localhost:
ssh -p2222 qemu@localhost
$ ./mxnet_unit_tests --gtest_filter="ACTIVATION_PERF.ExecuteBidirectional"

larroy commented Nov 20, 2018

@mxnet-label-bot labels [Bug,Test,Arm,C++]


larroy commented Nov 20, 2018

The problem is actually not ARM-specific; it happens whenever MXNet is compiled without GPU and without MKL. We are not running those tests in CI.


lebeg commented Nov 20, 2018

Maybe rename and edit the description of the issue then?


vdantu commented Nov 20, 2018

@mxnet-label-bot add [Bug,Test,Arm,C++]

larroy added a commit to larroy/mxnet that referenced this issue Nov 20, 2018

larroy commented Nov 20, 2018

Could reproduce on amd64 with:

cmake \
    -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
    -DCMAKE_C_COMPILER_LAUNCHER=ccache \
    -DUSE_MKL_IF_AVAILABLE=OFF \
    -DUSE_CPP_PACKAGE=ON \
    -DUSE_CUDA=OFF \
    -DUSE_OPENMP=ON \
    -DUSE_OPENCV=ON \
    -DCMAKE_BUILD_TYPE=Debug \
    -GNinja ..
ninja -v

larroy changed the title from "Two C++ unit test failing on ARMv7 ACTIVATION_PERF.ExecuteBidirectional ACTIVATION_PERF.TimingCPU" to "Two C++ unit test failing ACTIVATION_PERF.ExecuteBidirectional ACTIVATION_PERF.TimingCPU" Nov 20, 2018

larroy commented Nov 20, 2018

@mxnet-label-bot remove [Arm]

marcoabreu removed the ARM label Nov 20, 2018

larroy commented Nov 23, 2018

ElemwiseShape is asserting because relu's backward takes 2 inputs, but the input count is hardcoded as 3: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/activation.cc#L176

Introduced in this commit
b2ec05b

PR: #10847

The shape needs to be conditioned on the activation function; a sketch of the idea follows.
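
A standalone illustration of that idea, assuming names that mirror MXNet's activation enum (kReLU, kSoftSign, ...); this is a hypothetical sketch, not the actual patch (see #13409 for the real fix):

// Hypothetical sketch: derive the expected backward input count from the
// activation type instead of hardcoding 3 for every activation.
#include <cassert>
#include <cstddef>

enum ActivationOpType { kReLU, kSigmoid, kTanh, kSoftReLU, kSoftSign };

// Number of inputs the activation backward pass expects: softsign needs the
// forward input in addition to the output gradient and the forward output.
std::size_t GradInputCount(ActivationOpType act_type) {
  return act_type == kSoftSign ? 3U : 2U;  // out_grad, output [, input]
}

// Shape-inference-style check, conditioned on act_type.
bool CheckGradInputs(ActivationOpType act_type, std::size_t num_inputs) {
  return num_inputs == GradInputCount(act_type);
}

int main() {
  assert(CheckGradInputs(kReLU, 2));      // relu backward: out_grad, output
  assert(!CheckGradInputs(kReLU, 3));     // the hardcoded 3 trips the check
  assert(CheckGradInputs(kSoftSign, 3));  // softsign also needs the input
  return 0;
}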

larroy added 14 commits to larroy/mxnet that referenced this issue between Nov 27 and Dec 4, 2018
eric-haibin-lin pushed a commit that referenced this issue Dec 4, 2018
[MXNET-1234] Fix shape inference problems in Activation backward (#13409)

* Provide a failing test for ReLU activation shape inference bug
* Fix Activation backward shape inference (fixes: #13333)
* Add softsign Activation to test_gluon.py
* Use Activation on GPU if we are using CUDNN and not MKLDNN, as is happening right now
* Don't disable MKLDNN
The same fix was subsequently picked up by further commits referencing this issue: a second push by eric-haibin-lin (Dec 4, 2018), sergeykolychev's merge of the version-bump PR that included it (#13478, Dec 5, 2018), and two pushes to zhaoyao73's fork (Dec 13, 2018).