[mkldnn-v1.0] Rebase with master (#16649)
* fixed broken links across multiple files (#16581)

* fix missing docs due to git add issues (#16496)

* Create SECURITY.md (#16573)

* Create SECURITY.md

* Update SECURITY.md

* [Numpy] Support N_D(N>=3) batch_dot (#16586)

* Support N_D(N>=3) batch_dot

* use 1E-4

* fix lint

* remove unnecessary comment

* Update test_numpy_op.py
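
As a rough illustration of what the N-D support above enables (a sketch with hypothetical shapes; assumes numpy mode and the `npx.batch_dot` operator):

```python
from mxnet import np, npx
npx.set_np()

# batch_dot previously required 3-D inputs; with this change the leading
# dimensions act as matching batch dimensions for N >= 3.
a = np.random.uniform(size=(2, 3, 4, 5))
b = np.random.uniform(size=(2, 3, 5, 6))
c = npx.batch_dot(a, b)
print(c.shape)  # expected: (2, 3, 4, 6)
```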

* Large Vector tests for DGL Ops Part 2 (#16497)

* add hyperbolic, logical, sign and regression tests for large vector

* changed hyperbolic functions into existing trigonometric functions

* fix trig ops; simple_bind needs shape as a tuple

* fix logical ops, add with_seed

* fix arccosh in large-array tests, remove regression from large-vector tests

* [Numpy] Loading numpy-incompatible NDArray in numpy-compatible mode (#16597)

* Make MXIsNumpyShape return enum

* address the comment
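
For reference, numpy-compatible mode is toggled through `npx` (a sketch; `npx.is_np_shape` is assumed from the public API, and the comment reflects this PR's intent):

```python
from mxnet import npx

npx.set_np()              # enable numpy-compatible semantics
print(npx.is_np_shape())  # True; per this PR, NDArrays saved outside
                          # numpy mode can now be loaded while it is on
```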

* Suppress subgraph log in CI (#16607)

Change-Id: Ia2ed6fdbb1d2cb5cc607a8856ca13ee338e27eac

* Fix dequantize memory corruption (#16606)

Change-Id: I51b62a32987bdbcf96f04b1bc6617e66796f648b

* [MKLDNN]Fix reorder2default (#16602)

* Fix reorder2default

Change-Id: I74c87af9535f6264e6d1ea7eaed089a6480a3358

* fix

Change-Id: I6d07b43b520a47e7c78bd4b4b6390f5fb95e6957

* Fix

Change-Id: Id72f25c34291be4711f55569c6d61467edd6113d

* Fix CI

Change-Id: I8c33a82555d5ace2d0b682c1e3eefa13f3a44768

* Run CI

Change-Id: Ie8a6dab80ef91c0337cafbae4e3db277e0c7ebf7

* second round of fixing broken links in multiple files (#16598)

* Python Docstring Convention (#16550)

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention

* Revert removing new line

* Remove white space

* [MXNET-1434] Fix a broken link for basic C++ tutorial (#16461)

* Fix for wrong reqs set after switching from training to inference (#16553)

* Debugging reqs

* Move literal strings to const static members

* Fix lint

* julia/docs: more DRY on page rendering (#16396)

* Disables test_bulking_operator_gpu due to flakiness (#16611)

* C API for simple_bind, fix comment for trig ops, add atol to assert (#16585)

* C API for simple_bind, fix comment for trig ops, add atol to assert

* fix build issues

* fix lint and add regression test

* fix indent

* api doc and function name change

* fix lint and add infer shape test

* Imagenet inference to nightly fix (#16599)

* split to cd and shell

* comment

* lots of prints

* copy binary at correct location

* remove comments

* add mkl lib

* update docker run build function

* set nvidia docker true to run imagenet inference on GPU

* Revert "set nvidia docker true to run imagenet inference on GPU"

This reverts commit 98f8eef, as we don't need a GPU for compilation.

* Fix python doc build issue (#16630)

* pin the pip versions

* remove nbconvert comment

* Faster general take (#16615)

* Sped up perf of take op when axis != 0

* Formatting and syntax fixes

* Rename Take to specify axis

* Fix line length lint errors
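
For context, the accelerated path is `take` with a non-zero axis (a minimal sketch using the long-standing `mx.nd.take` signature; values are arbitrary):

```python
import mxnet as mx

a = mx.nd.arange(12).reshape((3, 4))
idx = mx.nd.array([1, 3])
# gathering along axis=1 is the general case this PR speeds up
b = mx.nd.take(a, idx, axis=1)
print(b.shape)  # (3, 2)
```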

* [Gluon] Don't serialize shared parameters twice (#16582)

Add a `deduplicate` argument (default `False`) to `save_parameters`.
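
A minimal sketch of the new flag (parameter sharing via `params=` is one way shared weights arise; layer sizes are arbitrary):

```python
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(8, activation='relu'))
net.add(nn.Dense(8, activation='relu', params=net[0].params))  # shared weights
net.initialize()
net(mx.nd.ones((1, 8)))  # forward pass so deferred shapes are known

# With deduplicate=True, the shared parameters are written to disk only once.
net.save_parameters('net.params', deduplicate=True)
```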

* Fix index overflow bug in einsum (#16589)

* fix index overflow

* check index overflow

* fix index overflow in einsum path

* fix indent

* reduce NPY_MAXARGS

* safe accumulate

* Move some subgraph verbose to MXNET_SUBGRAPH_VERBOSE=2 (#16622)

* Move subgraph pass log to verbose=2

* Run CI
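
The switch is a plain environment variable; a minimal sketch of opting back in to the full subgraph log (set it before MXNet is imported):

```python
import os
os.environ['MXNET_SUBGRAPH_VERBOSE'] = '2'  # the detailed pass log now requires verbose=2

import mxnet as mx  # the subgraph passes read the variable at run time
```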

* add npx reshape (#16640)

* RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)

* fix bad encode (#16641)

* [Perl] - ndarray to native array conversion fix (#16635)

* fixing broken links in multiple files - round 3 (#16634)

* add type switch to weight tensor (#16543)

* numpy doc enhancement (#16637)

* Change NDArray to ndarray for npx ops

Add nonzero

boolean mask supports boolean ndarray

Add argmin op and interoperability test for nonzero

Fix vdot, inner, outer docs

Add nonzero to mx.nd.np

Add docs

Fix

* Fix lint

* Fix

* Fix

* Fix get_constant
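
A short sketch of the interoperability targeted above (assumes numpy mode; return values are assumed to follow NumPy semantics):

```python
from mxnet import np, npx
npx.set_np()

x = np.array([[3, 0, 0],
              [0, 4, 0]])
rows, cols = np.nonzero(x)   # NumPy-style tuple of index arrays
print(rows, cols)            # indices of the non-zero entries
print(np.argmin(x, axis=1))  # per-row index of the minimum
```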

* Disable float16 test (#16643)

* Fix GetMKLDNNData for delay alloc (#16618)

* Fix GetMKLDNNData for delay alloc

* Run CI

* Run CI

* Run CI

* Run CI

* Run CI

Change-Id: I7ac2796e0ee8439c92fd2bd7a70a23a359b76b12

* Revert "[mkldnn-1.0]Rebase to master (#16648)"

This reverts commit dea3dd2.
ZhennanQin authored and pengzhao-intel committed Oct 28, 2019
1 parent dea3dd2 commit ddbe0b1
Showing 71 changed files with 2,284 additions and 526 deletions.
9 changes: 9 additions & 0 deletions benchmark/python/einsum/benchmark_einsum.py
@@ -48,6 +48,15 @@ def test_np_einsum():
     cost = measure_cost(500, np.einsum, *args, optimize=True)
     print("Greedy einsum: {} ms".format(cost * 1000))

+    print("RNN Use Case:")
+    a = np.random.uniform(0, 1, size=(64, 128, 512))
+    b = np.random.uniform(0, 1, size=(128, 512, 2, 2))
+    args = ['bij, ijkl->bkl', a, b]
+    cost = measure_cost(2, np.einsum, *args, optimize=True)
+    print('Greedy einsum: {} ms'.format(cost * 1000))
+    cost = measure_cost(2, np.einsum, *args)
+    print('Basic einsum: {} ms'.format(cost * 1000))
+
     print('Inner Product:')
     a = np.ones(6000000)
     b = np.ones(6000000)
5 changes: 3 additions & 2 deletions ci/docker/runtime_functions.sh
@@ -1482,8 +1482,9 @@ nightly_test_installation() {
 nightly_test_imagenet_inference() {
     set -ex
     echo $PWD
-    cp /work/mxnet/build/cpp-package/example/imagenet_inference .
-    /work/mxnet/cpp-package/example/inference/unit_test_imagenet_inference.sh
+    cp /work/mxnet/build/cpp-package/example/imagenet_inference /work/mxnet/cpp-package/example/inference/
+    cd /work/mxnet/cpp-package/example/inference/
+    ./unit_test_imagenet_inference.sh
 }

 #Runs a simple MNIST training example
19 changes: 9 additions & 10 deletions docs/python_docs/environment.yml
@@ -27,13 +27,12 @@ dependencies:
 - matplotlib
 - notebook
 - pip:
-  # using nbconvert master until v5.5 comes out
-  - git+https://github.com/jupyter/nbconvert@master
-  - nbsphinx>=0.4.2
-  - recommonmark
-  - notedown
-  - pypandoc
-  - breathe
-  - mock
-  - awscli
-  - autodocsumm
+  - nbconvert==5.6.1
+  - nbsphinx==0.4.3
+  - recommonmark==0.6.0
+  - notedown==1.5.1
+  - pypandoc==1.4
+  - breathe==4.13.1
+  - mock==3.0.5
+  - awscli==1.16.266
+  - autodocsumm==0.1.11
2 changes: 1 addition & 1 deletion docs/python_docs/python/tutorials/extend/custom_layer.md
@@ -57,7 +57,7 @@ The rest of methods of the `Block` class are already implemented, and majority of

 Looking into implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that more often a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428), instead of directly inheriting from `Block`.

-The reason for that is that `HybridBlock` allows to write custom layers that can be used in imperative programming as well as in symbolic programming. It is convinient to support both ways, because the imperative programming eases the debugging of the code and the symbolic one provides faster execution speed. You can learn more about the difference between symbolic vs. imperative programming from [this article](https://mxnet.apache.org/architecture/program_model.html).
+The reason for that is that `HybridBlock` allows to write custom layers that can be used in imperative programming as well as in symbolic programming. It is convinient to support both ways, because the imperative programming eases the debugging of the code and the symbolic one provides faster execution speed. You can learn more about the difference between symbolic vs. imperative programming from [this article](/api/architecture/program_model).

 Hybridization is a process that Apache MxNet uses to create a symbolic graph of a forward computation. This allows to increase computation performance by optimizing the computational symbolic graph. Once the symbolic graph is created, Apache MxNet caches and reuses it for subsequent computations.
@@ -99,14 +99,14 @@ ctx = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]
 batch_size = per_device_batch_size * max(num_gpus, 1)
 ```

-Now we will apply data augmentations on training images. This makes minor alterations on the training images, and our model will consider them as distinct images. This can be very useful for fine-tuning on a relatively small dataset, and it will help improve the model. We can use the Gluon [DataSet API](https://mxnet.apache.org/tutorials/gluon/datasets.html), [DataLoader API](https://mxnet.apache.org/tutorials/gluon/datasets.html), and [Transform API](https://mxnet.apache.org/tutorials/gluon/data_augmentation.html) to load the images and apply the following data augmentations:
+Now we will apply data augmentations on training images. This makes minor alterations on the training images, and our model will consider them as distinct images. This can be very useful for fine-tuning on a relatively small dataset, and it will help improve the model. We can use the Gluon [DataSet API](/api/python/docs/api/gluon/data/index.html#mxnet.gluon.data.Dataset), [DataLoader API](/api/python/docs/api/gluon/data/index.html#mxnet.gluon.data.DataLoader), and [Transform API](/api/python/docs/api/gluon/data/index.html#mxnet.gluon.data.Dataset.transform) to load the images and apply the following data augmentations:
 1. Randomly crop the image and resize it to 224x224
 2. Randomly flip the image horizontally
 3. Randomly jitter color and add noise
 4. Transpose the data from `[height, width, num_channels]` to `[num_channels, height, width]`, and map values from [0, 255] to [0, 1]
 5. Normalize with the mean and standard deviation from the ImageNet dataset.

-For validation and inference, we only need to apply step 1, 4, and 5. We also need to save the mean and standard deviation values for [inference using C++](https://mxnet.apache.org/versions/master/tutorials/c++/mxnet_cpp_inference_tutorial.html).
+For validation and inference, we only need to apply step 1, 4, and 5. We also need to save the mean and standard deviation values for [inference using C++](/api/cpp/docs/tutorials/cpp_inference).

 ```python
 jitter_param = 0.4
@@ -252,7 +252,7 @@ with warnings.catch_warnings():
 Epoch 2, loss 0.3229 <!--notebook-skip-line-->
 ```

-You can load the saved model, by using the `load_parameters` API in Gluon. For more details refer to the [Loading model parameters from file tutorial](../blocks/save_load_params.html#saving-model-parameters-to-file)
+You can load the saved model, by using the `load_parameters` API in Gluon. For more details refer to the [Loading model parameters from file tutorial](/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html#saving-model-parameters-to-file)


 ```python
@@ -240,8 +240,8 @@ The function you will explore is: *y = x<sub>1</sub> + 2x<sub>2</sub> + ... 10

 ### Preparing the Data

-In MXNet, both [mx.io.LibSVMIter](https://mxnet.apache.org/versions/master/api/python/io/io.html#mxnet.io.LibSVMIter)
-and [mx.io.NDArrayIter](https://mxnet.apache.org/versions/master/api/python/io/io.html#mxnet.io.NDArrayIter)
+In MXNet, both [mx.io.LibSVMIter](/api/python/docs/api/mxnet/io/index.html#mxnet.io.LibSVMIter)
+and [mx.io.NDArrayIter](/api/python/docs/api/mxnet/io/index.html#mxnet.io.NDArrayIter)
 support loading sparse data in CSR format. In this example, we'll use the `NDArrayIter`.

 You may see some warnings from SciPy. You don't need to worry about those for this example.