
[mkldnn-v1.0] rebase with master #16649

Merged 33 commits on Oct 28, 2019
Commits
91ad266
fixed broken links across multiple files (#16581)
TEChopra1000 Oct 23, 2019
e22e93f
fix missing docs due to git add issues (#16496)
sojiadeshina Oct 23, 2019
05a4c4f
Create SECURITY.md (#16573)
marcoabreu Oct 23, 2019
c3395ca
[Numpy] Support N_D(N>=3) batch_dot (#16586)
sxjscience Oct 24, 2019
0742a9b
Large Vector tests for DGL Ops Part 2 (#16497)
ChaiBapchya Oct 24, 2019
ca5a2a0
[Numpy] Loading numpy-incompatible NDArray in numpy-compatible mode (…
stu1130 Oct 24, 2019
8270672
Surpress subgraph log in CI (#16607)
ZhennanQin Oct 24, 2019
bde443e
Fix dequantize memory corruption (#16606)
ZhennanQin Oct 24, 2019
dd4eaf5
[MKLDNN]Fix reorder2default (#16602)
ZhennanQin Oct 24, 2019
e10e94e
second round of fixing broken links in multiple files (#16598)
TEChopra1000 Oct 24, 2019
82ddc93
Python Docstring Convetion (#16550)
Oct 24, 2019
487d69a
[MXNET-1434] Fix a broken link for basic C++ tutorial (#16461)
titsuki Oct 24, 2019
9c99bf2
Fix for wrong reqs set after switching from training to inference (#1…
ptrendx Oct 24, 2019
ef56334
julia/docs: more DRY on page rendering (#16396)
iblislin Oct 24, 2019
4e03e6a
Disables test_bulking_operator_gpu due to flakiness (#16611)
ChaiBapchya Oct 25, 2019
c0e616f
C Api for simplebind, fix comment for trigoops, add atol to assert (#…
ChaiBapchya Oct 25, 2019
c574067
Imagenet inference to nightly fix (#16599)
ChaiBapchya Oct 25, 2019
7862738
Fix python doc build issue (#16630)
ChaiBapchya Oct 25, 2019
0712f00
Faster general take (#16615)
blchu Oct 26, 2019
8c44af4
[Gluon] Don't serialize shared parameters twice (#16582)
leezu Oct 26, 2019
e262455
Fix index overflow bug in einsum (#16589)
hzfan Oct 26, 2019
29e467b
Move some subgraph verbose to MXNET_SUBGRAPH_VERBOSE=2 (#16622)
ZhennanQin Oct 26, 2019
c130cc9
add npx reshape (#16640)
sxjscience Oct 27, 2019
9f21cdd
RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)
leezu Oct 27, 2019
73c6b4a
fix bad encode (#16641)
yajiedesign Oct 27, 2019
84d61a1
[Perl] - ndarray to native array conversion fix (#16635)
tlby Oct 27, 2019
d12e674
fixing broken links in multiple files - round 3 (#16634)
TEChopra1000 Oct 27, 2019
22e5ae3
add type switch to weight tensor (#16543)
xidulu Oct 27, 2019
6ab4220
numpy doc enhancement (#16637)
reminisce Oct 27, 2019
ffc5392
Disable float16 test (#16643)
hzfan Oct 27, 2019
11dff51
Fix GetMKLDNNData for delay alloc (#16618)
ZhennanQin Oct 28, 2019
d7a8ccf
Revert "[mkldnn-1.0]Rebase to master (#16648)"
ZhennanQin Oct 28, 2019
d30a6d3
Merge remote-tracking branch 'offical/master' into mkldnn-v1.0
ZhennanQin Oct 28, 2019
9 changes: 9 additions & 0 deletions benchmark/python/einsum/benchmark_einsum.py
@@ -48,6 +48,15 @@ def test_np_einsum():
cost = measure_cost(500, np.einsum, *args, optimize=True)
print("Greedy einsum: {} ms".format(cost * 1000))

print("RNN Use Case:")
a = np.random.uniform(0, 1, size=(64, 128, 512))
b = np.random.uniform(0, 1, size=(128, 512, 2, 2))
args = ['bij, ijkl->bkl', a, b]
cost = measure_cost(2, np.einsum, *args, optimize=True)
print('Greedy einsum: {} ms'.format(cost * 1000))
cost = measure_cost(2, np.einsum, *args)
print('Basic einsum: {} ms'.format(cost * 1000))

print('Inner Product:')
a = np.ones(6000000)
b = np.ones(6000000)
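For context, the benchmark calls a `measure_cost` helper defined elsewhere in `benchmark_einsum.py`. A minimal sketch of what such a timing helper might look like — the name and signature are taken from the calls above, the body is an assumption:

```python
import time
import mxnet as mx

def measure_cost(repeat, func, *args, **kwargs):
    """Average seconds per call over `repeat` runs (a sketch, not the
    actual helper from benchmark_einsum.py)."""
    func(*args, **kwargs)      # warm-up run, excluded from timing
    mx.nd.waitall()            # MXNet executes asynchronously
    start = time.time()
    for _ in range(repeat):
        func(*args, **kwargs)
    mx.nd.waitall()            # block until all queued ops finish
    return (time.time() - start) / repeat
```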
5 changes: 3 additions & 2 deletions ci/docker/runtime_functions.sh
@@ -1482,8 +1482,9 @@ nightly_test_installation() {
nightly_test_imagenet_inference() {
set -ex
echo $PWD
cp /work/mxnet/build/cpp-package/example/imagenet_inference .
/work/mxnet/cpp-package/example/inference/unit_test_imagenet_inference.sh
cp /work/mxnet/build/cpp-package/example/imagenet_inference /work/mxnet/cpp-package/example/inference/
cd /work/mxnet/cpp-package/example/inference/
./unit_test_imagenet_inference.sh
}

#Runs a simple MNIST training example
19 changes: 9 additions & 10 deletions docs/python_docs/environment.yml
@@ -27,13 +27,12 @@ dependencies:
- matplotlib
- notebook
- pip:
# using nbconvert master until v5.5 comes out
- git+https://github.com/jupyter/nbconvert@master
- nbsphinx>=0.4.2
- recommonmark
- notedown
- pypandoc
- breathe
- mock
- awscli
- autodocsumm
- nbconvert==5.6.1
- nbsphinx==0.4.3
- recommonmark==0.6.0
- notedown==1.5.1
- pypandoc==1.4
- breathe==4.13.1
- mock==3.0.5
- awscli==1.16.266
- autodocsumm==0.1.11
2 changes: 1 addition & 1 deletion docs/python_docs/python/tutorials/extend/custom_layer.md
@@ -57,7 +57,7 @@ The rest of methods of the `Block` class are already implemented, and majority o

Looking at the implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that, more often than not, a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428) instead of directly inheriting from `Block`.

The reason is that `HybridBlock` allows you to write custom layers that can be used in both imperative and symbolic programming. Supporting both is convenient, because imperative programming eases debugging while symbolic programming provides faster execution. You can learn more about the difference between symbolic and imperative programming from [this article](https://mxnet.apache.org/architecture/program_model.html).
The reason is that `HybridBlock` allows you to write custom layers that can be used in both imperative and symbolic programming. Supporting both is convenient, because imperative programming eases debugging while symbolic programming provides faster execution. You can learn more about the difference between symbolic and imperative programming from [this article](/api/architecture/program_model).

Hybridization is the process Apache MXNet uses to create a symbolic graph of a forward computation. Building this graph allows MXNet to increase computation performance by optimizing it. Once the symbolic graph is created, Apache MXNet caches and reuses it for subsequent computations.
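
To make the contrast concrete, here is a minimal sketch of a custom layer built on `HybridBlock`; the layer name and the min-max normalization it performs are chosen for illustration, not taken from the diff:

```python
import mxnet as mx
from mxnet.gluon import nn

class MinMaxScaler(nn.HybridBlock):
    """Illustrative layer: rescales its input to the [0, 1] range."""
    def hybrid_forward(self, F, x):
        # Broadcast ops keep the code valid for both backends F stands
        # for: mx.nd (imperative) and mx.sym (symbolic).
        return F.broadcast_div(F.broadcast_sub(x, F.min(x)),
                               F.broadcast_sub(F.max(x), F.min(x)))

net = nn.HybridSequential()
net.add(nn.Dense(5), MinMaxScaler())
net.initialize()
net.hybridize()  # trace once, then reuse the cached symbolic graph
out = net(mx.nd.random.uniform(shape=(2, 3)))
```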

@@ -99,14 +99,14 @@ ctx = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]
batch_size = per_device_batch_size * max(num_gpus, 1)
```

Now we will apply data augmentations to the training images. These make minor alterations to the training images, which our model will treat as distinct images. This can be very useful when fine-tuning on a relatively small dataset, and it will help improve the model. We can use the Gluon [DataSet API](https://mxnet.apache.org/tutorials/gluon/datasets.html), [DataLoader API](https://mxnet.apache.org/tutorials/gluon/datasets.html), and [Transform API](https://mxnet.apache.org/tutorials/gluon/data_augmentation.html) to load the images and apply the following data augmentations:
Now we will apply data augmentations to the training images. These make minor alterations to the training images, which our model will treat as distinct images. This can be very useful when fine-tuning on a relatively small dataset, and it will help improve the model. We can use the Gluon [DataSet API](/api/python/docs/api/gluon/data/index.html#mxnet.gluon.data.Dataset), [DataLoader API](/api/python/docs/api/gluon/data/index.html#mxnet.gluon.data.DataLoader), and [Transform API](/api/python/docs/api/gluon/data/index.html#mxnet.gluon.data.Dataset.transform) to load the images and apply the following data augmentations:
1. Randomly crop the image and resize it to 224x224
2. Randomly flip the image horizontally
3. Randomly jitter color and add noise
4. Transpose the data from `[height, width, num_channels]` to `[num_channels, height, width]`, and map values from [0, 255] to [0, 1]
5. Normalize with the mean and standard deviation from the ImageNet dataset.

For validation and inference, we only need to apply steps 1, 4, and 5. We also need to save the mean and standard deviation values for [inference using C++](https://mxnet.apache.org/versions/master/tutorials/c++/mxnet_cpp_inference_tutorial.html).
For validation and inference, we only need to apply steps 1, 4, and 5. We also need to save the mean and standard deviation values for [inference using C++](/api/cpp/docs/tutorials/cpp_inference).

```python
jitter_param = 0.4
```
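The tutorial's code block above is truncated in this view. As a sketch of the transform pipeline the five steps describe — `jitter_param` comes from the truncated block, while `lighting_param` and the ImageNet statistics are standard values assumed for illustration:

```python
from mxnet.gluon.data.vision import transforms

jitter_param = 0.4     # value from the truncated block above
lighting_param = 0.1   # assumed value, for illustration

# Training pipeline: steps 1-5 from the list above
transform_train = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomFlipLeftRight(),
    transforms.RandomColorJitter(brightness=jitter_param, contrast=jitter_param,
                                 saturation=jitter_param),
    transforms.RandomLighting(lighting_param),
    transforms.ToTensor(),  # HWC uint8 [0, 255] -> CHW float32 [0, 1]
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Validation/inference pipeline: steps 1, 4, and 5 only
transform_test = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```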
@@ -252,7 +252,7 @@ with warnings.catch_warnings():
Epoch 2, loss 0.3229 <!--notebook-skip-line-->
```

You can load the saved model by using the `load_parameters` API in Gluon. For more details, refer to the [Loading model parameters from file tutorial](../blocks/save_load_params.html#saving-model-parameters-to-file)
You can load the saved model by using the `load_parameters` API in Gluon. For more details, refer to the [Loading model parameters from file tutorial](/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html#saving-model-parameters-to-file)


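The tutorial's example code is truncated in this view. A minimal, self-contained sketch of the save/load round trip with `load_parameters` — the network and the file name are hypothetical:

```python
import mxnet as mx
from mxnet.gluon import nn

def build_net():
    net = nn.Sequential()
    net.add(nn.Dense(64, activation='relu'), nn.Dense(10))
    return net

net = build_net()
net.initialize()
net(mx.nd.zeros((1, 20)))            # forward pass materializes deferred shapes
net.save_parameters('net.params')    # hypothetical file name

new_net = build_net()                # must have the same architecture
new_net.load_parameters('net.params', ctx=mx.cpu())
```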
@@ -240,8 +240,8 @@ The function you will explore is: *y = x<sub>1</sub> + 2x<sub>2</sub> + ... 10

### Preparing the Data

In MXNet, both [mx.io.LibSVMIter](https://mxnet.apache.org/versions/master/api/python/io/io.html#mxnet.io.LibSVMIter)
and [mx.io.NDArrayIter](https://mxnet.apache.org/versions/master/api/python/io/io.html#mxnet.io.NDArrayIter)
In MXNet, both [mx.io.LibSVMIter](/api/python/docs/api/mxnet/io/index.html#mxnet.io.LibSVMIter)
and [mx.io.NDArrayIter](/api/python/docs/api/mxnet/io/index.html#mxnet.io.NDArrayIter)
support loading sparse data in CSR format. In this example, we'll use the `NDArrayIter`.

You may see some warnings from SciPy. You don't need to worry about those for this example.
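To illustrate, here is a small sketch of feeding CSR data through `NDArrayIter`; the shapes and sparsity pattern are made up for the example:

```python
import mxnet as mx
import numpy as np

# Toy design matrix: ~10% non-zeros, target y = x1 + 2*x2 + ... + 10*x10
dense = np.random.uniform(size=(1000, 10)) * (np.random.uniform(size=(1000, 10)) < 0.1)
X = mx.nd.array(dense).tostype('csr')            # convert to CSRNDArray
w = mx.nd.arange(1, 11).reshape((10, 1))
y = mx.nd.dot(X.tostype('default'), w)

# With CSR inputs, NDArrayIter requires last_batch_handle='discard'
train_iter = mx.io.NDArrayIter(X, y, batch_size=64, last_batch_handle='discard')
for batch in train_iter:
    print(batch.data[0].stype)                   # 'csr'
    break
```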