
Commit

Fixed tutorial warnings (#14472)
* Fixed tutorial warnings

* Remove test case for matrix factorization tutorial

* trigger

* Update hybrid.md
NRauschmayr authored and nswamy committed Apr 5, 2019
1 parent 07901c3 commit a211550
Showing 7 changed files with 15 additions and 298 deletions.
7 changes: 5 additions & 2 deletions docs/tutorials/gluon/hybrid.md
@@ -154,7 +154,10 @@ You can use other language bindings to load them. You can also load them back
to gluon with `SymbolBlock`:

```python
-net2 = gluon.SymbolBlock.imports('model-symbol.json', ['data'], 'model-0001.params')
+import warnings
+
+with warnings.catch_warnings():
+    net2 = gluon.SymbolBlock.imports('model-symbol.json', ['data'], 'model-0001.params')
```

## Operators that do not work with hybridize
@@ -259,4 +262,4 @@ For example, avoid writing `x += y` and use `x = x + y`, otherwise you will get

The recommended practice is to use the flexibility of the imperative NDArray API during experimentation. Once you have finalized your model, make the necessary changes mentioned above so you can call the `hybridize` function to improve performance.
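As a rough sketch of that workflow (a minimal example assuming a small, hypothetical `HybridSequential` model and random input data, not taken from the tutorial):

```python
from mxnet import nd, gluon

# Prototype imperatively first: build and run the network eagerly so it is easy to debug.
net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation='relu'))
    net.add(gluon.nn.Dense(10))
net.initialize()

x = nd.random.uniform(shape=(8, 128))
imperative_out = net(x)   # executed op by op

# Once the model is finalized, compile it into a static graph for better performance.
net.hybridize()
hybrid_out = net(x)       # same results, faster repeated execution
```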

-<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
+<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
4 changes: 3 additions & 1 deletion docs/tutorials/gluon/save_load_params.md
@@ -260,7 +260,9 @@ One of the main reasons to serialize model architecture into a JSON file is to l
Serialized hybrid networks (saved as .json and .params files) can be loaded and used inside the Python frontend using `gluon.nn.SymbolBlock`. To demonstrate that, let's load the network we serialized above.

```python
-deserialized_net = gluon.nn.SymbolBlock.imports("lenet-symbol.json", ['data'], "lenet-0001.params", ctx=ctx)
+import warnings
+with warnings.catch_warnings():
+    deserialized_net = gluon.nn.SymbolBlock.imports("lenet-symbol.json", ['data'], "lenet-0001.params", ctx=ctx)
```

`deserialized_net` now contains the network we deserialized from files. Let's test the deserialized network to make sure it works.
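For instance, a quick sanity check could look like this (a sketch that assumes the `ctx` and LeNet-style 1x28x28 inputs used earlier in the tutorial; the batch below is random and purely illustrative):

```python
from mxnet import nd

# Push a random batch through the deserialized network and check the output shape.
dummy_batch = nd.random.uniform(shape=(4, 1, 28, 28), ctx=ctx)
output = deserialized_net(dummy_batch)
print(output.shape)  # expect (4, 10): one score per digit class for each image
```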
1 change: 0 additions & 1 deletion docs/tutorials/index.md
@@ -137,7 +137,6 @@ Select API:&nbsp;
* [MNIST Handwritten Digit Classification](/tutorials/python/mnist.html)
* [Movie Review Classification using Convolutional Networks](/tutorials/nlp/cnn.html)
* [Generative Adversarial Networks (GANs)](/tutorials/unsupervised_learning/gan.html)
-* [Recommender Systems using Matrix Factorization](/tutorials/python/matrix_factorization.html)
* [Speech Recognition with Connectionist Temporal Classification Loss](/tutorials/speech_recognition/ctc.html)
* Practitioner Guides
* [Predicting on new images using a pre-trained ImageNet model](/tutorials/python/predict_image.html)
4 changes: 3 additions & 1 deletion docs/tutorials/onnx/fine_tuning_gluon.md
@@ -279,7 +279,9 @@ We create a symbol block that is going to hold all our pre-trained layers, and a


```python
-pre_trained = gluon.nn.SymbolBlock(outputs=new_sym, inputs=mx.sym.var('data_0'))
+import warnings
+with warnings.catch_warnings():
+    pre_trained = gluon.nn.SymbolBlock(outputs=new_sym, inputs=mx.sym.var('data_0'))
net_params = pre_trained.collect_params()
for param in new_arg_params:
    if param in net_params:
5 changes: 4 additions & 1 deletion docs/tutorials/onnx/inference_on_onnx_model.md
@@ -144,7 +144,9 @@ print(data_names)
And load them into an MXNet Gluon symbol block.

```python
-net = gluon.nn.SymbolBlock(outputs=sym, inputs=mx.sym.var('data_0'))
+import warnings
+with warnings.catch_warnings():
+    net = gluon.nn.SymbolBlock(outputs=sym, inputs=mx.sym.var('data_0'))
net_params = net.collect_params()
for param in arg_params:
    if param in net_params:
@@ -247,6 +249,7 @@ Lucky for us, the [Caltech101 dataset](http://www.vision.caltech.edu/Image_Datas

We show that in our next tutorial:


- [Fine-tuning an ONNX Model using the modern imperative MXNet/Gluon](http://mxnet.incubator.apache.org/tutorials/onnx/fine_tuning_gluon.html)

<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
289 changes: 0 additions & 289 deletions docs/tutorials/python/matrix_factorization.md

This file was deleted.

3 changes: 0 additions & 3 deletions tests/tutorials/test_tutorials.py
@@ -139,9 +139,6 @@ def test_onnx_fine_tuning_gluon():
def test_onnx_inference_on_onnx_model():
    assert _test_tutorial_nb('onnx/inference_on_onnx_model')

-def test_python_matrix_factorization():
-    assert _test_tutorial_nb('python/matrix_factorization')

def test_python_linear_regression():
    assert _test_tutorial_nb('python/linear-regression')

