Commit

update notebooks

szha committed Aug 14, 2020
1 parent 6defdaa commit a5ec9b9
Showing 37 changed files with 499 additions and 659 deletions.
12 changes: 6 additions & 6 deletions docs/python_docs/python/tutorials/deploy/export/onnx.md
@@ -34,7 +34,7 @@ To run the tutorial you will need to have installed the following python modules
*Note:* The MXNet-ONNX importer and exporter follow version 7 of the ONNX operator set, which comes with ONNX v1.2.1.


```python
```{.python .input}
import mxnet as mx
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet
@@ -47,7 +47,7 @@ logging.basicConfig(level=logging.INFO)
We download the pre-trained ResNet-18 [ImageNet](http://www.image-net.org/) model from the [MXNet Model Zoo](/api/python/docs/api/gluon/model_zoo/index.html).
We will also download the synset file to match labels.

```python
```{.python .input}
# Download pre-trained resnet model - json and params by running following code.
path='http://data.mxnet.io/models/imagenet/'
[mx.test_utils.download(path+'resnet/18-layers/resnet-18-0000.params'),
@@ -61,7 +61,7 @@ Now, we have downloaded ResNet-18 symbol, params and synset file on the disk.

Let us describe MXNet's `export_model` API.

```python
```{.python .input}
help(onnx_mxnet.export_model)
```

@@ -109,7 +109,7 @@ Since we have downloaded pre-trained model files, we will use the `export_model`

We will use the downloaded pre-trained model files (sym, params) and define input variables.

```python
```{.python .input}
# Downloaded input symbol and params files
sym = './resnet-18-symbol.json'
params = './resnet-18-0000.params'
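# hedged sketch (not part of the diff): the collapsed lines also define the input
# shape; (1, 3, 224, 224) is the standard ImageNet NCHW shape assumed here
input_shape = (1, 3, 224, 224)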
@@ -123,7 +123,7 @@ onnx_file = './mxnet_exported_resnet50.onnx'

We have defined the input parameters required for the `export_model` API. Now we are ready to convert the MXNet model into ONNX format.

```python
```{.python .input}
# Invoke export model API. It returns path of the converted onnx model
converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file)
```
@@ -134,7 +134,7 @@ This API returns path of the converted model which you can later use to import t

Now we can check the validity of the converted ONNX model by using the ONNX checker tool. The tool validates the model by checking whether the content is a valid protobuf:

```python
```{.python .input}
from onnx import checker
import onnx
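# hedged sketch (not the file's exact code): the collapsed lines presumably load
# the exported file and run the checker on it
model_proto = onnx.load_model(converted_model_path)
checker.check_graph(model_proto.graph)  # raises an exception if the graph is invalid
print("The exported model has been validated.")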
(Another changed file: the Jetson deployment and inference tutorial.)
@@ -73,7 +73,7 @@ And we are done. You can test the installation now by importing mxnet from pytho

We are now ready to run a pre-trained model and run inference on a Jetson module. In this tutorial we use a ResNet-50 model trained on the ImageNet dataset. We run the following classification script with either a CPU or GPU context using Python 3.

```python
```{.python .input}
from mxnet import gluon
import mxnet as mx
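# hedged sketch (not the file's exact code): the collapsed remainder could run
# inference with a pre-trained ResNet-50 from the Gluon model zoo, e.g.

# use the GPU if one is available, otherwise fall back to the CPU
ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()

# load a pre-trained ResNet-50
net = gluon.model_zoo.vision.resnet50_v1(pretrained=True, ctx=ctx)

# download a sample image (this URL is an illustrative assumption)
fname = mx.test_utils.download('https://raw.githubusercontent.com/dmlc/mxnet.js/master/data/cat.png')
img = mx.image.imread(fname)

# resize to 224x224, convert HWC -> CHW, add a batch dimension, scale to [0, 1]
img = mx.image.imresize(img, 224, 224)
img = img.transpose((2, 0, 1)).expand_dims(axis=0).astype('float32') / 255.0

prob = net(img.as_in_context(ctx)).softmax()
print('predicted class index:', int(prob.argmax(axis=1).asscalar()))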
24 changes: 12 additions & 12 deletions docs/python_docs/python/tutorials/extend/customop.md
@@ -26,7 +26,7 @@ Custom operator in python is easy to develop and good for prototyping, but may h



```python
```{.python .input}
import numpy as np
import mxnet as mx
from mxnet import gluon, autograd
@@ -42,7 +42,7 @@ This operator implements the standard sigmoid activation function. This is only
First we implement the forward and backward computation by sub-classing `mx.operator.CustomOp`:


```python
```{.python .input}
class Sigmoid(mx.operator.CustomOp):
def forward(self, is_train, req, in_data, out_data, aux):
"""Implements forward computation.
@@ -75,7 +75,7 @@ class Sigmoid(mx.operator.CustomOp):
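The body of the operator is collapsed in this hunk. A minimal sketch of what the complete forward/backward pair could look like (following the `CustomOp` API, not necessarily the file's exact code):

```python
class Sigmoid(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        # y = 1 / (1 + exp(-x))
        x = in_data[0]
        y = 1.0 / (1.0 + mx.nd.exp(-x))
        self.assign(out_data[0], req[0], y)

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # dL/dx = dL/dy * y * (1 - y)
        y = out_data[0]
        self.assign(in_grad[0], req[0], out_grad[0] * y * (1.0 - y))
```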
Then we need to register the custom op and describe its properties, such as input and output shapes, so that MXNet can recognize it. This is done by sub-classing `mx.operator.CustomOpProp`:


```python
```{.python .input}
@mx.operator.register("sigmoid") # register with name "sigmoid"
class SigmoidProp(mx.operator.CustomOpProp):
def __init__(self):
@@ -110,7 +110,7 @@ class SigmoidProp(mx.operator.CustomOpProp):
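The properties class is likewise collapsed. A sketch of the pieces it typically needs — argument and output names, shape inference, and the factory method (an assumption, not the file's exact code):

```python
@mx.operator.register("sigmoid")  # the same name is later passed as op_type
class SigmoidProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(SigmoidProp, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shapes):
        # the output has the same shape as the input
        data_shape = in_shapes[0]
        return (data_shape,), (data_shape,), ()

    def create_operator(self, ctx, in_shapes, in_dtypes):
        # called by MXNet to instantiate the actual operator
        return Sigmoid()
```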
We can now use this operator by calling `mx.nd.Custom`:


```python
```{.python .input}
x = mx.nd.array([0, 1, 2, 3])
# attach gradient buffer to x for autograd
x.attach_grad()
@@ -121,7 +121,7 @@ with autograd.record():
print(y)
```

```python
```{.python .input}
# call backward computation
y.backward()
# gradient is now saved to the grad buffer we attached previously
@@ -137,7 +137,7 @@ The dense operator performs a dot product between data and weight, then add bias
### Forward & backward implementation


```python
```{.python .input}
class Dense(mx.operator.CustomOp):
def __init__(self, bias):
self._bias = bias
@@ -158,7 +158,7 @@ class Dense(mx.operator.CustomOp):
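The forward and backward methods are collapsed here. One possible implementation, assuming `data` of shape (batch, in_channels) and `weight` of shape (channels, in_channels):

```python
class Dense(mx.operator.CustomOp):
    def __init__(self, bias):
        self._bias = bias

    def forward(self, is_train, req, in_data, out_data, aux):
        x = in_data[0].asnumpy()        # (batch, in_channels)
        weight = in_data[1].asnumpy()   # (channels, in_channels)
        y = x.dot(weight.T) + self._bias
        self.assign(out_data[0], req[0], mx.nd.array(y))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        dy = out_grad[0].asnumpy()
        weight = in_data[1].asnumpy()
        # gradient w.r.t. the data input only, for brevity
        self.assign(in_grad[0], req[0], mx.nd.array(dy.dot(weight)))
```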
### Registration


```python
```{.python .input}
@mx.operator.register("dense") # register with name "sigmoid"
class DenseProp(mx.operator.CustomOpProp):
def __init__(self, bias):
@@ -192,7 +192,7 @@ class DenseProp(mx.operator.CustomOpProp):
Parameterized CustomOps are usually used together with Blocks, which hold the parameters.


```python
```{.python .input}
class DenseBlock(mx.gluon.Block):
def __init__(self, in_channels, channels, bias, **kwargs):
super(DenseBlock, self).__init__(**kwargs)
@@ -207,7 +207,7 @@ class DenseBlock(mx.gluon.Block):
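The rest of the block is collapsed. A sketch of how such a Block could hold the weight parameter and call the custom op in its forward pass (an assumption, not the file's exact code):

```python
class DenseBlock(mx.gluon.Block):
    def __init__(self, in_channels, channels, bias, **kwargs):
        super(DenseBlock, self).__init__(**kwargs)
        self._bias = bias
        # the Block owns the weight parameter and passes it to the custom op
        self.weight = self.params.get('weight', shape=(channels, in_channels))

    def forward(self, x):
        ctx = x.context
        return mx.nd.Custom(x, self.weight.data(ctx), bias=self._bias, op_type='dense')
```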
### Example usage


```python
```{.python .input}
dense = DenseBlock(3, 5, 0.1)
dense.initialize()
x = mx.nd.uniform(shape=(4, 3))
@@ -218,7 +218,7 @@ print(y)
## Using custom operators with fork
On Linux systems, the default method multiprocessing uses to create a process is fork. If there are unfinished async custom operations when forking, the program will be blocked because of the Python GIL. Always use sync calls like `wait_to_read` or `waitall` before calling fork.

```python
```{.python .input}
x = mx.nd.array([0, 1, 2, 3])
y = mx.nd.Custom(x, op_type='sigmoid')
# unfinished async sigmoid operation will cause blocking
@@ -227,10 +227,10 @@ os.fork()

Handling this correctly would make MXNet depend upon libpython, so the workaround for now is to ensure that all custom operations are executed before forking the process.

```python
```{.python .input}
x = mx.nd.array([0, 1, 2, 3])
y = mx.nd.Custom(x, op_type='sigmoid')
# force execution by reading y
print(y.asnumpy())
os.fork()
```
(Another changed file: the Gluon fine-tuning tutorial on the Oxford 102 Flower Dataset.)
@@ -44,7 +44,7 @@ We will use the [Oxford 102 Category Flower Dataset](http://www.robots.ox.ac.uk/
We have prepared a utility file to help you download and organize your data into train, test, and validation sets. Run the following Python code to download and prepare the data:


```python
```{.python .input}
import mxnet as mx
data_util_file = "oxford_102_flower_dataset.py"
base_url = "https://raw.githubusercontent.com/apache/incubator-mxnet/master/docs/tutorial_utils/data/{}?raw=true"
@@ -65,7 +65,7 @@ Now your data will be organized into train, test, and validation sets, images be
Now let's first import necessary packages:


```python
```{.python .input}
import math
import os
import time
@@ -80,7 +80,7 @@ from mxnet.gluon.model_zoo.vision import resnet50_v2
Next, we define the hyper-parameters that we will use for fine-tuning. We will use the [MXNet learning rate scheduler](/api/python/docs/tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html) to adjust learning rates during training.
Here we set `epochs` to 1 for a quick demonstration; please change it to 40 for actual training.

```python
```{.python .input}
classes = 102
epochs = 1
lr = 0.001
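# hedged sketch (not the diff's own lines): the schedule mentioned above could be
# built with MultiFactorScheduler; the step iterations and factor are assumptions
lr_schedule = mx.lr_scheduler.MultiFactorScheduler(step=[400, 800, 1200], factor=0.75)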
@@ -108,7 +108,7 @@ Now we will apply data augmentations on training images. This makes minor altera

For validation and inference, we only need to apply steps 1, 4, and 5. We also need to save the mean and standard deviation values for [inference using C++](/api/cpp/docs/tutorials/cpp_inference).

```python
```{.python .input}
jitter_param = 0.4
lighting_param = 0.1
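# hedged sketch of the composed training transform described above; `transforms`
# is assumed to be mxnet.gluon.data.vision.transforms, and the exact list in the
# full file may differ
training_transformer = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomFlipLeftRight(),
    transforms.RandomColorJitter(brightness=jitter_param, contrast=jitter_param,
                                 saturation=jitter_param),
    transforms.RandomLighting(lighting_param),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])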
@@ -165,7 +165,7 @@ Before we go to training, one unique Gluon feature you should be aware of is hyb



```python
```{.python .input}
# load pre-trained resnet50_v2 from model zoo
finetune_net = resnet50_v2(pretrained=True, ctx=ctx)
@@ -195,7 +195,7 @@ Now let's define the test metrics and start fine-tuning.



```python
```{.python .input}
def test(net, val_data, ctx):
metric = mx.metric.Accuracy()
for i, (data, label) in enumerate(val_data):
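        # hedged sketch of the collapsed remainder; assumes a single context,
        # whereas the full tutorial may split batches across several devices
        data = data.as_in_context(ctx)
        label = label.as_in_context(ctx)
        outputs = net(data)
        metric.update(label, outputs)
    return metric.get()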
@@ -254,7 +254,7 @@ We now have a trained our custom model. This can be serialized into model files



```python
```{.python .input}
finetune_net.export("flower-recognition", epoch=epochs)
```
(Another changed file: the Gluon logistic regression tutorial.)
@@ -23,7 +23,7 @@ Logistic Regression is one of the first models newcomers to Deep Learning are im
Before anything else, let's import the required packages for this tutorial.


```python
```{.python .input}
import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon
@@ -36,7 +36,7 @@ mx.random.seed(12345) # Added for reproducibility
In this tutorial we will use a fake dataset containing 10 features drawn from a normal distribution with mean 0 and standard deviation 1, and a class label, which can be either 0 or 1. The size of the dataset is arbitrary. The function below helps us generate the dataset. The class label `y` is generated via non-random logic, so the network has a pattern to look for. A boundary of 3 is selected to make sure that the number of positive examples is smaller than the number of negative ones, but not too small


```python
```{.python .input}
def get_random_data(size, ctx):
x = nd.normal(0, 1, shape=(size, 10), ctx=ctx)
y = x.sum(axis=1) > 3
@@ -46,7 +46,7 @@ def get_random_data(size, ctx):
Also, let's define a set of hyperparameters that we are going to use later. Since our model is simple and the dataset is small, we are going to use the CPU for calculations. Feel free to change it to GPU for a more advanced scenario.


```python
```{.python .input}
ctx = mx.cpu()
train_data_size = 1000
val_data_size = 100
@@ -60,7 +60,7 @@ To work with data, Apache MXNet provides [Dataset](https://mxnet.apache.org/api/
Below we define training and validation datasets, which we are going to use in the tutorial.


```python
```{.python .input}
train_x, train_ground_truth_class = get_random_data(train_data_size, ctx)
train_dataset = ArrayDataset(train_x, train_ground_truth_class)
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
@@ -77,7 +77,7 @@ The only requirement for the logistic regression is that the last layer of the n
Below, we define a model which has an input layer of 10 neurons, a couple of inner layers of 10 neurons each, and an output layer of 1 neuron. We stack the layers using [HybridSequential](https://mxnet.apache.org/api/python/gluon/gluon.html#mxnet.gluon.nn.HybridSequential) block and initialize the parameters of the network using [Xavier](https://mxnet.apache.org/api/python/optimization/optimization.html#mxnet.initializer.Xavier) initialization.


```python
```{.python .input}
net = nn.HybridSequential()
net.add(nn.Dense(units=10, activation='relu')) # input layer
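# hedged sketch of the collapsed remainder, following the description above:
# a second inner layer, a single-neuron output layer, and Xavier initialization
net.add(nn.Dense(units=10, activation='relu'))  # inner layer 2
net.add(nn.Dense(units=1))                      # output layer
net.initialize(mx.init.Xavier())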
@@ -99,7 +99,7 @@ Metric helps us to estimate how good our model is in terms of a problem we are t
Below we define these objects.


```python
```{.python .input}
loss = gluon.loss.SigmoidBinaryCrossEntropyLoss()
trainer = Trainer(params=net.collect_params(), optimizer='sgd',
optimizer_params={'learning_rate': 0.1})
@@ -110,7 +110,7 @@ f1 = mx.metric.F1()
The next step is to define the training function, in which we iterate over all batches of training data, execute the forward pass on each batch, and calculate the training loss. On line 19, we sum the losses of every batch per epoch into a single variable, because we calculate the loss per batch but want to display it per epoch.


```python
```{.python .input}
def train_model():
cumulative_train_loss = 0
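    # hedged sketch of the collapsed loop body, using the objects defined above
    # (train_dataloader, ctx, net, loss, trainer) and the batch_size hyperparameter
    for i, (data, label) in enumerate(train_dataloader):
        data = data.as_in_context(ctx)
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            loss_result = loss(output, label)
        loss_result.backward()
        trainer.step(batch_size)
        # accumulate per-batch loss so that it can be reported per epoch
        cumulative_train_loss += nd.sum(loss_result).asscalar()
    return cumulative_train_loss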
@@ -159,7 +159,7 @@ For `F1` metric to work, instead of one number per class, we must pass probabili
Then we pass this stacked matrix to `F1` score.


```python
```{.python .input}
def validate_model(threshold):
cumulative_val_loss = 0
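    # hedged sketch of the collapsed loop body; val_dataloader is assumed to be
    # defined analogously to train_dataloader above
    for i, (val_data, val_ground_truth_class) in enumerate(val_dataloader):
        val_data = val_data.as_in_context(ctx)
        val_ground_truth_class = val_ground_truth_class.as_in_context(ctx)
        prediction = net(val_data)
        cumulative_val_loss += nd.sum(loss(prediction, val_ground_truth_class)).asscalar()
        # convert raw outputs to probabilities of the positive class
        prediction = prediction.sigmoid().reshape(-1)
        # F1 expects one probability per class: stack P(y=0) and P(y=1) column-wise
        probabilities = mx.nd.stack(1 - prediction, prediction, axis=1)
        f1.update(val_ground_truth_class, probabilities)
    return cumulative_val_loss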
@@ -193,7 +193,7 @@ def validate_model(threshold):
Using the functions defined above, we can finally write our main training loop.


```python
```{.python .input}
epochs = 10
threshold = 0.5
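# hedged sketch (not the diff's own lines): the collapsed loop presumably
# combines the two functions defined above, e.g.
for e in range(epochs):
    avg_train_loss = train_model() / train_data_size
    avg_val_loss = validate_model(threshold) / val_data_size
    print("Epoch: %s, training loss: %.4f, validation loss: %.4f, F1: %.4f" %
          (e, avg_train_loss, avg_val_loss, f1.get()[1]))
    f1.reset()  # reset the metric between epochs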