Commit b994a0f
Grammar fixes
1 parent bde134a commit b994a0f
File tree

1 file changed: +31 -25 lines changed

gallery/how_to/work_with_microtvm/micro_train.py

Lines changed: 31 additions & 25 deletions
@@ -60,14 +60,19 @@
 # Installing the Prerequisites
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 #
-# To run this tutorial, we will need Tensorflow and TFLite to train our model, pyserial and tlcpack
-# (a community build of TVM) to compile and test it, and imagemagick and curl to preprocess data.
-# We will also need to install the Arduino CLI and the mbed_nano package to test our model.
+# This tutorial will use TensorFlow to train the model - a widely used machine learning library
+# created by Google. TensorFlow is a very low-level library, however, so we will use the Keras
+# interface to talk to TensorFlow. We will also use TensorFlow Lite to perform quantization on
+# our model, as TensorFlow by itself does not support this.
+#
+# Once we have our generated model, we will use TVM to compile and test it. To avoid having to
+# build from source, we'll install ``tlcpack`` - a community build of TVM. Lastly, we'll also
+# install ``imagemagick`` and ``curl`` to preprocess data:
 #
 # .. code-block:: bash
 #
 #     %%bash
-#     pip install -q tensorflow tflite pyserial
+#     pip install -q tensorflow tflite
 #     pip install -q tlcpack-nightly -f https://tlcpack.ai/wheels
 #     apt-get -qq install imagemagick curl
 #
@@ -82,7 +87,7 @@
 # This tutorial demonstrates training a neural network, which requires a lot of computing power
 # and will go much faster if you have a GPU. If you are viewing this tutorial on Google Colab, you
 # can enable a GPU by going to **Runtime->Change runtime type** and selecting "GPU" as our hardware
-# accelerator. If you are running locally, you can `follow Tensorflow's guide <https://www.tensorflow.org/guide/gpu>`_ instead.
+# accelerator. If you are running locally, you can `follow TensorFlow's guide <https://www.tensorflow.org/guide/gpu>`_ instead.
 #
 # We can test our GPU installation with the following code:

@@ -131,7 +136,7 @@
 # a small enough fraction not to matter - just keep in mind that this will drive down our perceived
 # accuracy slightly.
 #
-# We could use the Tensorflow dataloader utilities, but we'll instead do it manually to make sure
+# We could use the TensorFlow dataloader utilities, but we'll instead do it manually to make sure
 # it's easy to change the datasets being used. We'll end up with the following file hierarchy:
 #
 # .. code-block::
@@ -267,7 +272,7 @@
 #
 # In this tutorial, we will use an RGB 64x64 input image and alpha 0.25. This is not quite
 # ideal, but it allows the finished model to fit in 192 KB of RAM, while still letting us perform
-# transfer learning using the official Tensorflow source models (if we used alpha <0.25 or a
+# transfer learning using the official TensorFlow source models (if we used alpha <0.25 or a
 # grayscale input, we wouldn't be able to do this).
 #
 # What is Transfer Learning?
@@ -290,10 +295,11 @@
 # We can take advantage of this by starting training with a MobileNet model that was trained on
 # ImageNet, and already knows how to identify those lines and shapes. We can then just remove the
 # last few layers from this pretrained model, and add our own final layers. We'll then train this
-# conglomerate model for a few epochs on our cars vs non-cars dataset, to fine tune the first layers
-# and train from scratch the last layers.
+# conglomerate model for a few epochs on our cars vs non-cars dataset, to adjust the first layers
+# and train from scratch the last layers. This process of training an already-partially-trained
+# model is called *fine-tuning*.
 #
-# Source MobileNets for transfer learning have been `pretrained by the Tensorflow folks <https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md>`_, so we
+# Source MobileNets for transfer learning have been `pretrained by the TensorFlow folks <https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md>`_, so we
 # can just download the one closest to what we want (the 128x128 input model with 0.25 depth scale).

 os.makedirs(f"{FOLDER}/models")
@@ -326,8 +332,8 @@
 model.add(tf.keras.layers.Dense(2, activation="softmax"))

 ######################################################################
-# Training Our Network
-# ^^^^^^^^^^^^^^^^^^^^
+# Fine-Tuning Our Network
+# ^^^^^^^^^^^^^^^^^^^^^^^
 # When training neural networks, we must set a parameter called the **learning rate** that controls
 # how fast our network learns. It must be set carefully - too slow, and our network will take
 # forever to train; too fast, and our network won't be able to learn some fine details. Generally
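The learning-rate tradeoff described here can be sketched with plain gradient descent on a one-dimensional quadratic. The loss f(x) = x² and the step sizes below are illustrative stand-ins, not values from the tutorial:

```python
def gradient_descent(lr, steps=50, x=1.0):
    """Minimize f(x) = x^2 (gradient 2*x) with a fixed learning rate."""
    for _ in range(steps):
        x = x - lr * 2 * x  # each step multiplies x by (1 - 2*lr)
    return x

# A moderate learning rate converges toward the minimum at x = 0 ...
good = abs(gradient_descent(lr=0.1))
# ... a tiny one barely moves in 50 steps ...
slow = abs(gradient_descent(lr=0.001))
# ... and one that is too large overshoots and diverges.
bad = abs(gradient_descent(lr=1.1))
```

Since each update multiplies x by (1 - 2·lr), any lr above 1.0 makes the iterates grow instead of shrink - the one-dimensional analogue of a network that fails to learn.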
@@ -361,8 +367,8 @@
 #
 # To address both issues we will **quantize** the model - representing the weights as eight-bit
 # integers. It's more complex than just rounding, though - to get the best performance, TensorFlow
-# tracks how each neuron in our model activates, so we can figure out how to best represent the
-# while being relatively truthful to the original model.
+# tracks how each neuron in our model activates, so we can figure out how to most accurately
+# simulate the neuron's original activations with integer operations.
 #
 # We will help TensorFlow do this by creating a representative dataset - a subset of the original
 # that is used for tracking how those neurons activate. We'll then pass this into a ``TFLiteConverter``
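The ``representative_dataset`` hook that ``TFLiteConverter`` expects is just a generator yielding, for each sample, a list containing one float32 tensor per model input. A minimal sketch of that pattern, with random NumPy arrays standing in for real training images (the shapes and 0-1 scaling here are illustrative):

```python
import numpy as np

# Stand-ins for a handful of real training images: 10 RGB 64x64 images.
images = np.random.randint(0, 256, size=(10, 64, 64, 3), dtype=np.uint8)

def representative_dataset():
    """Yield one sample at a time, in the shape TFLiteConverter expects:
    a list holding a single batch-of-one float32 tensor."""
    for image in images:
        yield [np.expand_dims(image, axis=0).astype(np.float32) / 255.0]

samples = list(representative_dataset())
```

The converter calls this generator itself, recording the range of values each activation takes so it can pick integer scales for them.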
@@ -388,7 +394,7 @@ def representative_dataset():
 ######################################################################
 # Download the Model if Desired
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-# We've now got a finished model, that you can use locally or in other tutorials (try autotuning
+# We've now got a finished model that you can use locally or in other tutorials (try autotuning
 # this model or viewing it on `https://netron.app/ <https://netron.app/>`_). But before we do
 # those things, we'll have to write it to a file (``quantized.tflite``). If you're running this
 # tutorial on Google Colab, you'll have to uncomment the last two lines to download the file
@@ -403,8 +409,8 @@ def representative_dataset():
 ######################################################################
 # Compiling With TVM For Arduino
 # ------------------------------
-# Tensorflow has a built-in framework for deploying to microcontrollers - `TFLite Micro <https://www.tensorflow.org/lite/microcontrollers>`_. However,
-# it's poorly supported by development boards, and does not support autotuning. We will use Apache
+# TensorFlow has a built-in framework for deploying to microcontrollers - `TFLite Micro <https://www.tensorflow.org/lite/microcontrollers>`_. However,
+# it's poorly supported by development boards and does not support autotuning. We will use Apache
 # TVM instead.
 #
 # TVM can be used either with its command line interface (``tvmc``) or with its Python interface. The
@@ -481,8 +487,8 @@ def representative_dataset():
 # Testing our Arduino Project
 # ---------------------------
 # Consider the following two 224x224 images from the author's camera roll - one of a car, one not.
-# We will test our Arduino project by loading both of these images, and executing the compiled model
-# on them both.
+# We will test our Arduino project by loading both of these images and executing the compiled model
+# on them.
 #
 # .. image:: https://raw.githubusercontent.com/guberti/web-data/micro-train-tutorial-data/testdata/microTVM/data/model_train_images_combined.png
 #    :align: center
@@ -494,7 +500,7 @@ def representative_dataset():
 #
 # It's also challenging to load raw data onto an Arduino, as only C/CPP files (and similar) are
 # compiled. We can work around this by embedding our raw data in a hard-coded C array with the
-# built-in utility ``bin2c``, that will output a file resembling the following:
+# built-in utility ``bin2c``, which will output a file like the following:
 #
 # .. code-block:: c
 #
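A rough Python model of what that embedding step does - the array name and exact formatting below are illustrative, not ``bin2c``'s literal output:

```python
def to_c_array(data: bytes, name: str = "image_data") -> str:
    """Render raw bytes as a hard-coded C array, roughly as bin2c does."""
    body = ", ".join(f"0x{b:02x}" for b in data)
    return (
        f"static const unsigned char {name}[] = {{{body}}};\n"
        f"static const unsigned int {name}_len = {len(data)};\n"
    )

# Three raw bytes become a compilable C source fragment.
snippet = to_c_array(b"\x01\x02\xff")
```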
@@ -559,8 +565,8 @@ def representative_dataset():
 # Now that our project has been generated, TVM's job is mostly done! We can still call
 # ``arduino_project.build()`` and ``arduino_project.upload()``, but these just use ``arduino-cli``'s
 # compile and flash commands underneath. We could also begin autotuning our model, but that's a
-# subject for a different tutorial. To finish up, we'll first test that our program compiles does
-# not throw any compiler errors:
+# subject for a different tutorial. To finish up, we'll verify that our project compiles without
+# any errors:

 shutil.rmtree(f"{FOLDER}/models/project/build", ignore_errors=True)
 # sphinx_gallery_start_ignore
@@ -622,8 +628,8 @@ def representative_dataset():
 # Other object results:
 # 0, 255
 #
-# The first number represents the model's confidence that the object **is** a car, and ranges from
-# 0-255. The second number represents the model's confidence that the object **is not** a car, and
+# The first number represents the model's confidence that the object **is** a car and ranges from
+# 0-255. The second number represents the model's confidence that the object **is not** a car and
 # is also 0-255. These results mean the model is very sure that the first image is a car, and the
 # second image is not (which is correct). Hence, our model is working!
 #
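Decoding those two bytes can be sketched as below. The ``(0, 255)`` pair is the "other object" output shown above; the helper name and the ``(255, 0)`` example are made up for illustration:

```python
def is_car(scores):
    """scores = (is_car_confidence, is_not_car_confidence), each a uint8 in 0-255.
    Whichever score is larger wins."""
    car, not_car = scores
    return car > not_car

# The "other object" output above, (0, 255), decodes as "not a car".
other_is_car = is_car((0, 255))
```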
@@ -632,7 +638,7 @@ def representative_dataset():
 # In this tutorial, we used transfer learning to quickly train an image recognition model to
 # identify cars. We modified its input dimensions and last few layers to make it better at this,
 # and to make it faster and smaller. We then quantized the model and compiled it using TVM to
-# create an Arduino sketch. Lastly, we tested the model using two static images, to prove it works
+# create an Arduino sketch. Lastly, we tested the model using two static images to prove it works
 # as intended.
 #
 # Next Steps
