|
36 | 36 | # .. image:: https://raw.githubusercontent.com/guberti/web-data/micro-train-tutorial-data/images/utilities/colab_button.png |
37 | 37 | # :align: center |
38 | 38 | # :target: https://colab.research.google.com/github/guberti/tvm-site/blob/asf-site/docs/_downloads/a7c7ea4b5017ae70db1f51dd8e6dcd82/micro_train.ipynb |
39 | | -# :width: 600px |
| 39 | +# :width: 300px |
40 | 40 | # |
41 | 41 | # Motivation |
42 | 42 | # ---------- |
|
258 | 258 | # |
259 | 259 | # Our applications generally don't need perfect accuracy - 90% is good enough. We can thus use the |
260 | 260 | # older and smaller MobileNet V1 architecture. But this *still* won't be small enough - by default, |
261 | | -# MobileNet V1 with 224x224 inputs and depth 1.0 takes ~50 MB to just **store**. To reduce the size |
| 261 | +# MobileNet V1 with 224x224 inputs and alpha 1.0 takes ~50 MB to just **store**. To reduce the size |
262 | 262 | # of the model, there are three knobs we can turn. First, we can reduce the size of the input images |
263 | | -# from 224x224 to 96x96 or 64x64, and Keras makes it easy to do this. We can also reduce the **depth** |
264 | | -# of the model, from 1.0 to 0.25. And if we were really strapped for space, we could reduce the |
| 263 | +# from 224x224 to 96x96 or 64x64, and Keras makes it easy to do this. We can also reduce the **alpha** |
| 264 | +# of the model, from 1.0 to 0.25, which scales down the width of the network (the number of
| 265 | +# filters in each layer) by a factor of four. And if we were really strapped for space, we could reduce the
265 | 266 | # number of **channels** by making our model take grayscale images instead of RGB ones. |
266 | 267 | # |
267 | | -# In this tutorial, we will use an RGB 64x64 input image and 0.25 depth scale. This is not quite |
| 268 | +# In this tutorial, we will use an RGB 64x64 input image and alpha 0.25. This is not quite |
268 | 269 | # ideal, but it allows the finished model to fit in 192 KB of RAM, while still letting us perform |
269 | | -# transfer learning using the official Tensorflow source models (if we used depth scale <0.25 or |
270 | | -# a grayscale input, we wouldn't be able to do this). |
| 270 | +# transfer learning using the official TensorFlow source models (if we used alpha <0.25 or a
| 271 | +# grayscale input, we wouldn't be able to do this). |
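> Keras exposes all three of these knobs directly when instantiating the model. As a minimal sketch of the configuration described above (`include_top=False` is assumed here so a custom classifier head can be attached later for transfer learning):
>
> ```python
> import tensorflow as tf
>
> # Sketch: MobileNet V1 with the settings discussed above.
> # 64x64 RGB inputs and alpha=0.25 keep the stored weights small enough
> # for a 192 KB RAM budget while still permitting pretrained weights.
> pretrained = tf.keras.applications.MobileNet(
>     input_shape=(64, 64, 3),  # reduced from the default 224x224
>     alpha=0.25,               # quarter-width network
>     include_top=False,        # drop the ImageNet classifier head
>     weights="imagenet",       # pretrained weights exist only for alpha >= 0.25
> )
> pretrained.summary()  # verify the parameter count shrank as expected
> ```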
271 | 272 | # |
272 | 273 | # What is Transfer Learning? |
273 | 274 | # ^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
369 | 370 | # the conversion. By default, TFLite keeps the inputs and outputs of our model as floats, so we must |
370 | 371 | # explicitly tell it to avoid this behavior. |
371 | 372 |
|
372 | | -converter = tf.lite.TFLiteConverter.from_keras_model(model) |
373 | | - |
374 | | - |
375 | 373 | def representative_dataset(): |
376 | 374 | for image_batch, label_batch in full_dataset.take(10): |
377 | 375 | yield [image_batch] |
378 | 376 |
|
379 | 377 |
|
| 378 | +converter = tf.lite.TFLiteConverter.from_keras_model(model) |
380 | 379 | converter.optimizations = [tf.lite.Optimize.DEFAULT] |
381 | 380 | converter.representative_dataset = representative_dataset |
382 | 381 | converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] |
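> The hunk ends before the conversion itself runs. For context, forcing integer inputs/outputs and finishing the conversion typically looks like the sketch below (the `model.tflite` output path is illustrative):
>
> ```python
> # Keep the model's inputs and outputs as int8 instead of the float default.
> converter.inference_input_type = tf.int8
> converter.inference_output_type = tf.int8
>
> # convert() runs the representative dataset to calibrate quantization ranges.
> tflite_model = converter.convert()
> with open("model.tflite", "wb") as f:  # illustrative output path
>     f.write(tflite_model)
> ```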
@@ -431,7 +430,7 @@ def representative_dataset(): |
431 | 430 | # |
432 | 431 | # Generating our project |
433 | 432 | # ^^^^^^^^^^^^^^^^^^^^^^ |
434 | | -# Next, we'll compile the model to TVM's MLF (machine learning format) intermediate representation, |
| 433 | +# Next, we'll compile the model to TVM's MLF (model library format) intermediate representation, |
435 | 434 | # which consists of C/C++ code and is designed for autotuning. To improve performance, we'll tell |
436 | 435 | # TVM that we're compiling for the ``nrf52840`` microprocessor (the one the Nano 33 BLE uses). We'll |
437 | 436 | # also tell it to use the C runtime (abbreviated ``crt``) and to use ahead-of-time memory allocation |
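> For orientation, the compilation step this paragraph describes typically looks like the following in microTVM. This is a sketch, not the file's exact code; it assumes `mod` and `params` were produced by `tvm.relay.frontend.from_tflite` on the quantized model:
>
> ```python
> import tvm
> from tvm import relay
>
> # Sketch: compile for the nRF52840 with the C runtime and AOT executor.
> target = tvm.target.target.micro("nrf52840")  # the MCU on the Nano 33 BLE
> runtime = relay.backend.Runtime("crt")        # bare-metal C runtime
> executor = relay.backend.Executor("aot")      # ahead-of-time memory allocation
>
> with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
>     module = relay.build(mod, target, runtime=runtime, executor=executor, params=params)
> ```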
@@ -563,6 +562,9 @@ def representative_dataset(): |
563 | 562 | # not throw any compiler errors: |
564 | 563 |
|
565 | 564 | shutil.rmtree(f"{FOLDER}/models/project/build", ignore_errors=True) |
| 565 | +# sphinx_gallery_start_ignore |
| 566 | +arduino_project = MagicMock()  # stub the project handle so the docs build doesn't need arduino-cli
| 567 | +# sphinx_gallery_end_ignore |
566 | 568 | arduino_project.build() |
567 | 569 | print("Compilation succeeded!") |
568 | 570 |
|
|