# This tutorial demonstrates training a neural network, which requires a lot of computing power
# and will go much faster if you have a GPU. If you are viewing this tutorial on Google Colab, you
# can enable a GPU by going to **Runtime->Change runtime type** and selecting "GPU" as our hardware
# accelerator. If you are running locally, you can `follow TensorFlow's guide <https://www.tensorflow.org/guide/gpu>`_ instead.
#
# We can test our GPU installation with the following code:
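# A minimal sketch of such a check, assuming the standard TensorFlow device API
# (an illustration, not necessarily the tutorial's exact code):

```python
import tensorflow as tf

# List the GPUs visible to TensorFlow; an empty list means training
# will fall back to the CPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", gpus)
```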
# a small enough fraction not to matter - just keep in mind that this will drive down our perceived
# accuracy slightly.
#
# We could use the TensorFlow dataloader utilities, but we'll instead do it manually to make sure
# it's easy to change the datasets being used. We'll end up with the following file hierarchy:
#
# .. code-block::
#
# In this tutorial, we will use an RGB 64x64 input image and alpha 0.25. This is not quite
# ideal, but it allows the finished model to fit in 192 KB of RAM, while still letting us perform
# transfer learning using the official TensorFlow source models (if we used alpha <0.25 or a
# grayscale input, we wouldn't be able to do this).
#
# What is Transfer Learning?
# We can take advantage of this by starting training with a MobileNet model that was trained on
# ImageNet, and already knows how to identify those lines and shapes. We can then just remove the
# last few layers from this pretrained model, and add our own final layers. We'll then train this
# conglomerate model for a few epochs on our cars vs non-cars dataset, to adjust the first layers
# and train from scratch the last layers. This process of training an already-partially-trained
# model is called *fine-tuning*.
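#
# A minimal Keras sketch of this remove-the-head-and-retrain pattern (an
# illustration only - the tutorial uses TF-Slim checkpoints, and here
# ``weights=None`` keeps the sketch runnable offline; in practice the base
# model would start from pretrained ImageNet weights):

```python
import tensorflow as tf

# Base MobileNet with its classification layers removed. In a real run,
# weights would come from an ImageNet-pretrained checkpoint rather than None.
base = tf.keras.applications.MobileNet(
    input_shape=(64, 64, 3),
    alpha=0.25,
    include_top=False,
    weights=None,
)

# Add our own final layers for the two-class cars vs non-cars task, then
# train the whole conglomerate model for a few epochs to fine-tune it.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```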
#
# Source MobileNets for transfer learning have been `pretrained by the TensorFlow folks <https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md>`_, so we
# can just download the one closest to what we want (the 128x128 input model with 0.25 depth scale).