| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# End-to-End Multiclass Image Classification Example\n", |
| 8 | + "1. [Introduction](#Introduction)\n", |
| 9 | +        "2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)\n", |
| 10 | + " 1. [Permissions and environment variables](#Permissions-and-environment-variables)\n", |
| 11 | +        "    2. [Data preparation](#Data-preparation)\n", |
| 12 | + "3. [Training the model](#Training-the-model)\n", |
| 13 | + " 1. [Training parameters](#Training-parameters)\n", |
| 14 | + " 2. [Start the training](#Start-the-training)\n", |
| 15 | + "4. [Inference](#Inference)" |
| 16 | + ] |
| 17 | + }, |
| 18 | + { |
| 19 | + "cell_type": "markdown", |
| 20 | + "metadata": {}, |
| 21 | + "source": [ |
| 22 | + "## Introduction\n", |
| 23 | + "\n", |
| 24 | +        "Welcome to our end-to-end example of the distributed image classification algorithm. In this demo, we use the Amazon SageMaker image classification algorithm to train on the [Caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/).\n", |
| 25 | + "\n", |
| 26 | +        "To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on." |
| 27 | + ] |
| 28 | + }, |
| 29 | + { |
| 30 | + "cell_type": "markdown", |
| 31 | + "metadata": {}, |
| 32 | + "source": [ |
| 33 | +        "## Prerequisites and Preprocessing\n", |
| 34 | + "\n", |
| 35 | + "### Permissions and environment variables\n", |
| 36 | + "\n", |
| 37 | + "Here we set up the linkage and authentication to AWS services. There are three parts to this:\n", |
| 38 | + "\n", |
| 39 | +        "* The role used to give training and hosting access to your data. This is obtained automatically from the role used to start the notebook.\n", |
| 40 | + "* The S3 bucket that you want to use for training and model data\n", |
| 41 | +        "* The Amazon SageMaker image classification Docker image, which need not be changed" |
| 42 | + ] |
| 43 | + }, |
| 44 | + { |
| 45 | + "cell_type": "code", |
| 46 | + "execution_count": null, |
| 47 | + "metadata": {}, |
| 48 | + "outputs": [], |
| 49 | + "source": [ |
| 50 | + "%%time\n", |
| 51 | + "import sagemaker\n", |
| 52 | + "from sagemaker import get_execution_role\n", |
| 53 | + "\n", |
| 54 | + "role = get_execution_role()\n", |
| 55 | + "print(role)\n", |
| 56 | + "\n", |
| 57 | + "sess = sagemaker.Session()\n", |
| 58 | + "bucket=sess.default_bucket()\n", |
| 59 | + "prefix = 'ic-fulltraining'" |
| 60 | + ] |
| 61 | + }, |
| 62 | + { |
| 63 | + "cell_type": "code", |
| 64 | + "execution_count": null, |
| 65 | + "metadata": {}, |
| 66 | + "outputs": [], |
| 67 | + "source": [ |
| 68 | + "from sagemaker.amazon.amazon_estimator import get_image_uri\n", |
| 69 | + "\n", |
| 70 | + "training_image = get_image_uri(sess.boto_region_name, 'image-classification', repo_version=\"latest\")\n", |
| 71 | +        "print(training_image)" |
| 72 | + ] |
| 73 | + }, |
| 74 | + { |
| 75 | + "cell_type": "markdown", |
| 76 | + "metadata": {}, |
| 77 | + "source": [ |
| 78 | + "### Data preparation\n", |
| 79 | +        "Download the data and transfer it to S3 for use in training. In this demo, we use the [Caltech-256](http://www.vision.caltech.edu/Image_Datasets/Caltech256/) dataset, which contains 30608 images of 256 object categories. For the training and validation data, we follow the splitting scheme in this MXNet [example](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/data/caltech256.sh): it randomly selects 60 images per class for training and uses the remaining images for validation. The algorithm takes a `RecordIO` file as input. The user can also provide the image files as input, which will be converted into `RecordIO` format using MXNet's [im2rec](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec) tool; converting the entire Caltech-256 dataset (~1.2GB) takes around 50 seconds on a p2.xlarge instance. For this demo, we use the pre-built `RecordIO` files; the conversion is sketched in the optional cells that follow." |
| 80 | + ] |
| 81 | + }, |
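|  | +    { |
|  | +      "cell_type": "markdown", |
|  | +      "metadata": {}, |
|  | +      "source": [ |
|  | +        "*(Optional)* For reference, the cell below sketches how the raw Caltech-256 images could be packed into `RecordIO` files with MXNet's `im2rec` tool. It is not needed for this demo, which downloads pre-built `.rec` files in the next cell, so the commands are left commented out. The download URLs, script location, and flags shown are assumptions based on the MXNet tooling of this era and may differ across versions." |
|  | +      ] |
|  | +    }, |
|  | +    { |
|  | +      "cell_type": "code", |
|  | +      "execution_count": null, |
|  | +      "metadata": {}, |
|  | +      "outputs": [], |
|  | +      "source": [ |
|  | +        "# Optional sketch (not run in this demo): build RecordIO files from the raw images.\n", |
|  | +        "# The URLs, script path, and flags below are illustrative and may vary by MXNet version.\n", |
|  | +        "\n", |
|  | +        "# download and unpack the raw Caltech-256 images (~1.2GB)\n", |
|  | +        "# !wget -q http://www.vision.caltech.edu/Image_Datasets/Caltech256/256_ObjectCategories.tar\n", |
|  | +        "# !tar -xf 256_ObjectCategories.tar\n", |
|  | +        "\n", |
|  | +        "# fetch MXNet's im2rec tool\n", |
|  | +        "# !wget -q https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/im2rec.py\n", |
|  | +        "\n", |
|  | +        "# generate a .lst file listing the images and their class labels,\n", |
|  | +        "# then pack the images into a .rec file (resizing the shorter edge to 256 pixels)\n", |
|  | +        "# !python im2rec.py --list --recursive caltech-256 256_ObjectCategories/\n", |
|  | +        "# !python im2rec.py --resize 256 --quality 95 --num-thread 8 caltech-256 256_ObjectCategories/" |
|  | +      ] |
|  | +    }, |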
| 82 | + { |
| 83 | + "cell_type": "code", |
| 84 | + "execution_count": null, |
| 85 | + "metadata": {}, |
| 86 | + "outputs": [], |
| 87 | + "source": [ |
| 88 | + "import os \n", |
| 89 | + "import urllib.request\n", |
| 90 | + "import boto3\n", |
| 91 | + "\n", |
| 92 | + "def download(url):\n", |
| 93 | + " filename = url.split(\"/\")[-1]\n", |
| 94 | + " if not os.path.exists(filename):\n", |
| 95 | + " urllib.request.urlretrieve(url, filename)\n", |
| 96 | + "\n", |
| 97 | + " \n", |
| 98 | + "def upload_to_s3(channel, file):\n", |
| 99 | + " s3 = boto3.resource('s3')\n", |
| 100 | + " data = open(file, \"rb\")\n", |
| 101 | + " key = channel + '/' + file\n", |
| 102 | + " s3.Bucket(bucket).put_object(Key=key, Body=data)\n", |
| 103 | + "\n", |
| 104 | + "\n", |
| 105 | + "# caltech-256\n", |
| 106 | + "download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')\n", |
| 107 | + "download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')" |
| 108 | + ] |
| 109 | + }, |
| 110 | + { |
| 111 | + "cell_type": "code", |
| 112 | + "execution_count": null, |
| 113 | + "metadata": {}, |
| 114 | + "outputs": [], |
| 115 | + "source": [ |
| 116 | +        "# Two channels: train and validation\n", |
| 117 | + "s3train = 's3://{}/{}/train/'.format(bucket, prefix)\n", |
| 118 | + "s3validation = 's3://{}/{}/validation/'.format(bucket, prefix)\n", |
| 119 | + "\n", |
| 120 | +        "# upload the RecordIO files to the train and validation channels\n", |
| 121 | + "!aws s3 cp caltech-256-60-train.rec $s3train --quiet\n", |
| 122 | + "!aws s3 cp caltech-256-60-val.rec $s3validation --quiet" |
| 123 | + ] |
| 124 | + }, |
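|  | +    { |
|  | +      "cell_type": "markdown", |
|  | +      "metadata": {}, |
|  | +      "source": [ |
|  | +        "As a quick optional check, we can list what landed under the prefix in S3 before training. This is a small sketch using boto3; it assumes the `bucket` and `prefix` variables defined earlier." |
|  | +      ] |
|  | +    }, |
|  | +    { |
|  | +      "cell_type": "code", |
|  | +      "execution_count": null, |
|  | +      "metadata": {}, |
|  | +      "outputs": [], |
|  | +      "source": [ |
|  | +        "# optional: confirm the .rec files were uploaded to the expected S3 locations\n", |
|  | +        "s3_client = boto3.client('s3')\n", |
|  | +        "response = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix)\n", |
|  | +        "for obj in response.get('Contents', []):\n", |
|  | +        "    print(obj['Key'], obj['Size'])" |
|  | +      ] |
|  | +    }, |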
| 125 | + { |
| 126 | + "cell_type": "markdown", |
| 127 | + "metadata": {}, |
| 128 | + "source": [ |
| 131 | + "Once we have the data available in the correct format for training, the next step is to actually train the model using the data. After setting training parameters, we kick off training, and poll for status until training is completed.\n" |
| 132 | + ] |
| 133 | + }, |
| 134 | + { |
| 135 | + "cell_type": "markdown", |
| 136 | + "metadata": {}, |
| 137 | + "source": [ |
| 138 | + "## Training the model\n", |
| 139 | + "\n", |
| 140 | +        "Now that the setup is complete, we are ready to train our image classification model. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator launches the training job.\n", |
| 141 | + "### Training parameters\n", |
| 142 | +        "There are two kinds of parameters that need to be set for training. The first kind is the set of parameters for the training job itself. These include:\n", |
| 143 | + "\n", |
| 144 | +        "* **Training instance count**: The number of instances on which to run the training. When the instance count is greater than one, the image classification algorithm runs in a distributed setting.\n", |
| 145 | +        "* **Training instance type**: The type of machine on which to run the training. Typically, we use GPU instances for this training.\n", |
| 146 | +        "* **Output path**: The S3 folder in which the training output is stored\n" |
| 147 | + ] |
| 148 | + }, |
| 149 | + { |
| 150 | + "cell_type": "code", |
| 151 | + "execution_count": null, |
| 152 | + "metadata": {}, |
| 153 | + "outputs": [], |
| 154 | + "source": [ |
| 155 | + "s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)\n", |
| 156 | + "ic = sagemaker.estimator.Estimator(training_image,\n", |
| 157 | + " role, \n", |
| 158 | + " train_instance_count=1, \n", |
| 159 | + " train_instance_type='ml.p2.xlarge',\n", |
| 160 | + " train_volume_size = 50,\n", |
| 161 | + " train_max_run = 360000,\n", |
| 162 | + " input_mode= 'File',\n", |
| 163 | + " output_path=s3_output_location,\n", |
| 164 | + " sagemaker_session=sess)" |
| 165 | + ] |
| 166 | + }, |
| 167 | + { |
| 168 | + "cell_type": "markdown", |
| 169 | + "metadata": {}, |
| 170 | + "source": [ |
| 171 | + "Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:\n", |
| 172 | + "\n", |
| 173 | +        "* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.\n", |
| 174 | +        "* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual images.\n", |
| 175 | +        "* **num_classes**: The number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For Caltech-256, we use 257 because it has 256 object categories plus 1 clutter class.\n", |
| 176 | +        "* **num_training_samples**: The total number of training samples. It is set to 15420 for the Caltech-256 dataset with the current split (257 classes x 60 training images per class).\n", |
| 177 | +        "* **mini_batch_size**: The number of training samples used for each mini-batch. In distributed training, the number of training samples used per batch is N * mini_batch_size, where N is the number of hosts on which training is run. For example, with two hosts and mini_batch_size=128, each update uses 256 samples.\n", |
| 178 | + "* **epochs**: Number of training epochs.\n", |
| 179 | + "* **learning_rate**: Learning rate for training.\n", |
| 180 | + "* **top_k**: Report the top-k accuracy during training.\n", |
| 181 | +        "* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', training is done in mixed-precision mode and is faster than float32 mode.\n" |
| 182 | + ] |
| 183 | + }, |
| 184 | + { |
| 185 | + "cell_type": "code", |
| 186 | + "execution_count": null, |
| 187 | + "metadata": {}, |
| 188 | + "outputs": [], |
| 189 | + "source": [ |
| 190 | + "ic.set_hyperparameters(num_layers=18,\n", |
| 191 | + " image_shape = \"3,224,224\",\n", |
| 192 | + " num_classes=257,\n", |
| 193 | + " num_training_samples=15420,\n", |
| 194 | + " mini_batch_size=128,\n", |
| 195 | + " epochs=5,\n", |
| 196 | + " learning_rate=0.01,\n", |
| 197 | + " top_k=2,\n", |
| 198 | + " precision_dtype='float32')" |
| 199 | + ] |
| 200 | + }, |
| 201 | + { |
| 202 | + "cell_type": "markdown", |
| 203 | + "metadata": {}, |
| 204 | + "source": [ |
| 205 | +        "### Input data specification\n", |
| 206 | +        "Set the data type and channels used for training." |
| 207 | + ] |
| 208 | + }, |
| 209 | + { |
| 210 | + "cell_type": "code", |
| 211 | + "execution_count": null, |
| 212 | + "metadata": {}, |
| 213 | + "outputs": [], |
| 214 | + "source": [ |
| 215 | + "train_data = sagemaker.session.s3_input(s3train, distribution='FullyReplicated', \n", |
| 216 | + " content_type='application/x-recordio', s3_data_type='S3Prefix')\n", |
| 217 | + "validation_data = sagemaker.session.s3_input(s3validation, distribution='FullyReplicated', \n", |
| 218 | + " content_type='application/x-recordio', s3_data_type='S3Prefix')\n", |
| 219 | + "\n", |
| 220 | + "data_channels = {'train': train_data, 'validation': validation_data}" |
| 221 | + ] |
| 222 | + }, |
| 223 | + { |
| 224 | + "cell_type": "markdown", |
| 225 | + "metadata": {}, |
| 226 | + "source": [ |
| 227 | +        "### Start the training\n", |
| 228 | +        "Start training by calling the fit method on the estimator." |
| 229 | + ] |
| 230 | + }, |
| 231 | + { |
| 232 | + "cell_type": "code", |
| 233 | + "execution_count": null, |
| 234 | + "metadata": { |
| 235 | + "scrolled": true |
| 236 | + }, |
| 237 | + "outputs": [], |
| 238 | + "source": [ |
| 239 | + "ic.fit(inputs=data_channels, logs=True)" |
| 240 | + ] |
| 241 | + }, |
| 242 | + { |
| 243 | + "cell_type": "markdown", |
| 244 | + "metadata": {}, |
| 245 | + "source": [ |
| 246 | +        "## Inference\n", |
| 247 | + "\n", |
| 248 | + "***\n", |
| 249 | + "\n", |
| 250 | +        "A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of a given image. You can deploy the trained model by using the deploy method on the estimator." |
| 251 | + ] |
| 252 | + }, |
| 253 | + { |
| 254 | + "cell_type": "code", |
| 255 | + "execution_count": null, |
| 256 | + "metadata": {}, |
| 257 | + "outputs": [], |
| 258 | + "source": [ |
| 259 | + "ic_classifier = ic.deploy(initial_instance_count = 1,\n", |
| 260 | + " instance_type = 'ml.m4.xlarge')" |
| 261 | + ] |
| 262 | + }, |
| 263 | + { |
| 264 | + "cell_type": "markdown", |
| 265 | + "metadata": {}, |
| 266 | + "source": [ |
| 267 | + "### Download test image" |
| 268 | + ] |
| 269 | + }, |
| 270 | + { |
| 271 | + "cell_type": "code", |
| 272 | + "execution_count": null, |
| 273 | + "metadata": {}, |
| 274 | + "outputs": [], |
| 275 | + "source": [ |
| 276 | + "!wget -O /tmp/test.jpg http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/008.bathtub/008_0007.jpg\n", |
| 277 | + "file_name = '/tmp/test.jpg'\n", |
| 278 | + "# test image\n", |
| 279 | + "from IPython.display import Image\n", |
| 280 | + "Image(file_name) " |
| 281 | + ] |
| 282 | + }, |
| 283 | + { |
| 284 | + "cell_type": "markdown", |
| 285 | + "metadata": {}, |
| 286 | + "source": [ |
| 287 | + "### Evaluation\n", |
| 288 | + "\n", |
| 289 | +        "Evaluate the image through the network for inference. The network outputs class probabilities; typically, one selects the class with the maximum probability as the final class output.\n", |
| 290 | + "\n", |
| 291 | + "**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for 5 epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate." |
| 292 | + ] |
| 293 | + }, |
| 294 | + { |
| 295 | + "cell_type": "code", |
| 296 | + "execution_count": null, |
| 297 | + "metadata": {}, |
| 298 | + "outputs": [], |
| 299 | + "source": [ |
| 300 | + "import json\n", |
| 301 | + "import numpy as np\n", |
| 302 | + "\n", |
| 303 | + "with open(file_name, 'rb') as f:\n", |
| 304 | + " payload = f.read()\n", |
| 305 | + " payload = bytearray(payload)\n", |
| 306 | + " \n", |
| 307 | + "ic_classifier.content_type = 'application/x-image'\n", |
| 308 | + "result = json.loads(ic_classifier.predict(payload))\n", |
| 309 | +        "# result is a list of probabilities, one per class\n", |
| 310 | +        "# find the class with the maximum probability and print its label and probability\n", |
| 311 | + "index = np.argmax(result)\n", |
| 312 | + "object_categories = ['ak47', 'american-flag', 'backpack', 'baseball-bat', 'baseball-glove', 'basketball-hoop', 'bat', 'bathtub', 'bear', 'beer-mug', 'billiards', 'binoculars', 'birdbath', 'blimp', 'bonsai-101', 'boom-box', 'bowling-ball', 'bowling-pin', 'boxing-glove', 'brain-101', 'breadmaker', 'buddha-101', 'bulldozer', 'butterfly', 'cactus', 'cake', 'calculator', 'camel', 'cannon', 'canoe', 'car-tire', 'cartman', 'cd', 'centipede', 'cereal-box', 'chandelier-101', 'chess-board', 'chimp', 'chopsticks', 'cockroach', 'coffee-mug', 'coffin', 'coin', 'comet', 'computer-keyboard', 'computer-monitor', 'computer-mouse', 'conch', 'cormorant', 'covered-wagon', 'cowboy-hat', 'crab-101', 'desk-globe', 'diamond-ring', 'dice', 'dog', 'dolphin-101', 'doorknob', 'drinking-straw', 'duck', 'dumb-bell', 'eiffel-tower', 'electric-guitar-101', 'elephant-101', 'elk', 'ewer-101', 'eyeglasses', 'fern', 'fighter-jet', 'fire-extinguisher', 'fire-hydrant', 'fire-truck', 'fireworks', 'flashlight', 'floppy-disk', 'football-helmet', 'french-horn', 'fried-egg', 'frisbee', 'frog', 'frying-pan', 'galaxy', 'gas-pump', 'giraffe', 'goat', 'golden-gate-bridge', 'goldfish', 'golf-ball', 'goose', 'gorilla', 'grand-piano-101', 'grapes', 'grasshopper', 'guitar-pick', 'hamburger', 'hammock', 'harmonica', 'harp', 'harpsichord', 'hawksbill-101', 'head-phones', 'helicopter-101', 'hibiscus', 'homer-simpson', 'horse', 'horseshoe-crab', 'hot-air-balloon', 'hot-dog', 'hot-tub', 'hourglass', 'house-fly', 'human-skeleton', 'hummingbird', 'ibis-101', 'ice-cream-cone', 'iguana', 'ipod', 'iris', 'jesus-christ', 'joy-stick', 'kangaroo-101', 'kayak', 'ketch-101', 'killer-whale', 'knife', 'ladder', 'laptop-101', 'lathe', 'leopards-101', 'license-plate', 'lightbulb', 'light-house', 'lightning', 'llama-101', 'mailbox', 'mandolin', 'mars', 'mattress', 'megaphone', 'menorah-101', 'microscope', 'microwave', 'minaret', 'minotaur', 'motorbikes-101', 'mountain-bike', 'mushroom', 'mussels', 'necktie', 'octopus', 'ostrich', 'owl', 'palm-pilot', 'palm-tree', 'paperclip', 'paper-shredder', 'pci-card', 'penguin', 'people', 'pez-dispenser', 'photocopier', 'picnic-table', 'playing-card', 'porcupine', 'pram', 'praying-mantis', 'pyramid', 'raccoon', 'radio-telescope', 'rainbow', 'refrigerator', 'revolver-101', 'rifle', 'rotary-phone', 'roulette-wheel', 'saddle', 'saturn', 'school-bus', 'scorpion-101', 'screwdriver', 'segway', 'self-propelled-lawn-mower', 'sextant', 'sheet-music', 'skateboard', 'skunk', 'skyscraper', 'smokestack', 'snail', 'snake', 'sneaker', 'snowmobile', 'soccer-ball', 'socks', 'soda-can', 'spaghetti', 'speed-boat', 'spider', 'spoon', 'stained-glass', 'starfish-101', 'steering-wheel', 'stirrups', 'sunflower-101', 'superman', 'sushi', 'swan', 'swiss-army-knife', 'sword', 'syringe', 'tambourine', 'teapot', 'teddy-bear', 'teepee', 'telephone-box', 'tennis-ball', 'tennis-court', 'tennis-racket', 'theodolite', 'toaster', 'tomato', 'tombstone', 'top-hat', 'touring-bike', 'tower-pisa', 'traffic-light', 'treadmill', 'triceratops', 'tricycle', 'trilobite-101', 'tripod', 't-shirt', 'tuning-fork', 'tweezer', 'umbrella-101', 'unicorn', 'vcr', 'video-projector', 'washing-machine', 'watch-101', 'waterfall', 'watermelon', 'welding-mask', 'wheelbarrow', 'windmill', 'wine-bottle', 'xylophone', 'yarmulke', 'yo-yo', 'zebra', 'airplanes-101', 'car-side-101', 'faces-easy-101', 'greyhound', 'tennis-shoes', 'toad', 'clutter']\n", |
| 313 | + "print(\"Result: label - \" + object_categories[index] + \", probability - \" + str(result[index]))" |
| 314 | + ] |
| 315 | + }, |
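|  | +    { |
|  | +      "cell_type": "markdown", |
|  | +      "metadata": {}, |
|  | +      "source": [ |
|  | +        "Since the endpoint returns a probability for every class, we can also look at the top few predictions instead of only the argmax. This is a small sketch that assumes `result` and `object_categories` from the previous cell are still in scope." |
|  | +      ] |
|  | +    }, |
|  | +    { |
|  | +      "cell_type": "code", |
|  | +      "execution_count": null, |
|  | +      "metadata": {}, |
|  | +      "outputs": [], |
|  | +      "source": [ |
|  | +        "# show the five most probable classes for the test image\n", |
|  | +        "top_k = 5\n", |
|  | +        "top_indices = np.argsort(result)[::-1][:top_k]\n", |
|  | +        "for i in top_indices:\n", |
|  | +        "    print('{:<25s} {:.4f}'.format(object_categories[i], result[i]))" |
|  | +      ] |
|  | +    }, |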
| 316 | + { |
| 317 | + "cell_type": "markdown", |
| 318 | + "metadata": {}, |
| 319 | + "source": [ |
| 320 | + "### Clean up\n", |
| 321 | + "\n", |
| 322 | + "\n", |
| 323 | +        "When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint." |
| 324 | + ] |
| 325 | + }, |
| 326 | + { |
| 327 | + "cell_type": "code", |
| 328 | + "execution_count": null, |
| 329 | + "metadata": {}, |
| 330 | + "outputs": [], |
| 331 | + "source": [ |
| 332 | + "ic_classifier.delete_endpoint()" |
| 333 | + ] |
| 334 | +    } |
| 342 | + ], |
| 343 | + "metadata": { |
| 344 | + "kernelspec": { |
| 345 | + "display_name": "conda_mxnet_p36", |
| 346 | + "language": "python", |
| 347 | + "name": "conda_mxnet_p36" |
| 348 | + }, |
| 349 | + "language_info": { |
| 350 | + "codemirror_mode": { |
| 351 | + "name": "ipython", |
| 352 | + "version": 3 |
| 353 | + }, |
| 354 | + "file_extension": ".py", |
| 355 | + "mimetype": "text/x-python", |
| 356 | + "name": "python", |
| 357 | + "nbconvert_exporter": "python", |
| 358 | + "pygments_lexer": "ipython3", |
| 359 | + "version": "3.6.5" |
| 360 | + }, |
| 361 | + "notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." |
| 362 | + }, |
| 363 | + "nbformat": 4, |
| 364 | + "nbformat_minor": 2 |
| 365 | +} |