This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[TUTORIAL] Add multiple GPUs training tutorial #15158

Merged
merged 13 commits on Jun 14, 2019

Conversation

Ishitori
Contributor

@Ishitori Ishitori commented Jun 5, 2019

Description

Add new tutorial about multigpu training using Gluon API.

@Ishitori Ishitori requested a review from szha as a code owner June 5, 2019 22:04
```python
import mxnet as mx

a = mx.nd.array([1, 2, 3], ctx=mx.gpu(0))
b = mx.nd.array([5, 6, 7], ctx=mx.gpu(1))
```
Member

The tutorial nightly test has changed to use a P3.2xlarge with 1 GPU, so this may fail.

Contributor Author

Hm, this whole tutorial is about how to do multi-GPU training. I guess if this is the case, I will have to remove it from the nightly tests.

Contributor Author

Good idea, added to the whitelist

Contributor

I don't think it's a good idea to not test it. I'll suggest changes to make it testable and carry the same information

@piyushghai
Contributor

Thanks for your contributions, @Ishitori.
@mxnet-label-bot Add [pr-awaiting-review, Doc]

@marcoabreu marcoabreu added Doc pr-awaiting-review PR is waiting for code review labels Jun 7, 2019
Contributor

@thomelane thomelane left a comment

Thanks @Ishitori. Some rewording is required in a few places.

## Prerequisites

- Two or more GPUs
- Cuda 9 or higher
Contributor

Same comments as @vishaalkapoor in the Float16 tutorial.

Contributor

CUDA and CuDNN

```python
c = a + b.as_in_context(a.context)
```

Using this example we have learnt that we can perform operations with NDArrays only if they are stored on the same GPU. So, how can we split the data between GPUs, but use the same model for training? We will answer this question in the next session.
Contributor

Using this example -> Using this example,

Contributor

session -> section


## Storing the network on multiple GPUs

When you create a network using [Blocks](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block) the parameters of blocks are also stored in a form of NDArray. When you initialize your network, you have to specify which context you are going to use for the underlying NDArrays. The feature of the [initialize method](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.initialize) is that it can accept the list of contexts, meaning that you can provide more than one context to store underlying parameters. In the example below we create the LeNet network and initialize it to be stored on GPU(0) and GPU(1) simultaneously. Each GPU will receive its own copy of the parameters:
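For illustration, here is a minimal sketch of that pattern; the exact LeNet definition in the tutorial may differ, and the layer sizes below are only placeholders:

```python
import mxnet as mx
from mxnet import init
from mxnet.gluon import nn

# A small LeNet-style network; the layer sizes are illustrative only.
net = nn.Sequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, activation='relu'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Conv2D(channels=16, kernel_size=3, activation='relu'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Flatten(),
        nn.Dense(120, activation='relu'),
        nn.Dense(84, activation='relu'),
        nn.Dense(10))

context = [mx.gpu(0), mx.gpu(1)]
# Passing a list of contexts to initialize() places a copy of every
# parameter on each listed device.
net.initialize(init.Xavier(), ctx=context)
```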
Contributor

In the example below -> In the example below,

Contributor

stored in a form of NDArray -> stored in NDArrays


To do multiple GPU training with a given batch of the data, we divide the examples in the batch into a number of portions equal to the number of GPUs we use and distribute one to each GPU. Then, each GPU will individually calculate the local gradient of the model parameters based on the batch subset it was assigned and the model parameters it maintains. Next, we sum together the local gradients on the GPUs to get the current batch stochastic gradient. After that, each GPU uses this batch stochastic gradient to update the complete set of model parameters that it maintains. The figure below depicts the batch stochastic gradient calculation using data parallelism and two GPUs.

![data-parallel](https://www.d2l.ai/_images/data-parallel.svg)
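As a minimal sketch of the forward and backward part of this scheme (assuming the `net` and `context` from the sketch above, and using a dummy batch in place of real data):

```python
from mxnet import autograd, gluon

# A dummy batch of 64 MNIST-shaped images and labels, just for illustration.
data = mx.nd.random.uniform(shape=(64, 1, 28, 28))
label = mx.nd.random.uniform(0, 10, shape=(64,)).floor()

# split_and_load slices the batch along axis 0 and copies one slice to each GPU.
data_parts = gluon.utils.split_and_load(data, context)
label_parts = gluon.utils.split_and_load(label, context)

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

with autograd.record():
    # Each slice runs through the copy of the parameters stored on its own GPU.
    losses = [loss_fn(net(X), y) for X, y in zip(data_parts, label_parts)]

# backward() computes the local gradients on each GPU; the summation across
# GPUs happens later, when the trainer performs the update step.
for loss in losses:
    loss.backward()
```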
Contributor

Can you move this and other image dependencies to the web-data repo?


# Multiple GPUs training with Gluon API

In this tutorial we will walk through how one can train deep learning neural networks on multiple GPUs within a single machine. This tutorial focuses on data parallelism oppose to model parallelism. The latter is not supported by Apache MXNet out of the box, and one have to manually route the data among different devices to achieve model parallelism. Check out [model parallelism tutorial](https://mxnet.incubator.apache.org/versions/master/faq/model_parallel_lstm.html) to learn more about it.
Contributor

oppose -> as opposed

Contributor

Can you give a quick explanation of 'data parallelism' or link to a good explanation?

Contributor

one have to -> one has to


As we mentioned above, the gradients for each data split are calculated independently and then later summed together. We haven't mentioned yet where exactly this aggregation happens.

Apache MXNet uses [KVStore](https://mxnet.incubator.apache.org/versions/master/api/scala/kvstore.html) - a virtual place for data sharing between different devices, including machines and GPUs. The KVStore is responsible for storing and, by default, aggregating the gradients of the model. The physical location of the KVStore is defined when we create a [trainer](https://mxnet.incubator.apache.org/versions/master/api/python/gluon/gluon.html#mxnet.gluon.Trainer) and by default is set to `device`, which means it will aggregate gradients and update weights on GPUs. The actual data is distributed in round-robin fashion among available GPUs per block. This statement means two things, which are important to know from a practical perspective.
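For reference, a minimal sketch of creating such a trainer; the optimizer and learning rate below are placeholders, not necessarily the tutorial's settings:

```python
from mxnet import gluon

# kvstore='device' is the default: gradients are aggregated and the weights
# updated on the GPUs themselves.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1},
                        kvstore='device')

# step() sums the per-GPU gradients through the KVStore and updates every
# copy of the parameters; the argument is the total batch size.
trainer.step(64)
```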
Contributor

trainer -> Trainer


The first thing is there is an additional memory allocation happens on GPUs that is not directly related to your data and your model to store auxiliary information for GPUs sync-up. Depending on the complexity of your model, the amount of required memory can be significant, and you may even experience CUDA out of memory exceptions. If that is the case, and you cannot decrease batch size anymore, you may want to consider switching `KVStore` storage to RAM by setting `kvstore` argument to `local` during instantiation of the `Trainer`. That most probably will decrease the wall-clock performance time of your model, because the gradients and parameters would need to be copied to RAM and back.
Contributor

happens on GPUs -> that happens on GPUs

Contributor

That most probably will-> Often this decreases


The second thing is that since that auxiliary information distributed among GPUs in round-robin fashion on per block level, `KVStore` may use more memory on some GPUs and less on others. For example, if your model has a very big embedding layer, you may see that your first GPU uses 90% of your memory while others use only 50%. That affects how much data you actually can load in a single batch, because the data between devices is split evenly. If that is the case, again, and you have to keep or increase your batch size, you, again, may want to switch to the `local` mode.
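If GPU memory becomes the limiting factor, the switch described above is a one-argument change, sketched here with the same placeholder optimizer settings as before:

```python
# Keep the KVStore in host (CPU) memory instead of on the GPUs; gradients are
# copied to RAM for aggregation, trading some speed for GPU memory.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1},
                        kvstore='local')
```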
Contributor

that auxiliary information distributed among GPUs -> the auxiliary information is distributed among GPUs

Contributor

remove ', again,' both times


## Conclusion

With Apache MXNet, training using multiple GPUs doesn't need a lot of extra code. To do the multiple GPUs training one needs to initialize a model on all GPUs, split the batches of data into separate splits where each is stored on a different GPU and run the model separately on every split. The synchronization of gradients and parameters between GPUs is done automatically by Apache MXNet.
Contributor

one -> you


## Recommended Next Steps

* Check out our two video tutorials on improving your code performance. In the [first video](https://www.youtube.com/watch?v=n8tN6pRZBdE) we explain how to visualize the performance, and in the [second video](https://www.youtube.com/watch?v=Cqo7FPftNyo) we show how to optimize it
Contributor

optimize it -> optimize it.

@Ishitori
Contributor Author

Fixed everything mentioned above.


## Multiple GPUs classification of MNIST images

In the first step, we are going to load the MNIST images, switch the format of data from `height x width x channel` to `channel x height x width` and normalize the data
In the first step, we are going to load the MNIST imagesa and use [ToTensor](https://mxnet.apache.org/api/python/gluon/data.html#mxnet.gluon.data.vision.transforms.ToTensor) to convert the format of the data from `height x width x channel` to `channel x height x width` and divide it by 255.
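A minimal sketch of this first step; the batch size and shuffling settings are assumptions rather than the tutorial's exact values:

```python
from mxnet import gluon
from mxnet.gluon.data.vision import transforms

# ToTensor converts images from uint8 HWC to float32 CHW and scales pixel
# values from [0, 255] to [0, 1].
transform = transforms.ToTensor()

train_data = gluon.data.DataLoader(
    gluon.data.vision.MNIST(train=True).transform_first(transform),
    batch_size=64, shuffle=True)
val_data = gluon.data.DataLoader(
    gluon.data.vision.MNIST(train=False).transform_first(transform),
    batch_size=64, shuffle=False)
```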
Contributor

imagesa -> images

Contributor Author

Fixed

@thomelane
Contributor

LGTM


```python
import mxnet as mx

a = mx.nd.array([1, 2, 3], ctx=mx.gpu(0))
```
Contributor

use context[0] and context[1]

```python
from mxnet import init
from mxnet.gluon import nn

context = [mx.gpu(0), mx.gpu(1)]
```
Contributor

Can you make this:

```python
n_gpu = mx.context.num_gpus()
context = [mx.gpu(0), mx.gpu(1)] if n_gpu >= 2 else [mx.gpu(), mx.gpu()] if n_gpu == 1 else [mx.cpu(), mx.cpu()]
```

@Ishitori
Contributor Author

Added the tutorial back to the tests by applying @ThomasDelteil's trick.


The first thing is there is an additional memory allocation that happens on GPUs that is not directly related to your data and your model to store auxiliary information for GPUs sync-up. Depending on the complexity of your model, the amount of required memory can be significant, and you may even experience CUDA out of memory exceptions. If that is the case, and you cannot decrease batch size anymore, you may want to consider switching `KVStore` storage to RAM by setting `kvstore` argument to `local` during instantiation of the `Trainer`. Often this decreases the wall-clock performance time of your model, because the gradients and parameters would need to be copied to RAM and back.

The second thing is that since the auxiliary information is distributed among GPUs in round-robin fashion on per block level, `KVStore` may use more memory on some GPUs and less on others. For example, if your model has a very big embedding layer, you may see that your first GPU uses 90% of your memory while others use only 50%. That affects how much data you actually can load in a single batch, because the data between devices is split evenly. If that is the case and you have to keep or increase your batch size, you may want to switch to the `local` mode.
Member

Just a question: should we also mention the dist_device_sync mode of kvstore, which is used for distributed training with updates on GPUs?

Contributor Author

According to the docs, dist_device_sync makes sense only for distributed training, when there is more than one host. With multi-GPU training on a single host, which is covered in this tutorial, only the local and device modes make sense.

@ThomasDelteil ThomasDelteil merged commit 41d35c4 into apache:master Jun 14, 2019
@roywei
Member

roywei commented Jun 14, 2019

@Ishitori
Contributor Author

Fixed here #15248

haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
* Add multiple GPUs training tutorial

* Add download source button

* Add tutorial to the test suite

* Remove from nightly build (no CI multigpu machines)

* Add extension to whitelisted multigpu tutorial

* Force build

* Force update

* Code review fixes

* Force build

* Typo fix and force build

* Add tutorial back to tests

* Add tutorial to the index

* Force build