This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Change mx.test_utils.list_gpus to mx.context.num_gpus where possible #14946

Merged · 7 commits · May 30, 2019
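For context on the swap: `mx.test_utils.list_gpus()` returns a list of GPU device indices and, in MXNet builds of this era, discovered them by parsing `nvidia-smi` output, so it could misreport on machines without that tool; `mx.context.num_gpus()` asks the MXNet engine directly and returns an int count. An empty list and a zero count are both falsy, so the common context-selection one-liner keeps working. A minimal before/after sketch:

```python
import mxnet as mx

# Before: list_gpus() yields device indices, e.g. [0, 1], or [] without GPUs.
ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()

# After: num_gpus() yields a count, e.g. 2, or 0 without GPUs.
# 0 is falsy, so the idiom still picks the CPU on GPU-less machines.
ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()
```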
2 changes: 1 addition & 1 deletion docs/tutorials/gluon/datasets.md
@@ -157,7 +157,7 @@ def construct_net():
     return net

 # construct and initialize network.
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()

 net = construct_net()
 net.hybridize()
2 changes: 1 addition & 1 deletion docs/tutorials/gluon/info_gan.md
@@ -51,7 +51,7 @@ batch_size = 64
 z_dim = 100
 n_continuous = 2
 n_categories = 10
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()
 ```

 Some functions to load and normalize images.
2 changes: 1 addition & 1 deletion docs/tutorials/gluon/learning_rate_finder.md
@@ -231,7 +231,7 @@ Using a Pre-activation ResNet-18 from the Gluon model zoo, we instantiate our Le


 ```python
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()
 net = mx.gluon.model_zoo.vision.resnet18_v2(classes=10)
 learner = Learner(net=net, data_loader=data_loader, ctx=ctx)
 lr_finder = LRFinder(learner)
2 changes: 1 addition & 1 deletion docs/tutorials/gluon/learning_rate_schedules.md
@@ -140,7 +140,7 @@ As discussed above, the schedule should return a learning rate given an (1-based

 ```python
 # Use GPU if one exists, else use CPU
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()

 # MNIST images are 28x28. Total pixels in input layer is 28x28 = 784
 num_inputs = 784
2 changes: 1 addition & 1 deletion docs/tutorials/gluon/save_load_params.md
@@ -50,7 +50,7 @@ Let's define a helper function to build a LeNet model and another helper to trai

 ```python
 # Use GPU if one exists, else use CPU
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()

 # MNIST images are 28x28. Total pixels in input layer is 28x28 = 784
 num_inputs = 784
2 changes: 1 addition & 1 deletion docs/tutorials/nlp/cnn.md
@@ -300,7 +300,7 @@ import time
 CNNModel = namedtuple("CNNModel", ['cnn_exec', 'symbol', 'data', 'label', 'param_blocks'])

 # Define what device to train/test on, use GPU if available
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()

 arg_names = cnn.list_arguments()

8 changes: 4 additions & 4 deletions docs/tutorials/python/kvstore.md
@@ -57,9 +57,9 @@ values and then push the aggregated value:

 ```python
 # The numbers used below assume 4 GPUs
-gpus = mx.test_utils.list_gpus()
-if len(gpus) > 1:
-    contexts = [mx.gpu(i) for i in gpus]
+gpus = mx.context.num_gpus()
+if gpus > 0:
+    contexts = [mx.gpu(i) for i in range(gpus)]
 else:
     contexts = [mx.cpu(i) for i in range(4)]
 b = [mx.nd.ones(shape, ctx) for ctx in contexts]
@@ -173,4 +173,4 @@ When the distributed version is ready, we will update this section.
 ## Next Steps
 * [MXNet tutorials index](http://mxnet.io/tutorials/index.html)

-<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
\ No newline at end of file
+<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
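Worth noting: this hunk changes behavior as well as API. The old `len(gpus) > 1` check fell back to four CPU contexts on single-GPU machines, whereas `gpus > 0` now takes the GPU path whenever at least one device is present, and `range(gpus)` assumes device ids run contiguously from 0. A rough sketch of the tutorial's push/aggregate step under the new logic (shape and key are illustrative):

```python
import mxnet as mx

shape = (2, 3)
kv = mx.kv.create('local')
kv.init(3, mx.nd.ones(shape))

gpus = mx.context.num_gpus()
contexts = ([mx.gpu(i) for i in range(gpus)] if gpus > 0
            else [mx.cpu(i) for i in range(4)])

# Values pushed to one key in a single call are summed before storing.
b = [mx.nd.ones(shape, ctx) for ctx in contexts]
kv.push(3, b)

a = mx.nd.zeros(shape)
kv.pull(3, out=a)
print(a.asnumpy())  # each entry equals len(contexts)
```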
2 changes: 1 addition & 1 deletion docs/tutorials/python/mnist.md
@@ -50,7 +50,7 @@ mnist = mx.test_utils.get_mnist()
 mx.random.seed(42)

 # Set the compute context, GPU is available otherwise CPU
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()
 ```

 After running the above source code, the entire MNIST dataset should be fully loaded into memory. Note that for large datasets it is not feasible to pre-load the entire dataset first like we did here. What is needed is a mechanism by which we can quickly and efficiently stream data directly from the source. MXNet Data iterators come to the rescue here by providing exactly that. Data iterator is the mechanism by which we feed input data into an MXNet training algorithm and they are very simple to initialize and use and are optimized for speed. During training, we typically process training samples in small batches and over the entire training lifetime will end up processing each training example multiple times. In this tutorial, we'll configure the data iterator to feed examples in batches of 100. Keep in mind that each example is a 28x28 grayscale image and the corresponding label.
2 changes: 1 addition & 1 deletion docs/tutorials/python/profiler.md
@@ -111,7 +111,7 @@ Let's define a method that will run one training iteration given data and label.

 ```python
 # Use GPU if available
-if len(mx.test_utils.list_gpus())!=0:
+if mx.context.num_gpus():
     ctx=mx.gpu()
 else:
     ctx=mx.cpu()
2 changes: 1 addition & 1 deletion docs/tutorials/unsupervised_learning/gan.md
@@ -240,7 +240,7 @@ sigma = 0.02
 lr = 0.0002
 beta1 = 0.5
 # Define the compute context, use GPU if available
-ctx = mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()

 #=============Generator Module=============
 generator = mx.mod.Module(symbol=generatorSymbol, data_names=('rand',), label_names=None, context=ctx)
2 changes: 1 addition & 1 deletion example/adversary/adversary_generation.ipynb
@@ -45,7 +45,7 @@
 },
 "outputs": [],
 "source": [
-"ctx = mx.gpu() if len(mx.test_utils.list_gpus()) else mx.cpu()\n",
+"ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()\n",
 "batch_size = 128"
 ]
 },
2 changes: 1 addition & 1 deletion example/autoencoder/convolutional_autoencoder.ipynb
@@ -50,7 +50,7 @@
 "outputs": [],
 "source": [
 "batch_size = 512\n",
-"ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()"
+"ctx = mx.gpu() if mx.context.num_gpus() else mx.cpu()"
 ]
 },
 {
2 changes: 1 addition & 1 deletion example/bi-lstm-sort/bi-lstm-sort.ipynb
@@ -39,7 +39,7 @@
 "seq_len = 5\n",
 "split = 0.8\n",
 "batch_size = 512\n",
-"ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()"
+"ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()"
 ]
 },
 {
2 changes: 1 addition & 1 deletion example/distributed_training-horovod/gluon_mnist.py
@@ -45,7 +45,7 @@

 if not args.no_cuda:
     # Disable CUDA if there are no GPUs.
-    if not mx.test_utils.list_gpus():
+    if mx.context.num_gpus() == 0:
         args.no_cuda = True

 logging.basicConfig(level=logging.INFO)
2 changes: 1 addition & 1 deletion example/distributed_training-horovod/module_mnist.py
@@ -42,7 +42,7 @@

 if not args.no_cuda:
     # Disable CUDA if there are no GPUs.
-    if not mx.test_utils.list_gpus():
+    if mx.context.num_gpus() == 0:
         args.no_cuda = True

 logging.basicConfig(level=logging.INFO)
9 changes: 4 additions & 5 deletions example/image-classification/test_score.py
@@ -51,11 +51,10 @@ def test_imagenet1k_inception_bn(**kwargs):
     assert r > g and r < g + .1

 if __name__ == '__main__':
-    gpus = mx.test_utils.list_gpus()
-    assert len(gpus) > 0
-    batch_size = 16 * len(gpus)
-    gpus = ','.join([str(i) for i in gpus])
-
+    num_gpus = mx.context.num_gpus()
+    assert num_gpus > 0
+    batch_size = 16 * num_gpus
+    gpus = ','.join(map(str, range(num_gpus)))
     kwargs = {'gpus':gpus, 'batch_size':batch_size, 'max_num_examples':500}
     download_data()
     test_imagenet1k_resnet(**kwargs)
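The rewritten block derives the comma-separated device string passed through the `gpus` keyword from a plain count, assuming GPU ids run contiguously from 0. For illustration, on a hypothetical 3-GPU machine:

```python
num_gpus = 3                                 # stand-in for mx.context.num_gpus()
batch_size = 16 * num_gpus                   # 48
gpus = ','.join(map(str, range(num_gpus)))   # '0,1,2'
```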
2 changes: 1 addition & 1 deletion example/multi-task/multi-task-learning.ipynb
@@ -58,7 +58,7 @@
 "source": [
 "batch_size = 128\n",
 "epochs = 5\n",
-"ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()\n",
+"ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()\n",
 "lr = 0.01"
 ]
 },
2 changes: 1 addition & 1 deletion example/recommenders/demo2-dssm.ipynb
@@ -41,7 +41,7 @@
 "hidden_units = 128\n",
 "epsilon_proj = 0.25\n",
 "\n",
-"ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()"
+"ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()"
 ]
 },
 {
4 changes: 2 additions & 2 deletions example/svm_mnist/svm_mnist.py
@@ -82,7 +82,7 @@
 # Article's suggestion on batch size
 batch_size = 200

-ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()
+ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()

 results = {}
 for output in [mlp_svm_l2, mlp_svm_l1, mlp_softmax]:
@@ -121,4 +121,4 @@

 #svm_l2 97.85 %s
 #svm_l1 98.15 %s
-#softmax 97.69 %s
\ No newline at end of file
+#softmax 97.69 %s
4 changes: 2 additions & 2 deletions python/mxnet/gluon/contrib/nn/basic_layers.py
@@ -24,7 +24,7 @@
 'PixelShuffle3D']

 import warnings
-from .... import nd, test_utils
+from .... import nd, context
 from ...block import HybridBlock, Block
 from ...nn import Sequential, HybridSequential, BatchNorm

@@ -233,7 +233,7 @@ def _get_num_devices(self):
         warnings.warn("Caution using SyncBatchNorm: "
                       "if not using all the GPUs, please mannually set num_devices",
                       UserWarning)
-        num_devices = len(test_utils.list_gpus())
+        num_devices = context.num_gpus()
         num_devices = num_devices if num_devices > 0 else 1
         return num_devices

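The fallback above only runs when the caller leaves `num_devices` unset; per the warning, it should be pinned manually when training on a subset of the machine's GPUs. A hedged usage sketch (layer sizes hypothetical):

```python
from mxnet.gluon import nn
from mxnet.gluon.contrib.nn import SyncBatchNorm

net = nn.HybridSequential()
net.add(nn.Conv2D(64, kernel_size=3),
        # Pin num_devices explicitly so batch statistics are synchronized
        # across exactly the devices actually used for training.
        SyncBatchNorm(num_devices=2),
        nn.Activation('relu'))
```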
2 changes: 1 addition & 1 deletion tests/python/gpu/test_nccl.py
@@ -22,7 +22,7 @@

 shapes = [(10), (100), (1000), (10000), (100000), (2,2), (2,3,4,5,6,7,8)]
 keys = [1,2,3,4,5,6,7]
-num_gpus = len(mx.test_utils.list_gpus())
+num_gpus = mx.context.num_gpus()


 if num_gpus > 8 :
2 changes: 1 addition & 1 deletion tests/python/profiling/test_nvtx.py
@@ -25,7 +25,7 @@

 def test_nvtx_ranges_present_in_profile():

-    if not mx.test_utils.list_gpus():
+    if not mx.context.num_gpus():
         unittest.skip('Test only applicable to machines with GPUs')

     # Build a system independent wrapper to execute simple_forward with nvprof
6 changes: 3 additions & 3 deletions tools/caffe_converter/test_converter.py
@@ -90,9 +90,9 @@ def main():
         gpus = [-1]
         default_batch_size = 32
     else:
-        gpus = mx.test_utils.list_gpus()
-        assert gpus, 'At least one GPU is needed to run test_converter in GPU mode'
-        default_batch_size = 32 * len(gpus)
+        num_gpus = mx.context.num_gpus()
+        assert num_gpus, 'At least one GPU is needed to run test_converter in GPU mode'
+        default_batch_size = 32 * num_gpus

     models = ['bvlc_googlenet', 'vgg-16', 'resnet-50']
