---
layout: post
title: Use Docker to run Deep Learning Libraries
date: 2017-04-13 21:52:15 -0700
tags:
categories:
---
Here’s how to use one docker container to run several deep-learning libraries at once, on your own computer, using terminal and git.
Although it’s possible to run each of these libraries in its own docker container, or to install each of them from scratch, running one big docker container with all the libraries is probably the simplest method: it makes comparing different implementations of similar ideas easy, and it avoids the time and difficulty of setting up a from-scratch install of each individual library. None of these packages is completely dominant at the moment, so while things may converge towards one environment, for now it’s best to stay nimble.

Note: these instructions are specific to macOS; some commands may differ if you are using Linux or Windows (please add notes below if you find differences or better methods!)
Download and install the Docker application from https://docs.docker.com/engine/installation/
Docker is an open-source project that packages software into "containers" that include all necessary dependencies and environment variables. When you run a docker container, it runs as a virtual machine that is isolated from other software installed on your computer, even other docker containers. Note that any changes you make within the docker container persist only while the container is running; everything is reset the next time the container is run.
In terminal, type

```
docker run hello-world
```

to verify that docker has installed correctly.
Use docker to download this deep-learning docker container* by typing

```
docker pull floydhub/dl-docker:cpu
```
*There are several other similar all-in-one docker images that are worth comparing; this one from Kyle McDonald, for instance.
You can read more about what this specific docker container does here. Basically, it runs a virtual version of Ubuntu Linux with TensorFlow, Caffe, Theano, Keras, Lasagne, Torch, IPython/Jupyter Notebook, NumPy, SciPy, Pandas, scikit-learn, Matplotlib, and OpenCV all installed and working without conflict. Very helpful!
Once this download is complete, create a folder named `sharedfolder` in your home directory. We'll make this folder accessible from within the docker container in the next step.

```
mkdir sharedfolder
```
```
docker run -it -p 8888:8888 -p 6006:6006 -v [yourhomedirectoryhere]/sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash
```
On my computer, `[yourhomedirectoryhere]` would be replaced with `/Users/luke`. Type `pwd` in terminal to see which directory you are currently in, and type `ls` to see the contents of your current directory; make sure the folder `sharedfolder` that you just created is in fact inside this directory.
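If you're unsure what to fill in for the home directory, you can build the `-v` argument from your current location. A minimal sketch, assuming you run it from the directory where you created `sharedfolder` (the final `echo` just prints the argument so you can paste it into the `docker run` command above):

```shell
# build the host side of the -v mapping from the current directory
mkdir -p sharedfolder                     # no-op if the folder already exists
HOST_DIR="$(pwd)/sharedfolder"
echo "-v ${HOST_DIR}:/root/sharedfolder"  # paste this into the docker run command
```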
`-p 8888:8888 -p 6006:6006` opens these ports to the docker container (to view a Jupyter notebook, or a TensorBoard graph, for example). If you receive an error that these ports are currently in use, try another port number, or omit this part of the command for now.
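You can also check in advance whether anything on the host is already listening on one of these ports. A quick check with `lsof`, which ships with macOS (the port number is just an example):

```shell
# list any process listening on port 8888; if none, report that it is free
lsof -i :8888 || echo "port 8888 is free"
```

If the port is busy, map a different host port instead, e.g. `-p 8899:8888`, and open `localhost:8899` in your browser to reach the container's port 8888.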
`bash` is included at the end of the command to specify that you'd like to access the container via bash. You can specify another shell if you like.
Once the docker container is running, you can launch Jupyter notebooks (type `jupyter notebook`), Caffe (type `caffe`), or Torch (type `th`). To use the included python libraries, launch python (type `python`) and then import the library you want to use. For example: use TensorFlow by typing `import tensorflow as tf`. From here you can get started as if you'd installed TensorFlow by any other means.
To exit docker, type `control+d`. Make sure anything that you want to save (datasets, code, trained models, outputs, etc.) is inside `/root/sharedfolder`, as everything else inside the docker container will reset the next time you run it.
As an example, we'll try this deep convolutional generative adversarial network (DCGAN) project written in Torch: https://github.com/soumith/dcgan.torch
Run the docker container, then enter the `sharedfolder` you created earlier by typing `cd sharedfolder`.
Use git to clone the project repository into your `sharedfolder` so that it can be accessed from within docker:

```
git clone https://github.com/soumith/dcgan.torch.git
```
Enter the directory you've just created for the project by typing `cd dcgan.torch`.
Torch is written in Lua, which is a beautiful language with an interesting history. Use Lua's package manager `luarocks` to install the necessary `optnet` package:

```
luarocks install optnet
```
At this point you can try out the dcgan.torch project code to train an image generator on existing datasets or your own set of images, or use pre-trained models to generate images right away. Look at the project readme for a walkthrough of the various options, or download a pre-trained model here, saving it inside your `sharedfolder` directory:
- celebrity faces (uses the celebA dataset)
- bedrooms (uses the LSUN dataset)
Type the following command (from inside the `dcgan.torch` directory) to use a pre-trained model to generate an image named g01.png inside your `dcgan.torch` directory:

```
gpu=0 batchSize=1 display=0 name=g01 net=~/sharedfolder/celebA_25_net_G.t7 th generate.lua
```
Change `~/sharedfolder/celebA_25_net_G.t7` to point to whichever model you want to use (celebrity faces, in this case). Change `batchSize` to generate more than one image at a time. Change `g01` to another filename to create a new image. You can play with other options like `imsize` and `noisemode`; see the project readme for all options.
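The `gpu=0 batchSize=1 ...` prefix in the command above is plain shell syntax: each `name=value` pair sets an environment variable for that single command only, and `generate.lua` reads its options from those variables. The same pattern works with any command; a small illustration (the variable names here echo the dcgan.torch options but the command is just `echo`):

```shell
# name=value pairs placed before a command set environment variables
# for that single command only
name=demo batchSize=4 sh -c 'echo "name=$name batchSize=$batchSize"'
# prints: name=demo batchSize=4
```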
If you are interested in going deeper with generative adversarial networks (GANs), here are some suggested readings: