These instructions are for Paperspace. You could use AWS as well, but Paperspace offers better value for money at this point in time.
- Sign up and create a P4000 machine with the ML-in-a-Box template on Paperspace
- SSH in and change your password with
$ passwd
- You can double-check CUDA, cuDNN and Nvidia driver versions with:
$ (cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2 && nvcc --version && nvidia-smi)
or
$ (cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2 && nvcc --version && nvidia-smi)
- Move your dotfiles (.bashrc, .tmux.conf, .dircolors, etc.) over to set up the initial configuration (see the example below)
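One way to get them onto the machine from your local computer (the paperspace user and the placeholder IP are assumptions, adjust for your setup):
$ scp ~/.bashrc ~/.tmux.conf ~/.dircolors paperspace@<machine-ip>:~/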
- Install tmux with
$ sudo apt install tmux
- Re-run bashrc with
$ source ~/.bashrc
(the conda activate commands will fail in the tmux windows - no need to worry about that)
I personally prefer virtualenv (and virtualenvwrapper) but since the machine comes with anaconda, we'll go with that.
- Update conda with
$ conda update -n base conda
- Create new anaconda environment with
$ conda create -n tensorflow_p36 python=3.6 pip
- Activate new environment with
$ source activate tensorflow_p36
- Install tensorflow from the pre-built binaries that come with the machine, found in the src folder
$ pip install src/tensorflow-1.7.0-cp36-cp36m-linux_x86_64.whl
- Install keras with
$ pip install keras
- Reboot with
$ sudo reboot
- At this point, test that tensorflow is working on GPU as expected
$ python
>>> import tensorflow as tf
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
- Finally, create a new kernel for Jupyter notebooks inside the virtual env with
$ ipython kernel install --user --name=tensorflow_p36
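You can confirm the kernel was registered with
$ jupyter kernelspec list
which should list tensorflow_p36 alongside the default kernels.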
- Install aws cli with
$ pip install awscli
and configure it with
$ aws configure
- Create directories
$ mkdir WIP && mkdir WIP/180503_lentil_app && mkdir WIP/180503_lentil_app/imgs && cd WIP/180503_lentil_app/
- Clone git repo
$ git clone https://github.com/DeepBodapati/lentil_app.git .
- Copy from S3
$ aws s3 cp s3://lentil-imgs/src_imgs.zip imgs/ && aws s3 cp s3://lentil-imgs/test_imgs.zip imgs/
- Unzip all the downloaded S3 files:
$ cd imgs/
$ unzip src_imgs.zip && mv imgs/ src/
$ unzip test_imgs.zip -d test/
- Run the prep_data_for_DL-ebay-only.ipynb notebook to separate the images into training and validation data (sketched below)
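The notebook does the actual work here; purely to make the idea concrete, below is a minimal sketch of a folder-based split. The imgs/src layout, the one-folder-per-class assumption and the 80/20 ratio are illustrative, not necessarily what the notebook does.

import os
import random
import shutil

random.seed(42)
src_root = 'imgs/src'   # assumed location of the unzipped source images
train_frac = 0.8        # assumed 80/20 train/validation split

# Assume one sub-folder per class under imgs/src (the layout Keras generators expect)
for label in os.listdir(src_root):
    files = sorted(os.listdir(os.path.join(src_root, label)))
    random.shuffle(files)
    cutoff = int(train_frac * len(files))
    for split, names in [('train', files[:cutoff]), ('validation', files[cutoff:])]:
        out_dir = os.path.join('imgs', split, label)
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src_root, label, name), os.path.join(out_dir, name))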
- Run one of the fine-tuning model notebooks (e.g., Xception_fine_tuning.ipynb or Mobilenet_fine_tuning.ipynb) to train via transfer learning and/or fine-tuning (a rough sketch follows below)
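The notebooks are the place to look for the actual training code; purely for orientation, here is a minimal sketch of what Xception transfer learning followed by fine-tuning looks like in Keras. The directory names (imgs/train, imgs/validation), the class count and the hyperparameters are illustrative assumptions, not values taken from the notebooks.

from keras.applications.xception import Xception, preprocess_input
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense
from keras.models import Model
from keras import optimizers

# Pre-trained Xception base without its ImageNet classification head
base_model = Xception(weights='imagenet', include_top=False, pooling='avg')

# Small classification head; 2 classes is an assumption, match it to your data
predictions = Dense(2, activation='softmax')(base_model.output)
model = Model(inputs=base_model.input, outputs=predictions)

# Generators over the (assumed) train/validation folders from the previous step
train_gen = ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(
    'imgs/train', target_size=(299, 299), batch_size=32)
val_gen = ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(
    'imgs/validation', target_size=(299, 299), batch_size=32)

# Transfer learning: freeze the convolutional base and train only the new head
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_gen, epochs=5, validation_data=val_gen,
                    steps_per_epoch=train_gen.samples // train_gen.batch_size,
                    validation_steps=val_gen.samples // val_gen.batch_size)

# Fine-tuning: unfreeze the top of the base and retrain with a small learning rate
for layer in base_model.layers[-20:]:
    layer.trainable = True
model.compile(optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_gen, epochs=5, validation_data=val_gen,
                    steps_per_epoch=train_gen.samples // train_gen.batch_size,
                    validation_steps=val_gen.samples // val_gen.batch_size)

The two compile/fit passes mirror the usual recipe: learn the head first with the base frozen, then relax the top layers at a much smaller learning rate.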