From 40c2815204c0bcd405ef6aef5043a6f34568b43c Mon Sep 17 00:00:00 2001 From: Cyril Diagne Date: Wed, 23 Oct 2019 18:54:41 +0200 Subject: [PATCH] Update documentation --- README.md | 25 ++++++++++++------------- docs/kuda/cli.md | 5 +---- docs/kuda/getting_started.md | 25 +++++++++++-------------- docs/kuda/remote_development.md | 11 +++++++++-- 4 files changed, 33 insertions(+), 33 deletions(-) diff --git a/README.md b/README.md index ef0f25e..ec10cfb 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,9 @@ **Develop & deploy serverless applications on remote GPUs.** -[Kuda](https://kuda.dev) is a small util that consolidates the workflow of prototyping and deploying serverless [CUDA](https://developer.nvidia.com/cuda-zone)-based applications on [Kubernetes](http://kubernetes.io). +[Kuda](https://kuda.dev) helps you prototype and deploy serverless applications that need [CUDA](https://developer.nvidia.com/cuda-zone) on [Kubernetes](http://kubernetes.io) across the major cloud providers. + +It is based on [Knative](https://knative.dev), [Skaffold](https://skaffold.dev) and [Kaniko](https://github.com/GoogleContainerTools/kaniko). ## Disclaimer @@ -24,25 +26,22 @@ **Easy to use** - `kuda setup ` : Sets up a new cluster with all the requirements on the provider's managed Kubernetes, or upgrades an existing cluster. -- `kuda app deploy` : Builds & deploy an application as a serverless container. +- `kuda app dev` : Deploys an application and watches your local folder so that the app reloads automatically on the cluster when you make local changes. +- `kuda app deploy` : Deploys the application as a serverless container. **Language/Framework agnostic** - Built and deployed with [Docker](https://docker.io), applications can be written in any language and use any framework. -- Applications deployed with Kuda are not required to import any specific library, keeping the code 100% portable.
- -**Remote development** - -- The `kuda dev` command lets you spawn a remote development session with GPU inside the cluster. -- It uses [Ksync](https://github.com/vapor-ware/ksync) to synchronise your working directory with the remote session so you can code from your workstation while running the app on the remote session. +- Applications deployed with Kuda are not required to import any specific library, keeping their code 100% portable. **Cloud provider Compatibility** -| Provider | Status | -| - | - | -| [GCP](providers/gcp) | ✔ | -| [AWS](providers/aws) | In progress | -| Azure | Not started | +| Provider | Status | +| -------------------- | -------------- | +| [GCP](providers/gcp) | ✔ | +| [AWS](providers/aws) | In progress... | +| Azure | Not started | +| NGC | Not started | ## Ready? diff --git a/docs/kuda/cli.md b/docs/kuda/cli.md index abd1cc7..57cab3f 100644 --- a/docs/kuda/cli.md +++ b/docs/kuda/cli.md @@ -45,10 +45,7 @@ This command: - Starts a development pod based on the Deep Learning VM - Synchronises the directory provided as a parameter with the remote node -List of recommended `base-image`: - -- all images from [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda/) -- gcloud's [Deep Learning containers](https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container) +You can find a list of suggested `base-image` values in the [remote development](remote_development.md) documentation.
### → Stop diff --git a/docs/kuda/getting_started.md b/docs/kuda/getting_started.md index c773a53..de05858 100644 --- a/docs/kuda/getting_started.md +++ b/docs/kuda/getting_started.md @@ -21,8 +21,6 @@ This process can take a while since it will create a remote cluster on GKE and i ## 2 - Develop -### • Initialize - Retrieve a simple demo application: ```bash @@ -30,25 +28,24 @@ git clone https://github.com/cyrildiagne/kuda-apps cd kuda-apps/hello-gpu ``` -Install the example dependencies (feel free to create a virtualenv or a [remote dev session](https://docs.kuda.dev/kuda/remote_development)). +Then start the example in dev mode. It will reload automatically when you make changes from your local machine: ```bash -pip install -r requirements.txt +kuda app dev my-hello-gpu ``` -### • Run and Test - -Then start the example in dev mode. It will reload automatically when you make changes from your local machine: +Wait for the app to build and launch. This might take a while if a new node needs +to be allocated. +You can then query your application using any program able to make an HTTP request. +Here is an example using cURL: ```bash -export PORT=80 && python app.py +curl -i -H "Host: my-hello-gpu.default.example.com" http:// ``` -Open `http://localhost` in a web browser to visit the app. Try making changes to the code and reload the page. - Press `Ctrl+C` to stop running the application. -## • Deploy +## 3 - Deploy You can then deploy the app as a serverless API. This will create an endpoint that scales down the GPU nodes to 0 when not used. @@ -60,18 +57,18 @@ kuda app deploy hello-world:0.1.0 → For more information on the `kuda app deploy` command, check the [reference](https://docs.kuda.dev/kuda/cli#deploy). -## 3 - Call your API +## 4 - Call your API You can then test your application by making a simple HTTP request to your cluster. 
First retrieve the IP address of your cluster by running: `kuda get status` ```bash -curl -H "Host: hello-world.example.com" http:// +curl -i -H "Host: my-hello-gpu.default.example.com" http:// ``` The first call might need to spawn an instance which could take a while. Subsequent calls should be a lot faster. -## 4 - Cleanup +## 5 - Cleanup ### • Delete the cluster diff --git a/docs/kuda/remote_development.md b/docs/kuda/remote_development.md index 221efbd..9e044b8 100644 --- a/docs/kuda/remote_development.md +++ b/docs/kuda/remote_development.md @@ -1,5 +1,7 @@ # Remote Development +**⚠️ Remote development and this guide are still WIP. Following this guide probably won't work for now.** + This guide will walk you through the process of developing remotely on the Kubernetes cluster. Make sure you have a cluster running with Kuda's dependencies. @@ -25,15 +27,20 @@ cd hello-gpu Start a remote dev session that will be provisioned on your cluster. ```bash -kuda dev start nvidia/cuda:10.1-base +kuda dev start gcr.io/deeplearning-platform-release/base-cu100 ``` -`nvidia/cuda:10.1-base` Is the docker image to use as base. It allows you to specify which version of CUDA and CuDNN you need. You can find a list of suggested images in the kuda dev [reference page](https://docs.kuda.dev/kuda/cli#dev). +`gcr.io/deeplearning-platform-release/base-cu100` is the Docker image to use as a base. This image is convenient if you're using Kuda for deep learning since it packages most of the software needed in the deep learning development cycle. It also allows you to specify which version of CUDA and CuDNN you need. This command will start the remote session and synchronize the CWD \(current working directory\) with the remote instance. Once started, it will also print the cluster's IP address / port to use later on. Make a note of that as we'll refer to it later as `` +List of recommended `base-image`: + +- all images from [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda/).
These images are fairly lightweight but Python must be installed manually. +- gcloud's [Deep Learning containers](https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container) + → For more information on the `kuda dev start` command, check the [reference](https://docs.kuda.dev/kuda/cli#dev). ## • Retrieve & initialize an example application
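Since the nvidia/cuda base images above don't ship with Python, one option is a small custom base image that preinstalls it before starting a dev session. A minimal sketch, not part of this patch — the CUDA tag and package list are illustrative assumptions:

```dockerfile
# Hypothetical custom base image: nvidia/cuda plus a Python runtime.
# The "10.1-base" tag and the packages below are example choices.
FROM nvidia/cuda:10.1-base

# Install Python 3 and pip so the remote dev session can run Python apps.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
```

Once built and pushed to a registry the cluster can pull from, such an image could presumably be passed to `kuda dev start` in place of the suggested bases.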