Update documentation

cyrildiagne committed Oct 23, 2019
1 parent 4716054 commit 40c2815

Showing 4 changed files with 33 additions and 33 deletions.
25 changes: 12 additions & 13 deletions README.md
@@ -6,7 +6,9 @@

**Develop & deploy serverless applications on remote GPUs.**

-[Kuda](https://kuda.dev) is a small util that consolidates the workflow of prototyping and deploying serverless [CUDA](https://developer.nvidia.com/cuda-zone)-based applications on [Kubernetes](http://kubernetes.io).
+[Kuda](https://kuda.dev) helps you prototype and deploy serverless [CUDA](https://developer.nvidia.com/cuda-zone) applications on [Kubernetes](http://kubernetes.io), on any of the major cloud providers.
+
+It is based on [Knative](https://knative.dev), [Skaffold](https://skaffold.dev), and [Kaniko](https://github.com/GoogleContainerTools/kaniko).

## Disclaimer

@@ -24,25 +26,22 @@
**Easy to use**

- `kuda setup <provider>` : Sets up a new cluster with all the requirements on the provider's managed Kubernetes, or upgrades an existing cluster.
-- `kuda app deploy` : Builds & deploy an application as a serverless container.
+- `kuda app dev` : Deploys an application and watches your local folder so that the app reloads automatically on the cluster when you make local changes.
+- `kuda app deploy` : Deploys the application as a serverless container (see the workflow sketch below).
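
Taken together, a typical session looks something like this. This is a sketch only: the provider name is one of the supported providers listed below, and the exact flags and arguments may differ (check the CLI reference):

```bash
# One-time: create (or upgrade) a cluster with all requirements on GCP.
kuda setup gcp

# Iterate: run the app on the cluster and live-reload it as you edit locally.
kuda app dev

# Ship: deploy the app as a serverless container.
kuda app deploy
```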

**Language/Framework agnostic**

- Because applications are built and deployed with [Docker](https://docker.io), they can be written in any language and use any framework.
-- Applications deployed with Kuda are not required to import any specific library, keeping the code 100% portable.
-
-**Remote development**
-
-- The `kuda dev` command lets you spawn a remote development session with GPU inside the cluster.
-- It uses [Ksync](https://github.com/vapor-ware/ksync) to synchronise your working directory with the remote session so you can code from your workstation while running the app on the remote session.
+- Applications deployed with Kuda are not required to import any specific library, keeping their code 100% portable.

**Cloud provider Compatibility**

-| Provider | Status |
-| - | - |
-| [GCP](providers/gcp) | ✓ |
-| [AWS](providers/aws) | In progress |
-| Azure | Not started |
+| Provider             | Status         |
+| -------------------- | -------------- |
+| [GCP](providers/gcp) | ✓              |
+| [AWS](providers/aws) | In progress... |
+| Azure                | Not started    |
+| NGC                  | Not started    |

## Ready?

5 changes: 1 addition & 4 deletions docs/kuda/cli.md
@@ -45,10 +45,7 @@ This command:
- Starts a development pod based on the Deep Learning VM
- Synchronises the directory provided as a parameter with the remote node

-List of recommended `base-image`:
-
-- all images from [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda/)
-- gcloud's [Deep Learning containers](https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container)
+You can find a list of suggested `base-image` values in the [remote development](remote_development.md) documentation.
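
For example, to start a session from one of the suggested CUDA images (the same invocation shown in the remote development guide; the image tag is just an example):

```bash
# Spawn a remote dev session built on a CUDA base image.
kuda dev start nvidia/cuda:10.1-base
```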

### → Stop

25 changes: 11 additions & 14 deletions docs/kuda/getting_started.md
@@ -21,34 +21,31 @@ This process can take a while since it will create a remote cluster on GKE and i

## 2 - Develop

### • Initialize

Retrieve a simple demo application:

```bash
git clone https://github.com/cyrildiagne/kuda-apps
cd kuda-apps/hello-gpu
```

+Install the example dependencies (feel free to create a virtualenv or a [remote dev session](https://docs.kuda.dev/kuda/remote_development)).
+Then start the example in dev mode. It will reload automatically when you make changes from your local machine:

```bash
pip install -r requirements.txt
kuda app dev my-hello-gpu
```

### • Run and Test

-Then start the example in dev mode. It will reload automatically when you make changes from your local machine:
+Wait for the app to build and launch. This might take a while if a new node needs to be allocated.
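
If you want to follow the progress, you can watch the pods come up with plain kubectl. This is a sketch that assumes your kubeconfig already points at the cluster created during setup:

```bash
# Watch pods until the app's pods reach the Running state (Ctrl+C to stop watching).
kubectl get pods --watch
```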

+You can then query your application using any program able to make an HTTP request.
+Here is an example using cURL:
```bash
-export PORT=80 && python app.py
+curl -i -H "Host: my-hello-gpu.default.example.com" http://<YOUR-CLUSTER-IP>
```

-Open `http://localhost` in a web browser to visit the app. Try making changes to the code and reload the page.

Press `Ctrl+C` to stop running the application.

-## Deploy
+## 3 - Deploy

You can then deploy the app as a serverless API. This will create an endpoint that scales the GPU nodes down to 0 when it is not being used.

@@ -60,18 +57,18 @@ kuda app deploy hello-world:0.1.0

→ For more information on the `kuda app deploy` command, check the [reference](https://docs.kuda.dev/kuda/cli#deploy).

-## 3 - Call your API
+## 4 - Call your API

You can then test your application by making a simple HTTP request to your cluster.
First, retrieve the IP address of your cluster by running `kuda get status`.

```bash
-curl -H "Host: hello-world.example.com" http://<cluster-ip-address>
+curl -i -H "Host: my-hello-gpu.default.example.com" http://<YOUR-CLUSTER-IP>
```

The first call might need to spawn an instance, which could take a while. Subsequent calls should be a lot faster.
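
To see the cold-start effect for yourself, you can time two consecutive calls (same host header and cluster IP as above):

```bash
# First call: may trigger a cold start while a GPU node is provisioned.
time curl -i -H "Host: my-hello-gpu.default.example.com" http://<YOUR-CLUSTER-IP>
# Second call: served by the now-warm instance, so it should return much faster.
time curl -i -H "Host: my-hello-gpu.default.example.com" http://<YOUR-CLUSTER-IP>
```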

-## 4 - Cleanup
+## 5 - Cleanup

### • Delete the cluster

11 changes: 9 additions & 2 deletions docs/kuda/remote_development.md
@@ -1,5 +1,7 @@
# Remote Development

+**⚠️ Remote development and this guide are still a work in progress. Following this guide probably won't work yet.**

This guide will walk you through the process of developing remotely on the Kubernetes cluster.

Make sure you have a cluster running with Kuda's dependencies.
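
If you don't have one yet, the setup command can create it. A sketch, using `gcp` (the provider used in the getting-started guide) as an example:

```bash
# Create (or upgrade) a cluster with all of Kuda's requirements on GCP.
kuda setup gcp
```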
@@ -25,15 +27,20 @@ cd hello-gpu
Start a remote dev session that will be provisioned on your cluster.

```bash
-kuda dev start nvidia/cuda:10.1-base
+kuda dev start gcr.io/deeplearning-platform-release/base-cu100
```

-`nvidia/cuda:10.1-base` Is the docker image to use as base. It allows you to specify which version of CUDA and CuDNN you need. You can find a list of suggested images in the kuda dev [reference page](https://docs.kuda.dev/kuda/cli#dev).
+`gcr.io/deeplearning-platform-release/base-cu100` is the Docker image to use as a base. This image is convenient if you're using Kuda for deep learning, since it packages most of the software needed in the deep-learning development cycle. It also lets you specify which version of CUDA and cuDNN you need.

This command will start the remote session and synchronize the CWD (current working directory) with the remote instance.

+Once started, it will also print the cluster's IP address and port for later use. Make a note of it, as we'll refer to it below as `<your-dev-session-external-ip:port>`.
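
For instance, once an app is running inside the dev session, you would query it at that address (placeholder values, to be replaced with what the command printed):

```bash
# Query the app running in the remote dev session.
curl http://<your-dev-session-external-ip:port>
```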

+List of recommended `base-image`:
+
+- all images from [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda/). These images are fairly lightweight, but Python must be installed manually.
+- gcloud's [Deep Learning containers](https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container)

→ For more information on the `kuda dev start` command, check the [reference](https://docs.kuda.dev/kuda/cli#dev).

## • Retrieve & initialize an example application
