Merge pull request #285 from univerone/add-docker-build
[MRG]:whale: Update modelci docker image (CPU&GPU). docker-compose is done
HuaizhengZhang committed Apr 19, 2021
2 parents 8ba8605 + b99c2aa commit bf1123e
Showing 13 changed files with 517 additions and 114 deletions.
4 changes: 2 additions & 2 deletions .dockerignore
@@ -5,7 +5,6 @@ build
dist
*.egg-info
*.egg/
*.pyc
*.swp

.tox
@@ -15,7 +14,8 @@ html/*
__pycache__

### Front end ###
frontend
node_modules
npm-debug.log

### Build cache ###
*/**/*.cache
2 changes: 1 addition & 1 deletion .github/workflows/run_test.yml
@@ -11,7 +11,7 @@ on:
- 'frontend/**'
jobs:
test:
runs-on: ubuntu-latest
runs-on: ubuntu-18.04
services:
mongodb:
image: mongo
140 changes: 106 additions & 34 deletions README.md
@@ -47,79 +47,151 @@ Several features are in beta testing and will be available in the next release s

*If you want to join our development team, please contact huaizhen001 @ e.ntu.edu.sg*

## Demo

The figures below illustrate the web interface of our system and the overall workflow.
| Web frontend | Workflow |
|:------------:|:--------------:|
| <img src="https://i.loli.net/2020/12/10/4FsfciXjtPO12BQ.png" alt="drawing" width="500"/> | <img src="https://i.loli.net/2020/12/10/8IaeW9mS2NjQEYB.png" alt="drawing" width="500"/> |


## Installation Guide

### Prerequisites

- A GNU/Linux environment (Ubuntu preferred)
- [Docker](https://docs.docker.com/engine/install/)
- [Docker Compose](https://docs.docker.com/compose/) (optional, for installation via Docker)
- [TVM](https://github.com/apache/incubator-tvm) and the `tvm` Python module (optional)
- [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html) and its Python API (optional)
- Python >= 3.7
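As a quick sanity check, the prerequisites above can be probed from Python. This is a sketch of our own; the `check_prerequisites` helper is an illustration, not part of MLModelCI:

```python
import shutil
import sys


def check_prerequisites():
    """Probe the prerequisites listed above; returns a name -> bool mapping."""
    return {
        "python>=3.7": sys.version_info >= (3, 7),
        "docker": shutil.which("docker") is not None,
        # optional, only needed for the Docker Compose installation path
        "docker-compose": shutil.which("docker-compose") is not None,
    }


if __name__ == "__main__":
    for name, ok in check_prerequisites().items():
        print(f"{name}: {'OK' if ok else 'missing'}")
```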

### Installation using pip

```shell script
# upgrade installation packages first
pip install -U setuptools requests
# install modelci from GitHub
pip install git+https://github.com/cap-ntu/ML-Model-CI.git@master
```

Once installed, make sure the Docker daemon is running; you can then start the MLModelCI service on a leader server by:

```bash
modelci service init
```

![CLI start service](https://i.loli.net/2021/04/15/rLiMoxkqRO67Tyg.gif)
Or stop the service by:

```bash
modelci service stop
```

![CLI stop service](https://i.loli.net/2021/04/16/jo1ZnWsqrmxFvlU.gif)
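The service lifecycle above can also be scripted. The sketch below is our own; only the two `modelci service` commands come from this README, while the `service_cmd`/`run_service` helpers are hypothetical:

```python
import subprocess


def service_cmd(action):
    """Build the CLI invocation for the service commands shown above."""
    if action not in ("init", "stop"):
        raise ValueError(f"unknown action: {action}")
    return ["modelci", "service", action]


def run_service(action):
    # Requires MLModelCI to be installed and the Docker daemon running.
    return subprocess.run(service_cmd(action), check=True)


if __name__ == "__main__":
    run_service("init")
```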



### Installation using Docker

![](https://img.shields.io/docker/pulls/mlmodelci/mlmodelci.svg)

#### For CPU-only Machines

![](https://img.shields.io/docker/v/mlmodelci/mlmodelci/cpu) ![](https://img.shields.io/docker/image-size/mlmodelci/mlmodelci/cpu)

```shell script
docker pull mlmodelci/mlmodelci:cpu
```

Start the basic services with Docker Compose:

```bash
docker-compose -f ML-Model-CI/docker/docker-compose-cpu-modelhub.yml up -d
```

Stop the services by:
```bash
docker-compose -f ML-Model-CI/docker/docker-compose-cpu-modelhub.yml down
```

#### For CUDA 10.2 Machines

![](https://img.shields.io/docker/v/mlmodelci/mlmodelci/cuda10.2-cudnn8) ![](https://img.shields.io/docker/image-size/mlmodelci/mlmodelci/cuda10.2-cudnn8)

```shell script
docker pull mlmodelci/mlmodelci:cuda10.2-cudnn8
```

Start the basic services with Docker Compose:

```bash
docker-compose -f ML-Model-CI/docker/docker-compose-gpu-modelhub.yml up -d
```

![docker-compose start service](https://i.loli.net/2021/04/15/65oYIBurfhPRK3U.gif)

Stop the services by:

```bash
docker-compose -f ML-Model-CI/docker/docker-compose-gpu-modelhub.yml down
```
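Both Compose variants follow the same pattern, so they can be wrapped in a small helper. The sketch below is our own illustration; only the two compose file paths come from the commands above:

```python
import subprocess

COMPOSE_FILES = {
    # paths taken from the README commands above
    "cpu": "ML-Model-CI/docker/docker-compose-cpu-modelhub.yml",
    "gpu": "ML-Model-CI/docker/docker-compose-gpu-modelhub.yml",
}


def compose_cmd(variant, action):
    """Build the docker-compose invocation for the chosen variant."""
    base = ["docker-compose", "-f", COMPOSE_FILES[variant]]
    if action == "up":
        return base + ["up", "-d"]
    if action == "down":
        return base + ["down"]
    raise ValueError(f"unknown action: {action}")


if __name__ == "__main__":
    # requires Docker and Docker Compose to be installed
    subprocess.run(compose_cmd("gpu", "up"), check=True)
```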

![docker-compose stop service](https://i.loli.net/2021/04/15/CyNzo4uhXkSrQRE.gif)

<!-- Please refer to [here](/integration/README.md) for more information. -->


## Usage

**We provide three ways to use MLModelCI: the CLI, running it programmatically, and the web interface.**

### 1. CLI

```console
# publish a model to the system
modelci@modelci-PC:~$ modelci modelhub publish -f example/resnet50.yml
{'data': {'id': ['60746e4bc3d5598e0e7a786d']}, 'status': True}
```

Please refer to [WIKI](https://github.com/cap-ntu/ML-Model-CI/wiki) for more CLI options.

### 2. Running Programmatically

```python
# utilize the convert function
from modelci.hub.converter import convert
from modelci.types.bo import IOShape

# the system can trigger the function automatically;
# users can also call it individually
convert(
    '<torch model>',
    src_framework='pytorch',
    dst_framework='onnx',
    save_path='<path to export onnx model>',
    inputs=[IOShape([-1, 3, 224, 224], dtype=float)],
    outputs=[IOShape([-1, 1000], dtype=float)],
    opset=11,
)
```
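The `IOShape` specs passed to `convert` pair a shape with a dtype, with `-1` marking a dynamic dimension (here, the batch size). A rough illustration of that contract, using a stand-in class of our own rather than the real `modelci.types.bo.IOShape` (the `matches` method is purely illustrative):

```python
class IOShape:
    """Stand-in for modelci.types.bo.IOShape: a shape plus a dtype."""

    def __init__(self, shape, dtype):
        self.shape = shape
        self.dtype = dtype

    def matches(self, concrete_shape):
        """Check a concrete tensor shape against this spec; -1 matches any size."""
        if len(concrete_shape) != len(self.shape):
            return False
        return all(s == -1 or s == c for s, c in zip(self.shape, concrete_shape))


# a dynamic batch of 3x224x224 images, as in the convert() call above
spec = IOShape([-1, 3, 224, 224], dtype=float)
print(spec.matches([8, 3, 224, 224]))  # any batch size fits the -1 dimension
```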

### 3. Web Interface

If you installed MLModelCI via pip, you need to start the frontend service manually.

```bash
# Navigate to the frontend folder
cd frontend
# Install dependencies
yarn install
# Start the frontend
yarn start
```

The frontend will start on <http://localhost:3333>
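To confirm the frontend actually came up on port 3333, a small TCP probe can be used. This helper is our own, not part of MLModelCI:

```python
import socket


def port_open(host="localhost", port=3333, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("frontend up:", port_open())
```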

## Quickstart with Notebook

- [Publish an image classification model](./example/notebook/image_classification_model_deployment.ipynb) [![nbviewer](https://raw.githubusercontent.com/jupyter/design/master/logos/Badges/nbviewer_badge.svg)](https://nbviewer.jupyter.org/github/cap-ntu/ML-Model-CI/blob/master/example/notebook/image_classification_model_deployment.ipynb)