Commit a90d286

Enhance the documentation. (#374)
* refact: refact doc theme
* refact: resolve `html_static_path` warning
* fix: myst-parser warning
* fix: NVIDIA naming
* refact: C++ installation doc
* fix: training setup
* refact: refine yixiao's doc [training.install]
* fix: ONNX link
* fix: typo tl -> tf
* refact: refine wording. thx Luo
* refact: pin doc utility versions
* fix: markdown style
* refact: quick_start/prediction doc
* refact: update logo in config
* feat: auto glob tensorflow GraphDef input/output nodes
* refact: doc - quick-start
* refact: figure in docs -> center
1 parent 9858573 commit a90d286

File tree: 17 files changed, +431 -262 lines

README.md (+1, -1)

@@ -141,7 +141,7 @@ We demonstrate how to train a custom pose estimation model with HyperPose. Hyper
 from hyperpose import Config, Model, Dataset
 
 # Set model name to distinguish models (necessary)
-Config.set_model_name("My_lopps")
+Config.set_model_name("MyLightweightOpenPose")
 
 # Set model type, model backbone and dataset
 Config.set_model_type(Config.MODEL.LightweightOpenpose)

docs/conf.py (+12, -6)

@@ -12,9 +12,10 @@
 #
 import os
 import sys
+
 dir_path = os.path.dirname(os.path.realpath(__file__))
 sys.path.insert(0, os.path.abspath('../'))
-sys.path.insert(0, os.path.abspath('../Hyperpose'))
+sys.path.insert(0, os.path.abspath('../hyperpose'))
 
 # -- Project information -----------------------------------------------------
 
@@ -32,10 +33,15 @@
     'sphinx.ext.todo',
     'sphinx.ext.githubpages',
     'sphinx_markdown_tables',
-    'recommonmark',
-    'numpydoc'
+    'myst_parser',
+    'numpydoc',
 ]
 
+myst_enable_extensions = ["colon_fence"]
+pygments_style = "sphinx"
+pygments_dark_style = "monokai"
+html_logo = "markdown/images/logo.png"
+
 autodoc_mock_imports = [
     'gridfs',
     'horovod',
@@ -52,6 +58,7 @@
     'tensorflow',
     'tqdm',
     'h5py',
+    'tensorlayer',
 
     'tensorlayer.third_party.roi_pooling.roi_pooling.roi_pooling_ops',
 ]
@@ -73,13 +80,12 @@
 
 # The theme to use for HTML and HTML Help pages. See the documentation for
 # a list of builtin themes.
-#
-html_theme = 'sphinx_rtd_theme'
+html_theme = 'furo'
 
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+html_static_path = []
 
 master_doc = 'index'
 
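For context, the switch to `furo` and `myst_parser` above changes the documentation build dependencies. Below is a minimal, hedged sketch of rebuilding the docs locally against this configuration; the package names are inferred from the `extensions` list and theme setting, while the pinned doc-utility versions mentioned in the commit message (kept in the repository) remain the authoritative source.

```bash
# Toolchain implied by the updated conf.py: furo theme, MyST markdown parser,
# markdown tables, and numpydoc (versions left unpinned here, unlike the repo).
pip install -U sphinx furo myst-parser sphinx-markdown-tables numpydoc

# Build the HTML docs from the repository root; conf.py lives in docs/.
sphinx-build -b html docs docs/_build/html
```
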
docs/index.rst (+1, -1)

@@ -7,7 +7,7 @@ Welcome to HyperPose's Documentation!
 =====================================
 
 Installation
------------
+------------
 .. toctree::
    :maxdepth: 1
 
docs/markdown/images/data_vis_dir.png (binary image changed: 64.9 KB → 275 KB)

docs/markdown/install/prediction.md (+19, -13)

@@ -3,7 +3,7 @@
 Note that C++ prediction library requires NVIDIA GPU acceleration.
 HyperPose is developed and frequently tested on Linux platforms (i.e., Ubuntu 18.04). Hence, we recommend you to build HyperPose on Linux.
 
-## Docker Environment Installation (RECOMMENDED)
+## Container Installation (RECOMMENDED)
 
 To ease the installation, you can use HyperPose library in our docker image where the environment is pre-installed (including pretrained models).
 
@@ -15,13 +15,13 @@ To test your docker environment compatibility and get related instructions:
 wget https://raw.githubusercontent.com/tensorlayer/hyperpose/master/scripts/test_docker.py -qO- | python
 ```
 
-- [CUDA Driver >= 418.81.07 (bounded by NVidia Docker)](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#driver-installation)
-- [NVidia Docker >= 2.0](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#pre-requisites)
+- [CUDA Driver >= 418.81.07 (bounded by NVIDIA Docker)](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#driver-installation)
+- [NVIDIA Docker >= 2.0](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#pre-requisites)
 - [Docker >= 19.03](https://docs.docker.com/engine/install/)
 
 ### Official Docker Image
 
-NVidia docker support is required to execute our docker image.
+NVIDIA docker support is required to execute our docker image.
 
 The official image is on [DockerHub](https://hub.docker.com/r/tensorlayer/hyperpose).
 
@@ -55,19 +55,25 @@ docker run --rm --gpus all $(USER_DEF_NAME)
 * Third-Party
   * OpenCV3.2+. (**[OpenCV 4+](https://docs.opencv.org/trunk/d7/d9f/tutorial_linux_install.html) is highly recommended**)
   * CUDA related:
-    * (suggested) [CUDA 10.2](https://developer.nvidia.com/cuda-downloads), [CuDNN 7.6.5](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html), [TensorRT 7.1](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
+    * (suggested) [CUDA 10.2](https://developer.nvidia.com/cuda-downloads), [CuDNN 8.2.0](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html), [TensorRT >= 7.1, <= 8.0](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
     * (minimal) [CUDA 10.0](https://developer.nvidia.com/cuda-downloads), [CuDNN 7.6.5](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html), [TensorRT 7.0](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
   * gFlags (for command-line tool/examples/tests)
 
-> Packages of other versions might also work but not tested.
+```{admonition} Note
+Packages of other versions might also work but not tested.
+```
+
+```{admonition} TensorRT Tips
+For Linux users, you are highly recommended to install it in a system-wide setting. You can install TensorRT7 via the [debian distributions](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian) or [NVIDIA network repo ](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install)(CUDA and CuDNN dependency will be automatically installed).
+```
 
-> **About TensorRT installation**
->
-> - For Linux users, you are highly recommended to install it in a system-wide setting. You can install TensorRT7 via the [debian distributions](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian) or [NVIDIA network repo ](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install)(CUDA and CuDNN dependency will be automatically installed).
-> - Different TensorRT version requires specific CUDA and CuDNN version. For specific CUDA and CuDNN requirements of TensorRT7, please refer to [this](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#platform-matrix).
-> - Also, for Ubuntu 18.04 users, this [3rd party blog](https://ddkang.github.io/2020/01/02/installing-tensorrt.html) may help you.
+```{admonition} CUDA-CuDNN-TensorRT Compatibility
+:class: warning
+
+Different TensorRT version requires specific CUDA and CuDNN version. For specific CUDA and CuDNN requirements of TensorRT7, please refer to [this](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#platform-matrix).
+```
 
-### Build On Ubuntu 18.04
+### Build on Ubuntu 18.04
 
 ```bash
 # >>> Install OpenCV3+ and other dependencies.
@@ -102,4 +108,4 @@ cmake --build .
 
 - **Step 3**: Execute your codes!
 
-Go to [Quick Start](../quick_start/prediction.md) to test your installation.
+Go to [Quick Start](../quick_start/prediction.md) to test your installation.
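As a quick illustration of the container route recommended in this file, here is a hedged sketch of pulling and running the official image; the image name follows the DockerHub link in the diff, and any command-line arguments to the container are omitted since they depend on the example you want to run.

```bash
# Pull the official HyperPose image referenced on DockerHub.
docker pull tensorlayer/hyperpose

# Run it with GPU access through NVIDIA Docker, mirroring the
# `docker run --rm --gpus all ...` pattern shown in the hunk above.
docker run --rm --gpus all tensorlayer/hyperpose
```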

docs/markdown/install/training.md (+104, -71)

@@ -1,116 +1,149 @@
 # Python Training Library Installation
 
-## Prerequisites
-* [Anaconda3](https://www.anaconda.com/products/individual):<br>
-Anaconda is used to create virtual environment that facilitates building the running environment and ease the complexity of library depedencies. Here we mainly use it to create virtual python environment and install cuda run-time libraries.
-* [CUDA](https://developer.nvidia.com/cuda-downloads):<br>
-CUDA enviroment is essential to run deep learning neural networks on GPUs. The CUDA installation packages to download should match your system and your NVIDIA Driver version.
+## Configure CUDA environment
 
-## Configure environment
-&emsp;There are two ways to install hyperpose python training library.
+You can configure your CUDA either by Anaconda or your system setting.
 
-&emsp;All the following instructions have been tested on the environments below:<br>
-| OS | NVIDIA Driver | CUDA Toolkit | GPU |
-| ------------ | ------------- | ------------ | -------------- |
-| Ubuntu 18.04 | 410.79 | 10.0 | Tesla V100-DGX |
-| Ubuntu 18.04 | 440.33.01 | 10.2 | Tesla V100-DGX |
-| Ubuntu 18.04 | 430.64 | 10.1 | TITAN RTX |
-| Ubuntu 18.04 | 430.26 | 10.2 | TITAN XP |
-| Ubuntu 16.04 | 430.50 | 10.1 | RTX 2080Ti |
+### Using CUDA toolkits from Anaconda (RECOMMENDED)
+
+:::{admonition} Prerequisites
+- [Anaconda3](https://www.anaconda.com/products/individual)
+- [NVidia Driver >= 410.79 (required by CUDA 10)](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#driver-installation)
+:::
 
+It is suggested to create new conda environment regarding the CUDA requirements.
 
-&emsp;Before all, we recommend you to create anaconda virtual environment first, which could handle the possible conflicts between the libraries you already have in your computers and the libraries hyperpose need to install, and also handle the dependencies of the cudatoolkit and cudnn library in a very simple way.<br>
-&emsp;To create the virtual environment, run the following command in bash:
 ```bash
-# >>> create virtual environment (choose yes)
-conda create -n hyperpose python=3.7
+# >>> create virtual environment
+conda create -n hyperpose python=3.7 -y
 # >>> activate the virtual environment, start installation
 conda activate hyperpose
 # >>> install cudatoolkit and cudnn library using conda
 conda install cudatoolkit=10.0.130
 conda install cudnn=7.6.0
 ```
+
+::::{warning}
+It is also possible to install CUDA dependencies without creating a new environment.
+But it might introduce environment conflicts.
+
+:::{code-block} bash
+conda install cudatoolkit=10.0.130
+conda install cudnn=7.6.0
+:::
+::::
+
+
+### Using system-wide CUDA toolkits
+
+Users may also directly depend on the system-wide CUDA and CuDNN libraries.
+
+HyperPose have been tested on the environments below:
+
+| OS | NVIDIA Driver | CUDA Toolkit | GPU |
+| ------------ | ------------- | ------------ | -------------- |
+| Ubuntu 18.04 | 410.79 | 10.0 | Tesla V100-DGX |
+| Ubuntu 18.04 | 440.33.01 | 10.2 | Tesla V100-DGX |
+| Ubuntu 18.04 | 430.64 | 10.1 | TITAN RTX |
+| Ubuntu 18.04 | 430.26 | 10.2 | TITAN XP |
+| Ubuntu 16.04 | 430.50 | 10.1 | RTX 2080Ti |
+
+::::{admonition} Check CUDA/CuDNN versions
 
-&emsp;After configuring and activating conda enviroment, we can then begin to install the hyperpose.<br>
+To test CUDA version, run `nvcc --version`: the highlight line in the output indicates that you have CUDA 11.2 installed.
+:::{code-block} bash
+:emphasize-lines: 5
+nvcc --version
+# ========== Valid output looks like ==========
+# nvcc: NVIDIA (R) Cuda compiler driver
+# Copyright (c) 2005-2020 NVIDIA Corporation
+# Built on Mon_Nov_30_19:08:53_PST_2020
+# Cuda compilation tools, release 11.2, V11.2.67
+# Build cuda_11.2.r11.2/compiler.29373293_0
+:::
 
-### (I)The first method to install is to put hyperpose python module in the working directory.(recommand)<br>
-&emsp;After git-cloning the source [repository](https://github.com/tensorlayer/hyperpose.git), you can directly import hyperpose python library under the root directory of the cloned repository.<br>
+To check your system-wide CuDNN version **on Linux**: the output (in the comment) shows that we have CuDNN 8.0.5.
+:::{code-block} bash
+ls /usr/local/cuda/lib64 | grep libcudnn.so
+# === Valid output looks like ===
+# libcudnn.so
+# libcudnn.so.8
+# libcudnn.so.8.0.5
+:::
+::::
 
-&emsp;To make importion available, you should install the prerequist dependencies as followed:<br>
-&emsp;you can either install according to the requirements.txt in the [repository](https://github.com/tensorlayer/hyperpose.git)
+## Install HyperPose Python training library
+
+### Install with `pip`
+
+To install a stable library from [Python Package Index](https://github.com/tensorlayer/hyperpose):
 
 ```bash
-# install according to the requirements.txt
-pip install -r requirements.txt
+pip install -U hyperpose
 ```
 
-&emsp;or install libraries one by one
+Or you can install a specific release of hyperpose from GitHub, for example:
 
 ```bash
-# >>> install tensorflow of version 2.3.1
-pip install tensorflow-gpu==2.3.1
-# >>> install tensorlayer of version 2.2.3
-pip install tensorlayer==2.2.3
-# >>> install other requirements (numpy<=17.0.0 because it has conflicts with pycocotools)
-pip install opencv-python
-pip install numpy==1.16.4
-pip install pycocotools
-pip install matplotlib
+export HYPERPOSE_VERSION="2.2.0-alpha"
+pip install -U https://github.com/tensorlayer/hyperpose/archive/${HYPERPOSE_VERSION}.zip
 ```
 
-&emsp;This method of installation use the latest source code and thus is less likely to meet compatibility problems.<br><br>
+More GitHub releases and its version can be found [here](https://github.com/tensorlayer/hyperpose/releases).
 
-### (II)The second method to install is to use pypi repositories.<br>
-&emsp;We have already upload hyperpose python library to pypi website so you can install it using pip, which gives you the last stable version.
+### Local installation
+
+You can also install HyperPose by installing the raw GitHub repository, this is usually for developers.
 
 ```bash
-pip install hyperpose
+# Install the source codes from GitHub
+git clone https://github.com/tensorlayer/hyperpose.git
+pip install -U -r hyperpose/requirements.txt
+
+# Add `hyperpose/hyperpose` to `PYTHONPATH` to help python find it.
+export HYPERPOSE_PYTHON_HOME=$(pwd)/hyperpose
+export PYTHONPATH=$HYPERPOSE_PYTHON_HOME/python:${PYTHONPATH}
 ```
 
-&emsp;This will download and install all dependencies automatically.
+## Check the installation
+
+Let's check whether HyperPose is installed by running following commands:
 
-&emsp;Now after installing dependent libraries and hyperpose itself, let's check whether the installation successes.
-run following command in bash:
 ```bash
-# >>> now the configuration is done, check whether the GPU is avaliable.
-python
->>> import tensorflow as tf
->>> import tensorlayer as tl
->>> tf.test.is_gpu_available()
-# >>> if the output is True, congratulation! you can import and run hyperpose now
->>> from hyperpose import Config,Model,Dataset
+python -c '
+import tensorflow as tf            # Test TensorFlow installation
+import tensorlayer as tl           # Test TensorLayer installation
+assert tf.test.is_gpu_available()  # Test GPU availability
+import hyperpose                   # Test HyperPose import
+'
 ```
 
-## Extra configuration for exporting model
-&emsp;The hypeprose python training library handles the whole pipelines for developing the pose estimation system, including training, evaluating and testing. Its goal is to produce a **.npz** file that contains the well-trained model weights.
+## Optional Setup
+
+### Extra configurations for exporting models
 
-&emsp;For the training platform, the enviroment configuration above is engough. However, most inference engine only accept .pb format or .onnx format model, such as [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
+The hypeprose python training library handles the whole pipelines for developing the pose estimation system, including training, evaluating and testing. Its goal is to produce a **.npz** file that contains the well-trained model weights.
 
-&emsp;Thus, one need to convert the trained model loaded with **.npz** file weight to **.pb** format or **.onnx** format for further deployment, which need extra configuration below:<br>
+For the training platform, the enviroment configuration above is engough. However, most inference engine accepts ProtoBuf or [ONNX](https://onnx.ai/) format model. For example, the HyperPose C++ inference engine leverages [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html) as the DNN engine, which takes ONNX models as inputs.
 
-### (I)Convert to .pb format:<br>
-&emsp;To convert the model into .pb format, we use *@tf.function* to decorate the *infer* function of each model class, so we can use the *get_concrete_function* function from tensorflow to consctruct the frozen model computation graph and then save it in .pb format.
+Thus, one need to convert the trained model loaded with **.npz** file weight to **.pb** format or **.onnx** format for further deployment, which need extra configuration below:
 
-&emsp;We already provide a script with cli to facilitate conversion, which located at [export_pb.py](https://github.com/tensorlayer/hyperpose/blob/master/export_pb.py). What we need here is only *tensorflow* library that we already installed.
+#### Converting a ProtoBuf model
 
-### (II)Convert to .onnx format:<br>
-&emsp;To convert the model in .onnx format, we need to first convert the model into .pb format, then convert it from .pb format into .onnx format. Two extra library are needed:
+To convert the model into ProtoBuf format, we use `@tf.function` to decorate the `infer` function for each model class, and we then can use the `get_concrete_function` function from tensorflow to consctruct the frozen model computation graph and then save it with ProtoBuf format.
 
-* [tf2onnx](https://github.com/onnx/tensorflow-onnx):<br>
-*tf2onnx* is used to convert .pb format model into .onnx format model. more information see [here](https://github.com/onnx/tensorflow-onnx).<br>
-install tf2onnx by running:
+We provide [a commandline tool](https://github.com/tensorlayer/hyperpose/blob/master/export_pb.py) to facilitate the conversion. The prerequisite of this tool is a tensorflow library installed along with HyperPose's dependency.
 
-```bash
-pip install -U tf2onnx
-```
+#### Converting a ONNX model
 
-* [graph_transforms](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#using-the-graph-transform-tool):<br>
-*graph_transform* is used to check the input and output node of the .pb file if one doesn't know. when convert .pb file into .onnx file using tf2onnx, one is required to provide the input node name and output node name of the computation graph stored in .pb file, so he may need to use *graph_transform* to inspect the .pb file to get node names.<br>
-build graph_transforms according to [tensorflow tools](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#using-the-graph-transform-tool).
+To convert a trained model into ONNX format, we need to first convert the model into ProtoBuf format, we then convert a ProtoBuf model into ONNX format, which requires an additional library: [**tf2onnx**](https://github.com/onnx/tensorflow-onnx) for converting TensorFlow's ProtoBuf model into ONNX format.
 
-## Extra configuration for parallel training
-&emsp;The hyperpose python training library use the High performance distributed machine learning framework **Kungfu** for parallel training.<br>
-&emsp;Thus to use the parallel training functionality of hyperpose, please install [Kungfu](https://github.com/lsds/KungFu) according to the official instructon it provides.
+To install `tf2onnx`, we simply run:
 
+```bash
+pip install -U tf2onnx
+```
 
+### Extra configuration for distributed training with KungFu
 
+The HyperPose python training library can also perform distributed training with [Kungfu](https://github.com/lsds/KungFu). To enable parallel training, please install [Kungfu](https://github.com/lsds/KungFu) according to its official instructon.
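To make the ProtoBuf-to-ONNX step described above concrete, here is a hedged sketch of the `tf2onnx` invocation. The frozen-graph filename and the `input`/`output` node names below are placeholders, not HyperPose's actual node names; the real names come from the exported graph, which the updated export tooling can now discover automatically according to the commit message ("auto glob tensorflow GraphDef input/output nodes").

```bash
# Convert a frozen TensorFlow GraphDef (e.g. produced by export_pb.py) to ONNX.
# Replace the node names with the real input/output tensors of your graph.
python -m tf2onnx.convert \
    --graphdef frozen_model.pb \
    --inputs input:0 \
    --outputs output:0 \
    --opset 11 \
    --output model.onnx
```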

docs/markdown/performance/supports.md (+1, -1)

@@ -20,4 +20,4 @@
 
 ### Released Prediction Models
 
-We released the models on [Google Drive](TinyVGG-V1-HW=256x384.uff). `.onnx` and `.uff` files are for inference.
+We released the models on [Google Drive](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing). `.onnx` and `.uff` files are for inference.
