Core components of PathologyGo, an AI assistance system for histopathological inference.
## Dependencies
- Docker
- Python 2.7 and 3.x
- openslide
- tensorflow_serving
- grpc
- pillow
- numpy
- opencv-python
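The Python packages above can be installed with pip. Note that the OpenSlide Python binding is published on PyPI as `openslide-python` (and also requires the native OpenSlide library), and the TensorFlow Serving client API is `tensorflow-serving-api`; these package names are the usual PyPI ones, not pinned by this repo:

```shell
# Illustrative install command; adjust versions/pins to your environment.
pip install openslide-python tensorflow-serving-api grpcio pillow numpy opencv-python
```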
## Dockerized TensorFlow Serving
- GPU version: GitHub, Docker Hub.
- CPU version: Docker Hub.
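Once the image is pulled or built, the serving container can be launched along these lines. The model path, model name, and image tag below are illustrative, not taken from this repo:

```shell
# Serve an exported model over gRPC on port 8500 (GPU image shown;
# use tensorflow/serving:latest for the CPU-only variant).
docker run --rm --gpus all -p 8500:8500 \
    -v /path/to/exported_model:/models/pathology \
    -e MODEL_NAME=pathology \
    tensorflow/serving:latest-gpu
```

Older Docker setups may need `nvidia-docker` instead of the `--gpus` flag.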
## Quick Start
This code is easy to use. Just change the paths to point at your data repository:
```python
import os

from utils import config

# Restrict CUDA to the configured inference GPUs before any GPU library loads.
GPU_LIST = config.INFERENCE_GPUS
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join('{0}'.format(n) for n in GPU_LIST)

from inference import Inference

if __name__ == '__main__':
    pg = Inference(data_dir='/path/to/data/', data_list='/path/to/list',
                   class_num=2, result_dir='./result', use_level=1)
    pg.run()
```
You may configure all model-specific parameters in `utils/config.py`.
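As an illustration, such a config module might look like the following. Apart from `INFERENCE_GPUS` (used in the Quick Start snippet) and the model name mentioned in the DIY Notes, all field names and values here are hypothetical; check your own `utils/config.py` for the actual fields:

```python
# Hypothetical sketch of utils/config.py.
INFERENCE_GPUS = [0, 1]          # GPUs exposed via CUDA_VISIBLE_DEVICES

# TensorFlow Serving endpoint and model name (illustrative values).
SERVING_HOST = 'localhost:8500'
MODEL_NAME = 'pathology'
```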
## Example
Take the CAMELYON16 test dataset as an example: the data path should be `/data/CAMELYON/`, and the content of the data list is

```
001.tif
002.tif
...
```

The predicted heatmaps will be written to `./result`.
## DIY Notes
You may use other exported models: change the model name for TensorFlow Serving in `utils/config.py`, and remember to modify `class_num` and `use_level` accordingly. Note that the default input and output tensor names should be `input` and `output`.
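If you are unsure which tensor names your exported model uses, TensorFlow's `saved_model_cli` tool (shipped with the `tensorflow` pip package) prints the serving signature; the model path and version directory below are illustrative:

```shell
# List the input/output tensors of an exported SavedModel; the names
# shown must match what the serving client expects.
saved_model_cli show --dir /path/to/exported_model/1 \
    --tag_set serve --signature_def serving_default
```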