This document has instructions for how to run UNet for the following modes/precisions:
- FP32 inference
1. Clone this intelai/models repository:

   ```
   $ git clone [email protected]:IntelAI/models.git
   ```

   This repository includes launch scripts for running UNet.
2. Download and extract the pretrained model:

   ```
   $ wget https://storage.googleapis.com/intel-optimized-tensorflow/models/unet_fp32_pretrained_model.tar.gz
   $ tar -xvf unet_fp32_pretrained_model.tar.gz
   ```
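   The extracted checkpoint directory is what you will pass as `--checkpoint` in step 4. As a quick, optional sanity check (assuming the archive unpacks to a directory named `unet_trained`, which is what the `--checkpoint` path in step 4 suggests):

   ```
   # List the extracted checkpoint files; the directory name here is an
   # assumption based on the --checkpoint path used later in step 4.
   $ ls unet_trained
   ```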
3. Clone the tf_unet repository, and then fetch PR #202 to get the CPU optimizations:

   ```
   $ git clone [email protected]:jakeret/tf_unet.git
   $ cd tf_unet/
   $ git fetch origin pull/202/head:cpu_optimized
   From github.com:jakeret/tf_unet
    * [new ref]         refs/pull/202/head -> cpu_optimized
   $ git checkout cpu_optimized
   Switched to branch 'cpu_optimized'
   ```
4. Navigate to the `benchmarks` directory in your local clone of the intelai/models repo from step 1. The `launch_benchmark.py` script in the `benchmarks` directory is used for starting a model run in an optimized TensorFlow docker container. It has arguments to specify which model, framework, mode, precision, and docker image to use, along with the checkpoint files that were downloaded in step 2 and the path to the UNet model repository that you cloned in step 3.
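   As a minimal sketch of getting there (assuming the repository from step 1 was cloned into the default `models` directory):

   ```
   # Change into the benchmarks directory of the intelai/models clone.
   $ cd models/benchmarks
   ```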
   UNet can be run to test batch and online inference using the following command with your checkpoint and model-source-dir paths:

   ```
   python launch_benchmark.py \
       --model-name unet \
       --precision fp32 \
       --mode inference \
       --framework tensorflow \
       --benchmark-only \
       --batch-size 1 \
       --socket-id 0 \
       --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
       --checkpoint /home/<user>/unet_trained \
       --model-source-dir /home/<user>/tf_unet \
       -- checkpoint_name=model.ckpt
   ```
   Note that the `--verbose` or `--output-dir` flag can be added to the above command to get additional debug output or to change the default output location; see the sketch below.
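   For example (a sketch only; the `...` stands for the arguments shown above, and the output path is a placeholder to substitute):

   ```
   # Same command as above, with debug output enabled and the log
   # directory overridden. Both flags are optional.
   python launch_benchmark.py \
       ... \
       --verbose \
       --output-dir /home/<user>/unet_output
   ```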
5. The log file is saved to the value of `--output-dir`. Below is an example of the tail of the log file:

   ```
   Time spent per BATCH: 1.1043 ms
   Total samples/sec: 905.5344 samples/s
   Ran inference with batch size 1
   Log location outside container: {--output-dir value}/benchmark_unet_inference_fp32_20190201_205601.log
   ```
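   As a quick consistency check on these numbers: with a batch size of 1, throughput is the reciprocal of per-batch latency, and 1 / 0.0011043 s ≈ 905.5 samples/s, which matches the reported samples/sec.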