# yolo-tensorrt

Optimization of YOLOv8 and YOLOv10 inference with TensorRT.

## Dependencies

- TensorRT >= 10.2.0
- CUDA >= 11.8
- OpenCV >= 4.8.0

## Build

(Optional) Build the Docker image to avoid managing dependencies:

```bash
docker build -t yolofast .  # may take a long time because of the OpenCV build (~1h on a modest machine)
docker run --gpus all -it --name yolofast -v $(pwd):/workspace/yolofast yolofast
```

Then build the project:

```bash
mkdir build && cd build
cmake ..
make -j4
```
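If you build outside Docker, the project's CMake configuration needs to find CUDA, TensorRT, and OpenCV. The fragment below is an illustrative sketch only (target and path names are assumptions, not the repo's actual `CMakeLists.txt`):

```cmake
cmake_minimum_required(VERSION 3.18)
project(yolofast LANGUAGES CXX CUDA)

# OpenCV >= 4.8.0 (see Dependencies)
find_package(OpenCV 4.8 REQUIRED)
# CUDA toolkit >= 11.8
find_package(CUDAToolkit 11.8 REQUIRED)

add_executable(yolofast src/main.cpp)  # source path assumed

# TensorRT ships no CMake config; link its libraries directly.
# TENSORRT_DIR would point at your TensorRT installation.
target_include_directories(yolofast PRIVATE ${TENSORRT_DIR}/include ${OpenCV_INCLUDE_DIRS})
target_link_directories(yolofast PRIVATE ${TENSORRT_DIR}/lib)
target_link_libraries(yolofast PRIVATE nvinfer nvonnxparser CUDA::cudart ${OpenCV_LIBS})
```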

## Usage

```
Usage: yolofast [options]
Options:
  --model <name>          Specify the YOLOv8n or YOLOv10n model.
  --video <path>          Run inference on a video and save it as 'detection_output.avi'.
  --image <path>          Run inference on an image and save it as 'detection_output.jpg'.
  --build <precision>     Specify the precision used for optimization (fp32, fp16, or int8).
  --timing                Enable timing information.
```

Example:

```bash
./yolofast --model yolov8 --build fp16 --video ../samples/video.mp4 --timing
```

## Results

*(detection output image)*

The model also had no problem running on a video:

*(video demo)*