Accelerating MobileNet-SSD with TensorRT.
TensorRT-MobileNet-SSD can run at 50 FPS on a Jetson TX2.
Requirements:
1. TensorRT 4
2. cuDNN 7
3. OpenCV
Run:
cmake .
make
./build/bin/mobileNet
References:
https://github.com/saikumarGadde/tensorrt-ssd-easy
https://github.com/chuanqi305/MobileNet-SSD
I replaced the depthwise convolution with a grouped convolution (group_conv), because grouped convolution has been optimized in cuDNN 7.
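The swap is valid because a depthwise convolution is exactly a grouped convolution with the number of groups equal to the number of channels. Below is a minimal NumPy sketch (not the repo's C++/Caffe code) of a naive grouped convolution, showing that with groups == C_in each output channel depends only on its own input channel:

```python
import numpy as np

def grouped_conv2d(x, w, groups):
    """Naive 2D grouped convolution (no padding, stride 1).
    x: (C_in, H, W); w: (C_out, C_in // groups, kH, kW)."""
    c_in, h, wd = x.shape
    c_out, c_per_group, kh, kw = w.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert c_per_group == c_in // groups
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    out_per_group = c_out // groups
    for g in range(groups):
        xs = x[g * c_per_group:(g + 1) * c_per_group]      # input channels of this group
        ws = w[g * out_per_group:(g + 1) * out_per_group]  # filters of this group
        for o in range(out_per_group):
            for i in range(h - kh + 1):
                for j in range(wd - kw + 1):
                    out[g * out_per_group + o, i, j] = np.sum(
                        xs[:, i:i + kh, j:j + kw] * ws[o])
    return out

# Depthwise convolution as the special case groups == C_in == C_out:
# one (1, kH, kW) filter per channel.
x = np.random.rand(4, 6, 6)
w = np.random.rand(4, 1, 3, 3)
depthwise = grouped_conv2d(x, w, groups=4)
```

With groups == 4 here, channel c of the output is just x[c] convolved with w[c, 0], which is the depthwise operation the network originally used.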
I retrained MobileNet-SSD; my number of classes is 5.
TODO:
- Save the serialized model
- Fix the bug of getting different results for the same input
- Reduce the time cost of decoding pictures: "imread" is the bottleneck and costs too much
- Modify the architecture to decrease the time cost
If you want to decrease the time cost of "imread", you could rebuild OpenCV: https://github.com/jetsonhacks/buildOpenCVTX2
Updates:
- Added a producer-consumer pipeline to overlap image decoding with inference.
- The bug of getting different results with the same input has been fixed.
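The producer-consumer scheme can be sketched as follows. This is a minimal Python illustration (the repo itself is C++); `decode` and `infer` are hypothetical stand-ins for `cv::imread` and the TensorRT inference call, and the bounded queue lets image decoding run ahead of inference instead of serializing the two stages:

```python
import queue
import threading

# Hypothetical stand-ins for the real pipeline stages.
def decode(path):
    return f"frame:{path}"      # stands in for cv::imread(path)

def infer(frame):
    return f"det:{frame}"       # stands in for the TensorRT inference call

paths = [f"img{i}.jpg" for i in range(8)]
q = queue.Queue(maxsize=4)      # bounded queue throttles the producer
results = []

def producer():
    for p in paths:
        q.put(decode(p))        # decoding overlaps with the consumer's inference
    q.put(None)                 # sentinel: no more frames

def consumer():
    while True:
        frame = q.get()
        if frame is None:
            break
        results.append(infer(frame))

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```

A single producer and single consumer over a FIFO queue also preserves frame order, so detections come out in the same order the images were listed.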