Thank you for the wonderful work.
I am using YOLOv5 as the object detection model following image fusion. I have infrared and optical (RGB) input streams; the image fusion model (in this case TarDAL) takes two frames as input, i.e. an infrared frame and an RGB frame, and generates a fused frame from them. The output of the fusion model is therefore a single stream. After fusion, the next step is to detect objects in the fused frame using YOLOv5 or another detector. You have implemented YOLO variants on Jetson and GPU; how can I incorporate image fusion as a stage ahead of the primary inference, and what modifications would you suggest?
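For reference, this is roughly the pipeline I have in mind on the GPU side. It is only a minimal sketch: the `fuse_frames` function and the image file names are placeholders of mine, not the actual TarDAL API (here the "fusion" is just a weighted blend so the script runs end to end):

```python
import cv2
import torch

# Placeholder for the TarDAL inference call: the fused frame is simply a
# weighted blend of the two inputs so the script runs end to end. In the
# real pipeline this would be replaced by the TarDAL model's forward pass.
def fuse_frames(ir_frame, rgb_frame):
    ir_bgr = cv2.cvtColor(ir_frame, cv2.COLOR_GRAY2BGR)
    ir_bgr = cv2.resize(ir_bgr, (rgb_frame.shape[1], rgb_frame.shape[0]))
    return cv2.addWeighted(ir_bgr, 0.5, rgb_frame, 0.5, 0)

# Stock YOLOv5s from torch.hub; a custom checkpoint trained on fused
# imagery would be loaded the same way.
detector = torch.hub.load('ultralytics/yolov5', 'yolov5s')

ir = cv2.imread('ir.png', cv2.IMREAD_GRAYSCALE)  # infrared frame (placeholder path)
rgb = cv2.imread('rgb.png')                      # visible/RGB frame (placeholder path)

fused = fuse_frames(ir, rgb)
results = detector(fused[..., ::-1])  # BGR -> RGB before passing to YOLOv5
results.print()
```

On Jetson I assume the fusion model itself would also need to be optimized (e.g. exported to TensorRT) so that both stages run efficiently, but I am not sure how to structure that.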
I was also running the NVIDIA DeepStream Python apps; deepstream_test1 runs successfully and detects objects. However, I am not sure how to use a YOLOv5 model with it for detection.
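From what I understand (this may be wrong), the nvinfer element in DeepStream consumes an ONNX model or a TensorRT engine rather than the PyTorch .pt checkpoint, so I was planning to export YOLOv5 first, roughly like this. This is only a bare-bones sketch; the yolov5 repo's own export.py (e.g. `python export.py --weights yolov5s.pt --include onnx`) is probably the more robust way to do it:

```python
import torch

# Load the raw DetectionModel (autoshape=False) so it can be traced for ONNX export.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)
model.eval()

# Dummy NCHW input matching the resolution the DeepStream engine will expect.
dummy = torch.zeros(1, 3, 640, 640)

torch.onnx.export(
    model, dummy, 'yolov5s.onnx',
    opset_version=12,
    input_names=['images'],
    output_names=['output'],
)
```

Is pointing the deepstream_test1 nvinfer config at an ONNX/engine file exported like this (plus a YOLOv5-specific bounding-box parser) the right direction, or is there a recommended way to plug YOLOv5 into the Python apps?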