
pip install yolov5 - How does yolov5.predict() work? #2240

Closed
saskia1001 opened this issue Feb 17, 2021 · 12 comments
Labels
question Further information is requested

Comments

@saskia1001

❔Question

I pip installed yolov5 and want to integrate it into a Python script for detecting my custom objects in a drone feed. But the basic command results = yolov5.predict(image1) on a jpg image did not give me anything back. When I try results.show(), I just get the original image back. For model_path I passed the .pt file of my custom-trained model. Does it work with my own weights and classes? How can I use yolov5.predict()? Is there any further documentation on how to use it?

Additional context

Thanks a lot for the YOLOv5 model. I trained a YOLOv5 model with custom data and it worked perfectly.

@saskia1001 saskia1001 added the question Further information is requested label Feb 17, 2021
@github-actions
Contributor

github-actions bot commented Feb 17, 2021

👋 Hello @saskia1001, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

glenn-jocher commented Feb 17, 2021

@saskia1001 that is a good question! We are not actually the original authors of the new pip package. We were planning on launching a pip package that acted as a wrapper around our PyTorch Hub functionality, but it seems we were too slow and another author did this for us. Our idea was to provide a common (and identical) interface between the pip model and PyTorch Hub model. It appears the pip package acts along these lines but is not an exact equivalent.

From https://pypi.org/project/yolov5/ the package homepage is listed as https://github.com/fcakyon/yolov5-pip. I will raise an issue there to see if they can align the pip model functionality with the torch hub model functionality, and then you should be able to visit the hub tutorial to answer all your questions.

Tutorials
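The hub tutorial mentioned above boils down to a one-line load. A minimal sketch, assuming a local custom-weights file (best.pt is a placeholder path, and load_custom_model is an illustrative wrapper, not part of the repo; the keyword argument follows the hub tutorial's documented usage):

```python
def load_custom_model(weights="best.pt"):
    """Load custom-trained YOLOv5 weights through PyTorch Hub.

    `weights` is a placeholder path to your own .pt file; fetching the
    ultralytics/yolov5 repo requires network access on first run.
    """
    import torch  # imported lazily so the sketch can be read without torch installed
    return torch.hub.load("ultralytics/yolov5", "custom", path=weights)
```

After loading, results = model(img) runs inference with pre- and post-processing (resizing, NMS) handled for you.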

@saskia1001
Author

@glenn-jocher Thanks for the quick response! And for clarifying the authorship. The PyTorch Hub functionality works great for predictions on images with my custom model. But I did not manage to run detection on a video (.mp4) or a video feed. Does this work? I could not figure it out. Looking forward to seeing and working with your pip package.

@glenn-jocher
Member

@saskia1001 you can think of the hub model as a higher-level model than the basic PyTorch model, as it handles pre- and post-processing (resizing, NMS, etc.). It accepts a single image or a batch of images, so if you have a video you must load it with a package like cv2 and pass individual frames to the hub model.

Alternatively, detect.py is a fully managed solution for video inference:
python detect.py --source video.mp4
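The frame-by-frame approach described above can be sketched like this (a sketch only; detect_on_video and its return value are illustrative names, and model is assumed to be a loaded YOLOv5 hub model):

```python
def detect_on_video(model, source):
    """Run a YOLOv5 hub model over each frame of a video file or camera feed.

    `source` is anything cv2.VideoCapture accepts (a file path or camera index).
    Returns one prediction tensor per frame.
    """
    import cv2  # imported lazily; OpenCV is only needed when actually running

    cap = cv2.VideoCapture(source)
    preds = []
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame[..., ::-1])  # OpenCV gives BGR; hub models expect RGB
        preds.append(results.xyxy[0])      # (n, 6) tensor per frame: xyxy, conf, cls
    cap.release()
    return preds
```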

@saskia1001
Author

@glenn-jocher That helps a lot! Thank you!!!

@saskia1001
Author

Hi @glenn-jocher, I tried to implement the PyTorch Hub approach for my drone project. I can pass a numpy.ndarray into the results = model(img, im_size) call; that works fine. results gives back an object of type models.common.Detections. Is there any way to convert this object back into a numpy.ndarray? I need to pass the detection into cv2.imshow(). The results.show() method does not do the job in my script, as I need to loop through a while loop with changing frames in order to get my video stream with detections running. If I pass results.show() into cv2.imshow(), my loop only returns the first frame/img from the video stream. It would really help me to know how to get models.common.Detections back into an np.ndarray. Thanks a lot!

@glenn-jocher
Member

@saskia1001 yes, the Detections object provides a number of values following inference. See the Detections() class for full details:

yolov5/models/common.py

Lines 231 to 246 in d2e754b

class Detections:
    # detections class for YOLOv5 inference results
    def __init__(self, imgs, pred, files, names=None):
        super(Detections, self).__init__()
        d = pred[0].device  # device
        gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs]  # normalizations
        self.imgs = imgs  # list of images as numpy arrays
        self.pred = pred  # list of tensors pred[0] = (xyxy, conf, cls)
        self.names = names  # class names
        self.files = files  # image filenames
        self.xyxy = pred  # xyxy pixels
        self.xywh = [xyxy2xywh(x) for x in pred]  # xywh pixels
        self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)]  # xyxy normalized
        self.xywhn = [x / g for x, g in zip(self.xywh, gn)]  # xywh normalized
        self.n = len(self.pred)
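The gn value in the class is just [w, h, w, h, 1, 1] per image, so the normalized attributes divide pixel coordinates by image size. In plain Python, with lists standing in for tensors purely for illustration (normalize_xyxy is a made-up helper, not part of the repo):

```python
def normalize_xyxy(det, img_w, img_h):
    # det is one detection row: [x1, y1, x2, y2, conf, cls] in pixels.
    gn = [img_w, img_h, img_w, img_h, 1.0, 1.0]  # mirrors gn in Detections
    return [v / g for v, g in zip(det, gn)]

# A box spanning the bottom-right quadrant of a 640x480 image:
normalize_xyxy([320.0, 240.0, 640.0, 480.0, 0.9, 0.0], 640, 480)
# -> [0.5, 0.5, 1.0, 1.0, 0.9, 0.0]
```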

@glenn-jocher
Member

@saskia1001 BTW, to answer your cv2 show question: you can call results.render(), which will overlay detections onto results.imgs, and then pass results.imgs to cv2.imshow()
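Putting render() into a display loop might look like this (a sketch under the same assumptions as above; show_stream and the window name are illustrative, and cv2.cvtColor is used because render() draws on RGB arrays):

```python
def show_stream(model, source=0):
    """Display live YOLOv5 detections with cv2.imshow, frame by frame."""
    import cv2  # imported lazily; OpenCV is only needed when actually running

    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame[..., ::-1])  # BGR -> RGB for the hub model
        results.render()                   # draws boxes in place on results.imgs
        # results.imgs[0] is an RGB numpy array; convert back to BGR for display
        cv2.imshow("detections", cv2.cvtColor(results.imgs[0], cv2.COLOR_RGB2BGR))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```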

@saskia1001
Author

@glenn-jocher results.render() did the job. I had an older version of common.py where render() was not included; it was the version Roboflow uses to apply the model (Colab Notebook), which I used for my first training runs and had stored locally. Thank you so much for your support! This model is awesome. Great work!

@glenn-jocher
Member

@saskia1001 great, glad everything works. If you have any other issues or suggestions, let us know!

@jahanvikotwal

local variable 'results' referenced before assignment

@glenn-jocher
Member

@jahanvikotwal hi,

It seems that the issue you're facing is related to a local variable 'results' being referenced before assignment. This error typically occurs when you're attempting to use a variable before it has been defined or assigned a value.

To resolve this issue, make sure that you have properly initialized and assigned a value to the 'results' variable before attempting to reference it. Double-check your code for any potential syntax or logical errors that might be causing this problem.

If you're still experiencing difficulties, please share the relevant code snippet or provide more information about the specific context in which this error occurs. This will help us provide more targeted assistance.

Thank you.
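The usual shape of that error is an assignment hidden inside a branch that may not run. A minimal reproduction and fix (hypothetical functions, not from this repo):

```python
# Buggy: `results` is only assigned when the branch runs.
def detect_buggy(frame, model=None):
    if model is not None:
        results = model(frame)
    return results  # UnboundLocalError when model is None

# Fixed: give the variable a value before the branch.
def detect_fixed(frame, model=None):
    results = None
    if model is not None:
        results = model(frame)
    return results
```

Initializing results before any conditional (or restructuring so every code path assigns it) removes the error.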
