
Question about image sources for training and image quality of camera #1

hillbicks opened this issue May 16, 2022 · 4 comments

@hillbicks

Hey man,

hope you don't mind me opening an issue here. I already opened one a couple of months ago over at @OlafenwaMoses's repo, but never got an answer.

I was able to set it up the same way you did, using Node-RED, and an example image works just fine. None of the images coming from my front camera (1920x1080) are recognized, though, so I'm wondering what your experience has been.

Also, how did you collect the USPS images for training, and how many did you use?

Thanks in advance!

@sstratoti
Owner

Hi! Sorry about that. Life got in the way.

I collected roughly 300 images with a Bing image downloader. There used to be one for Google Images, but they got wise to it and blocked it. I used 289 images to train and 25 for testing.
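In case it's useful, a split like that can be scripted; this is just a sketch, the folder names are placeholders, and DeepStack's trainer also expects the annotation files alongside the images:

```ts
// Sketch: split a downloaded image folder into train/test sets.
// "usps_images", "dataset/train", and "dataset/test" are placeholder paths.
import * as fs from "fs";
import * as path from "path";

const srcDir = "usps_images";
const trainDir = "dataset/train";
const testDir = "dataset/test";
const testCount = 25; // matches the 289/25 split above

fs.mkdirSync(trainDir, { recursive: true });
fs.mkdirSync(testDir, { recursive: true });

const files = fs.readdirSync(srcDir).filter((f) => /\.(jpe?g|png)$/i.test(f));

// Fisher-Yates shuffle so the test set isn't biased by download order.
for (let i = files.length - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [files[i], files[j]] = [files[j], files[i]];
}

files.forEach((f, i) => {
  const dest = i < testCount ? testDir : trainDir;
  fs.copyFileSync(path.join(srcDir, f), path.join(dest, f));
});

console.log(`train: ${files.length - testCount}, test: ${testCount}`);
```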

For the image processing, I have Blue Iris send a snapshot to an MQTT topic. Node-RED subscribes to that topic and uses the node-red-contrib-image-tools palette (specifically the jimp-image node) to convert the image into a buffer, because Blue Iris publishes it as base64 over MQTT. Then I use the node-red-contrib-deepstack palette (specifically the deepstack-custom-model node) to send the buffered image to the DeepStack endpoint.
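Outside of Node-RED, the flow boils down to something like this (a rough sketch; the broker URL, topic, DeepStack host, and model name are placeholders for whatever your setup uses):

```ts
// Sketch: the same pipeline as a standalone script, assuming the
// "mqtt", "form-data", and "node-fetch" npm packages. The broker URL,
// topic, DeepStack host, and model name are placeholders.
import * as mqtt from "mqtt";
import FormData from "form-data";
import fetch from "node-fetch";

const TOPIC = "blueiris/frontdoor/snapshot"; // placeholder topic
const DEEPSTACK = "http://deepstack.local:5000"; // placeholder host
const MODEL = "usps"; // placeholder custom model name

const client = mqtt.connect("mqtt://broker.local"); // placeholder broker

client.on("connect", () => client.subscribe(TOPIC));

client.on("message", async (_topic, payload) => {
  // Blue Iris publishes the snapshot as base64 text, so decode it first.
  const image = Buffer.from(payload.toString(), "base64");

  // DeepStack serves custom models at /v1/vision/custom/<model-name>.
  const form = new FormData();
  form.append("image", image, { filename: "snapshot.jpg" });
  const res = await fetch(`${DEEPSTACK}/v1/vision/custom/${MODEL}`, {
    method: "POST",
    body: form,
  });

  const result = (await res.json()) as {
    success: boolean;
    predictions: { label: string; confidence: number }[];
  };
  for (const p of result.predictions ?? []) {
    console.log(`${p.label}: ${(p.confidence * 100).toFixed(1)}%`);
  }
});
```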

Does this help to answer your question?

@hillbicks
Author

hillbicks commented Jun 4, 2022

Hey, no worries. Glad you answered.

Thanks for the tip about the Bing downloader. Since I don't know which images were used for the original model, I'll build my own dataset and try it that way.

I'm still not sure whether 1920x1080 is simply not enough resolution when the delivery trucks aren't directly in front of the camera.

Do you know the resolution of your snapshots by chance?

I'll build my model and report back here; maybe it'll help someone else.

First EDIT: I just tried again with a snapshot of a DHL truck from today. The full snapshot was not detected as DHL, but with the image cropped to the truck, detection was successful. So maybe the solution is simply to feed a cropped image of the vehicle to the detector instead.
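For reference, the crop test amounted to this; jimp is the same library the jimp-image node wraps, and the file names and crop rectangle below are just placeholders for where the truck sat in my frame:

```ts
// Sketch: crop a saved snapshot down to the truck before detection.
// File names and the crop rectangle are placeholders.
import Jimp from "jimp";

async function cropSnapshot(): Promise<void> {
  const image = await Jimp.read("snapshot_full.jpg");
  image.crop(600, 300, 640, 480); // x, y, width, height of the truck region
  await image.writeAsync("snapshot_cropped.jpg"); // feed this file to DeepStack
}

cropSnapshot().catch(console.error);
```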

@sstratoti
Owner

Higher resolution images shouldn't affect it, at least I don't think so.

If you message me your Gmail address, I can share the images folder and the tags I used over Drive. Most were lower resolution.

@hillbicks
Author

I really appreciate that offer.

But I think I found a way yesterday. Like I said, the full-resolution picture wasn't working for me; the logo was not detected. But as soon as I cropped out the DHL truck and fed that into the DeepStack model, it worked, even though the resolution/quality of the cropped picture is much worse.

It seems like the logo needs to occupy a certain share of the frame; the absolute resolution matters less. Today is Sunday and tomorrow is a national holiday, so on Tuesday I'll know whether this works with my setup, but I'm pretty confident.
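If that holds up, one way to automate the crop might be to chain DeepStack's built-in object detection in front of the custom model: find the vehicle box first, crop to it, then run the logo model on the crop. A sketch, with the host and the model name ("usps") as placeholders:

```ts
// Sketch: two-stage detection. Stage 1 uses DeepStack's built-in
// /v1/vision/detection to find vehicles; stage 2 sends each cropped
// vehicle to the custom logo model. Host and model name are placeholders.
import Jimp from "jimp";
import FormData from "form-data";
import fetch from "node-fetch";

const DEEPSTACK = "http://deepstack.local:5000"; // placeholder host

interface Box {
  label: string;
  confidence: number;
  x_min: number;
  y_min: number;
  x_max: number;
  y_max: number;
}

async function detect(endpoint: string, image: Buffer): Promise<Box[]> {
  const form = new FormData();
  form.append("image", image, { filename: "image.jpg" });
  const res = await fetch(`${DEEPSTACK}${endpoint}`, { method: "POST", body: form });
  const body = (await res.json()) as { predictions?: Box[] };
  return body.predictions ?? [];
}

async function detectLogoOnVehicles(snapshot: Buffer): Promise<void> {
  // Stage 1: the built-in model finds vehicles anywhere in the frame.
  const vehicles = (await detect("/v1/vision/detection", snapshot))
    .filter((p) => p.label === "truck" || p.label === "car");

  for (const v of vehicles) {
    // Stage 2: crop to the vehicle so the logo fills more of the image.
    const img = await Jimp.read(snapshot);
    img.crop(v.x_min, v.y_min, v.x_max - v.x_min, v.y_max - v.y_min);
    const crop = await img.getBufferAsync(Jimp.MIME_JPEG);

    for (const hit of await detect("/v1/vision/custom/usps", crop)) {
      console.log(`${hit.label}: ${(hit.confidence * 100).toFixed(0)}%`);
    }
  }
}
```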

btw: you mentioned Blue Iris, have you heard of Frigate? It integrates really nicely with Home Assistant (sensors for zones and the different objects that were detected), and there are Lovelace cards to show the live view and captured events. A lot of people in the Home Assistant community have migrated from Blue Iris to Frigate, so it might be worth checking out for you :)

And thanks again!
