Yolo classifier + AI - stick ( movidius, laceli ) #1505

Open
MrJBSwe opened this issue May 1, 2018 · 121 comments

Comments

@MrJBSwe
Copy link

MrJBSwe commented May 1, 2018

I think combining motion detection with classification like YOLO, and using AI sticks as the workhorse, could fit very nicely into the core purpose of motionEyeOS.

The trend is that a USB dongle drawing less than 1 watt can do the classification job, for example Movidius or Laceli.

@jasaw
Copy link
Collaborator

jasaw commented May 2, 2018

This is very interesting. I've been looking for ways to add some sort of AI to enhance motion detection. I found someone using the Pi GPU.
Do you know roughly what kind of performance we can expect running Yolo classifier on a RPi 3?
Also, do you know whether there's a pre-trained model that we can use, or whether there's readily available training data? The performance of the AI depends heavily on the training data, and that seems like the most difficult part to get right if we were to train it ourselves.

@MrJBSwe
Copy link
Author

MrJBSwe commented May 2, 2018

Pi GPU
Based on JeVois, my guess is the Pi GPU & Tiny YOLO will run at 0.5 - 2 fps.

pre-trained
based on coco
wget https://pjreddie.com/media/files/yolov2-tiny-voc.weights

Movement => Classify
Tiny YOLO v2 uses 7.1 GFLOPs, which makes it a good starting point for the Pi GPU & Movidius. It can also be run on the CPU. It seems quite "easy" to train skynet for squirrels, fish, etc.

examples movidius

Continuous classification
With Laceli, I think movement detection might be obsolete, since YOLO is very robust to light changes and background noise. YOLO v3 uses 140 GFLOPs and my guess is it should run at >10 fps on Laceli!?

@wb666greene
Copy link

wb666greene commented May 20, 2018

The sample code from here:
https://www.pyimagesearch.com/2018/02/19/real-time-object-detection-on-the-raspberry-pi-with-the-movidius-ncs/

uses MobileNet SSD (I believe from here: https://github.com/chuanqi305/MobileNet-SSD) and a PiCamera module to do about 5 fps on a Pi 3 with a Movidius NCS.

I've modified the PyImageSearch sample code to get images via MQTT instead of the PiCamera video stream and then run object detection on them. If a "person" is detected, I write out the detection image, which will ultimately get pushed to my cell phone in a way yet to be determined.
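
For anyone wanting to experiment with a similar pipeline, here is a minimal sketch of the idea. Note the assumptions: it uses OpenCV's dnn module as a CPU stand-in for the Movidius NCS call used above, and the broker address, topic name, model files and output path are made-up placeholders.

    # Hypothetical sketch: receive JPEGs over MQTT and run MobileNet-SSD person detection.
    # Broker, topic, model paths and output directory are placeholders.
    import cv2
    import numpy as np
    import paho.mqtt.client as mqtt

    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
    PERSON_IDX = 15  # "person" class in the 21-class VOC label set used by this model

    def on_message(client, userdata, msg):
        # msg.payload is assumed to be a raw JPEG pushed by the camera/DVR side
        image = cv2.imdecode(np.frombuffer(msg.payload, dtype=np.uint8), cv2.IMREAD_COLOR)
        blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()  # shape [1, 1, N, 7]
        for i in range(detections.shape[2]):
            if detections[0, 0, i, 2] > 0.6 and int(detections[0, 0, i, 1]) == PERSON_IDX:
                cv2.imwrite("detected/person_%d.jpg" % userdata["count"], image)
                userdata["count"] += 1
                break

    client = mqtt.Client(userdata={"count": 0})
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("camera/snapshots")
    client.loop_forever()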

I've also written a simple Node-RED flow running on the Pi 3 with the NCS that presents an FTP server and sends the image files to the NCS detection script. The Pi 3 also runs the MQTT broker and Node-RED.

I then configured motionEyeOS on a Pi Zero W to FTP its motion images to the Pi 3 Node-RED FTP server.

It's working great; it's been running all afternoon. Since virtually all security DVRs and netcams can FTP their detected images, I think this system has great generality and could produce a system worthy of a high-priority push notification, since the false positive rate will be near zero.

I plan to put it up on github soon, but it will be my first github project attempt so it might take me longer than I'd like.

Running the "Fast Netcam" or v4l2 MJPEG streams into the neural network instead of "snapshots" might be even better, but the FLIR Lorex security DVR I have uses proprietary protocols so ftp'd snapshots is what I used. There is a lot of ugly code to support the lameness of my DVR so after I got it working (been running for three days now) I simplified things for this simple test system I plan to share as a starting point project to integrate AI with video motion detection to greatly lower the false alarm rate.

To suggest an enhancement to motionEye, I'd like to see an option for it to push JPEGs directly to an MQTT broker instead of FTP.
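
As an illustration of that suggested enhancement, here is a minimal sketch of publishing a JPEG straight to an MQTT broker with paho-mqtt; the broker address, topic and file path are made-up placeholders.

    # Hypothetical sketch: publish a motion-triggered JPEG to an MQTT broker instead of FTP.
    import paho.mqtt.client as mqtt

    def publish_snapshot(jpg_path, broker="192.168.1.10", topic="camera/snapshots"):
        with open(jpg_path, "rb") as f:
            payload = f.read()
        client = mqtt.Client()
        client.connect(broker, 1883)
        client.publish(topic, payload, qos=1)  # raw JPEG bytes as the message payload
        client.disconnect()

    publish_snapshot("/var/lib/motioneye/Camera1/latest.jpg")  # placeholder path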

I don't think video motion detection like motionEye is obsolete; it makes a great front end to reduce the load on the network and AI subsystem, letting more cameras be handled with less hardware.

Edit: It's been running for over 8 hours now. I have the Pi 3 also configured as a WiFi AP with motionEyeOS connected to it, so I have a stand-alone system with only two parts. There have been 869 frames detected as "motion frames"; only 352 had a "person" detected by the AI. Looking at a slide show of the detected images I saw no false positives; over 50% of the frames would have been false alarms and very annoying if emailed. I was testing the system, so the number of real events was a lot higher than would normally be the case. So far, complete immunity from shadows, reflections, etc.

I think this has great potential!

@debsahu
Copy link

debsahu commented May 28, 2018

Here's my attempt at something similar:

https://github.com/debsahu/PiCamMovidius

[image: PiMovidiusCamera]

@wb666greene
Copy link

wb666greene commented May 29, 2018 via email

@debsahu
Copy link

debsahu commented May 29, 2018

@wb666greene using OpenCV 3 causes fps to drop below 1 (~0.88). Using yolov2 with the native PiCamera library is a struggle; I tried. Getting started with GitHub is not straightforward, it was a struggle initially. My suggestion is to use GitHub Desktop.

@wb666greene
Copy link

Thanks, I'll look into GitHub Desktop. But I've put the code up anyway, as my frustration with GitHub is at the point where I'm just giving up for now. Getting my system sending notifications is my next order of business.

Here is a link to my github crude as it is:
https://github.com/wb666greene/SecurityDVR_AI_addon/blob/master/README.md

I'd like to try other models, but if they don't run on the Movidius the frame rate is not likely to be high enough for my use.

DarkNet YOLO was really impressive, but took ~18 seconds to run an image on my i7 desktop without CUDA.

@wb666greene
Copy link

Thanks for the extra background info. I'll add some of these links to my github; I think they are very helpful and more enlightening than anything I could write up.

My main point is that I've made an "add-on" for an existing video security DVR instead of making a security camera with AI. I expect those to flood the market soon, and it'll be a good thing, but until then I wanted AI in my existing system without a lot of expense or rewiring.

MotioneyeOS is a perfectly good way to get a simple video security DVR going, and in fact it has far superior video motion detection compared to my FLIR/Lorex system, but you are on your own for "weatherproofing" and adding IR illumination to your MotioneyeOS setup -- not a small job!

I used it so I could give a simple example instead of over-complicating things with all the ugly code needed to deal with my FLIR/Lorex system's lameness.

@MrJBSwe
Copy link
Author

MrJBSwe commented Jul 30, 2018

Living on the edge
https://aiyprojects.withgoogle.com/edge-tpu

@MrJBSwe
Copy link
Author

MrJBSwe commented Aug 13, 2018

@jasaw
Copy link
Collaborator

jasaw commented Oct 18, 2018

I just had a play with Movidius on a RPi 3B+ recently, with version 2 of the NCSDK (still in beta). Here's what I've found:

  • NCSDK v2 works after fixing a few installation scripts. It pulls in a lot of dependencies and wants very specific versions of libraries. It will most likely break other existing programs on your host machine by messing with the libraries, so it is recommended to install it on a sacrificial machine.
  • It supports several neural net models, but the models are pulled in from external repositories, so things are left in a broken state when those external repositories change.
  • I tried various neural net models, and only found one that reliably detects a person.
    • YOLO: unable to test, does not seem to be supported.
    • TinyYOLO: a fast neural net model, but very low accuracy, completely useless for our application.
    • GoogleNet: I can't find pretrained weights that are geared towards detecting people.
    • SSD MobileNet with default pretrained weights: does not seem to detect people.
    • SSD MobileNet with chuanqi's pretrained weights: does a good job at detecting people; dogs and cats are OK; struggles with the rest. Still very good for our application. https://github.com/chuanqi305/MobileNet-SSD
  • Frame rates: I'm getting around 6.5 fps, far from real-time.
  • Resolution: Before pushing an image into the Movidius NCS, the image needs to be scaled down to the specific resolution that was used to train the neural net model, which is 300x300 for chuanqi's model.

With all that said, Movidius can still be used for our application as a 2nd-pass system that combs through all the recorded videos to detect people in non-real time. This may be useful for various use cases, for example:

  1. Send notification and/or trigger alarm when a person is detected. No need to manually look through recorded videos anymore.
  2. Remove videos and/or images that do not contain a person. This reduces storage requirements.

On second thought, Movidius can still be used for real-time person detection: integrate it into the motion software and feed every 3rd or 4th frame into the Movidius. The NCSDKv2 C API is documented here for anyone who wishes to try: https://movidius.github.io/ncsdk/ncapi/ncapi2/c_api/readme.html
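
As a rough illustration of the frame-skipping idea (not the actual motion integration, which would be C code against the NCSDK C API), here is a Python sketch; the skip interval, preprocessing constants and the infer() stub are placeholders.

    # Sketch: run the pipeline at full rate but only send every 4th frame for inference,
    # scaled to the 300x300 input that chuanqi's MobileNet-SSD was trained at.
    import cv2

    INFER_EVERY_N = 4          # the other frames are left to the classic motion detection
    INPUT_DIMS = (300, 300)

    def infer(tensor):
        """Placeholder for pushing the preprocessed frame into the Movidius stick."""
        return []

    cap = cv2.VideoCapture(0)  # any camera or stream source
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_no += 1
        if frame_no % INFER_EVERY_N == 0:
            small = cv2.resize(frame, INPUT_DIMS)
            small = (small.astype("float32") - 127.5) * 0.007843  # mean/scale used by this model
            detections = infer(small)
            # ...merge detections back into the motion pipeline here...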

@wb666greene
Copy link

wb666greene commented Oct 18, 2018

Thanks for posting this. I was about to try and install NCSDK v2 on an old machine to give Tiny YOLO a try; you saved me a lot of wasted time! I was hoping the increased resolution of the TinyYolo model over MobileNetSSD would help. Any possibility you could upload or send me your compiled graph? I'd still like to play with the model, but you've removed my motivation for getting set up to compile it with the SDK.
Edit:
I was able to compile the TinyYolo graph with my V1 SDK, so I can play with it a bit using the V1 API. Have you compared V1 vs V2 results on any of the models?

Have you tried any of the multi-threaded or multi-processing examples? Running on my i7 desktop I've found using USB3 only improved the frame rate by less than half a frame/sec over USB2.

I've been running Chuanqi's MobileNetSSD since July on a Pi3B, handling D1 "snapshots" from 9 cameras with overlapping fields of view from my pre-existing Lorex security system. I use the activation of PIR motion sensors to filter (or "gate") the images sent to the AI to reduce the load. It works great; I get the snapshots via FTP and filter what goes to the AI. My only real complaint is that the latency to detection can be as high as 3 or 4 seconds, although usually it's about 2 seconds. Other than the latency it seems real-time enough for me -- effectively the same as motionEye 1 frame/second snapshots.

Your use case (1) was my goal. I never looked at the video anyways, as the Lorex system's "scrubbing" is so poor. With the Emailed AI snapshots I now have a timestamp to use should I ever need to go back and look at the 24/7 video record (what the Lorex is really good at, but everything built on top of it is just plain pitiful).

I have three system modes: Idle, Audio, and Notify. Idle means we are home and going in and out and don't want to be nagged by the AI. Audio means we are home but want audio notification of a person in the monitored areas -- fantastic for mail and package deliveries. Notify sends email images to our cell phones. The key is that the Audio AI mode has never woken us up in the middle of the night with a false alarm, and the only emails have all been valid: mailman, political canvasser, package delivery, etc.

Much as I like Motioneye and MotioneyeOS, I'm finding the PiCamera modules are not really suitable for 24/7 use, as after a period of several days to a couple of weeks the module "crashes" and only returns a static image from before the crash. Everything else seems to work fine (SSH, node-red dashboard, cron, etc.) but the AI is effectively blind until a reboot. I have a software-only MobileNetSSD AI running on a Pi2B and Pi3B with Pi NoIR camera modules; while it only gets one AI frame about every 2 seconds, it can still be surprisingly useful for monitoring key entry areas, but the "soft" camera failures are a serious issue. I've never run MotioneyeOS 24/7 long enough to know if it suffers the issue or not. I should probably set up my PiZeroW and try.

With this experience, I'm starting to swap out some Lorex cameras with 720p Onvif "netcams" (USAVision, ~$20 on Amazon). Since I don't really care about the video, it's a step up in snapshot resolution (1280x720), and one Pi3B+ and Movidius can handle about four cameras with ~1 second worst-case detection latency.

In 24/7 testing I am getting Movidius "TIMEOUT" errors every three or four days. It seems I can recover with a try block around the NCS API function calls, having the except block deallocate the graph and close the device, followed by a repeated device scan, open, and graph load. That's a tolerable amount of blind time once every few days. I plan to rewrite for the V2 API to see if it fixes the issue; the V2 multistick example doesn't seem to have any errors yet in over a week of running.
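
For reference, a hedged sketch of that recovery strategy using the NCSDK v1 Python API; the helper names and error-handling granularity are illustrative, not the actual code described above.

    # Sketch: wrap the NCS calls in try/except; on a TIMEOUT, deallocate the graph,
    # close the device, then re-scan, re-open and re-load the graph.
    from mvnc import mvncapi as mvnc

    def open_ncs(graph_buffer):
        devices = mvnc.EnumerateDevices()
        device = mvnc.Device(devices[0])
        device.OpenDevice()
        graph = device.AllocateGraph(graph_buffer)
        return device, graph

    def detect(device, graph, graph_buffer, tensor):
        try:
            graph.LoadTensor(tensor, None)
            output, _ = graph.GetResult()
            return output, device, graph
        except Exception:                            # e.g. the "TIMEOUT" error
            try:
                graph.DeallocateGraph()
                device.CloseDevice()
            except Exception:
                pass                                 # device may already be gone
            device, graph = open_ncs(graph_buffer)   # rescan, reopen, reload
            return None, device, graph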

@wb666greene
Copy link

@jasaw
I think I can confirm your comments about TinyYolo. I got the sample code to run and modified it to input some D1 security camera images; detection performance is terrible. It misses full-frontal, full-length people in the center of the frame, detects shadows on the sides as "people", and makes all manner of wrong calls (two chairs as a bicycle, etc.).

Looks like MobileNetSSD is the only practical AI for security camera use at present on resource constrained systems.

While MobileNetSSD also makes a lot of wrong calls, if you only care about detecting "people", which seems fine for security camera systems, it performs very well in my experience.

@MrJBSwe
Copy link
Author

MrJBSwe commented Oct 21, 2018

@wb666greene
Copy link

@MrJBSwe
I've seen it, but it looks like about $500 for the AI board and the computer to plug it into (unless you already have one with a suitable interface, which I don't).

Also, so far the development environment looks to be C/C++ only. If Python bindings become available for it I'll get a whole lot more interested. Not that I'm any great Python guru, but I find it really hard to beat for "rapid prototyping".

At this point I think the AI network is more of a limitation for security system purposes than the hardware to run the network. MobileNetSSD on CPU alone can get 7+ fps on my i7 desktop with the OpenCV 3.4.2 dnn module and simple non-threaded Python code.

@jasaw
Copy link
Collaborator

jasaw commented Oct 25, 2018

I have implemented Movidius support into motion and used ChuanQi's MobileNetSSD person detector to work alongside the classic motion detection algorithm. If "Show Frame Changes" in motionEye is enabled, it will also draw a red box around the detected person and the confidence percentage at the top right corner.

I have only tested on a Raspberry Pi with Pi camera with single camera stream. If you have multiple camera streams, the code expects multiple Movidius NC sticks, one stick per mvnc-enabled camera stream. Camera streams with mvnc disabled will use the classic motion detection algorithm.

Code is here:
https://github.com/jasaw/motion/tree/movidius

How to use:

  1. Install Movidius NCSDKv2. Follow the installation manual. Note that the NCSDKv2 may screw up your existing libraries, so I recommend trying this on a sacrificial machine. Alternatively, you could try installing just the API by running sudo make api (I have not tested this one).
  2. Git clone the movidius branch into any directory you like.
    • git clone -b movidius https://github.com/jasaw/motion.git
  3. Go into the directory and run:
    • autoreconf -fiv
    • ./configure
    • make
    • sudo make install
  4. Download the MobileNet SSD graph file or compile your own graph file by following the instructions here.
  5. Add MVNC related configuration items to thread-1.conf file.
    • mvnc_enable on : This will bypass the original motion detection algorithm and use MVNC instead.
    • mvnc_graph_path /home/pi/MobileNetSSD.graph : Path to MobileNetSSD graph. Other neural net models are not supported.
    • mvnc_classification person,cat,dog,car : A comma-separated list of object classes to detect.
    • mvnc_threshold 75 : This is the confidence threshold in percentage, which takes a range from 0 to 100 as an integer. A detected person is only considered valid if the neural net confidence level is above this threshold. 75 seems like a good starting point.

Note: There seems to be some issue getting the motionEye front-end to work reliably with this movidius motion. Quite often motionEye is not able to get the mjpg stream from motion, but accessing the stream directly from a web browser via port 8081 works fine. Restarting motionEye multiple times seems to work around this problem for me. Maybe someone can help me look into it?

@wb666greene
Copy link

@jasaw
This is very nice work. Can I ask what kind of frame rate you are getting? If you are getting significantly better fps than I am, I'd be motivated to re-write in C/C++.

Using the V1 ncsapi and ChuanQi's MobileNetSSD on a Pi3B I'm getting about 5.6 fps with the Pi camera module (1280x720), using simple Python code and OpenCV 3.4.2 for image drawing, boxes, and labels (the Python PiCamera library I use creates a capture thread).

With simple threaded Python code I'm also getting about 5.7 fps from a 1280x720 Onvif netcam (the ~$20 one I mentioned in an earlier reply). This same code and camera running on my i7 desktop (heavily loaded) is getting about 8 fps. On a lightly loaded AMD quad core it's getting about 9 fps.

Have you seen any performance improvements of V1 vs. V2 of the ncsapi?

I now have one Pi3B set up with the V2 ncsapi and have run some of the examples (using a USB camera at 640x480). I was most interested in the multi-stick examples, but I've found that the two Python examples from the app zoo that I've tried are in pretty bad shape -- not exiting and cleaning up threads properly. I pretty much duplicate their 3-stick results, but I don't think they are measuring the frame rate correctly. The frame rate seems camera-limited, as dropping the light level drops the frame rate, and their detection overlays show incredible "lag".

@jasaw
Copy link
Collaborator

jasaw commented Oct 25, 2018

@wb666greene I don't know exactly how many frames I'm getting from the Movidius stick. I'm not even following the threaded example. I think it doesn't matter anyway as long as it's running at roughly 5 fps. With my implementation, everything still runs at whatever frame rate you set, say 30 fps, but inference is only done at 5 fps. A person usually doesn't move in and out of camera view within 200ms (5fps), so it's pretty safe to assume that we'll at least get a few frames of the person, which is more than enough for inference.

I'm going to refactor my code so that I can merge it into upstream motion, and have multi-stick support as well.

@jasaw
Copy link
Collaborator

jasaw commented Oct 26, 2018

I have implemented proper MVNC support into motion software. See my earlier post for usage instructions: #1505 (comment)

@jasaw
Copy link
Collaborator

jasaw commented Nov 5, 2018

@wb666greene I've finally measured the frame rate from my implementation.
Currently, my code is starving the NC stick, only feeding it one frame when its FIFO is empty. This gives me 5.5 fps throughput, minimal heat generated from the device. Been running this setup for more than 1 week, no issue at all.
I've just tested with maintaining at least one frame in the FIFO to ensure no starvation, and managed to get 11.0 fps throughput. I've read that the hardware may overheat when pushed hard continuously, but I have not verified the thermal issue yet. There's thermal throttling built into the hardware, so it would be good to see what happens when it's thermally throttled.
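
For anyone curious what "keeping the FIFO fed" might look like, here is a rough Python sketch against the NCSDK v2 API (the actual implementation here is C inside motion); the graph file name and the get_frame() stub are placeholders.

    # Sketch: prime the input FIFO with two frames, then queue one new frame for every
    # result read, so the stick always has at least one frame waiting.
    import numpy
    from mvnc import mvncapi

    def get_frame():
        # Placeholder: return the next preprocessed 300x300 input tensor.
        return numpy.zeros((300, 300, 3), dtype=numpy.float32)

    device = mvncapi.Device(mvncapi.enumerate_devices()[0])
    device.open()
    with open("MobileNetSSD.graph", "rb") as f:
        graph_buffer = f.read()
    graph = mvncapi.Graph("ssd")
    fifo_in, fifo_out = graph.allocate_with_fifos(device, graph_buffer)

    graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, get_frame(), None)
    graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, get_frame(), None)
    while True:
        output, _ = fifo_out.read_elem()      # blocks until a result is ready
        graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, get_frame(), None)
        # ...process output...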

@jasaw
Copy link
Collaborator

jasaw commented Nov 5, 2018

I did some temperature testing.

I ran a short test pushing 11 fps and managed to get the NC stick to thermal throttle within 10 minutes, at ambient temperature of 24 degrees Celsius. The stick starts to throttle when it reaches 70 degrees Celsius, and frame rate dropped to 8 fps. I believe this is just the first level throttling (there are 2 stages).

According to Intel's documentation, these are the throttle states:

0: No limit reached.
1: Lower guard temperature threshold of chip sensor reached; short throttling time is in action between inferences to protect the device.
2: Upper guard temperature of chip sensor reached; long throttling time is in action between inferences to protect the device.

The stick temperature seems to plateau at 55 degrees Celsius when pushing 5.5 fps, again ambient temperature of 24 degrees Celsius.

@MrJBSwe
Copy link
Author

MrJBSwe commented Nov 6, 2018

I have recently tried the Nvidia Xavier => I get about 5 fps with yolov2
https://devtalk.nvidia.com/default/topic/1042534/jetson-agx-xavier/yolo/
(I have also tried its different power modes and my feeling is that at 10 W the GPU can't offer any more; the rest, 10-30 W, just puts power into the CPU cores.)

Since it is quite expensive, I'm still putting my hope in the direction of AI sticks like Movidius X.

RK3399Pro is an interesting addition (but I prefer to buy the AI stick separately, with a mature API ;-)
https://www.indiegogo.com/projects/khadas-edge-rk3399pro-hackable-expandable-sbc#/

@wb666greene
Copy link

@jasaw
Interesting results on the thermal test; I'm not seeing much fps difference between short-duration tests (~10-30 seconds) and long test runs (overnight or longer).

I have given up on the v2 SDK for now and am sticking with the V1 SDK. I made some code variations to see what frame rates I can get with the same Python code (auto-configuring for Python 3.5 vs 2.7) on three different systems, comparing threading and multiprocessing to the baseline single-main-loop code, which gave 3.2 fps for the Onvif cameras and 5.3 fps for a USB camera with OpenCV capture. The Onvif cameras are 1280x720 and the USB camera was also set to 1280x720.

These tests suggested that using three Python threads (one to round-robin sample the Onvif cameras, one to process the AI on the NCS, and the main thread to do everything else: MQTT for state and result reporting, saving images with detections, displaying the live images, etc.) would be the way to go.
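
A bare-bones sketch of that three-thread layout, with placeholder camera URLs, queue sizes, and an empty stand-in for the NCS inference call:

    import queue, threading
    import cv2

    frame_q = queue.Queue(maxsize=4)      # camera thread -> AI thread
    result_q = queue.Queue()              # AI thread -> main thread

    def camera_thread(urls):
        caps = [cv2.VideoCapture(u) for u in urls]
        while True:
            for cam, cap in enumerate(caps):          # round-robin sampling
                ok, frame = cap.read()
                if ok:
                    frame_q.put((cam, frame))

    def ai_thread():
        while True:
            cam, frame = frame_q.get()
            detections = []               # placeholder for the NCS inference call
            result_q.put((cam, frame, detections))

    threading.Thread(target=camera_thread, args=(["rtsp://cam1/stream", "rtsp://cam2/stream"],),
                     daemon=True).start()
    threading.Thread(target=ai_thread, daemon=True).start()

    while True:                           # main thread: MQTT, saving images, display, etc.
        cam, frame, detections = result_q.get()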

I got 10.7 fps on my i7 desktop with NCS on USB3 running Python 3.5 on an overnight run.

Running the same code on a Pi3B+ with Python 2.7 I'm getting 7.1 fps, but it's only been running all morning.

My work area is ~27C and I'm not seeing any evidence of thermal throttling (or it hits so fast my test runs haven't been short enough to see it). I don't think the v1 SDK has the temperature reporting features; I haven't checked the "throttling state" as I really only care about the "equilibrium" frame rate I can obtain. For my purposes 6 fps will support 4 cameras. I'm going to try adding a fourth thread to service a second NCS stick.

@MrJBSwe
The Movidius NCS only costs ~$75, come on in, the water is fine :)
Running off a Pi3B+ it's only using maybe 8 W of power, and your total entry fee is <$150 if you have the basics like a spare keyboard, mouse, and monitor for installation and development. My target environment is a stand-alone headless networked "IoT" device talking to the outside world via MQTT, Email, Telegram, etc.

@MrJBSwe
Copy link
Author

MrJBSwe commented Nov 7, 2018

@wb666greene

come on in the water is fine

I have 2 Movidius sticks and like them as an "appetizer" while waiting for Movidius X (or something similar).
I have tested both v1 & v2 of the NCSDK. I'm currently playing around on an Nvidia 1070 to see what's possible when HW is less of a constraint. Yolov3 seems to be a bit of an overkill and drains even a 1070 of its juice.

I want to run yolov2 (or something similar) at >= 4 fps (Tiny YOLO gives too random results). I plan to check out your code wb666greene & jasaw, interesting work!

This is a bit interesting (the price is right ;-)
https://youtu.be/bBuHOHPYY7k?t=69

@jasaw
Copy link
Collaborator

jasaw commented Nov 7, 2018

@wb666greene From what I've read, thermal reporting is only available in the v2 SDK. In my test, I'm pushing 11 fps consistently through the stick until it starts to cycle in and out of the thermal throttle state: 8 fps (thermally throttled) for 1 second, 11 fps (normal) for 3 seconds. If you take the average ((8×1 + 11×3)/4 ≈ 10.25 fps), it's still pushing 10 fps, which may explain the 10.7 fps that you're seeing on your i7 desktop. I imagine at a higher ambient temperature, like 45 degrees Celsius in summer, it's going to stay throttled for much longer, possibly even going into the 2nd stage throttle.

@MrJBSwe
Copy link
Author

MrJBSwe commented Nov 10, 2018

Maybe something...
https://www.96boards.ai/products/rock960/

Similar to the Khadas Edge, RK3399Pro and the upcoming Rock Pi 4 & RockPro64-AI. I guess the trend is RK3399Pro, for multiple reasons (but I still hope for Movidius X and/or Laceli).

BM1880
https://www.sophon.ai/post/36.html

List
https://github.com/basicmi/AI-Chip-List

Movidius X has been released!
https://www.cnx-software.com/2018/11/14/intel-neural-compute-stick-2-myriad-x-vpu/

@jasaw
Copy link
Collaborator

jasaw commented Nov 15, 2018

@MrJBSwe Yes, Neural Compute Stick 2 has finally been released. Let's see if I can get one to play with.

I see a few obstacles in supporting NCS 2.

  • NCS 2 only works with Intel's new toolkit called OpenVINO.
  • OpenVINO does not run on ARM machines, so will not run on Raspberry Pi.
  • OpenVINO only provides C++ & Python API, but Motion software is written in C. I have a feeling that Motion developers are reluctant to switch to C++.
  • Even if ARM is supported, cross compiling OpenVINO API looks like a giant pain.

Excited and disappointed at the same time...

@wb666greene
Copy link

@MrJBSwe
I'd like to try this Intel model with an NCS2. It'll be interesting to see how it performs on the same images I've run through the Google Posenet. But all I can find is a C++ example and no clear description of the data layout of the two "output blobs". Is there a Python example of this? I don't have time to reverse engineer C++ spaghetti.

Thanks for the info on the Lightspeeur devices, the price is nice, but its usefulness is going to depend on the quality and clarity of the sample code.

My experience with 4K images and the 300x300-pixel MobilenetSSD-v2 means I won't hazard a guess about what increases or decreases accuracy.
Some of my early tests with MobilenetSSD-v1 and 4 Mpixel cameras made me think input resolutions higher than 1080p were too much of a good thing, as to get usable person-detection sensitivity I had to crop out sub-images of 1080p size or less.

A hardware failure a few months ago gave me the opportunity to upgrade to a 4K-capable system. I figured a virtual PTZ by cropping the image would be very convenient and minimize how much I had to be on a ladder adjusting things, so I got a 4K UHD camera and mounted it as close as I could to an existing HD camera so as to have nearly the same field of view. Running MobilenetSSD-v2 (which I'd switched to a few months before the failure) totally blew me away in terms of how many more detections I got with the UHD camera.

I use a full-frame detect, then crop (zoom in) and re-detect with a higher threshold to reduce false positives. I suspect I get better results with UHD images because the cropped image used for verification is "better", but I expected it to perform very poorly, as I expected the initial detections would be greatly reduced, which seems not to be the case.
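
A small sketch of that detect / crop / re-detect idea, assuming a generic detect() helper (a stand-in for whatever SSD inference is in use) and illustrative thresholds:

    def detect(image, threshold):
        """Placeholder: return a list of (x1, y1, x2, y2, score) person boxes above threshold."""
        return []

    def verified_person(frame, first_pass=0.60, second_pass=0.75):
        h, w = frame.shape[:2]
        for (x1, y1, x2, y2, score) in detect(frame, first_pass):
            # Crop around the candidate with some margin ("virtual zoom") and re-check
            # at a higher threshold to knock out featureless-blob false positives.
            mx, my = (x2 - x1) // 2, (y2 - y1) // 2
            crop = frame[max(0, y1 - my):min(h, y2 + my), max(0, x1 - mx):min(w, x2 + mx)]
            if detect(crop, second_pass):
                return True
        return False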

@jasaw
Copy link
Collaborator

jasaw commented Dec 3, 2019

I'm running latest version of OpenVino, and you're right that it handles NCS and NCS2 too transparently. There is no way of defining which stick runs which neural net.

Turns out that I was wrong. The new undocumented API supports querying all the inference-capable hardware. I have 2x NCS2 and 2x NCS1 sticks, and the API gives me the name of each device with the USB path in it, e.g. MYRIAD.1.2-ma2480. I can then choose which model to load onto which stick, but for my use case I loaded the same model onto all the sticks. After a lot of effort, I managed to run motion with multiple Myriad sticks on my Raspberry Pi 4. The performance scales quite linearly, which is great. Can't wait to try out NCS3!

For anyone who is interested, sample code that uses the new query API is documented here
Python: https://docs.openvinotoolkit.org/latest/_inference_engine_ie_bridges_python_sample_hello_query_device_README.html
C++: https://docs.openvinotoolkit.org/latest/_inference_engine_samples_hello_query_device_README.html
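
A minimal sketch of using that query API from Python (2019 R3-era inference engine bindings); the model file names here are placeholders:

    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    print(ie.available_devices)     # e.g. ['CPU', 'MYRIAD.1.2-ma2480', 'MYRIAD.1.4-ma2450']

    # Load the same model onto every Myriad stick that was found.
    net = IENetwork(model="MobileNetSSD.xml", weights="MobileNetSSD.bin")
    exec_nets = [ie.load_network(network=net, device_name=dev)
                 for dev in ie.available_devices if dev.startswith("MYRIAD")]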

@wb666greene
Copy link

@jasaw
Thanks for this, maybe I'm dense, but I can't find the python sample code on the linked site.

@jasaw
Copy link
Collaborator

jasaw commented Dec 4, 2019

@wb666greene The python example is in your openvino install directory.
/opt/intel/openvino/deployment_tools/inference_engine/samples/python_samples/hello_query_device/hello_query_device.py

@wb666greene
Copy link

@jasaw
Thanks, I looked there after not finding it on the website link. The problem was I looked on my main development system, which I've not yet updated to the latest OpenVINO. I found it on the system I'd updated to 2019.R3.1.

Sorry for the mea culpa stupidity.

@jasaw
Copy link
Collaborator

jasaw commented Dec 11, 2019

I have just tested openvino + motion + two NCS2 + chuanqi's MobileNetSSD, a very similar setup to my previous 1st-gen NCS with the NCSDK framework. With openvino and two NCS2 sticks attached to an RPi4, I'm only getting 16 fps, but with NCSDK and a single NCS on an RPi3 I got 8 fps on average (after getting into level 1 thermal throttle). This is very disappointing because I expected the NCS2 to be twice as fast as the NCS. This experiment suggests that openvino is less efficient compared to NCSDK. @wb666greene @MrJBSwe Do you guys feel that openvino is less efficient as well?

Next I'm going to replace NCS2 with the 1st gen NCS, see what NCS + openvino combo gives me. This will tell me how much less efficient openvino is.

Turns out that the low fps with two NCS2 was caused by my camera reducing the frame rate: I was testing in a dark environment and the auto-brightness feature reduced the shutter speed, thus reducing the frame rate. I tested again in the morning and managed to get 25 fps with two NCS2. With a single NCS, I get 8.5 fps, similar to NCSDK.

I also tried the VGG_VOC0712Plus_SSD_300x300_ft_iter_160000 SSD model (that Intel uses in their SSD examples), but found that it is a lot less accurate than chuanqi's MobileNetSSD. It falsely detects a dog as a person, and runs very slowly too. I only get 4 fps on my RPi4 with two NCS2 sticks.
I had a quick look at chuanqi's MobileNetSSD github project and the model hasn't been updated for 2 years now. Do you guys know if there's a better-trained MobileNetSSD that I can use?

@jasaw
Copy link
Collaborator

jasaw commented Dec 17, 2019

Here's a new recipe for getting motion to work with the Intel NCS2 on a Raspberry Pi, using the OpenVINO framework and ChuanQi's MobileNetSSD neural net model. I set one system up for my own use and thought it might be useful to someone else too.

Compile and Install ffmpeg (optional)

This ffmpeg step is only needed if you want to use h264_omx hardware accelerated video encoder.

ffmpeg dependencies
sudo apt-get -y install autoconf automake build-essential cmake git-core libass-dev libfreetype6-dev libsdl2-dev libtool libva-dev libvdpau-dev libvorbis-dev libxcb1-dev libxcb-shm0-dev libxcb-xfixes0-dev pkg-config texinfo wget zlib1g-dev libx264-dev
sudo apt-get -y install libavformat-dev libavcodec-dev libavutil-dev libswscale-dev libavdevice-dev
cd ~
wget https://ffmpeg.org/releases/ffmpeg-4.2.tar.bz2
tar xf ffmpeg-4.2.tar.bz2
wget https://raw.githubusercontent.com/ccrisan/motioneyeos/master/package/ffmpeg/disable-rpi-omx-input-zerocopy.patch
wget https://trac.ffmpeg.org/raw-attachment/ticket/7687/0001-avcodec-omx-Fix-handling-of-fragmented-buffers.patch
patch -p1 -d ffmpeg-4.2 < 0001-avcodec-omx-Fix-handling-of-fragmented-buffers.patch
patch -p1 -d ffmpeg-4.2 < disable-rpi-omx-input-zerocopy.patch
cd ffmpeg-4.2
./configure --enable-mmal --enable-omx --enable-omx-rpi --enable-avfilter --enable-optimizations --enable-avdevice --enable-avcodec --enable-avformat --enable-network --enable-swscale-alpha --enable-dct --enable-fft --enable-mdct --enable-rdft --enable-runtime-cpudetect --enable-hwaccels --disable-doc --enable-gpl --enable-nonfree --enable-ffprobe --enable-swscale --enable-pthreads --enable-libx264 --enable-armv6 --enable-vfp --enable-neon --enable-pic --enable-shared --extra-cflags="-I/opt/vc/include/IL -fPIC"
make -j4
sudo make install

How to install and run motion software with MobileNetSSD alternate detection library on Raspbian

  1. Install OpenVINO raspbian release (2019-R3) on your Raspberry Pi. Follow the instructions. Alternatively, you could download the OpenVINO raspbian release here and unpack into /opt/intel/openvino directory.
    • sudo mkdir -p /opt/intel/openvino
    • sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2019.3.334.tgz --strip 1 -C /opt/intel/openvino
    • Replace INSTALLDIR="..." with INSTALLDIR="/opt/intel/openvino". sudo vi /opt/intel/openvino/bin/setupvars.sh
    • Source setupvars.sh file. . /opt/intel/openvino/bin/setupvars.sh
    • Make sure current user is in "users" group. sudo usermod -a -G users "$(whoami)"
    • Install NCS udev rules. sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
    • Plug in your Intel Movidius Neural Compute Stick.
  2. Git clone the lib_openvino_ssd library.
    • git clone https://github.com/jasaw/lib_openvino_ssd
  3. Build the lib_openvino_ssd library.
    • Install dependencies. sudo apt-get install libjpeg libavutil-dev libswscale-dev
    • cd lib_openvino_ssd
    • make -j4
  4. Test the lib_openvino_ssd library.
    • cd openvino_ssd_test
    • make -j4
    • Copy a few jpg files with people in the images.
    • Edit ../libopenvino.conf to make sure the MODEL_BIN and MODEL_XML point to the mobilenet_iter_73000.bin and mobilenet_iter_73000.xml respectively.
    • Test the SSD library by running ./ssd_test -l ../libopenvinossd.so -c ../libopenvino.conf photo_1.jpg photo_2.jpg. Replace photo_1.jpg and photo_2.jpg with your own jpg files. Make sure your NCS stick is connected.
    • If the test is successful, it will output png files with the detection result drawn on the png image.
  5. Git clone my motion alt_detection motion branch.
    • git clone -b alt_detection https://github.com/jasaw/motion.git
  6. Build and install motion.
    • Install dependencies. sudo apt-get install autoconf autopoint automake build-essential pkgconf libtool libzip-dev libjpeg-dev git libwebp-dev gettext libmicrohttpd-dev
    • cd motion
    • wget https://raw.githubusercontent.com/ccrisan/motioneyeos/dev/package/motion/0002-enable-h264-omx-codec.patch
    • patch -p1 -d . < 0002-enable-h264-omx-codec.patch
    • autoreconf -fiv
    • ./configure
    • make -j4
    • sudo make install
  7. Install MotionEye.
  8. Run MotionEye to set up camera configuration, then stop MotionEye.
  9. Specify which alternate detection library to load by adding the below lines in motion.conf file.
    • alt_detection_library /home/pi/lib_openvino_ssd/libopenvinossd.so
    • alt_detection_conf_file /home/pi/lib_openvino_ssd/libopenvino.conf
  10. Specify which camera to use alternate detection by adding the below lines in the camera config file, e.g. camera-1.conf file.
    • alt_detection_enable on
    • alt_detection_threshold 75
  11. Restart MotionEye.

@wb666greene
Copy link

@jasaw

I also tried the VGG_VOC0712Plus_SSD_300x300_ft_iter_160000 SSD model (that Intel uses in their SSD examples), but found that it is a lot less accurate than chuanqi's MobileNetSSD. It false detects a dog as a person, and runs very slow too. I only get 4 fps on my RPi4 with two NCS2 sticks.
I had a quick look at chuanqi's MobileNetSSD github project and the model hasn't been updated for 2 years now. Do you guys know if there's a better trained MobileNetSSD that I can use?

I used the OpenVINO model downloader and model optimizer to convert the MobileNetSSD-v2_coco TensorFlow Lite model that I use with the Coral TPU so I can use it with OpenVINO. I've made my Python code able to run both an NCS2 and a Coral TPU simultaneously. For several long runs (~1.5 days) with 15 rtsp cameras at 3 fps (5 are 4K UHD and 10 are 1080p HD) I get ~44.9 fps total, with ~32.8 fps from the TPU thread and ~12.5 fps from the NCS2 thread. This is on an i7 laptop.

In these tests I got zero false positives from the TPU and "bursts" of bogus detections from the NCS2 SSD-v2_coco thread around 11 AM. I don't know what to conclude from this besides that the TPU seems better in every regard. I get many fewer of these "featureless blob" false positives with MobilenetSSD-v2_coco and the NCS/NCS2 than I did with chuanqi's MobileNetSSD, but the TPU is not totally immune, it just takes "weirder" lighting:

[image: 08_21_11 9_LorexAI_PoolEquipment]

NCS2 false positive that the TPU has so far never detected:
[image: 10_04_48 2_SSDv2ncs_Cam11]

If you don't want to mess with the model optimizer, PM me and I can upload it to my OneDrive and send you a link that you can download the bin and xml files from; it's about 34 MB.

@jasaw
Copy link
Collaborator

jasaw commented Dec 21, 2019

@wb666greene You are right that ChuanQi's MobileNetSSD model accuracy is not satisfactory. I am also getting a lot of false positives under certain lighting conditions. I would like to try the MobileNetSSD-v2_coco tensorflow lite model that you are using, but I can't find a way to PM you. If you could point me to where I can download the model, that would be great. I have Intel's Model Optimizer too, so I can do the model conversion. Is there any specific option that you had to use during Model Optimizer conversion?

I just discovered MobileNetSSDv3. The difference between v3 and v2 is that v3 has more optimizations that improve the accuracy without affecting the speed. I haven't got time to look for a v3 model. I may even consider training my own model, if time permits.

@wb666greene
Copy link

wb666greene commented Dec 21, 2019 via email

@jasaw
Copy link
Collaborator

jasaw commented Dec 22, 2019

@wb666greene sorry for disturbing you again, but can you please share the model before it was converted to xml and bin files?
I tried your precompiled model, but my code is expecting input BGR values from 0 to 255 rather than 0 to 1. I normally just get the model optimizer to scale the input as part of the compilation process.

@ssutaj
Copy link

ssutaj commented Dec 23, 2019

Thanks guys for what you're doing, keep it up. Can't wait for my newly ordered Coral USB accelerator to arrive. I am curious how it will work with an RPi 4 running motionEye + a few Pi Zeros with cameras.

@wb666greene
Copy link

wb666greene commented Dec 24, 2019 via email

@wb666greene
Copy link

wb666greene commented Dec 24, 2019

Thanks guys for what you're doing, keep it up. Can't wait for my newly ordered Coral USB accelerator to arrive. I am curious how it will work with an RPi 4 running motionEye + a few Pi Zeros with cameras.

If you don't already have the Pi cameras, IMHO you should rethink this. You can buy IP cameras (aka netcams) for about the price of the Pi Camera Module, and they have solved the weatherproofing and mounting issues for you.

Here are some recent test results of my Python code running on a Pi4B, Jetson Nano, and Coral Development Board decoding multiple 3 fps rtsp streams:

5DEC2019wbk some Pi4B tests with rtsp cameras, 3fps per stream:
4 UHD (4K) : ~2.8 fps (hopelessly overloaded)
4 HD (1080p): ~11.8 fps (basically processing every frame)
2 UHD 2 HD : ~6.7 fps (Pi4B struggles with 4K streams)
5 HD : ~14.7 fps (basically processing every frame)
6 HD : ~15.0 fps, -d 0 (no display) ~16.7 fps
8 HD : ~11.6 fps, -d 0 ~14.6 fps

6DEC2019wbk Some UHD tests on Jetson Nano
5 UHD (4K) : ~14.6 fps (effectively processing every frame!)
5 UHD 3 HD : ~10.3 fps, jumps to ~19.1 fps if -d 0 option used (no live image display)
4 UHD 4 HD : ~16.3 fps, ~22.5 fps with -d 0 option
5 UHD 10 HD (1080p): ~4.4 fps, ~7.6 fps with -d 0 option (totally overloaded, get ~39 fps with running on i7-4500U MiniPC)

7DEC2019wbk Coral Development Board
4 HD (1080p) : ~11.9 fps (basically processing every frame)
2 UHD 2 HD : ~11.7 fps
2 UHD 3 HD : ~14.6 fps
2 UHD 4 HD : ~12.3 fps, -d 0 (no display) ~16.7 fps
3 UHD : ~8.8 fps (basically processing every frame)
4 UHD : ~0.1 fps on short run, System locks up eventually!
3 UHD 2 HD : ~0.27 fps Hopelessly overloaded, extremely sluggish.
6 HD : ~17.9 fps
8 HD : ~16.8 fps, -d 0 (no display) 20.5 fps
I can supply links to some inexpensive (~$30-100) netcams I've been happy with.

Note that with the Pi4B and these kind of workloads a fan is essential to keep from thermal throttling. The Coral Development board has a built in fan, the Jetson Nano has a rather massive heat-sink that is almost the size of the entire Coral Dev board :)


@ssutaj
Copy link

ssutaj commented Dec 24, 2019

If you don't already have the Pi Cameras, IMHO you should rethink this. You can buy IP cameras (aka netcams) for about the price of the Pi Camera Module and they have solved the weatherproofing and mounting issues for you

Well, I already bought most of the stuff I needed in the belief I would get a better "system" for the same money, but with that Coral stick + small additional stuff like Pi camera cable adapters I am at 300€, and it is not over yet. I would maybe do better with some classic IP camera system, but meh, it is what it is 😄

3x Pi Zero W
3x Pi NoIR Camera V2
1x Pi 4B 4GB

The Coral USB stick will come this week and I believe it can be a nice combo with the Pi 4. I will need to buy some fake camera housing + make it waterproof. Also a router, and if I get some reasonable fps during testing (>15 fps at 720p-1080p), I will also buy some IR lighting and an SSD/HDD (will have to think it through). I am only sad that H.265 is not possible with this setup.

I am worried that I will have too many problems with motion/motioneyeos or with the Pi Zeros -> I will have to write a lot of custom code. Regarding heating, I just have a bigger heatsink, maybe it will hold up 😁

I am a pretty big newbie, not even coding Python (Java mostly), with small experience with OpenCV from school. With that in mind I really appreciate what you're doing 🙂

@wb666greene
Copy link

wb666greene commented Dec 25, 2019 via email

@wb666greene
Copy link

@SamuelSutaj

I fired up my old PiZeroW with MotioneyeOS and it was not working. I re-flashed the SD card with the current version (20190911). I used the default settings except for turning off motion detection, setting image size to 1280x720 and setting frame rate to 10 fps for both camera and streaming.

I get ~5.2 fps with it as a "netcam" (http://MeyeOS:8081). Using the "fast network camera" setting may help (especially if you want higher than 720p resolution), but 5 fps per camera is more than good enough -- IMHO 2-3 fps/camera is generally fine. It seems the current version works better than I remembered on the PiZeroW.

But the PiZeroW WiFi is not the best; in the same room as the WiFi router, running overnight, I had three camera outages lasting from 8 to 19 minutes. My AI code automatically recovers from camera outages, but the camera is blind during them.

"short runs" without the camera drop outs, can hit ~8 fps, I know its camera limited as my AI on this test system can hit ~24 fps with multiple cameras and an NCS2.

@jasaw
Copy link
Collaborator

jasaw commented Jan 10, 2020

@wb666greene Thank you for sharing the model optimizer command for converting ssd_mobilenet_v2_coco. I tried the same command but added --mean_values [127.5,127.5,127.5] --scale_values [127.5] arguments so I can feed BGR pixel values in the 0-255 range, rather than the default 0-1 range. The model optimizer ran successfully, but when I test the model, I can't get any result from it. Since you are using the model, can you please tell me what input you are feeding in and what output format you are getting from it?

Is this the correct input and output format?

  • Input: image in BGR format, 0 to 1 value per colour?
  • Output: Each group of 7 float (16-bit) values describes an object/box. These 7 values, in order, are (a minimal parsing sketch follows this list):
    • float 0: image_id (always 0)
    • float 1: class_id (this is an index into labels)
    • float 2: score (this is the probability for the class)
    • float 3: box left location within image as number between 0.0 and 1.0
    • float 4: box top location within image as number between 0.0 and 1.0
    • float 5: box right location within image as number between 0.0 and 1.0
    • float 6: box bottom location within image as number between 0.0 and 1.0
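
Assuming that layout is right, a minimal sketch of walking the output blob in Python; the labels list and threshold are placeholders:

    # 'out' is the raw network output (a numpy array) reshaped to groups of 7 floats per detection.
    def parse_detections(out, img_w, img_h, labels, conf_threshold=0.6):
        boxes = []
        for det in out.reshape(-1, 7):
            image_id, class_id, score, left, top, right, bottom = det
            if score < conf_threshold:
                continue
            boxes.append((labels[int(class_id)], float(score),
                          int(left * img_w), int(top * img_h),
                          int(right * img_w), int(bottom * img_h)))
        return boxes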

I also tried ChuanQi's mobilenet_iter_73000 model. It appears to have a more accurate detection compared to the MobileNetSSD_deploy model, but I can't quantify the accuracy because I haven't got test videos to formally test the models. Since Christmas, mobilenet_iter_73000 has been giving me roughly 1 false positive every 3 days.

@wb666greene
Copy link

wb666greene commented Jan 10, 2020

@jasaw
I'm using Python and the OpenCV DNN interface, which obscures the low-level details (16-bit float vs 32-bit float, etc.).

But I also got no results with SSDv2 when I did the mean subtraction. Not doing it fixed the issue for me; the only difference in my Python code between SSD v1 and v2:

    if SSDv1:
        # Caffe MobileNet-SSD v1: mean-subtract 127.5 and scale by 0.007843 (= 1/127.5)
        blob = cv2.dnn.blobFromImage(cv2.resize(image, PREPROCESS_DIMS), 0.007843, PREPROCESS_DIMS, 127.5)
        personIdx = 15   # "person" index in the 21-class VOC label set
    else:
        # SSDv2 (ssd_mobilenet_v2_coco): feed raw 0-255 BGR values, no mean/scale
        blob = cv2.dnn.blobFromImage(image, size=PREPROCESS_DIMS)
        personIdx = 1    # "person" index in the COCO label set

@corvy
Copy link

corvy commented Mar 27, 2020

Hello, is there a guide to set up a test of this? I have Docker running on an x86 machine, sadly no dongle right now, but I have an Intel GPU and of course an option to add an Nvidia GPU card. I was thinking to try the CPU first just to test and then add a GPU later. I also have a few Raspberries, so ARM could be an option as well. If there were a dev release from Git or a step-by-step guide to add an object detection model, I would be very interested in testing. :D

@jasaw
Copy link
Collaborator

jasaw commented Mar 27, 2020

With Intel GPU, you could try Intel's OpenVINO. Here's my guide: #1505 (comment) Obviously ignore the whole Raspbian part because you're running on x86 machine.

To use the Intel GPU, you'll need to update the libopenvino.conf file in my lib_openvino_ssd project and change TARGET_DEVICE=MYRIAD to TARGET_DEVICE=GPU.

@corvy
Copy link

corvy commented Apr 12, 2020

Hello @jasaw, I get stuck on the

Git clone the lib_openvino_ssd library

step. I cannot compile, as this is for ARM, not Intel x86. Could you please offer some guidance?

root@ubuntu-server:~/temp/lib_openvino_ssd# make -j4
ls: cannot access '/opt/intel/openvino/deployment_tools/inference_engine/lib': No such file or directory
g++ -fPIC -c -o ssd.o ssd.cpp -W -Wall -pthread -g -std=c++17 -O3 -march=armv7-a -DNDEBUG -I. -I/opt/intel/openvino/deployment_tools/inference_engine/include -I/opt/intel/openvino/opencv/include -Wl,-rpath -Wl,/opt/intel/openvino/inference_engine/lib/ -Wl,-rpath -Wl,/opt/intel/openvino/opencv/lib -Wl,-rpath -Wl,/usr/local/lib
g++ -fPIC -c -o ssd_obj.o ssd_obj.cpp -W -Wall -pthread -g -std=c++17 -O3 -march=armv7-a -DNDEBUG -I. -I/opt/intel/openvino/deployment_tools/inference_engine/include -I/opt/intel/openvino/opencv/include -Wl,-rpath -Wl,/opt/intel/openvino/inference_engine/lib/ -Wl,-rpath -Wl,/opt/intel/openvino/opencv/lib -Wl,-rpath -Wl,/usr/local/lib
g++ -fPIC -c -o job.o job.cpp -W -Wall -pthread -g -std=c++17 -O3 -march=armv7-a -DNDEBUG -I. -I/opt/intel/openvino/deployment_tools/inference_engine/include -I/opt/intel/openvino/opencv/include -Wl,-rpath -Wl,/opt/intel/openvino/inference_engine/lib/ -Wl,-rpath -Wl,/opt/intel/openvino/opencv/lib -Wl,-rpath -Wl,/usr/local/lib
g++ -fPIC -c -o log.o log.cpp -W -Wall -pthread -g -std=c++17 -O3 -march=armv7-a -DNDEBUG -I. -I/opt/intel/openvino/deployment_tools/inference_engine/include -I/opt/intel/openvino/opencv/include -Wl,-rpath -Wl,/opt/intel/openvino/inference_engine/lib/ -Wl,-rpath -Wl,/opt/intel/openvino/opencv/lib -Wl,-rpath -Wl,/usr/local/lib
cc1plus: error: bad value (‘armv7-a’) for ‘-march=’ switch
cc1plus: error: bad value (‘armv7-a’) for ‘-march=’ switch

I tried to edit the Makefile to -march=x86-64 but it still fails. :D

@jasaw
Copy link
Collaborator

jasaw commented Apr 13, 2020

Sorry, I forgot my makefile was hard-coded for ARM. Try removing -march=armv7-a from the makefile.

@Chiny91
Copy link

Chiny91 commented Apr 28, 2021

[2018]

Santa visited Chiny Towers early and delivered a sacrificial R Pi 3b and a Movidius. So, I have had the @jasaw recipe working with no problems for 12 hours and all appears well, although far too early to comment on reliability. I'm sure the postman delivering today could not imagine the excitement he generated 😄

[2021]
This has been an unequivocal success. The @jasaw recipe has now worked for over 2 years, having only just fallen over and needed a reboot. It has proved nearly 100% accurate: no events missed AFAIK (I have a second motionEye checking), and false positives are rare (fog, snow). It proved so accurate so quickly that within a month I scripted it into the Pushover alert system and hence to my watch/phone. The ultimate accolade must be that Mrs C considers this R Pi/motioneye/Movidius/Pushover setup entirely normal, indeed essential when we are away, so that we know exactly who is at our front door.

I'm not sure where motion/AI has got to these days, despite having a look around, so I'll keep this system running a while longer.

@jasaw
Copy link
Collaborator

jasaw commented Apr 29, 2021

@Chiny91 Very glad to hear that it has been working well for you. I have been running 4 Movidius sticks with my OpenVINO recipe for 2 years and it has been working more reliably than expected. It has fallen over twice so far because Movidius 2 sticks have a very high peak power draw and the RPi USB 5 V supply is not stable enough. I ended up using an externally powered USB hub.

Regarding false positives, I've only gotten the occasional few. I'm sure there's room for improvement on the neural model itself, but haven't got time to train my own model.

I've also been playing with the Jetson Nano and it looks like much better hardware for our purpose. The hardware is A LOT more capable than the Movidius 2, but cheaper than an RPi + Movidius 2 combo. The software stack looks cleaner too, because motion just calls straight into the standard OpenCV library. Dave (motion developer) mentioned that he was working on a new version of motion that uses OpenCV. Not sure where he's up to.

@xjb-swe
Copy link

xjb-swe commented Jul 26, 2021

@wb666greene
Copy link

wb666greene commented Jul 26, 2021

https://www.youtube.com/watch?v=aJp-mIBytno&t=38s

The idea is good, but the price is awfully high for a 1080p camera, unless the "rolling shutter" and the potential of 120 fps is a requirement.

I still think it's premature to build the AI into cameras right now; the improvement I got going from MobilenetSSD_v1 to MobilenetSSD_v2 was great enough that it'd have been a bummer to have a bunch of cameras stuck on MobilenetSSD_v1. My exception is the OAK-D, which is three cameras and a Myriad X in one camera housing. The central color camera and two monochrome cameras on either side of it allow full-color AI with depth information estimated from the synchronized stereo frames.

The new Coral mPCIe and M.2 TPU modules cost less than half the USB3 TPU version, but they also obsolete the original edgetpu API, replacing it with the PyCoral API.

A Coral mPCIe TPU and an old i7-4500U "mini-PC" are getting 35 fps running 7 4K and 7 1080p 3 fps H.265 cameras, connecting to the RTSP streams from my security DVR. I've since switched the cameras from H.265 to H.264, as I was getting high latency with H.265; moving to H.264 dropped my latency to ~2 seconds compared to 6+ seconds. No free lunch: I lose a few days of retention on my 24/7 recordings, but I want low-latency notification at the start of a potential crime to stop or mitigate the losses; watching nice video of my stuff being carried away the next morning is just not really useful.

@xjb-swe
Copy link

xjb-swe commented Jul 27, 2021
