Motion detection plug-in model and analysis #175
A small but unrelated change that I'd like to commit, if you don't mind, is to activate the "generate package on build" option on the csproj files. This makes it much easier to work with the library in my separate test program. I'm also interested in how you address that problem yourself. In my case, I have a ...
Just thinking out loud (I know you're busy right now): I am going to modify `void EnableAnalysis(IOutputCaptureHandler handler = null);`. While I really wanted to run the analysis output (and other filtering) earlier in the pipeline so that the hardware could do h.264 against the filtered output, that isn't looking realistic. But this change still allows for streaming by piping the raw frames through the new (in this PR) `ExternalProcessCaptureHandler` class. I'm also wondering if dishing out full frames to ffmpeg will have any effect on its ability to output a valid MP4. Probably not, but I'll give it a shot.
Figures... according to the muxer matrix, VLC won't convert raw frames to MJPEG: https://www.videolan.org/streaming-features.html So... into the ffmpeg switch-swamp I go. Wish me luck.

Annnnd... ffmpeg doesn't work as a server like VLC does. 🙄

Getting pretty close. I had the idea of using ffmpeg to encode the RGB24 to h.264, then pipe that into VLC for streaming. After many hours of work I have it close to working. I put the commands into a script with a beast of a command line (the same ffmpeg-into-cvlc pipeline that appears in the `Arguments` string in the example further down).

From a terminal I can run that script to create a stream which I can browse to.

When I plug the script into ...
I overlooked your reply to #108 last week (image processing performance), but I'm starting to think I've also read that Gaussian blur is helpful for motion-detection noise removal, so once I feel this PR is ready to roll, I will probably tackle that next.
Got it working ... this is a frame from my browser of realtime 24FPS filtered motion analysis (it's pointing elsewhere in my office and picking up the news on TV). As described earlier, /bin/bash is the external process; the raw RGB24 stream pipes into ffmpeg, which turns it into an h.264 YUV420p stream (that pixel format is what I was missing the other day) and pipes that to cvlc, which transcodes it into an MJPEG HTTP stream and listens on the port. Pretty cool!

static async Task visualizeStream(int seconds)
{
var cam = GetConfiguredCamera();
MMALCameraConfig.Resolution = new Resolution(640, 480);
MMALCameraConfig.SensorMode = MMALSensorMode.Mode7; // for some reason mode 6 has a pinkish tinge
MMALCameraConfig.Framerate = new MMAL_RATIONAL_T(20, 1);
Console.WriteLine("Preparing pipeline...");
cam.ConfigureCameraSettings();
var motionAlgorithm = new MotionAlgorithmRGBDiff(
rgbThreshold: 200, // default = 200
cellPixelPercentage: 50, // default = 50
cellCountThreshold: 20 // default = 20
);
var motionConfig = new MotionConfig(
algorithm: motionAlgorithm,
testFrameInterval: TimeSpan.FromSeconds(3), // default = 3
testFrameCooldown: TimeSpan.FromSeconds(3) // default = 3
);
var raw_to_mjpeg_stream = new ExternalProcessCaptureHandlerOptions
{
Filename = "/bin/bash",
EchoOutput = true,
Arguments = "-c \"ffmpeg -hide_banner -f rawvideo -c:v rawvideo -pix_fmt rgb24 -s:v 640x480 -r 24 -i - -f h264 -c:v libx264 -preset ultrafast -tune zerolatency -vf format=yuv420p - | cvlc stream:///dev/stdin --sout '#transcode{vcodec=mjpg,vb=2500,fps=20,acodec=none}:standard{access=http{mime=multipart/x-mixed-replace;boundary=7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8554/}' :demux=h264\"",
DrainOutputDelayMs = 500, // default = 500
TerminationSignals = ExternalProcessCaptureHandlerOptions.SignalsFFmpeg
};
using (var shell = new ExternalProcessCaptureHandler(raw_to_mjpeg_stream))
using (var motion = new FrameBufferCaptureHandler(motionConfig, null))
using (var resizer = new MMALIspComponent())
{
// Argument is a capture handler; this is the bash/ffmpeg/cvlc pipeline
motionAlgorithm.EnableAnalysis(shell);
// Not taking action when motion is detected in this example,
// but the resizer feeds normal motion detection
resizer.ConfigureOutputPort<VideoPort>(0, new MMALPortConfig(MMALEncoding.RGB24, MMALEncoding.RGB24, width: 640, height: 480), motion);
cam.Camera.VideoPort.ConnectTo(resizer);
await cameraWarmupDelay(cam);
Console.WriteLine($"Streaming MJPEG with motion detection analysis for {seconds} sec to:");
Console.WriteLine($"http://{Environment.MachineName}.local:8554/");
var timeout = new CancellationTokenSource(TimeSpan.FromSeconds(seconds));
await Task.WhenAll(new Task[]{
shell.ProcessExternalAsync(timeout.Token),
cam.ProcessAsync(cam.Camera.VideoPort, timeout.Token),
}).ConfigureAwait(false);
}
cam.Cleanup();
Console.WriteLine("Exiting.");
}
AppVeyor can't create the ffmpeg nuget package? Looks like it skipped building the ffmpeg project, for some reason.
This looks to be because you've added ...
Oops, meant to pull those out before check-in ... fixing it.
I don't think I listed the specific changes in that last commit:
If you agree with my work described in #108 and we figure out the raw capture / manipulate situation, these are the changes versus this original PR's description that I will commit. I wanted to get them written down while it's still reasonably fresh in my mind; I suspect I'm about to get swamped by my day job, too.

Frame and Motion Processing

Matrix Convolution

MMALSharp.Common
None of the above changes how anything is used. Again, just wanted to document everything that changed; no rush at all, having this PR open isn't blocking me or anything. I know the other thread is long and this is a lot to review! Have a great weekend, we're taking the dog to the bar and doing absolutely nothing useful at all! 😁
This is looking really good, Jon, and when you've got time it would be really helpful to get some documentation together on how all this works and some typical use-case examples (in addition to the example provided?) - it'll also help me do some testing :)
@@ -33,6 +33,7 @@ public interface IMotionCaptureHandler
 /// <summary>
 /// Disables motion detection. When configured, this will instruct the capture handler not to detect motion.
 /// </summary>
-void DisableMotionDetection();
+/// <param name="disableCallbackOnly">When true, motion detection will continue but the OnDetect callback will not be invoked.</param>
+void DisableMotionDetection(bool disableCallbackOnly = false);
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
If a user calls this method and wants to re-enable the callback, how do they do this? Am I right in thinking they just modify `OnDetectEnabled` against a `FrameDiffDriver` instance?
Calling `EnableMotionDetection` will re-enable the callback. I will add that to the parameter docs.
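To make the intended call pattern concrete, here is a minimal sketch, assuming `motion` is an already-configured `FrameBufferCaptureHandler` and that `EnableMotionDetection()` can be called without arguments here (the variable name is illustrative, not from the PR):

```csharp
// Keep detecting motion but stop invoking the OnDetect callback.
motion.DisableMotionDetection(disableCallbackOnly: true);

// ... later ...

// Re-enables motion detection and, per the comment above, the callback too.
motion.EnableMotionDetection();
```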
/// </summary>
/// <param name="motionConfig">The motion configuration.</param>
/// <param name="onDetect">A callback for when motion is detected.</param>
public FrameBufferCaptureHandler(MotionConfig motionConfig, Action onDetect)
I'm assuming this additional constructor is used to replicate what `MMALCamera.WithMotionDetection()` does?
By funny coincidence I was just looking at that myself, and I don't think it is needed. There was a point where I was trying to avoid modifying `MotionConfig` and I think it's left over from that. I will straighten it out.
Actually that constructor was useful when doing nothing but visualizing the motion detection analysis. Used in a simple configuration like this:
using (var shell = new ExternalProcessCaptureHandler(raw_to_mjpeg_stream))
using (var motion = new FrameBufferCaptureHandler(motionConfig, null))
using (var resizer = new MMALIspComponent())
{
motionAlgorithm.EnableAnalysis(shell);
resizer.ConfigureOutputPort<VideoPort>(0, new MMALPortConfig(MMALEncoding.RGB24, MMALEncoding.RGB24, width: 640, height: 480), motion);
cam.Camera.VideoPort.ConnectTo(resizer);
await cameraWarmupDelay(cam);
var timeout = new CancellationTokenSource(TimeSpan.FromSeconds(seconds));
await Task.WhenAll(new Task[]{
shell.ProcessExternalAsync(timeout.Token),
cam.ProcessAsync(cam.Camera.VideoPort, timeout.Token),
}).ConfigureAwait(false);
}
Oh, I see I do explain one usage in the constructor summary. Although it could be better, it does now also work with the camera (the example above is for streaming). I'll update the summary:

"Creates a new `FrameBufferCaptureHandler` configured for motion detection using a raw video stream where `MMALStandalone.Instance` is used (such as processing a pre-recorded file) rather than camera-based processing."
private Stopwatch _testFrameAge;

/// <summary>
/// Fully are skipped when comparing the test frame to the current frame.
Think this is a typo? Is it supposed to read "Frames" instead of "Fully"?
It should say "Fully black pixels", not sure how that happened. It is fixed in the changes I haven't committed yet (more about that momentarily).
{
this.FullFrame = false;
_fullTestFrame = false;
this.WorkingData = new List<byte>();
Could we see any further benefit to moving `this.WorkingData` into a `byte[]` instead of a `List<byte>`?
No, internally `List<byte>` is just an array, so we'd just end up doing the same work (resizing etc). One thing that might help, though, is to think about initializing the list to a certain size. The default capacity is small and the list grows by doubling, so it may be helpful to set it to a large size up front, given the amount of data we're working with. I'll open an issue so we remember to consider it later on. I think there are multiple places in the code where that idea might apply.
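As a rough illustration of the pre-sizing idea (the frame dimensions here are assumed for the example, not taken from the PR):

```csharp
using System.Collections.Generic;

// A 640x480 RGB24 frame is 640 * 480 * 3 = 921,600 bytes, so reserving that
// capacity up front avoids every intermediate grow-and-copy as frame data accumulates.
const int FrameBytes = 640 * 480 * 3;
var workingData = new List<byte>(FrameBytes);
```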
Ian, thanks for taking time to look. I was sort of waiting for your feedback about those changes. And yessir, I definitely agree, I have quite a lot of documentation work lined up!
Oh, and I'm also still unsure of how to set up the camera to get one raw frame into the handler -- it exits before the image processor is able to finish running. See this comment: ...
Figured it was silly not to just merge the convolution changes, too. Everything is easy enough to roll back if necessary. (The merge conflict was just around some stuff you'd changed in ...)
The changes are looking good, Jon. I've made a comment regarding needing images to be raw in order for the re-write to work properly.
Array.Copy(ms2.ToArray(), 0, store, 0, ms2.Length);
}
}
throw new Exception("Convolution effects require raw frame data");
This concerns me a bit. I'm quite sure that image convolution worked on encoded images before - if they were encoded, they were loaded into a new `Bitmap` instance and the metadata was made available via `BitmapData`. Is that not going to be possible in this re-write? Ideally it should be able to handle both.
I think you're right; an optional extra step to load it into a new bitmap should work. I'll take a look at it from that perspective. It does ultimately need to be raw, though, so that `FrameAnalyser` can derive the bits (and bytes) per pixel. I don't think the in-memory bitmap conversion is too expensive. (I sort of wish we had a good abstraction for turning performance data on and off... maybe something for the future.)
Speaking of bytes per pixel, I'm still struggling to get `TakeRawPicture` to work. When I try the code shown below, `ImageContext.PixelFormat` is not set, which causes `FrameAnalyser` to fail in the `GetBytesPerPixel` method. Am I overlooking something?
var cam = GetConfiguredCamera();
MMALCameraConfig.Encoding = MMALEncoding.RGB24;
MMALCameraConfig.EncodingSubFormat = MMALEncoding.RGB24;
cam.ConfigureCameraSettings();
using (var imgCaptureHandler = new ImageStreamCaptureHandler(ramdiskPath, "jpg"))
{
imgCaptureHandler.Manipulate(context =>
{
context.Apply(new SharpenProcessor());
}, ImageFormat.Jpeg);
await cam.TakeRawPicture(imgCaptureHandler);
}
cam.Cleanup();
Ah -- looking at the debug log, it looks like `StreamCaptureHandler` is involved. In the log I see `Applying image processing` followed by `Something went wrong while processing stream` and the null-ref throw. (It was initially confusing because the handler only logs the inner exception -- there was none -- and not the actual exception!) But I'm not sure where the context comes from at that point in time.
Let me run my thinking by you:

Since my revisions operate on `ImageContext` data, do you know if copying data out of a formatted `Bitmap` (sourced from a JPEG memory stream, for example) yields pixel-level data? I was trying to figure out if I need to put it through a format conversion to get raw data. The old version operated directly on the locked bitmap, and I'm wondering how much voodoo the `Bitmap` object does to make the pixels of a lossy compressed image available (once you unlock it, I assume it's back to lossy JPEG again). I know when a bitmap region is locked, .NET actually makes another copy of that region and pins it in memory, so I'm guessing that it probably is decompressed at that point. So I suspect copying that really will get uncompressed pixels.
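For reference, a minimal sketch of the lock-and-copy mechanism being discussed, using the standard `System.Drawing` API (illustrative only, not the PR's code):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static byte[] CopyPixels(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);

    // LockBits exposes a decoded, pinned copy of the region, so the bytes read
    // out are uncompressed pixels regardless of the format the image came from.
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, bmp.PixelFormat);
    try
    {
        var buffer = new byte[Math.Abs(data.Stride) * bmp.Height];
        Marshal.Copy(data.Scan0, buffer, 0, buffer.Length);
        return buffer;
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
```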
Originally I was thinking that after the context is updated (by copying the locked `Bitmap` data), the point above implies `ImageContext.Raw` should then be `true` -- but then I realized that's probably not correct. The raw flag only applies to the three RGB formats, and I suppose the copied bitmap could be something like YUV; does that sound right?
So if that's right, then after the frame is processed, the raw-to-formatted conversion shouldn't require raw input at all, either.
Do I sound like I'm on the right track here?
I was under the impression that the compression algorithm JPEG and other lossy formats use would remove full pixels, meaning the R, G, B, and A if applicable would be removed. This means that you can still iterate over the data in the same way you would with raw data; there just wouldn't be as much of it. Once the data is lost by the lossy algorithm, the `Bitmap` class would have no way of retrieving that data again. Hopefully I'm right in my assumptions, and apologies for taking so long to get back to you on this.
After doing a bunch of reading and more thinking about it, my question doesn't really make sense. JPEG is lossy, but it isn't storing pixels, nor is it "losing" pixels; what it loses is more abstract (DCT compression is based on perceptual color models and frequency thresholds). So it would never really make sense to operate on the compressed version of an image; the only thing that data is useful for is to decompress back to a bitmap. It isn't stored as pixels, in other words, it's just a data stream meant to feed into a decompression algorithm.

I was basically mixing up "raw" with "uncompressed" -- the "raw" formats are image data which has never been compressed, versus "uncompressed" meaning a full-pixel-count bitmap that was stored as compressed. We'll have "all" of the pixels, but they will simply exhibit compression artifacts here and there.
I'm still hung up on `TakeRawPicture` not passing a correct `ImageContext`, even though I will get it working with non-raw images as well. I assume a raw image will perform better without the overhead of bitmap locking and copying.
I also have to apologize for taking so long to deal with this. Work is really running me ragged right now -- I'm doing weekends, evenings, etc. (spent about 20 hours last weekend starting this).
Earlier I had overlooked that the padded buffer also changed the vertical resolution -- it turns out the height is padded to a multiple of 16. That means almost all resolutions have a raw buffer size that is larger than the image resolution. For convolution, this only impacts `TakeRawPicture`, since a formatted image from `TakePicture` is digested by `Bitmap`, which produces a buffer matching the image resolution.
I committed an update that just adds cell sizes for the padded buffer dimensions. I can't decide if it would be better for `ImageContext.Resolution` to match the camera resolution or the true padded buffer size. As far as I can tell, the only effect of matching camera resolution would be to ignore the padding, which is what you'd probably want anyway.

EDIT - I temporarily added a line to set the true resolution, but it trashes the image -- the stride is off! So I think just supporting the raw buffer dimensions with default cell values is the way to go.
My `TakeRawPicture` test set the camera to 1296x972 as a side-effect of earlier motion-detection testing (it's a full-frame binning mode). To my surprise, a different resolution was reported in an exception in the debug log: `Resolution 1312x976 has no recommended cell counts`. After a bit of searching, it turns out that the driver pads the width up to the nearest multiple of 32 and the height to a multiple of 16 (see the RPi forum discussion).
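A quick sketch of that alignment rule, assuming width aligns to a multiple of 32 and height to a multiple of 16 as described above (the helper name is hypothetical):

```csharp
// Round width up to a multiple of 32 and height up to a multiple of 16,
// reproducing the driver's buffer padding described above.
static (int Width, int Height) PaddedBufferSize(int width, int height)
{
    static int Align(int value, int alignment) => (value + alignment - 1) / alignment * alignment;
    return (Align(width, 32), Align(height, 16));
}

// PaddedBufferSize(1296, 972) returns (1312, 976) -- matching the exception message.
```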
I might need to do some investigation to check the camera's resolution is being configured correctly. The configuring of the camera component can be found in `MMALCameraComponent`. I'm not sure if the driver aligns on your behalf - I explicitly align the width and height to 32 and 16 respectively via the `Resolution` class, and this library is influenced by Raspistill/Raspivid in terms of configuring the camera.
I'm wondering if you can think of where in the pipeline this might go wrong. (EDIT - I think it's the second part of the condition wrapping the output port callback, specifically `(eos && !this.Trigger.Task.IsCompleted)` ... I understand why it does that, but I'm not sure how to address it.)
Well spotted, yes I think that's the problem. Could this be the solution:
var eos = bufferImpl.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_FRAME_END) ||
bufferImpl.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_EOS) ||
this.ComponentReference.ForceStopProcessing;
if ((bufferImpl.CheckState() && bufferImpl.Length > 0 && !eos && !failed && !this.Trigger.Task.IsCompleted) || (eos && !this.Trigger.Task.IsCompleted && bufferImpl.Length > 0))
{
this.CallbackHandler.Callback(bufferImpl);
}
Basically just making sure that the buffer length is always greater than 0 if we want to call the Callback Handler?
For some reason, `ImageContext.Resolution` isn't set when using `TakePicture`, although `TakeRawPicture` does set it. For now I read it from `Bitmap` after it loads the data, so this isn't a problem, but perhaps something to look into.
Hmm, it should do. What is happening here is when `ConfigureOutputPort` is called against the encoder component (the output port being an instance of `StillPort`), the configure stage will check if the `MMALPortConfig` has a resolution set against it, and if not (which is the default), it will look here for the resolution to set it to. When your `ImageContext` is populated, the `Resolution` property is taken from the working port, which in your case will be the encoder output port. If you are certain that it's not being retrieved correctly then please raise a ticket.
> Raw-based output is still blue-tinted, although using encoded input is not. Curious...
I have a feeling the raw output is being sent as BGR instead of RGB. I haven't had a chance to look into this, but it warrants raising a ticket rather than blocking this PR.
> Should StreamCaptureHandler silently suppress exceptions? Re-throw if logging is off?
Could you tell me where the exception is being suppressed and I'll review?
You're right about the raw image and RGB vs BGR. I temporarily added something to the end of the cell-processing method to swap the R/B bytes, and that fixes the color problem. I don't know where to start looking for an explanation, though. Hmm, I wonder if this firmware issue is related? (Although I'm not clear about the context of that issue, it sounds a bit like it might be specific to HDMI.)
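For illustration, a swap along those lines might look like this, assuming a packed RGB24 frame buffer (3 bytes per pixel); this is a sketch, not the PR's actual code:

```csharp
// Swap the R and B channel bytes of every pixel in a packed RGB24 frame.
static void SwapRedBlue(byte[] frame)
{
    for (int i = 0; i + 2 < frame.Length; i += 3)
    {
        byte r = frame[i];
        frame[i] = frame[i + 2]; // blue byte into the red slot
        frame[i + 2] = r;        // red byte into the blue slot
    }
}
```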
Hi Ian, my browser's view into this PR hadn't updated with your comments until just now, after I'd already been playing around with the BGR/RGB issue. See comments above about the firmware issue, which sounds related. I will also open a ticket about this -- I just now wrote a stand-alone test that does nothing but send `TakeRawPicture` to disk, then reads it back and does `Bitmap.Save` -- no other manipulation -- and the resulting JPEG has the colors reversed. Definitely not related to this PR.
> I might need to do some investigation to check the camera's resolution is being configured correctly. The configuring of the camera component can be found in MMALCameraComponent. I'm not sure if the driver aligns on your behalf - I explicitly align the width and height to 32 and 16 respectively via the Resolution class and this library is influenced by Raspistill/Raspivid in terms of configuring the camera.
I think this one is no longer a problem now that I've added the padded "resolutions" to the cell table, too. It's just the nature of the raw buffer that we have to work with it that way; I don't think there's benefit in messing with it further. About the only thing that might make sense is to reflect the true resolution on `ImageContext` so that we don't waste time processing blank padding bytes, but honestly I think the overhead is immeasurably small.
> Could this be the solution: Basically just making sure that the buffer length is always greater than 0 if we want to call the Callback Handler?
That does indeed fix the problem earlier in the pipeline! Committing that change now.
I will open a ticket about `TakePicture` not setting the resolution; I went so far as to put a `Console.WriteLine` as the first line of `ApplyConvolution`, and it's definitely 0,0.
> Should StreamCaptureHandler silently suppress exceptions? Re-throw if logging is off?
> Could you tell me where the exception is being suppressed and I'll review?
Pushed another fix to copy the cloned bitmap resolution back to the original context (which would still be 0,0 when invoked from `TakePicture`). EDIT - this is really about an edge case where you're going from a formatted picture to raw output with multiple passes:
imgCaptureHandler.Manipulate(context =>
{
context.Apply(new SharpenProcessor());
context.Apply(new GaussianProcessor(GaussianMatrix.Matrix5x5));
}, null);
Earlier I badly misstated how/why this is needed, but without this, the scenario above would crash. (Ironically, you'd still get raw output because of the way the stream handler works, but it wouldn't have effects applied.)
EDIT -- In case you read the original by email -- I realized I was thinking about that wrong (distracted by yesterday's BGR/RGB tests that used ...)
Hi Jon, I've merged this PR now, so thank you for your efforts on this. It would be great, when you get some spare time, if you could produce some documentation to surround this work and some useful examples. Now that this is merged I will begin looking at the tickets created off the back of this PR. Thanks, Ian
Fantastic! Yes, I will get to work on the wiki soon. Thanks for all the help and patience with this one.
Quick-pass wiki updates -- convolutions (the low-hanging fruit for documenting this PR): https://github.com/techyian/MMALSharp/wiki/Image-Processing ...and a note that motion detection changes a lot in v0.7.

Finally, I started that new CCTV page, which I will start with motion detection, then get into still frame captures, the various options, real-time analysis output, etc. -- just a placeholder today, but I'll try to work on this a bit each afternoon: https://github.com/techyian/MMALSharp/wiki/CCTV-and-Motion-Detection

Do you know if it's possible to store images in the wiki itself? I thought of a trick that might work -- open a new issue, paste the image there (which uploads it to GitHub), then reference that URL from the wiki. The issue can be closed, but new images can still be added later...
Quite a bit of content added today to the motion/CCTV wiki... (had a work day that involved mostly waiting on other people!)
Very impressed with the write-up, Jon, so thank you for that. Although I've been following the work you've been doing lately, seeing it all written down has really helped me understand the full picture of what you've been working on. I've re-jigged the tickets in the v0.7 milestone, so I think once the motion stuff and other minor bits are completed, I'll look at pushing a new official release out. Great job.
I think it would probably be easier to create a resources (or similarly named) folder containing all the images used in the wiki. Send a PR in if you want and then you can reference them in your docs.
Thanks. I know I can be hard to follow when I'm in the thick of it, but I'm still really happy about finding your library and your willingness to consider some pretty major changes. I went ahead and finished it up, and I did reference my one motion-analysis image from this PR, which I will probably replace with a YouTube capture of motion analysis at some point. Longer term, I think there might be some benefit in diagrams elsewhere in the wiki to help explain component relationships and that sort of thing (I'm trying to think about stuff I had some trouble understanding initially), so I'll open a ticket when I'm feeling like I'm in a documenting mood. (It comes and goes -- and after writing that monster CCTV section, it's very gone right now!) I have some other ideas to further improve motion detection, but I think the Really Big Stuff is settled. Have a good one, talk to you soon.
And here we go again!

Improved summed RGB diff logic

New Stuff

Capture Handlers

- `IMotionCaptureHandler.DisableMotionDetection` adds a new argument `disableCallbackOnly`
- `FrameBufferCaptureHandler` implements the new argument and has the renamed `FrameDiffDriver`
- `MotionAnalysisCaptureHandler` writes full raw frames to a file stream, motion-algorithm callback-based

Frame Buffer Handling

- `FrameDiffDriver` handles frame buffers and invokes the selected motion detection algorithm
- `FrameDiffMetrics` is a thread-safe struct used to pass parameters to the motion algorithm
- `IMotionAlgorithm` defines how the frame driver interacts with the algorithms
- `MotionAlgorithmBase` provides a couple of utility methods
- `MotionAlgorithmRGBDiff` is the parallel-processing motion algorithm and analysis code
- `FrameDiffAnalyser` is gone (replaced by `FrameDiffDriver` and `IMotionAlgorithm`)

Configuration

- New `MotionConfig.MotionAlgorithm` property
- `Threshold` is removed from `MotionConfig`

Example