Write JPGs during motion-detection recording #163
Hi Jon, I've managed to track down the problem causing images not to be written to the capture handler. In the example being used for motion detection in the wiki, there are calls to configure the splitter output ports (e.g.
public async Task DetectMotion()
{
MMALCamera cam = MMALCamera.Instance;
// When using H.264 encoding we require key frames to be generated for the Circular buffer capture handler.
MMALCameraConfig.InlineHeaders = true;
// Two capture handlers are being used here, one for motion detection and the other to record a H.264 stream.
using (var vidCaptureHandler = new CircularBufferCaptureHandler(4000000, "/home/pi/videos/detections", "h264"))
using (var motionCircularBufferCaptureHandler = new CircularBufferCaptureHandler(4000000, "/home/pi/videos/detections", "h264"))
using (var motionImageCaptureHandler = new ImageStreamCaptureHandler("/home/pi/images/detections", "jpg"))
using (var splitter = new MMALSplitterComponent())
using (var resizer = new MMALIspComponent())
using (var vidEncoder = new MMALVideoEncoder())
using (var imgEncoder = new MMALImageEncoder(continuousCapture: true))
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
var callbackHandler = new MotionImageCallbackHandler((IVideoPort)imgEncoder.Outputs[0], motionImageCaptureHandler);
var splitterPortConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420);
var vidEncoderPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, bitrate: MMALVideoEncoder.MaxBitrateLevel4);
var imgEncoderPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420);
// The ISP resizer is being used for better performance. Frame difference motion detection will only work if using raw video data. Do not encode to H.264/MJPEG.
// Resizing to a smaller image may improve performance, but ensure that the width/height are multiples of 32 and 16 respectively to avoid cropping.
var resizerPortConfig = new MMALPortConfig(MMALEncoding.RGB24, MMALEncoding.RGB24, width: 640, height: 480);
splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);
resizer.ConfigureOutputPort<VideoPort>(0, resizerPortConfig, motionCircularBufferCaptureHandler);
vidEncoder.ConfigureOutputPort(vidEncoderPortConfig, vidCaptureHandler);
imgEncoder.ConfigureOutputPort(imgEncoderPortConfig, motionImageCaptureHandler);
imgEncoder.Outputs[0].RegisterCallbackHandler(callbackHandler);
cam.Camera.VideoPort.ConnectTo(splitter);
cam.Camera.PreviewPort.ConnectTo(renderer);
splitter.Outputs[0].ConnectTo(resizer);
splitter.Outputs[1].ConnectTo(vidEncoder);
splitter.Outputs[2].ConnectTo(imgEncoder);
// Camera warm up time
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(20));
// Here we are instructing the capture handler to record for 10 seconds once motion has been detected. A threshold of 200 is used. Lower
// values indicate higher sensitivity. A suitable range for indoor detection is 120-150 with stable lighting conditions.
var motionConfig = new MotionConfig(200);
await cam.WithMotionDetection(motionCircularBufferCaptureHandler, motionConfig,
async () =>
{
// Stop motion detection while we are recording.
motionCircularBufferCaptureHandler.DisableMotionDetection();
callbackHandler.ProcessImage = true;
// Prepare a token timeout to end recording after 10 seconds
var stopRecordingCts = new CancellationTokenSource(10 * 1000);
// Invoked when the token times out
stopRecordingCts.Token.Register(() =>
{
callbackHandler.ResetCallbackHandler();
// We want to re-enable the motion detection.
motionCircularBufferCaptureHandler.EnableMotionDetection();
// Optionally create new files for our next recording run
vidCaptureHandler.Split();
});
// Start recording our H.264 video, request an immediate h.264 key frame, and also record the raw stream.
var vidCaptureTask = vidCaptureHandler.StartRecording(vidEncoder.RequestIFrame, stopRecordingCts.Token);
await vidCaptureTask;
})
.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
}
public class MotionImageCallbackHandler : PortCallbackHandler<IVideoPort, IOutputCaptureHandler>
{
private int _numProcessedImages;
public bool ProcessImage { get; set; }
public MotionImageCallbackHandler(IVideoPort port, IOutputCaptureHandler handler) : base(port, handler)
{
}
public void ResetCallbackHandler()
{
this.ProcessImage = false;
_numProcessedImages = 0;
}
public override void Callback(IBuffer buffer)
{
var eos = buffer.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_FRAME_END) ||
buffer.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_EOS);
if (this.ProcessImage && _numProcessedImages < 3)
{
base.Callback(buffer);
if (eos && this.CaptureHandler is IFileStreamCaptureHandler)
{
((IFileStreamCaptureHandler)this.CaptureHandler).NewFile();
}
if (eos)
{
_numProcessedImages++;
}
}
}
}
You might want to mess around with this example, but hopefully that helps you. |
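The timeout pattern used in the example above (a CancellationTokenSource constructed with a duration, plus Token.Register to run cleanup when it fires) can be seen in isolation with plain BCL code. This is just an illustrative sketch, not library code; the `TimedRecordingDemo` name and the `state` string are stand-ins for the real recording state:

```csharp
using System;
using System.Threading;

// Demonstrates the pattern from the example above: a CancellationTokenSource
// constructed with a duration acts as a one-shot timer, and Token.Register
// installs the cleanup that runs when it fires.
public static class TimedRecordingDemo
{
    public static string Run(int timeoutMs)
    {
        string state = "recording";
        using (var cts = new CancellationTokenSource(timeoutMs))
        using (var done = new ManualResetEventSlim())
        {
            // Invoked when the token times out; in the real example this is
            // where motion detection is re-enabled and the file is split.
            cts.Token.Register(() =>
            {
                state = "stopped";
                done.Set();
            });

            // Stand-in for awaiting the recording task.
            done.Wait();
        }
        return state;
    }
}
```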
I'm updating the wiki with a paragraph about motion detection masks, so I'll also remove |
Thanks, Ian, this got me on the right path and is much simpler than what I thought it would take. As written it grabs the first three frames, whereas I'm looking for a frame from the first three seconds, but the required changes are trivial. If you think a way to trigger still-frame captures on demand might be useful (which is what I'll turn this into), I'll clean up my changes and PR it. I noticed that compared to the motion detection example, your revisions above don't call |
Or perhaps not so trivial 😀 ... My first change was to just capture one frame, as shown below. In theory this would be the first frame, but I noticed the JPG was occasionally corrupted. A working JPG in my test app is around 275K or larger, and these corrupted JPGs were often as small as 48K. I think it sometimes starts recording that JPG mid-frame, probably due to a race condition somewhere, or perhaps other activity bogging down the system long enough that part of the frame data is lost (I'm not too clear on the pipeline closer to the hardware).
public class OnDemandImageCallbackHandler : PortCallbackHandler<IVideoPort, IOutputCaptureHandler>
{
private bool _saveFrame = false;
public OnDemandImageCallbackHandler(IVideoPort port, IOutputCaptureHandler handler)
: base(port, handler)
{ }
public void SaveNextFrame()
{
_saveFrame = true;
}
public void ResetCallbackHandler()
{
_saveFrame = false;
}
public override void Callback(IBuffer buffer)
{
var eos = buffer.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_FRAME_END)
|| buffer.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_EOS);
if (_saveFrame)
{
base.Callback(buffer);
if (eos)
{
if (this.CaptureHandler is IFileStreamCaptureHandler)
{
((IFileStreamCaptureHandler)this.CaptureHandler).NewFile();
}
ResetCallbackHandler();
}
}
}
}
Yes, there is a very obvious problem with processing the buffer at some arbitrary point in time, which I'll get to momentarily... I then added a pair of token-timeout calls using that same
async () =>
{
motionCaptureHandler.DisableMotionDetection();
imgCallback.SaveNextFrame();
var stopRecordingCts = new CancellationTokenSource();
stopRecordingCts.Token.Register(() =>
{
imgCallback.ResetCallbackHandler();
motionCaptureHandler.EnableMotionDetection();
vidCaptureHandler.StopRecording();
vidCaptureHandler.Split();
});
var stillFrameOneSecondCts = new CancellationTokenSource();
stillFrameOneSecondCts.Token.Register(imgCallback.SaveNextFrame);
var stillFrameTwoSecondsCts = new CancellationTokenSource();
stillFrameTwoSecondsCts.Token.Register(imgCallback.SaveNextFrame);
stopRecordingCts.CancelAfter(recordSeconds * 1000);
stillFrameOneSecondCts.CancelAfter(1000);
stillFrameTwoSecondsCts.CancelAfter(2000);
await Task.WhenAny(
vidCaptureHandler.StartRecording(vidEncoder.RequestIFrame, stopRecordingCts.Token),
cts.Token.AsTask()
);
if (!stopRecordingCts.IsCancellationRequested)
{
stillFrameOneSecondCts.Cancel();
stillFrameTwoSecondsCts.Cancel();
stopRecordingCts.Cancel();
}
}
The problem using the callback handler you showed is that it's designed to write every frame -- or with my changes, buffer everything while waiting around for my program to decide to save a frame, which means it generally outputs garbage that the JPEG compression algorithm is still occasionally able to work with. I think what needs to happen is to store one frame that is reset each time a new frame begins, and only switch to writing a file stream when there is a request to save it. A one-frame circular buffer, in other words.
The interesting part is that the initial frame is also sometimes corrupted. I can see how triggering capture at some arbitrary point in time could lead to this, but I'd have expected the initial capture to always be valid. So I also think the handler needs to discard image buffer data at startup until it receives an EOS.
The part that makes me question my conclusions is that motion detection processes buffers the same way, and if there were an initial-frame problem, I would expect it to trigger false motion detection events at startup -- it would buffer an incomplete first frame, which would then have significant differences from the next complete frame. I've never seen this, so that isn't adding up. |
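The one-frame buffer idea described above can be sketched independently of MMALSharp. Everything here is hypothetical illustration (OneFrameBuffer and its members are not library types): frame data accumulates in a MemoryStream, the stream is reset at every frame boundary, and a frame is only captured when a save was requested before the frame ended -- so only complete frames are ever persisted.

```csharp
using System;
using System.IO;

// Hypothetical sketch of a one-frame buffer gated on end-of-stream (EOS):
// frame data accumulates in memory, is discarded at each frame boundary,
// and is only captured when a save has been requested for that frame.
public class OneFrameBuffer
{
    private readonly MemoryStream _current = new MemoryStream();
    private bool _saveRequested;

    public byte[] LastSavedFrame { get; private set; }

    // Called by UI / control code at an arbitrary point in time.
    public void RequestSave() => _saveRequested = true;

    // Called for every buffer chunk; eos is true on the chunk ending a frame.
    public void Process(byte[] data, bool eos)
    {
        _current.Write(data, 0, data.Length);

        if (!eos) return;

        if (_saveRequested)
        {
            // Only complete frames are ever captured.
            LastSavedFrame = _current.ToArray();
            _saveRequested = false;
        }

        // Reset without reallocating, ready for the next frame.
        _current.SetLength(0);
    }
}
```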
I took a few steps back and now I understand the processing better... though I won't be too surprised if I'm missing another cool trick like that callback! Since |
This ends up being pretty easy -- I created Then I change
/// <summary>
/// Signals the underlying callback handler to call <see cref="WriteStreamToFile"/> when the frame is completely captured.
/// </summary>
public virtual void NewFile()
{
if (this.CurrentStream == null)
{
return;
}
SaveImage = true;
}
/// <summary>
/// The callback handler uses this to write the current completed buffer to a file.
/// </summary>
public void WriteStreamToFile()
{
if (this.CurrentStream == null || !this.SaveImage)
{
return;
}
this.SaveImage = false;
using (FileStream fs = new FileStream(this.CurrentPathname, FileMode.Create, FileAccess.Write))
{
this.CurrentStream.WriteTo(fs);
}
string newFilename = string.Empty;
if (_customFilename)
{
// If we're taking photos from video port, we don't want to be hammering File.Exists as this is added I/O overhead. Camera can take multiple photos per second
// so we can't do this when filename uses the current DateTime.
_increment++;
newFilename = $"{this.Directory}/{this.CurrentFilename} {_increment}.{this.Extension}";
}
else
{
string tempFilename = DateTime.Now.ToString("dd-MMM-yy HH-mm-ss");
int i = 1;
newFilename = $"{this.Directory}/{tempFilename}.{this.Extension}";
while (File.Exists(newFilename))
{
newFilename = $"{this.Directory}/{tempFilename} {i}.{this.Extension}";
i++;
}
}
this.CurrentPathname = newFilename;
}
/// <summary>
/// Resets the underlying <see cref="MemoryStream"/> without re-allocating.
/// </summary>
public void ResetStream()
    => this.CurrentStream.SetLength(0);
Writing or resetting the buffer requires a very simple change to
public override void Callback(IBuffer buffer)
{
base.Callback(buffer);
var eos = buffer.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_FRAME_END) ||
buffer.AssertProperty(MMALBufferProperties.MMAL_BUFFER_HEADER_FLAG_EOS);
if(eos)
{
// try this first since it also implements IFileStreamCaptureHandler
var onDemand = this.CaptureHandler as OnDemandImageCaptureHandler;
if(onDemand != null)
{
if(onDemand.SaveImage)
{
onDemand.WriteStreamToFile();
}
onDemand.ResetStream();
}
else
{
// this continuously writes every frame
var fsHandler = this.CaptureHandler as IFileStreamCaptureHandler;
fsHandler?.NewFile();
}
}
} |
I suppose abstracting |
I'll PR this for your consideration. I made some other changes vs the file stream handler. Since this only writes a JPG on demand, it was misleading to have a filename based on a timestamp that was much older than when the file is actually written, so now this generates the filename as it writes. That also means a few other properties didn't make sense, and I added a |
So I just want to try and explain the issue you're seeing with "garbage" data first, before delving into what's being discussed here. You will notice this line of code in the callback handler:
Regarding your PR, what I'm trying to weigh up is in what other scenarios you would want to use this new capture handler. When this issue was first raised I had a thought that capture handlers could possibly feature an |
Right, I understand the partial buffer / EOS situation; it's just that in the "record everything" scenario I had thought it would always start from a new frame. It certainly makes sense that with my changes it often starts at some random point in the middle of a frame. I suppose motion detection doesn't suffer from a bad-first-frame problem because it's up and running from the moment the camera starts sending data?
The problem with the callback handler approach is that a file-stream-based handler is continuously outputting frame data to that stream and there's no turning back; you can't reset a file stream. So if you don't close the file and start a new one at EOS, then you're just writing garbage. That's the reason I had to go with a new handler; it's the only way to discard unwanted frames.
I have a hard time imagining use cases for the continuous capture feature. I know there are use cases, but they seem like edge cases to me. I think it was in the Raspberry Pi forums that I saw someone working on an industrial control problem who needed to run at the high-binning 90 FPS rate because he was trying to capture something that might show up in just one frame. It's neat that it can do it, but I have a hard time thinking of a way almost anyone would use it versus just recording video.
As for other use cases for what I've PR'd, I haven't tried this (edit: I did try this, it works), but even if the video isn't actively recording, it's still receiving input and sending output, right? In a CCTV scenario, the system is running motion detection full-time. I was thinking I could use this on-demand frame for my control system UI to provide an interactive update as to what the camera is seeing. Some slow rate, 1 FPS or whatever (given I'll have a relatively large number of cameras). The point being, you can't simultaneously do motion detection and also do something like MJPEG streaming (as far as I can see, but maybe the video stream could be routed to CLVC?).
No big deal if you don't think it's sufficiently broadly applicable, though; I can build out my own little bits on the side. I certainly understand you're building a library, not a CCTV DVR. |
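The slow-rate UI-update idea mentioned above can be sketched with plain BCL code. This is an illustrative sketch only: the saveFrame delegate stands in for whatever on-demand trigger the handler ends up exposing (e.g. something like the SaveNextFrame method discussed earlier), and PeriodicSnapshot is a hypothetical name:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class PeriodicSnapshot
{
    // Invokes saveFrame roughly once per interval until the token is cancelled.
    // saveFrame stands in for an on-demand capture trigger such as a callback
    // handler's SaveNextFrame method.
    public static async Task RunAsync(Action saveFrame, TimeSpan interval, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            saveFrame();
            try
            {
                await Task.Delay(interval, token);
            }
            catch (TaskCanceledException)
            {
                // Cancellation ends the loop cleanly.
            }
        }
    }
}
```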
Coming at this from a different angle: Rather than creating a new handler and all the other stuff in my PR, I think I see how This could be accomplished with two bool properties, I was going to say that the per-frame write/no-write decision would be internal to the capture handler's |
Hi Jon, I'm going to commit some changes which I hope will help you with what you're trying to achieve. It's a change to the
Happy to discuss this further and take on board any changes you think may be needed, but I hope it's a step in the right direction. I've done some basic testing and it seems to be working correctly. I've also noticed an issue with empty files being created with the
public async Task DetectMotion()
{
MMALCamera cam = MMALCamera.Instance;
// When using H.264 encoding we require key frames to be generated for the Circular buffer capture handler.
MMALCameraConfig.InlineHeaders = true;
// Two capture handlers are being used here, one for motion detection and the other to record a H.264 stream.
// We will not record the raw stream (which would be very large and probably not useful).
using (var vidCaptureHandler = new CircularBufferCaptureHandler(4000000, "/home/pi/videos/detections", "h264"))
using (var motionCircularBufferCaptureHandler = new CircularBufferCaptureHandler(4000000))
using (var recordClipCaptureHandler = new CircularBufferCaptureHandler(4000000, "/home/pi/images/clips", "jpg"))
using (var splitter = new MMALSplitterComponent())
using (var resizer = new MMALIspComponent())
using (var vidEncoder = new MMALVideoEncoder())
using (var imgEncoder = new MMALImageEncoder(continuousCapture: true))
using (var renderer = new MMALVideoRenderer())
{
cam.ConfigureCameraSettings();
var splitterPortConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420);
var vidEncoderPortConfig = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, bitrate: MMALVideoEncoder.MaxBitrateLevel4);
var imgEncoderPortConfig = new MMALPortConfig(MMALEncoding.JPEG, MMALEncoding.I420);
// The ISP resizer is being used for better performance. Frame difference motion detection will only work if using raw video data. Do not encode to H.264/MJPEG.
// Resizing to a smaller image may improve performance, but ensure that the width/height are multiples of 32 and 16 respectively to avoid cropping.
var resizerPortConfig = new MMALPortConfig(MMALEncoding.RGB24, MMALEncoding.RGB24, width: 640, height: 480);
splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);
resizer.ConfigureOutputPort<VideoPort>(0, resizerPortConfig, motionCircularBufferCaptureHandler);
vidEncoder.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), splitter.Outputs[1], null);
vidEncoder.ConfigureOutputPort(vidEncoderPortConfig, vidCaptureHandler);
imgEncoder.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), splitter.Outputs[2], null);
imgEncoder.ConfigureOutputPort(imgEncoderPortConfig, recordClipCaptureHandler);
imgEncoder.Outputs[0].RegisterCallbackHandler(new DefaultPortCallbackHandler(imgEncoder.Outputs[0], recordClipCaptureHandler));
cam.Camera.VideoPort.ConnectTo(splitter);
cam.Camera.PreviewPort.ConnectTo(renderer);
splitter.Outputs[0].ConnectTo(resizer);
splitter.Outputs[1].ConnectTo(vidEncoder);
splitter.Outputs[2].ConnectTo(imgEncoder);
// Camera warm up time
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(60));
// Here we are instructing the capture handler to use a difference threshold of 150. Lower values
// indicate higher sensitivity. A suitable range for indoor detection is 120-150 with stable lighting
// conditions. The testFrameInterval argument updates the test frame (which is compared to each new frame).
var motionConfig = new MotionConfig(threshold: 150, testFrameInterval: TimeSpan.FromSeconds(3));
await cam.WithMotionDetection(motionCircularBufferCaptureHandler, motionConfig,
async () =>
{
// This callback will be invoked when motion has been detected.
// Stop motion detection while we are recording.
motionCircularBufferCaptureHandler.DisableMotionDetection();
// This will control the duration of the recording.
var ctsStopRecording = new CancellationTokenSource();
// This will be invoked when the token is canceled.
ctsStopRecording.Token.Register(() =>
{
// We want to re-enable the motion detection.
motionCircularBufferCaptureHandler.EnableMotionDetection();
// Stop recording on our capture handler.
vidCaptureHandler.StopRecording();
// Stop recording JPEG clips.
recordClipCaptureHandler.StopRecording();
// Create a new file for our next recording run.
vidCaptureHandler.Split();
});
// Record for 10 seconds
ctsStopRecording.CancelAfter(10 * 1000);
// Record until the duration passes or the overall motion detection token expires. Passing
// vidEncoder.RequestIFrame to StartRecording initializes the clip with a key frame just
// as the capture handler begins recording.
await Task.WhenAny(
vidCaptureHandler.StartRecording(vidEncoder.RequestIFrame, ctsStopRecording.Token),
recordClipCaptureHandler.StartRecording(cancellationToken: ctsStopRecording.Token, recordNumFrames: 5, splitFrames: true),
cts.Token.AsTask()
);
// Stop the recording if the overall motion detection token expired
if (!ctsStopRecording.IsCancellationRequested)
{
ctsStopRecording.Cancel();
}
}).ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
// Only call when you no longer require the camera, i.e. on app shutdown.
cam.Cleanup();
} |
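For readers wondering what the threshold values discussed above mean conceptually: frame-difference detection compares each incoming frame against a stored test frame. The sketch below is an illustrative model only, not MMALSharp's actual FrameDiffAnalyser implementation; it counts pixels whose absolute difference exceeds a per-pixel threshold and flags motion when enough pixels changed, which shows why lower thresholds mean higher sensitivity.

```csharp
using System;

// Illustrative frame-difference detector (not MMALSharp's actual algorithm):
// compares two same-sized grayscale frames and reports motion when the number
// of pixels whose absolute difference exceeds pixelThreshold passes
// pixelCountThreshold. Lower thresholds mean higher sensitivity.
public static class FrameDiff
{
    public static bool DetectMotion(byte[] testFrame, byte[] newFrame, int pixelThreshold, int pixelCountThreshold)
    {
        if (testFrame.Length != newFrame.Length)
            throw new ArgumentException("Frames must be the same size.");

        int changedPixels = 0;
        for (int i = 0; i < testFrame.Length; i++)
        {
            if (Math.Abs(testFrame[i] - newFrame[i]) > pixelThreshold)
            {
                changedPixels++;
            }
        }

        return changedPixels > pixelCountThreshold;
    }
}
```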
Allow individual frames to be recorded by the CircularBufferCaptureHandler. Also added additional check to the PostProcess method on StreamCaptureHandler as it was randomly throwing exceptions.
Thank you. I spent the past hour working on a couple of different approaches in the circular buffer handler, too, but I keep coming to the realization that the circular buffer capability itself isn't particularly relevant to the problem. As a result, everything I tried felt a bit forced, but I'm definitely interested in seeing your implementation. What do you mean about overriding
I'm still pretty set on the need for an on-demand solution, but I'm willing to write my own handler in my own project if that's an issue for some reason. (Not to stray off topic, but as you were posting that, I was considering taking advantage of the |
Edit: I was down in the weeds with my own code and didn't recognize your use of
But the more I think about my comments that the circular buffer isn't relevant to either problem (setting aside the edge case of recording the motion detection raw stream), the more I feel like my request and motion capture would probably be better suited to a separate handler -- perhaps one focused on the use of
Is that making any sense at all? |
techyian#163 - Allow individual frames to be recorded by the CircularBufferCaptureHandler
I think a pain point with
Edit - I was in a hurry when I wrote that, but didn't want to lose the thought. I can explain further. I was thinking about what it would take to build my own handler in my project without the based-on-file-stream assumption. I'd have to implement new equivalents to
Edit 2 ... when you said "override FastImageOutputCallbackHandler" were you referring to the |
I was able to build a small, simple handler that does on-demand image capture as well as motion detection without any of those other dependencies that concerned me. If you like, I will PR this (and simplify the circular buffer handler); otherwise it can just live in my own projects. (I haven't made a pass to match your conventions yet.) Either way, I greatly appreciate the time and effort you've put into this!
using System;
using System.IO;
using MMALSharp.Common;
using MMALSharp.Processors.Motion;
namespace MMALSharp.Handlers
{
/// <summary>
/// A capture handler focused on high-speed frame buffering, either for on-demand snapshots
/// or for motion detection.
/// </summary>
public class FrameBufferCaptureHandler : MemoryStreamCaptureHandler, IMotionCaptureHandler, IVideoCaptureHandler
{
private MotionConfig _motionConfig;
private bool _detectingMotion;
private FrameDiffAnalyser _motionAnalyser;
private bool _skippingFirstFrame = true;
private bool _writeFrameRequested = false;
/// <summary>
/// Creates a new <see cref="FrameBufferCaptureHandler"/> optionally configured to write on-demand snapshots.
/// </summary>
/// <param name="directory">Target path for image files</param>
/// <param name="extension">Extension for image files</param>
/// <param name="fileDateTimeFormat">Filename DateTime formatting string</param>
public FrameBufferCaptureHandler(string directory = "", string extension = "", string fileDateTimeFormat = "yyyy-MM-dd HH.mm.ss.ffff")
: base()
{
FileDirectory = directory.TrimEnd('/');
FileExtension = extension;
FileDateTimeFormat = fileDateTimeFormat;
Directory.CreateDirectory(FileDirectory);
}
/// <summary>
/// Creates a new <see cref="FrameBufferCaptureHandler"/> configured for motion detection using a raw video stream.
/// </summary>
public FrameBufferCaptureHandler()
: base()
{ }
/// <summary>
/// Target directory when <see cref="WriteFrame"/> is invoked without a directory argument.
/// </summary>
public string FileDirectory { get; set; } = string.Empty;
/// <summary>
/// File extension when <see cref="WriteFrame"/> is invoked without an extension argument.
/// </summary>
public string FileExtension { get; set; } = string.Empty;
/// <summary>
/// Filename format when <see cref="WriteFrame"/> is invoked without a format argument.
/// </summary>
public string FileDateTimeFormat { get; set; } = string.Empty;
/// <summary>
/// The filename (without extension) most recently created by <see cref="WriteFrame"/>, if any.
/// </summary>
public string MostRecentFilename { get; set; } = string.Empty;
/// <summary>
/// The full pathname to the most recent file created by <see cref="WriteFrame"/>, if any.
/// </summary>
public string MostRecentPathname { get; set; } = string.Empty;
/// <inheritdoc />
public MotionType MotionType { get; set; } = MotionType.FrameDiff;
/// <summary>
/// Outputs an image file to the specified location and filename.
/// </summary>
public void WriteFrame()
{
if (string.IsNullOrWhiteSpace(FileDirectory) || string.IsNullOrWhiteSpace(FileDateTimeFormat))
throw new Exception($"The {nameof(FileDirectory)} and {nameof(FileDateTimeFormat)} must be set before calling {nameof(WriteFrame)}");
_writeFrameRequested = true;
}
/// <inheritdoc />
public override void Process(ImageContext context)
{
// guard against partial frame data at startup
if(_skippingFirstFrame)
{
_skippingFirstFrame = !context.Eos;
if (_skippingFirstFrame)
{
return;
}
}
if(_detectingMotion)
{
_motionAnalyser.Apply(context);
}
// accumulate frame data in the underlying memory stream
base.Process(context);
if(context.Eos)
{
// write a full frame if a request is pending
if (_writeFrameRequested)
{
WriteStreamToFile();
_writeFrameRequested = false;
}
// reset the stream to begin the next frame
CurrentStream.SetLength(0);
}
}
/// <inheritdoc />
public void ConfigureMotionDetection(MotionConfig config, Action onDetect)
{
_motionConfig = config;
_motionAnalyser = new FrameDiffAnalyser(config, onDetect);
EnableMotionDetection();
}
/// <inheritdoc />
public void EnableMotionDetection()
{
_detectingMotion = true;
_motionAnalyser?.ResetAnalyser();
}
/// <inheritdoc />
public void DisableMotionDetection()
{
_detectingMotion = false;
}
// Unused, but required to handle a video stream.
public void Split()
{ }
private void WriteStreamToFile()
{
string directory = FileDirectory.TrimEnd('/');
string filename = DateTime.Now.ToString(FileDateTimeFormat);
string pathname = $"{directory}/{filename}.{FileExtension}";
using (var fs = new FileStream(pathname, FileMode.Create, FileAccess.Write))
{
CurrentStream.WriteTo(fs);
}
MostRecentFilename = filename;
MostRecentPathname = pathname;
}
}
} |
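As a rough usage sketch (untested here, and assuming the splitter/encoder pipeline wiring from the earlier DetectMotion() examples), the handler above would sit on the image encoder's output, and WriteFrame() can be called at any time to request a snapshot:

```csharp
// Sketch only -- assumes the camera/splitter/encoder setup shown earlier.
using (var frameHandler = new FrameBufferCaptureHandler("/home/pi/images/snapshots", "jpg"))
using (var imgEncoder = new MMALImageEncoder(continuousCapture: true))
{
    // ... configure ports and connect the pipeline as in DetectMotion() ...

    // At any point while the pipeline is processing, request a snapshot;
    // the handler writes the file once the current frame completes, so
    // only full frames ever reach disk.
    frameHandler.WriteFrame();
}
```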
Hi Jon, this solution looks much cleaner! If you can send a PR in with an example of how to use it, I'd really appreciate that and we can get it merged in. Thanks, |
For the past couple of days, I've been trying to alter the motion detection example to also capture JPEG images periodically. My use-case is to match my existing CCTV DVR, which captures one image every second for the first 3 seconds after motion-detection begins. These are emailed and/or messaged to us.
The wiki shows how to "manually" take a snapshot, but that relies upon calling ProcessAsync for the camera's StillPort, which is obviously not compatible with motion detection. The wiki also shows how to continuously write both video and stills (the VideoAndImages example), but there doesn't appear to be a way to control when a still frame is captured while recording. Actually, I couldn't get continuous still-capture working with the motion detection sample either; it always wrote a single zero-length JPEG (I assume when ProcessAsync gets everything started). When I try to wire it up to the splitter similar to the video handler/encoder, I get MMAL errors (I think because the splitter output is pointed to the camera video port).
Failed attempts aside, is this combination possible with the current image encoder API? If not, could I perhaps modify MMALImageEncoder to write a JPG on demand? I haven't taken a look at the encoder for myself yet, as I clearly don't even understand how to get it working in the splitter pipeline -- which is probably another question I should ask: is the existing pipeline API compatible with this combination?