Creating a frame-based pipeline "filter" #176
Annnnd I think everything I wrote above is probably wrong, lol. Apparently ...

How the test is set up:

using (var capture = new VideoStreamCaptureHandler(rawPathname))
using (var resizer = new MMALIspComponent())
using (var encoder = new MMALVideoEncoder())
{
var camConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420);
var rawConfig = new MMALPortConfig(MMALEncoding.RGB24, MMALEncoding.RGB24, width: 640, height: 480);
var h264Config = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: MMALVideoEncoder.MaxBitrateLevel4);
resizer.ConfigureInputPort(camConfig, cam.Camera.VideoPort, null);
encoder.ConfigureInputPort(rawConfig, null);
resizer.ConfigureOutputPort(rawConfig, null);
encoder.ConfigureOutputPort(h264Config, capture);
var connection = resizer.Outputs[0].ConnectTo(encoder, useCallback: true);
connection.RegisterCallbackHandler(new RawFrameCallbackFilter(connection));
cam.Camera.VideoPort.ConnectTo(resizer);
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(totalSeconds));
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}

Changes to that if-else block in MMALSharp sans timing and counters (which didn't break the resizer, I noticed):

protected virtual int NativeConnectionCallback(MMAL_CONNECTION_T* connection)
{
if (MMALCameraConfig.Debug)
{
MMALLog.Logger.LogDebug("Inside native connection callback");
}
var queue = new MMALQueueImpl(connection->Queue);
var bufferImpl = queue.GetBuffer();
if (bufferImpl.CheckState())
{
if (MMALCameraConfig.Debug)
{
bufferImpl.PrintProperties();
}
if (bufferImpl.Length > 0)
{
this.CallbackHandler.InputCallback(bufferImpl);
}
this.InputPort.SendBuffer(bufferImpl);
}
else
{
MMALLog.Logger.LogInformation("Connection callback input buffer could not be obtained");
}
queue = new MMALQueueImpl(connection->Pool->Queue);
bufferImpl = queue.GetBuffer();
if (bufferImpl.CheckState())
{
if (MMALCameraConfig.Debug)
{
bufferImpl.PrintProperties();
}
if (bufferImpl.Length > 0)
{
this.CallbackHandler.OutputCallback(bufferImpl);
}
this.OutputPort.SendBuffer(bufferImpl);
}
else
{
MMALLog.Logger.LogInformation("Connection callback output buffer could not be obtained");
}
return (int)connection->Flags;
}

And finally, my do-nothing Callback handler:

public class RawFrameCallbackFilter : ConnectionCallbackHandler
{
public RawFrameCallbackFilter(IConnection connection)
: base(connection)
{ }
public override void InputCallback(IBuffer buffer)
{
base.InputCallback(buffer);
}
public override void OutputCallback(IBuffer buffer)
{
base.OutputCallback(buffer);
}
}

I'll keep poking at it, but as we say, ELI5 ... Explain Like I'm 5. 😁
Backtracking from that test I tried, I see the resizer (well, I used ISP) by default sets up a general ... It doesn't look like ...
To be clear, in the Connection callback the ...
This is confusing me a little. You are mentioning ... Are you saying that the connection callback handler you configured didn't get called in its ...
When you ask for connection callbacks to be enabled the following should happen:
Ah! That makes sense. Yes it seems weird to be sending data back to something's output port. So in order to process raw full frames, would my usage pattern then be:
I'm sure it isn't you who is confused! Sorry about that, I see what you mean, I missed the output-vs-connection difference in the method names. I was just trying to see what happened after
I don't think it was; the input-calls counter is incremented just before invoking InputCallback:

if (bufferImpl.Length > 0)
{
inputcalls++;
timer.Restart();
this.CallbackHandler.InputCallback(bufferImpl);
timer.Stop();
elapsed += timer.ElapsedMilliseconds;
}
So there's no way to buffer until you have a full frame anywhere but the very end of the pipeline? Will Bad Things Happen if I just stall (so to speak) until I've collected a full frame? (If that's even possible... empty the buffer?)
Figured it out below...
Ah, now I see, I found the queues and pools docs. The connection owns a pool of buffers, and also a queue of populated buffers received from upstream. When the populated-buffer queue is empty, a new empty buffer from the local pool is passed back upstream to the upstream's (confusingly-named) output port. (Realistically I'm guessing 640x480x24bpp is a full frame per buffer, since my tests made it look like it flip-flops between input and output, but it's obviously unwise to code to that assumption or they wouldn't need a queue mechanism.)

My initial thought is that the callback handler could simply return a flag about whether to pass the buffer to the downstream port, but since that is also buffer-based, it looks like the handler would have to store copies of the buffers (I don't think it would be safe to simply accumulate references to the actual buffers over multiple passes?), then at frame-end assemble them into the bitmap, process it, then disassemble back into buffer-sized copies, then actually output buffers to the downstream.

So maybe what gets returned from the callback handler is either null (indicating there's nothing to pass downstream yet), or an array of (copied?) buffers including headers, then ... Sounds too easy. And there is also the question of why my ...
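The copy-and-reassemble idea could be sketched in plain C# like this. Note this is only the accumulation logic; FrameAccumulator is a hypothetical name, not MMALSharp API, and a real handler would feed it payload bytes copied out of each IBuffer:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the "copy buffers until end-of-frame" idea:
// store a copy of each payload, and only hand back an assembled frame
// once enough bytes have arrived. Not part of MMALSharp.
public class FrameAccumulator
{
    private readonly List<byte[]> _chunks = new List<byte[]>();
    private readonly int _frameLength;
    private int _accumulated;

    public FrameAccumulator(int frameLength) => _frameLength = frameLength;

    // Returns the assembled frame when complete, otherwise null
    // (meaning: nothing to pass downstream yet).
    public byte[] Add(byte[] payload, int length)
    {
        var copy = new byte[length];
        Array.Copy(payload, copy, length);
        _chunks.Add(copy);
        _accumulated += length;

        if (_accumulated < _frameLength)
            return null;

        var frame = new byte[_accumulated];
        int offset = 0;
        foreach (var chunk in _chunks)
        {
            Array.Copy(chunk, 0, frame, offset, chunk.Length);
            offset += chunk.Length;
        }
        _chunks.Clear();
        _accumulated = 0;
        return frame;
    }
}
```

The null-or-frame return value mirrors the "either null or an array of buffers" idea above, just at the byte level rather than the buffer-header level.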
I've just copied your code and it's because you're using the ...
I suspected that was the case. How do you get the resizer to work? Whatever I try fails to init:
It accepts YUV420, RGB16 or RGBA32 pixel formats, try changing that - not sure how an RGB16 pixel format would fit with the motion detection work?
Interesting. It should work exactly the same with RGBA32, the buffers and stride will just be larger. The alpha channel is irrelevant -- seems weird that it's even an option, it isn't like the camera can do anything with transparency... (magic!). Generally though, motion detection / analysis aside, this should lead to general purpose in-line video filtering, if it works. Thanks!
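For the "buffers and stride will just be larger" point, the raw frame size can be estimated up front. This is a rough sketch assuming MMAL's usual padding of width to a 32-pixel boundary and height to a 16-line boundary (worth verifying against the actual port settings); FrameMetrics is an illustrative name, not MMALSharp API:

```csharp
using System;

// Rough raw-frame arithmetic: stride from the padded width, total size
// from the padded height. Alignment values are the typical MMAL ones
// (32-pixel width, 16-line height) and should be verified per port.
public static class FrameMetrics
{
    public static int Align(int value, int alignment) =>
        ((value + alignment - 1) / alignment) * alignment;

    public static int FrameBytes(int width, int height, int bytesPerPixel)
    {
        int stride = Align(width, 32) * bytesPerPixel;  // bytes per padded row
        int lines = Align(height, 16);                  // padded row count
        return stride * lines;
    }
}
```

At 640x480 both dimensions are already aligned, so RGBA32 is simply a third larger than RGB24; at 1920x1080 the height pads up to 1088 lines.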
ISP does work -- I was misinterpreting the output of the connection ... But I've had difficulty getting the resizer to work in the past -- I assume RGBA32 is what config calls RGBA. It reports an error, but then it works anyway...?
using (var capture = new VideoStreamCaptureHandler(rawPathname))
//using (var resizer = new MMALIspComponent())
using (var resizer = new MMALResizerComponent())
using (var encoder = new MMALVideoEncoder())
{
var camConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420);
var rawConfig = new MMALPortConfig(MMALEncoding.RGBA, MMALEncoding.RGBA, width: 640, height: 480);
var h264Config = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: MMALVideoEncoder.MaxBitrateLevel4);
resizer.ConfigureInputPort(camConfig, cam.Camera.VideoPort, null);
encoder.ConfigureInputPort(rawConfig, null);
resizer.ConfigureOutputPort(rawConfig, null);
encoder.ConfigureOutputPort(h264Config, capture);
var connection = resizer.Outputs[0].ConnectTo(encoder, useCallback: true);
connection.RegisterCallbackHandler(new RawFrameCallbackFilter(connection));
cam.Camera.VideoPort.ConnectTo(resizer);
Kind of interesting, I bumped the resolution up to 1920x1080 hoping that each frame would require more than 1 buffer -- it did, and attempting to store the real buffers (rather than copies) yields an out of memory error. (I did try properly calling the ref-count acquire/release methods.) I suspected it wouldn't work but it was the fastest and easiest thing to try -- but I didn't expect an out of memory error. Next I'll try copying them, although my guess is that it has to be passed to an input port to be de-allocated back to the pool queue, which means this won't work (which is a shame, this would be a really easy way to add in-line video FX, as well as the CCTV stuff I'm trying to do).
How are you storing the buffers? As an array of ...
I'm not sure if a combination of these two things can help progress it further but I'm just thinking out loud here more than anything. You need to be careful with incrementing the reference count on buffers though; I'm hoping that ...
Yes, just a ... I know ref counts are sensitive; I actually found MMALSharp through your conversations on the Raspberry Pi forum back when you were starting work on the library. I'm using those methods. Specifically:
But it doesn't work, so I'll try copying:
Another idea was to set up an additional "processed buffers" queue and push into that, and let ... I'm probably about to seriously anger the Hardware Gods.
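The "processed buffers" queue idea could be sketched with a plain .NET queue (not MMAL's native queue mechanism; ProcessedFrameQueue is a hypothetical name): the filter pushes completed frames in, and whatever feeds the downstream port drains it.

```csharp
using System.Collections.Concurrent;

// Sketch of the "processed buffers" queue idea: completed frames are
// pushed here by the filter and drained by a downstream feeder. Plain
// .NET ConcurrentQueue for illustration only -- MMAL's native queues
// are a separate, unmanaged mechanism.
public class ProcessedFrameQueue
{
    private readonly ConcurrentQueue<byte[]> _frames = new ConcurrentQueue<byte[]>();

    public void Push(byte[] frame) => _frames.Enqueue(frame);

    // Returns false when no processed frame is available yet.
    public bool TryDequeue(out byte[] frame) => _frames.TryDequeue(out frame);
}
```

A thread-safe queue matters here because the callback side and the feeding side would likely run on different threads.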
Buffer count is handled by the Port's
By all means, create your own ... You're in charge of your new Pool and need to make sure you destroy it on tear-down (this probably calls for ...)
I'm constantly amazed at the amount of work you've put into this library!
Thanks Jon. The underlying MMAL library is a pretty awesome bit of kit. I think the stuff you're working on will really help people wanting to push it to its limits.
One thing I don't understand is how my list stored two buffers -- I see in the debug log that the pool does default to just one buffer ( ... ).

Regardless, I think I've hit a dead-end for reasonably handling raw full-frame data via Connection callbacks. I can't "manually" restore a copy of ...
I do see that it's possible to calculate frame metrics (stride etc.) from the outset, so I could still (theoretically) use a buffer-only Connection callback to output my motion detection analysis (or even do actual motion detection), although the parallel processing aspect would be a migraine-inducing problem to solve (I assume the partial frame data in the buffers isn't even guaranteed to have full rows). There would be quite a lot of state to track between invocations, too, I think. But that's edging into "Hold my beer, watch this," unnecessary-complexity territory. I think I'll set that idea aside.

I've spent the past couple hours studying ...

I keep coming back to components like encoders -- something between input and output ports. But from what I read in the component list in the MMAL docs and the way the constructor for ...
I've been working on an ... And I'm trying to make it generalized so that it could be used for FX filtering. I'm still early in this work, not sure what would happen with ...
The ref-count behavior of ...
I assume they do that because both copies point to the same payload data, but I don't see how it could be useful (natively, but also in MMALSharp). If you already have the source buffer, why not use that reference? Seems weird to me. Oh well, learned a lot, in any case.

I want to try using the ...

I'm also wondering whether I should temporarily disable the component's output port(s) while I accumulate frame buffers. The docs for port enable/disable also say those calls will release and reallocate pools on both sides of the connection. I guess I'd also have to override the enable method to re-resize (ha) my pool (in case some other process disables my frame buffering thing, and assuming I can't figure out how to change ...).

Oh ... of course ... buffer number is in the constructor. Alrighty then...
I wasn't aware of that. I'm not sure how fast it would be to retrieve the data from one buffer via the ...
Hmm, I'm not sure about this. It would also disable the input port of the connected component. I don't know what kind of performance overhead is involved in disabling/enabling ports.
That's what I'm doing, although there is no temporary buffer. I set aside each buffer in a local list until I have a full frame, then I ...
It's not for performance, it's because one frame at higher resolutions may span multiple buffers. When I tried this yesterday without increasing the pool size, my local buffer caching stored two copies of the same buffer. So unless I misunderstand how this works, I need a pool that is at least one buffer larger than whatever it takes to represent a full frame.
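The pool-size reasoning here is just ceiling division plus one spare, so the pipeline can keep moving while a frame's worth of buffers is held. A small sketch (PoolSizing is an illustrative name, not MMALSharp API; the "+1 spare" follows the reasoning in this thread, not anything the library enforces):

```csharp
using System;

// If one frame spans several buffers, the pool needs enough buffer
// headers for a whole frame, plus one spare so upstream can keep
// producing while the frame is held. Illustrative only.
public static class PoolSizing
{
    public static int BuffersPerFrame(int frameBytes, int bufferBytes) =>
        (frameBytes + bufferBytes - 1) / bufferBytes;  // ceiling division

    public static int MinPoolSize(int frameBytes, int bufferBytes) =>
        BuffersPerFrame(frameBytes, bufferBytes) + 1;
}
```

With the roughly 8 buffers per 1080p frame mentioned earlier, this would suggest a pool of at least 9.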
Great point. At the frequency I'm talking about (per frame!) I'm sure this would be an extremely bad idea. I'm pretty close to being able to run another test. Fingers crossed!
Ian, do you know how the port ... Right now I'm throwing if the provided config has fewer than I need. I haven't done any exhaustive testing but the couple of checks I did always defaulted to a pool size of 1. I'm also wondering if my minimum ought to be a multiple of the default minimum.

Edit - thinking about it more, I suppose it's probably calculated based on resolution. But I should make sure it works at all before worrying about that. :)
Typically ...
Thank you for clarifying, that's very helpful. Is the hardware pipeline ever async? Even at 1920 x 1080 (mode 2, v1 camera), all of those buffer parameters (even ...
The other thing I can't figure out is why my overrides aren't actually executing. I have log messages in everything. I see the constructor called, I see configuration called -- then nothing. I get video but it's like my port is bypassed completely. I know the component is using my port; just before I start processing I dumped the ...

But hey, it doesn't crash! (yet)

I'm using this, or the splitter when I test at 1920x1080, or even a splitter outputting to a resizer then h.264 encoder, and also directly to a full-size h.264 encoder -- it all works, but none of my code seems to run.

using (var capture = new VideoStreamCaptureHandler(h264Pathname))
using (var resizer = new MMALIspComponent())
using (var encoder = new MMALVideoEncoder())
{
var camConfig = new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420);
var rawConfig = new MMALPortConfig(MMALEncoding.RGB24, MMALEncoding.RGB24, width: 640, height: 480);
var h264Config = new MMALPortConfig(MMALEncoding.H264, MMALEncoding.I420, quality: 10, bitrate: MMALVideoEncoder.MaxBitrateLevel4);
resizer.ConfigureInputPort(camConfig, cam.Camera.VideoPort, null);
encoder.ConfigureInputPort<RawFrameInputPort>(rawConfig, null); // <---- my input port
resizer.ConfigureOutputPort(rawConfig, null);
encoder.ConfigureOutputPort(h264Config, capture);
cam.Camera.VideoPort.ConnectTo(resizer);
resizer.Outputs[0].ConnectTo(encoder);
await Task.Delay(2000);
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(totalSeconds));
await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
}
cam.Cleanup();
I'm guessing you're about to tell me I should have based this on ...
I've been doing some reading; apparently ... I don't think it's necessarily impossible (replicate doesn't make new copies of the payload data, so it's not exponential growth of memory usage) but the complexity of configuring it correctly is starting to look unreasonable.

I believe a component like the h.264 encoder (which requires two full frames to perform P-frame compression) simply releases input port buffers upon arrival, and does not generate output port buffers until it has finished either accumulating an I-frame or compressing a P-frame.

Would you say my assumption is correct that a "software component" is not possible? Something which owns an input and output port, but is not an IL component? As written, the pipeline always maps a component to one of the "named" IL components, but I'm not clear if that's strictly necessary.

(Edit: It occurs to me that I don't see any MMAL functions to init a buffer -- by which I mean setting all the various header fields -- so I suspect that's the "magic" that happens within a hardware-based component, which makes a "software component" an unlikely option...?)
Currently in MMALSharp, the only time you will receive callbacks to an input port is if you manually enabled it yourself and are supplying data to it manually, similar to the Standalone API. I have spoken previously about connection tunnelling and how this changes the behaviour of port callbacks, please see here regarding "Tunnelling connections" for more info, but basically it's used to efficiently communicate buffers in the pipeline without them being returned to the "client", i.e. us. When connection tunnelling is enabled (which is the default in MMALSharp), you will just receive your buffers on the final output port in your pipeline. I've tried turning tunnelling off but it doesn't work, so I need to investigate that further and #113 will track that. This is why I felt Connection callbacks were suitable for your requirements but I think you've since dismissed this as being plausible, however I'd like to quote a previous message:
All Managed MMAL classes have a ...
I'm not going to write this off completely as being possible. If we can get the library working without connection tunnelling then we should receive callbacks on each component's input and output ports, you could then detect if the current component is connected to a "software component" and take it from there. I will see if I can make any progress on #113. |
Very interesting.
Essentially at one of these points (in a connection or as a new ...). But as I noted, I think the ...
At that point, my code would store the buffer, there would really be five copies of the buffer headers, and then it would start again until my code receives an end-of-frame buffer. But the point is that each call to replicate keeps the source buffer ref-counted even if the original "owner" releases it. So when my code stores a buffer waiting for end-of-frame, all of those upstream components must have additional buffers in their pool to start that process again. For my 1080p test, which generally required about 8 buffers for a full frame, that implies the camera, two connections, two input ports, and one output port all need pools of 8 buffers. That's what I meant about the configuration becoming unwieldy. But a "software component" could change that by simply releasing the input port buffer when it is stored, and generating new output port buffers, then using the ...
I've already tossed out my changes (I'm trying to accomplish my goal with ...
I have high hopes that we (er, you 😄 ) can figure out the "software component" angle. Today I received a new wide-angle camera module. Image-distortion correction looks to be a relatively fast linear algorithm that might be a (tricky) parallel-processing candidate and it would be great as an early pipeline component. |
In #172 you suggested that a Connection callback handler might allow me to write a "filter" in the middle of the pipeline, such as:
camera -> resizer -> (FILTER) -> h.264 encoder -> video capture handler -> h.264 file
Given your warning this is new and somewhat uncharted territory, I thought I'd open a dedicated issue.
You mentioned that MMALConnectionImpl.NativeConnectionCallback will only invoke InputCallback because of that if-else block. Could the fix be as simple as pulling the output callback out of the else block? In other words, should it check the input queue and invoke the callback if there is data, then do the same for the output queue and callback?

Assuming both input and output are being invoked, is this basically the flow for a derived Callback handler?

- InputCallback to be invoked
- GetBuffer data, read, AssertProperty etc.
- OutputCallback to be invoked
- ReadIntoBuffer to populate any locally stored buffer data

I suppose I'm a little confused about what the port SendBuffer calls do with the buffer data in NativeConnectionCallback. Is that because this type of component can have multiple inputs and outputs? (Can anything have multiple inputs? I don't think I've seen that in any examples.)

Since you mentioned you aren't sure of the exact performance hit, I thought I would start with a simple pass-through and hack in something to display the raw ms/frame overhead at the end of processing. I'm going to try the above while waiting for your feedback. If that doesn't burn the house down, I'll try something more interesting.
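The pass-through overhead measurement could be as simple as wrapping each callback invocation in a Stopwatch and reporting the average at the end. A minimal sketch (OverheadMeter is a hypothetical helper, not MMALSharp API; a real handler would call Measure around its InputCallback body):

```csharp
using System;
using System.Diagnostics;

// Hypothetical overhead meter for a pass-through filter test: wraps each
// callback invocation and reports average milliseconds per call.
public class OverheadMeter
{
    private readonly Stopwatch _timer = new Stopwatch();
    private long _calls;

    public long Calls => _calls;
    public double TotalMs => _timer.Elapsed.TotalMilliseconds;
    public double AverageMs => _calls == 0 ? 0 : TotalMs / _calls;

    public void Measure(Action callback)
    {
        _calls++;
        _timer.Start();   // Stopwatch accumulates across Start/Stop pairs
        callback();
        _timer.Stop();
    }
}
```

Stopwatch accumulates elapsed time across Start/Stop pairs, so one instance can total the whole run and divide by the call count at teardown.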