This repository has been archived by the owner on Feb 22, 2024. It is now read-only.

How Do I Pipe Video Into Emgu (OpenCV)? #117

Closed
d8ahazard opened this issue Jan 12, 2020 · 11 comments
@d8ahazard

Hey there!

So, I created a project in Python using OpenCV and PiCamera to capture video, process it, and do things with it in real time.

I'm now porting my app to C#, and looking at this for a replacement.

Could you point me to a sample where I can look at looping over the video and storing the frame in memory as an array for access by calls from another class?

Basically, I want to wrap the video receiver in a class, call a "start" function to initiate capture, and then reference a Frame attribute of that class whenever I want to grab the next video frame.
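Roughly, the shape I have in mind is something like this (just a sketch, the names are illustrative and not from any real library):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Hypothetical wrapper interface for what I'm describing.
    public interface IVideoSource : IDisposable
    {
        // Begin capturing frames until the token is cancelled.
        Task Start(CancellationToken ct);

        // The most recent complete frame, updated by the capture loop.
        byte[] Frame { get; }
    }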

@techyian
Owner

Hey,

It would be helpful to see how you're currently using picamera so I can give you a good alternative example. Are you using the camera's video port to capture image stills at a rapid rate (using an image encoder component)? Or are you using the video port to capture raw, unencoded video and passing that to OpenCV?

@d8ahazard
Author

d8ahazard commented Jan 12, 2020 via email

@d8ahazard
Author

d8ahazard commented Jan 12, 2020 via email

@techyian
Owner

I think the example you're looking for is here. However, for your project I don't think you'll need all four splitter ports, and you'll also need to subclass InMemoryCaptureHandler in order to hook into the Process method.

Something like the below might get you started; obviously, you'll need to add the relevant EmguCV bits:

public class EmguInMemoryCaptureHandler : InMemoryCaptureHandler, IVideoCaptureHandler
{
    public override void Process(byte[] data, bool eos)
    {
        // The InMemoryCaptureHandler parent class has a property called "WorkingData". 
        // It is your responsibility to look after the clearing of this property.

        // The "eos" parameter indicates whether the MMAL buffer has its EOS flag set; if so, the data currently
        // stored in the "WorkingData" property plus the data in the "data" parameter makes up a full image frame.

        // I suspect in here, you will want to have a separate thread which is responsible for sending data to EmguCV for processing?
        Console.WriteLine("I'm in here");
                        
        base.Process(data, eos);

        if (eos)
        {
            this.WorkingData.Clear();
            Console.WriteLine("I have a full frame. Clearing working data.");
        }
    }

    public void Split()
    {
        throw new NotImplementedException();
    }
}

public async Task TakeRawVideo()
{
    // By default, video resolution is set to 1920x1080 which will probably be too large for your project. Set as appropriate using MMALCameraConfig.VideoResolution
    // The default framerate is set to 30fps. You can see what "modes" the different cameras support by looking:
    // https://github.com/techyian/MMALSharp/wiki/OmniVision-OV5647-Camera-Module
    // https://github.com/techyian/MMALSharp/wiki/Sony-IMX219-Camera-Module            
    using (var vidCaptureHandler = new EmguInMemoryCaptureHandler())
    using (var splitter = new MMALSplitterComponent())
    using (var renderer = new MMALNullSinkComponent())
    {                
        cam.ConfigureCameraSettings();

        // We are instructing the splitter to do a format conversion to BGR24.
        var splitterPortConfig = new MMALPortConfig(MMALEncoding.BGR24, MMALEncoding.BGR24, 0, 0, null);

        // By default in MMALSharp, the Video port outputs using proprietary communication (Opaque) with a YUV420 pixel format.
        // Changes to this are done via MMALCameraConfig.VideoEncoding and MMALCameraConfig.VideoSubformat.                
        splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);

        // We then use the splitter config object we constructed earlier. We then tell this output port to use our capture handler to record data.
        splitter.ConfigureOutputPort<SplitterVideoPort>(0, splitterPortConfig, vidCaptureHandler);
        
        cam.Camera.PreviewPort.ConnectTo(renderer);
        cam.Camera.VideoPort.ConnectTo(splitter);

        // Camera warm up time
        await Task.Delay(2000).ConfigureAwait(false);

        // Record for 10 seconds. Increase as required.
        var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
                        
        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }
}

I've added some comments to hopefully clear up what's happening in this example. I hope that helps a bit? Let me know how you get on.

@d8ahazard
Author

d8ahazard commented Jan 12, 2020

So, I think I came up with something on my own that works, but haven't actually tried it yet. Care to take a look?

using System;
using System.Threading;
using System.Threading.Tasks;
using Emgu.CV;
using Emgu.CV.Structure;
using MMALSharp;
using MMALSharp.Common.Utility;
using MMALSharp.Handlers;

namespace HueDream.Models.DreamVision {
    public class PiVideoStream : IVideoStream, IDisposable {
        private MMALCamera cam;
        private Image<Bgr, byte> frame;

        public PiVideoStream() {
            cam = MMALCamera.Instance;
            MMALCameraConfig.VideoResolution = new Resolution(800, 600);
            cam.ConfigureCameraSettings();
        }

        public async Task Start(CancellationToken ct) {
            using (var vidCaptureHandler = new InMemoryCaptureHandler()) {
                frame = new Image<Bgr, byte>(800, 600);
                while (!ct.IsCancellationRequested) {
                    await cam.TakeVideo(vidCaptureHandler, CancellationToken.None);
                    var bytes = vidCaptureHandler.WorkingData;
                    frame.Bytes = bytes.ToArray();
                }
            }
            cam.Cleanup();
        }

        public Image<Bgr, byte> GetFrame() {
            return frame;
        }

        #region IDisposable Support
        private bool disposedValue = false;

        protected virtual void Dispose(bool disposing) {
            if (!disposedValue) {
                if (disposing) {
                    cam.Cleanup();
                }
                disposedValue = true;
            }
        }

        public void Dispose() {
            Dispose(true);
            GC.SuppressFinalize(this);
        }
        #endregion
    }
}

The idea is that I initialize the camera when I instantiate the class, then fire a loop that updates the value of "frame" with the current video frame, and call "GetFrame" as needed for the current frame data.

Do I need to incorporate the eos check in this somehow?

@d8ahazard
Author

Grr, sorry for the sloppy code...

@techyian
Owner

There are a couple of things I can see here.

  1. In your Start method, you're relying on TakeVideo() (source). When taking videos, this method, and the ProcessAsync method it subsequently calls, will not return until your cancellation token has expired. Since you're using the InMemoryCaptureHandler, you're soon going to starve your program of the RAM it's allocated.
  2. The TakeVideo() helper method captures H.264 video using a YUV420 pixel format. As you want raw image frames encoded as BGR24/RGB24, this method won't be suitable for use with EmguCV, so you'll need to set up a manual pipeline to capture raw image frames.

You need to hook into the callbacks made to the capture handler's Process method in order to receive the image data as it's being processed. As I mentioned in my previous comment, you will want to subclass the InMemoryCaptureHandler class and do your EmguCV processing in there. You could also make use of the callback handler functionality, but I think that's probably overkill here.

Could something like the below work? I haven't tested this code by the way, but I hope it will get you on the right track:

public class EmguEventArgs : EventArgs
{
    public byte[] ImageData { get; set; }
}

public class EmguInMemoryCaptureHandler : InMemoryCaptureHandler, IVideoCaptureHandler
{
    public event EventHandler<EmguEventArgs> MyEmguEvent;

    public override void Process(byte[] data, bool eos)
    {
        // The InMemoryCaptureHandler parent class has a property called "WorkingData". 
        // It is your responsibility to look after the clearing of this property.

        // The "eos" parameter indicates whether the MMAL buffer has its EOS flag set; if so, the data currently
        // stored in the "WorkingData" property plus the data in the "data" parameter makes up a full image frame.

        // I suspect in here, you will want to have a separate thread which is responsible for sending data to EmguCV for processing?
        Console.WriteLine("I'm in here");

        base.Process(data, eos);

        if (eos)
        {
            this.MyEmguEvent?.Invoke(this, new EmguEventArgs { ImageData = this.WorkingData.ToArray() });

            this.WorkingData.Clear();
            Console.WriteLine("I have a full frame. Clearing working data.");
        }
    }

    public void Split()
    {
        throw new NotImplementedException();
    }
}

public async Task TakeRawVideo()
{
    MMALCameraConfig.VideoResolution = new Resolution(800, 600);

    // By default, video resolution is set to 1920x1080 which will probably be too large for your project. Set as appropriate using MMALCameraConfig.VideoResolution
    // The default framerate is set to 30fps. You can see what "modes" the different cameras support by looking:
    // https://github.com/techyian/MMALSharp/wiki/OmniVision-OV5647-Camera-Module
    // https://github.com/techyian/MMALSharp/wiki/Sony-IMX219-Camera-Module            
    using (var vidCaptureHandler = new EmguInMemoryCaptureHandler())
    using (var splitter = new MMALSplitterComponent())
    using (var renderer = new MMALNullSinkComponent())
    {
        cam.ConfigureCameraSettings();
        
        // Register to the event.
        vidCaptureHandler.MyEmguEvent += this.OnEmguEventCallback;

        // We are instructing the splitter to do a format conversion to BGR24.
        var splitterPortConfig = new MMALPortConfig(MMALEncoding.BGR24, MMALEncoding.BGR24, 0, 0, null);

        // By default in MMALSharp, the Video port outputs using proprietary communication (Opaque) with a YUV420 pixel format.
        // Changes to this are done via MMALCameraConfig.VideoEncoding and MMALCameraConfig.VideoSubformat.                
        splitter.ConfigureInputPort(new MMALPortConfig(MMALEncoding.OPAQUE, MMALEncoding.I420), cam.Camera.VideoPort, null);

        // We then use the splitter config object we constructed earlier. We then tell this output port to use our capture handler to record data.
        splitter.ConfigureOutputPort<SplitterVideoPort>(0, splitterPortConfig, vidCaptureHandler);

        cam.Camera.PreviewPort.ConnectTo(renderer);
        cam.Camera.VideoPort.ConnectTo(splitter);

        // Camera warm up time
        await Task.Delay(2000).ConfigureAwait(false);

        // Record for 10 seconds. Increase as required.
        var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));

        await cam.ProcessAsync(cam.Camera.VideoPort, cts.Token);
    }
}

protected virtual void OnEmguEventCallback(object sender, EmguEventArgs args)
{
    Console.WriteLine("I'm in OnEmguEventCallback.");

    var frame = new Image<Bgr, byte>(800, 600);
    frame.Bytes = args.ImageData;

    // Do something with the image data...
}

@d8ahazard
Author

d8ahazard commented Jan 13, 2020 via email

@techyian
Owner

No, I'd recommend you use the dev branch as it's had a number of improvements added since v0.5.1. The general release for v0.6 is coming very soon. You can either clone the source, or grab it from MyGet.

@techyian
Owner

Hi, just checking: has this resolved your issue? Am I OK to close the ticket?

@d8ahazard
Author

d8ahazard commented Jan 18, 2020 via email
