New Kerberos Agent: Plans for USBCamera and Raspberry PI camera #35

Closed
cedricve opened this issue Jun 29, 2022 · 4 comments
Labels
help wanted Extra attention is needed

Comments

cedricve (Member) commented Jun 29, 2022

The new agent currently only supports a valid H264 stream (we plan to add H265 support). In the past (in the previous open-source agent) we supported native integrations for USB cameras (v4l2) and Raspberry Pi cameras (MMAL); however, it was very labour-intensive to keep those integrations fully working. On top of that, we needed a different code path for each camera type.

To overcome this and make it easier to support cameras, we are moving to H264/H265 support only. This means we will provide an RTSP wrapper for USB cameras and the Raspberry Pi camera. So far this isn't documented properly, but we plan to ship an additional Docker container that will run next to the Kerberos Agent container.

The current workaround looks like this:

USB Camera

We rely on the rtsp-simple-server project, which lets us run an RTSP server that we can publish a stream to and that the Kerberos Agent can subscribe to. You can run the RTSP server as follows.

docker run -d --network=host aler9/rtsp-simple-server

The above command runs the server on the host network as a daemon in the background. Next we can use ffmpeg to push /dev/video1 or /dev/videoX (you have to look up the device id) into the RTSP container. Note that if your camera isn't outputting H264 but JPEG/MJPEG, you'll need to add -c:v libx264, which transcodes the stream to H264; this requires some compute.

ffmpeg -f v4l2 -i /dev/video1 -preset ultrafast -b:v 600k  -c:v libx264 -f rtsp rtsp://localhost:8554/mystream
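To know whether the -c:v libx264 transcode is needed, you can first check which pixel formats your camera exposes. Below is a quick check using v4l2-ctl from the v4l-utils package; this tool isn't part of the original instructions, and /dev/video1 is just the example device used above.

v4l2-ctl -d /dev/video1 --list-formats-ext

If the output lists H264, the camera can already deliver a compressed stream and you may be able to skip the transcode; if it only lists formats such as MJPG or YUYV, keep the -c:v libx264 option.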

Launch the Kerberos Agent and set the RTSP connection to rtsp://localhost:8554/mystream. You should see the stream come up and connect to Kerberos Hub. You will notice that all features work properly, just as if you had connected a real IP camera.
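For completeness, a minimal sketch of launching the agent on the same host, assuming the kerberos/agent image from Docker Hub and its default web port (adjust names and ports to your own setup):

docker run -d --network=host --name agent kerberos/agent:latest

Running it with --network=host lets the agent reach the RTSP server on rtsp://localhost:8554/mystream, and the web interface should come up on port 80 of the host.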

Raspberry Pi camera

The process is similar to the one above; make sure you first run the rtsp-simple-server container.

docker run -d --network=host aler9/rtsp-simple-server

Instead of using the ffmpeg v4l2 input, we can now use libcamera-vid in a similar way. The benefit of the Raspberry Pi camera is that it already produces an H264 stream, so no transcoding is needed; the only thing left to do is send it into the RTSP server.

libcamera-vid -t 0 --inline -o - | ffmpeg -i pipe: -c copy -f rtsp rtsp://localhost:8554/mystream
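If you prefer to pin the resolution and frame rate instead of relying on the camera defaults, libcamera-vid accepts the usual --width/--height/--framerate flags; the values below are only an example:

libcamera-vid -t 0 --width 1280 --height 720 --framerate 25 --inline -o - | ffmpeg -i pipe: -c copy -f rtsp rtsp://localhost:8554/mystream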

Next steps (help needed)

The idea is to bundle this in a single container to which you can pass a few arguments: the type of camera you are targeting and the device ID (/dev/video0, /dev/video1, etc.). This container would then run next to the Kerberos Agent, which would read from the stream it publishes.

We have started the project here; feel free to collaborate on the Camera to RTSP repo.
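As a sketch of where this is heading, the pieces could be combined in a single docker-compose file; the camera-to-rtsp service name, image tag and device mapping below are hypothetical placeholders until the project settles on its interface:

version: "3"
services:
  rtsp-server:
    image: aler9/rtsp-simple-server
    network_mode: host
    restart: unless-stopped
  camera-to-rtsp:
    # hypothetical sidecar that pushes /dev/video1 to the RTSP server
    image: kerberos/camera-to-rtsp   # placeholder image name
    network_mode: host
    devices:
      - /dev/video1:/dev/video1
    restart: unless-stopped
  agent:
    image: kerberos/agent:latest
    network_mode: host
    restart: unless-stopped

The agent would then be configured with rtsp://localhost:8554/mystream, exactly as in the manual workaround above.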

Why this matters

Converting any camera to an RTSP stream will make our code base much simpler, and at the same time allow us to work on more important features. We will handle every camera as an RTSP camera by transforming its stream into a valid RTSP stream using the side container.

Once that is done we have a single integration with the same capabilities everywhere, such as H264/H265 encoded recordings and livestreaming over MQTT and WebRTC.

cedricve added the help wanted label on Jun 29, 2022
olokos (Contributor) commented Aug 15, 2022

Regarding RaspiCam native support:

Using MMAL for the Raspi camera isn't a very good idea, since it doesn't work at all on a 64-bit OS. It would require people to reinstall a 32-bit OS to use that feature, and would most likely be limited to 32-bit systems and raspicam/raspivid, but I might not be 100% correct on that one.

More details are available here:
raspberrypi/userland#688

On the other hand, MMAL is actually still used in the latest available kernel, so I might have mixed things up a little above.
raspberrypi/linux@a90c1b9

Since we're already using libcamera-vid in general, I think the right approach for the Raspicam would be to support the new camera stack, which is libcamera + libcamera-apps.

Interestingly enough, libcamera and libcamera-vid / libcamera-still are actually developed by two separate groups of people, both very active and kind, but those two projects shouldn't be considered a single piece.

libcamera-vid/still is actually part of libcamera-apps and aims to support the full feature set of raspivid/raspistill, but as open source. From my understanding it aims to fully support all of the Raspberry Pi Foundation cameras and other cameras built specifically for the Raspberry Pi.

libcamera, on the other hand, is mostly an interface between the kernel and the user/admin of the system, working at a much lower level in the code hierarchy, not using MMAL but a more modern approach.

libcamera can exist as a separate package.

libcamera-vid / libcamera-still require either a previously built and installed libcamera, or at least the prebuilt packaged version of libcamera, in order to function.
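For reference, on Raspberry Pi OS (Bullseye or later) the packaged route typically looks like the commands below; the libcamera-apps package name is an assumption based on the standard repositories and may differ on other distributions:

sudo apt update
sudo apt install libcamera-apps
libcamera-vid --list-cameras

The --list-cameras call is a quick way to verify that the attached camera module is detected before wiring it into an RTSP pipeline.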

Since there's already support for RTSP streams, I think it would be awesome to treat RTSP and the RaspiCam as separate ways of connecting a camera.

By using pure libcamera-vid with a RaspiCam option, we would skip the overhead of first copying the stream with FFMPEG and then the overhead of creating an RTSP stream out of it.

For all other cameras not directly connected to the Pi with a flex cable, RTSP and USB could still be used as they are now.

I'm not sure how easy or difficult it would be to implement this, but I think this would be the right way to go.

On top of that, if we had support for native libcamera-vid with its output being read directly by the Agent, omitting ffmpeg/RTSP, it would allow setting specific camera features inside the Agent.

Additional libcamera-vid parameters could then be defined in the web UI instead of having to edit the command directly, but that's just a nifty, beginner-friendly feature far into the future.

yllekz commented Sep 8, 2022

Watching this as I am running what is apparently considered the "old" Kerberos Docker container on a Pi with the Raspberry Pi Camera module. It all works fine but it hasn't been updated in a long, long time. Perhaps code from that project can come into this one?

cedricve (Member, Author) commented Sep 8, 2022

Well no, we will not port in native code, but will work with a sidecar proxy for non-RTSP cameras. The whole idea is that we will treat every camera as an RTSP stream, so that a consistent experience can be provided independently of the camera. This approach also helps us focus less on camera-specific things. We have a working approach, but it is a bit of a duct-tape method (run an additional container, copy the stream into the container using FFMPEG). We are working on this, as described above (@yllekz).

cedricve (Member, Author) commented:
As we speak, the issue has been resolved by @aler9 from the rtsp-simple-server project. We have now implemented, integrated, and documented this approach in the camera-to-rtsp repo.
