Video 4 Zephyr - Video API, Drivers and Samples #17194
Conversation
All checks are passing now. checkpatch (informational only, not a failure). Tip: The bot edits this comment instead of posting a new one, so you can check the comment's history to see earlier messages.
Force-pushed from db3ab12 to 3720757, and later from 10f0fdd to b991ac7.
Hi all, any comments on this PR?
I think with the 4th of July weekend and summer holidays a lot of people are out. I'll try to have a look at this Monday myself, though.
This is the first time I've seen this API, so I don't think I can comment more until I actually try to port my camera module to it.
For me there are a few blank areas in this API definition. I will try to highlight them using the following example:
[ Camera ] => [ Software scaler/filter/color space converter ] => [ HW Encoder ] => Storage
Assumptions:
- All components in the brackets could (and should) be represented using this API.
Questions:
- Each device provides its own video frame allocator. Am I supposed to reallocate the frame and copy its contents manually between each processing step, or can I pass the frame along directly? (See the sketch after this list.)
- I assume that I enqueue an uncompressed frame to the HW encoder and dequeue the compressed one. Since most video compression algorithms do not support compression in place, the HW encoder will allocate the output frame. Who is responsible for freeing the input frame? If it is the user of the API, how can they know when the frame is no longer used? If it is the HW encoder driver, how can it know which driver owns the frame and which version of video_api_release_frame_t should be called?
- Some video encoders need some kind of out-of-band data describing properties of the compressed stream (H.264 is a good example). How are we supposed to retrieve this data from the encoder and pass it to the decoder? Will the API have a concept of a stream?
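To make the ownership question concrete, here is a rough sketch of the camera-to-encoder hand-off as I understand the proposed enqueue/dequeue model; the endpoint identifiers, function signatures, and buffer size are assumptions for illustration, not the actual API:

```c
#include <zephyr.h>
#include <device.h>
#include <drivers/video.h>

#define FRAME_SIZE (640 * 480 * 2) /* illustrative: VGA, 2 bytes per pixel */

/* Illustrative hand-off between two devices of the proposed API;
 * endpoint names and signatures are assumptions, not the final header. */
void capture_and_encode(struct device *camera_dev, struct device *encoder_dev)
{
	struct video_buffer *raw, *compressed;

	/* Buffer allocated through the generic allocator described in the PR. */
	raw = video_buffer_alloc(FRAME_SIZE);

	/* Hand the empty buffer to the camera, then get it back filled. */
	video_enqueue(camera_dev, VIDEO_EP_OUT, raw);
	video_dequeue(camera_dev, VIDEO_EP_OUT, &raw, K_FOREVER);

	/* Open question: may the same buffer go straight to the encoder input
	 * endpoint, or must its content be copied into an encoder-allocated
	 * buffer first? */
	video_enqueue(encoder_dev, VIDEO_EP_IN, raw);

	/* The compressed result is allocated by the encoder driver; it is
	 * unclear who releases 'raw' at this point, and how. */
	video_dequeue(encoder_dev, VIDEO_EP_OUT, &compressed, K_FOREVER);
}
```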
include/drivers/video.h
Is this API valid for YUV data with color subsampling?
From the API meeting:
MaureenHelm left a comment:
Have you tried sending video to the LCD display?
This needs to be updated
Thanks, done.
dts/arm/nxp/nxp_rt.dtsi
Stray change?
yes, fixed now
This generic video API can be used to capture/output video frames. Once a video buffer is enqueued to a video device endpoint, the device owns the buffer and can process it to capture (camera), output (disk, display), convert (HW encoder)... The user can then call dequeue to retrieve the processed buffer (the video driver ensures cache coherency). Once dequeued, the video buffer is owned by the user (e.g. for frame processing, display buffer update, write to media, etc...). For each video buffer, the user needs to allocate the associated frame buffer via video_buffer_alloc. The buffer format is defined by the video device endpoint configuration. The video device can be controlled (e.g. contrast, brightness, flip...) via controls. Signed-off-by: Loic Poulain <[email protected]>
Add support for CMOS Sensor Interface video driver. Signed-off-by: Loic Poulain <[email protected]>
Add CSI node to generic nxp rt dtsi. Add corresponding dts binding. Add CSI capability for rt MCUs. Signed-off-by: Loic Poulain <[email protected]>
This enables CSI node, and configures pinmux when driver is enabled. Signed-off-by: Loic Poulain <[email protected]>
Add myself as owner of drivers/video. Signed-off-by: Loic Poulain <[email protected]>
MT9M114 is a CMOS digital image sensor. Implement video interface. Only VGA (640x480) supported for now. Signed-off-by: Loic Poulain <[email protected]>
Sensor can be connected to the 24pin camera connector. Signed-off-by: Loic Poulain <[email protected]>
This is a virtual device generating a video pattern for testing purposes. It supports a colorbar pattern for now. Signed-off-by: Loic Poulain <[email protected]>
Simple video sample getting frames from video capture device. Tested with mimxrt1064_evk and MT9M114 sensor. Signed-off-by: Loic Poulain <[email protected]>
This sample captures frames from a video capture device (in any format) and sends them to its TCP client. Tested with: - mimxrt1064 + MT9M114 video sensor. - Gstreamer 1.8.3 running on host Signed-off-by: Loic Poulain <[email protected]>
Keep flat video driver directory for now. Signed-off-by: Loic Poulain <[email protected]>
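A minimal capture loop, following the buffer flow described in the API commit message above, might look like this; the function names, endpoint identifier, and struct fields are assumptions based on that description rather than the merged header:

```c
#include <zephyr.h>
#include <device.h>
#include <drivers/video.h>

#define NUM_BUFFERS 2

/* Sketch of a capture loop under the proposed API; names are assumptions. */
void capture_loop(struct device *video_dev)
{
	struct video_format fmt;
	struct video_buffer *vbuf;
	int i;

	/* The buffer size follows the endpoint's configured format. */
	video_get_format(video_dev, VIDEO_EP_OUT, &fmt);

	/* Allocate frame buffers and queue them to the capture endpoint. */
	for (i = 0; i < NUM_BUFFERS; i++) {
		vbuf = video_buffer_alloc(fmt.pitch * fmt.height);
		video_enqueue(video_dev, VIDEO_EP_OUT, vbuf);
	}

	video_stream_start(video_dev);

	while (1) {
		/* The driver owns queued buffers; dequeue returns ownership
		 * (and a cache-coherent view) to the application. */
		video_dequeue(video_dev, VIDEO_EP_OUT, &vbuf, K_FOREVER);

		/* ... process the frame here (display, store, send) ... */

		/* Return the buffer to the driver for the next capture. */
		video_enqueue(video_dev, VIDEO_EP_OUT, vbuf);
	}
}
```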
No, I do not have an LCD for the iMX RT, but AFAIR @JunYangNXP knows quite well how to do this. I'm not sure if he has time, but it would be a good end-to-end sample to add.
I'm wondering about the interaction between this video API and the existing display API. There is also a display driver for iMX RT.
It's certainly doable, yes; the APIs are quite similar and a glue driver is possible. I'll also investigate whether it would be worth converting the display API to the video one, but that is a long-term task. There are steps in between, like making both APIs share the same pixel format and fourcc definitions, etc.
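For reference, a fourcc is just four ASCII characters packed into a 32-bit value, so sharing definitions would mostly mean agreeing on one macro and one set of constants. A minimal sketch with made-up names (not the actual identifiers from either header):

```c
#include <stdint.h>

/* Illustrative only: pack four characters into a 32-bit pixel-format code. */
#define FOURCC(a, b, c, d)					\
	((uint32_t)(a) | ((uint32_t)(b) << 8) |			\
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Hypothetical shared pixel-format constants (V4L2-style codes). */
#define PIX_FMT_RGB565 FOURCC('R', 'G', 'B', 'P')
#define PIX_FMT_YUYV   FOURCC('Y', 'U', 'Y', 'V')
```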
Ok, I'm going to try getting one.
@vanwinkeljan could you please take a look at the interaction between this new API and the existing display API?
@carlescufi / @MaureenHelm It looks pretty straightforward to build a glue logic video endpoint on top of the existing display API.
Thanks for your review. Indeed, it's easy to feed the display API with video API buffer content. Let's see later if there is a need and a way to extend the video API to include display/graphics capabilities. For now we keep both, and I'll submit a sample using the video API for capture + colorspace conversion and the display API for rendering.
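As a rough sketch of that glue, a dequeued video buffer could be handed to the existing display API roughly as below; the buffer fields, endpoint identifier, and descriptor setup are assumptions, and both sides must already be configured for the same pixel format (e.g. RGB565):

```c
#include <zephyr.h>
#include <device.h>
#include <drivers/video.h>
#include <drivers/display.h>

/* Sketch: push one captured frame to the display. Field and endpoint names
 * are assumptions; the pixel formats must match on both sides. */
void show_frame(struct device *video_dev, struct device *display_dev,
		const struct video_format *fmt)
{
	struct video_buffer *vbuf;
	struct display_buffer_descriptor desc;

	/* Wait for a filled frame from the capture endpoint. */
	video_dequeue(video_dev, VIDEO_EP_OUT, &vbuf, K_FOREVER);

	desc.buf_size = vbuf->bytesused;
	desc.width = fmt->width;
	desc.height = fmt->height;
	desc.pitch = fmt->width;

	/* Blit the frame at the top-left corner of the display. */
	display_write(display_dev, 0, 0, &desc, vbuf->buffer);

	/* Hand the buffer back to the driver for the next capture. */
	video_enqueue(video_dev, VIDEO_EP_OUT, vbuf);
}
```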
Thank you @vanwinkeljan for the review.
stale, not involved in the project anymore
DEPENDS on HAL_NXP change: zephyrproject-rtos/hal_nxp#6
PRESENTATION: https://docs.google.com/presentation/d/1j44YHUqynN-Vw67NgHHx741yzLIbjbc76tcObi8J3Zc/edit?usp=sharing