video: display: misc changes - draft #98513
base: main
Conversation
Force-pushed from 4588f92 to db5f566
danieldegrasse left a comment
I'm aware this is a draft - just thought I'd add some initial comments:
- I agree that making the video API work with buffer requests, rather than letting the application allocate buffers at a size it chooses, is likely better, but this will be a stable API change. The application should already be able to determine the size needed for each video buffer based on the format set, right? Or is there a reason I'm missing that we need this API change?
- I like the rework of the PXP into an M2M driver - we might want a sample of how to use this, though; I know that customers have requested this in the past. Another issue to figure out is how to integrate the PXP into LVGL's processing, right? Or would we expect an LVGL user to use the LVGL 2D accelerator support for the PXP rather than the M2M driver?
```diff
 	VIDEO_BUF_TYPE_INPUT = 1,
 	/** output buffer type */
-	VIDEO_BUF_TYPE_OUTPUT,
+	VIDEO_BUF_TYPE_OUTPUT = 2,
```
If we are going to limit this type to 1 byte, we should likely add a VIDEO_BUF_TYPE_MAX sentinel to this enum as well, to prevent developers from adding too many constants in the future.
Yes, we could add this as an extra-careful safeguard. But as with other enums in video.h (e.g. video_frmival_type), I think it is very unlikely another item will be added here: only these two types are needed to support m2m. Anyone adding a new type would need a very strong reason, so I don't think the safeguard is necessary.
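For illustration, the sentinel suggested above could look like the sketch below. The enum value names mirror video.h, but VIDEO_BUF_TYPE_MAX and the validation helper are hypothetical, not upstream code:

```c
#include <stdint.h>

/* Hypothetical sketch: a MAX sentinel guards the uint8_t storage used in
 * struct video_buffer. Values stay explicit so the wire/ABI meaning of 1
 * and 2 cannot drift. */
enum video_buf_type {
	VIDEO_BUF_TYPE_INPUT = 1,
	VIDEO_BUF_TYPE_OUTPUT = 2,
	VIDEO_BUF_TYPE_MAX, /* must stay last; not a real buffer type */
};

/* Helper a driver could use before trusting a uint8_t type field. */
static inline int video_buf_type_is_valid(uint8_t type)
{
	return type >= VIDEO_BUF_TYPE_INPUT && type < VIDEO_BUF_TYPE_MAX;
}
```

With the sentinel, range checks stay correct automatically if a type is ever added before VIDEO_BUF_TYPE_MAX.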
```diff
 	uint8_t *buffer;
-	/** index of the buffer, optionally set by the application */
-	uint8_t index;
+	/** index of the buffer in the video buffer pool */
```
Nit: if you want the space savings from struct packing here, the uint16_t should be placed next to the uint8_t, right? The uint8_t * should not be defined between those members.
Right, will do. I'm not sure whether the compiler can optimize the padding automatically, but it's better to order the members ourselves, thanks.
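The nit can be seen concretely in the sketch below. The field names are borrowed from struct video_buffer, but both layouts are illustrative; exact sizes depend on the ABI (on a typical 64-bit ABI the split layout is 24 bytes and the grouped one is 16):

```c
#include <stdint.h>

/* uint16_t separated from the uint8_t by a pointer: the pointer's
 * alignment forces padding after `type`, and more padding trails `index`
 * to pad the struct to pointer alignment. */
struct layout_split {
	uint8_t type;
	uint8_t *buffer;
	uint16_t index;
};

/* Small members grouped before the pointer: a single padding gap before
 * `buffer`, so the struct is never larger than the split layout. */
struct layout_grouped {
	uint8_t type;
	uint16_t index;
	uint8_t *buffer;
};
```

The compiler may not reorder members itself (C requires declaration order in memory), so grouping them by hand is the only portable way to get the savings.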
drivers/video/video_common.c (Outdated)
```c
	}
}

int video_enqueue(const struct device *dev, struct video_buffer *buf)
```
If we are enqueuing the internal buffer (and therefore assuming the video buffer is allocated from the buffer pool), why have the buffer as a parameter to the API at all? All we really care about is the buffer index, right? I guess we are doing this because the API is stable?
In fact, we need more than the index: for user-provided buffers, for example, we also need other fields such as ->data for the buffer address. It's true that we don't need all the fields of video_buffer and could define a smaller struct to pass as a parameter here, but for the sake of simplicity I reuse video_buffer, to avoid defining too many different structs.
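A minimal sketch of the enqueue-by-index idea discussed here: the framework copies the caller's (possibly stack-local) container into an internal pool entry selected by its index, so the caller's struct need not outlive the call. All names below are illustrative, not the actual Zephyr implementation:

```c
#include <errno.h>
#include <stdint.h>

#define VB_POOL_SIZE 4

/* Hypothetical stand-in for struct video_buffer: the index selects the
 * pool slot, but other fields (the data pointer, size) are needed too,
 * e.g. for user-provided buffers. */
struct vb_sketch {
	uint16_t index;  /* slot in the internal buffer pool */
	uint8_t *data;   /* user-provided buffer address */
	uint32_t size;
};

static struct vb_sketch vb_pool[VB_POOL_SIZE];

int vb_enqueue_sketch(const struct vb_sketch *buf)
{
	if (buf->index >= VB_POOL_SIZE) {
		return -EINVAL;
	}
	/* Copy rather than store the caller's pointer: the caller's
	 * container may be a local that goes out of scope after this call.
	 * The driver then queues &vb_pool[buf->index]. */
	vb_pool[buf->index] = *buf;
	return 0;
}
```

This is why a stack-allocated video_buffer in a helper function becomes safe: only the internal copy is queued.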
```c
 *
 * @retval 0 on success, otherwise a negative errno code.
 */
int video_request_buffers(struct video_buffer_request *const vbr);
```
I like the idea here, but this API is stable right? Is the reasoning for this change that now the video framework has full control over where buffers are allocated, and in particular control over their size?
video_request_buffers reflects exactly the situation where the application doesn't and shouldn't care about how the video buffers are allocated, because that is done by the framework. Another reason is that with this API we can request several buffers at a time (in batch). There are other advantages as well that I'd need to recall. This is also inspired by the well-established V4L2 API in Linux.
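A sketch of the batched-request idea, in the spirit of V4L2's VIDIOC_REQBUFS: one call allocates N buffers inside the framework, and the caller gets back only a granted count, later referring to buffers by index. The struct fields and function name below are assumptions for illustration and do not match the PR's struct video_buffer_request exactly:

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

#define POOL_CAPACITY 8

/* Hypothetical request descriptor: count is in/out, as in V4L2 REQBUFS. */
struct buf_request_sketch {
	size_t count;  /* in: buffers wanted; out: buffers granted */
	size_t size;   /* in: size of each buffer */
};

static void *req_pool[POOL_CAPACITY];

int request_buffers_sketch(struct buf_request_sketch *req)
{
	size_t granted = 0;

	if (req->count > POOL_CAPACITY) {
		req->count = POOL_CAPACITY; /* grant at most the capacity */
	}
	for (size_t i = 0; i < req->count; i++) {
		req_pool[i] = malloc(req->size);
		if (req_pool[i] == NULL) {
			break; /* grant what we managed to allocate */
		}
		granted++;
	}
	req->count = granted;
	return granted > 0 ? 0 : -ENOMEM;
}
```

The application never holds raw buffer pointers; after the request it enqueues and dequeues by index, which is what makes the internal-pool enqueue fix possible.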
Some misc fixes:
- Use uint16_t for video_buffer's index.
- Don't use enum types in struct definitions, as an enum is translated to int or another type depending on the compiler and platform. Use uint8_t to save some bytes.
- Other minor fixes.
Signed-off-by: Phi Bang Nguyen <[email protected]>
The mem_block structure and the video_block array are not needed. Drop them. Signed-off-by: Phi Bang Nguyen <[email protected]>
Currently, the video_enqueue() API enqueues the whole external video buffer container structure from the application, and the driver stores this container in its FIFO queue. While this works in simple applications where the enqueued video_buffer container persists for the whole program lifecycle, it does not work in situations where this container cannot be kept, e.g. when enqueuing a buffer inside a function: the local variable is destroyed when the function returns, and the buffer is no longer valid. Video buffers can be tracked via their indices in the buffer pool. Set the index field when buffers are allocated and enqueue the internal buffer rather than the external one to fix the issue. Signed-off-by: Phi Bang Nguyen <[email protected]>
Replace the video_buffer_alloc() and video_buffer_aligned_alloc() APIs with the new video_request_buffers() API. With this:
- We can request multiple buffers at once.
- The application does not need to keep the pointer to the empty allocated buffer, as it can enqueue the buffer via its index.
- The application may allocate its own buffers and provide them to the framework.
Signed-off-by: Phi Bang Nguyen <[email protected]>
Add support for buffers allocated outside the video buffer heap. Signed-off-by: Phi Bang Nguyen <[email protected]>
Move the buffer management code into video_buffer.c Signed-off-by: Phi Bang Nguyen <[email protected]>
Currently, buffer allocation uses the single alignment specified in CONFIG_VIDEO_BUFFER_POOL_ALIGN for every buffer. Add support for a different alignment in each buffer request. Signed-off-by: Phi Bang Nguyen <[email protected]>
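The per-request alignment idea can be sketched as below. C11 aligned_alloc stands in for the video heap allocator here (the names are illustrative, not the Zephyr API); the only subtlety is that aligned_alloc requires the size to be a multiple of the alignment, so the size is rounded up first:

```c
#include <stddef.h>
#include <stdlib.h>

/* Sketch: alignment becomes a per-request parameter instead of one global
 * CONFIG_VIDEO_BUFFER_POOL_ALIGN value. `align` must be a power of two. */
void *alloc_aligned_sketch(size_t align, size_t size)
{
	/* Round size up to a multiple of align, as aligned_alloc requires. */
	size = (size + align - 1) & ~(align - 1);
	return aligned_alloc(align, size);
}
```

A driver with, say, a 64-byte DMA burst requirement could then ask for that alignment in its own requests while other devices use a smaller one.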
Add a field to video_caps to expose buffer alignment requirement. Signed-off-by: Phi Bang Nguyen <[email protected]>
Add VIDEO_CID_ROTATE which is needed for some m2m devices. Signed-off-by: Phi Bang Nguyen <[email protected]>
Add VIDEO_PIX_FMT_YVU422M and VIDEO_PIX_FMT_YVU420M formats Signed-off-by: Phi Bang Nguyen <[email protected]>
Return an errno instead of __ASSERT because, in some cases, the program needs to continue even if a function fails. Furthermore, __ASSERT shouldn't be used in production. Signed-off-by: Phi Bang Nguyen <[email protected]>
For an m2m video device, the get_caps() API gives the supported caps of both the input and output sides. However, there is currently no guaranteed way to know the relationship between input and output caps. Introduce a new transform_caps() API that helps transform a video format cap from one end of an m2m device to the other. Signed-off-by: Phi Bang Nguyen <[email protected]>
Add PIXEL_FORMAT_XRGB_8888 and the corresponding panel format. Signed-off-by: Phi Bang Nguyen <[email protected]>
The eLCDIF supports XRGB format but the ARGB format was used instead. Fix it. Signed-off-by: Phi Bang Nguyen <[email protected]>
PxP is a distinct HW IP for image processing which is independent of the display controller. Decouple it from the elcdif driver. Signed-off-by: Phi Bang Nguyen <[email protected]>
Drop the old DMA PxP driver in favor of the new one in the video m2m category. Signed-off-by: Phi Bang Nguyen <[email protected]>
NXP's PxP is a 2D pixel engine capable of performing image transformations such as rotation, flipping, and color conversion. Make a driver for it in the video m2m category. Signed-off-by: Phi Bang Nguyen <[email protected]>
Expose the high frame rates first so that they are listed first and chosen as the default frame rate in some applications. Signed-off-by: Phi Bang Nguyen <[email protected]>
The VIDEO_PIX_FMT_XRGB32 corresponds to the PIXEL_FORMAT_XRGB_8888 display format. A fallback to PIXEL_FORMAT_ARGB_8888 could also be considered if supported. Signed-off-by: Phi Bang Nguyen <[email protected]>
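The mapping with fallback described in this commit message could look like the sketch below. The enum values and the boolean capability inputs are hypothetical stand-ins for the display driver's capability query, not Zephyr's actual API:

```c
#include <stdbool.h>

/* Hypothetical panel format choices for VIDEO_PIX_FMT_XRGB32 frames. */
enum panel_fmt_sketch {
	PANEL_FMT_XRGB_8888, /* exact match: X byte ignored */
	PANEL_FMT_ARGB_8888, /* fallback: alpha byte treated as opaque */
	PANEL_FMT_NONE,
};

enum panel_fmt_sketch pick_panel_fmt(bool xrgb_supported, bool argb_supported)
{
	if (xrgb_supported) {
		return PANEL_FMT_XRGB_8888; /* preferred mapping */
	}
	if (argb_supported) {
		return PANEL_FMT_ARGB_8888; /* acceptable fallback */
	}
	return PANEL_FMT_NONE; /* no compatible panel format */
}
```

The fallback is safe only when the X (don't-care) byte in the source frames happens to hold a value the panel interprets as fully opaque, which is why the commit treats it as optional.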
This is a temporary fix: swap rgb565 and bgr565. Signed-off-by: Phi Bang Nguyen <[email protected]>
Define pxp as a chosen video transform node. Signed-off-by: Phi Bang Nguyen <[email protected]>
Force-pushed from 970f95e to f9b05c6
Hi @danieldegrasse, sorry for the late reply. Yes, the difficult part is not the driver itself but how to integrate it into the different Zephyr samples as a separate m2m driver (in the current upstream it is coupled with the LCDIF to ease the work). For the video capture sample, I adapted the sample to achieve this, as can be seen in #92366. For the other samples (LVGL, display), it needs to be done similarly, but it will be more difficult, as it requires a generic abstraction that works for all boards, not only NXP's. For LVGL in particular, I think we should not go through LVGL, since LVGL bypasses the Zephyr layer and goes directly to the low-level driver; this is not good IMHO and can cause conflicts, as partly mentioned in this comment. With libMP, hopefully the PxP integration into the Zephyr samples will be easier.




This draft PR contains various changes to the video subsystem, and some to display, that are already or will be presented in other PRs.
The purpose of collecting them here is to serve as a dependency for the libMP PR.