
Conversation

avolmat-st

@avolmat-st avolmat-st commented Oct 12, 2025

Allow creating a pipeline as follows:
camera receiver -> encoder -> uvc

I am posting this PR while there are still some points to improve (cf. the hardcoded items detailed below) in order to get some first feedback. Since this depends on the UVC PR (PR #93192) and the DCMIPP UVC PR (PR #94562), there are many commits in this PR. However, only the LAST 8 COMMITS are relevant for this PR.

If the chosen zephyr,videoenc is available, the sample will pipe the camera receiver to the encoder and then to the UVC device, instead of piping the camera receiver directly to the UVC device.

In order to keep the change as simple as possible, the source device of the UVC device is renamed from video_dev to uvc_src_dev since, depending on the configuration, the UVC source might be either the video_dev or the encoder_dev.
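
For illustration, a minimal sketch of that indirection using the standard Zephyr devicetree macros (variable names are illustrative, not necessarily the sample's exact code):

#if DT_HAS_CHOSEN(zephyr_videoenc)
	/* camera receiver -> encoder -> UVC */
	const struct device *encoder_dev = DEVICE_DT_GET(DT_CHOSEN(zephyr_videoenc));
	const struct device *uvc_src_dev = encoder_dev;
#else
	/* camera receiver -> UVC */
	const struct device *uvc_src_dev = video_dev;
#endif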

The current implementation has several points hardcoded for the time being (a sketch of both points follows this list):
1. the intermediate pixel format between the camera receiver and the encoder
is set to NV12. This is temporary until a proper analysis of the video_dev
caps and the encoder caps is done, allowing the common format of the two
devices to be selected.
2. the encoder device is assumed NOT to perform any resolution change,
so the encoder output resolution is directly based on the camera receiver
resolution. The formats exposed by UVC are therefore the encoder output
pixel format combined with the camera receiver resolutions.
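
As a sketch of both points, assuming the current Zephyr video API where the buffer type is carried inside struct video_format (video_dev and encoder_dev are the devices named above; error handling trimmed):

	struct video_format fmt;
	int ret;

	/* Point 1: force the camera receiver output format to NV12 (hardcoded for now). */
	fmt.type = VIDEO_BUF_TYPE_OUTPUT;
	ret = video_get_format(video_dev, &fmt);
	if (ret == 0) {
		fmt.pixelformat = VIDEO_PIX_FMT_NV12;
		ret = video_set_format(video_dev, &fmt);
	}

	/* Point 2: apply the same pixel format and resolution on the encoder input,
	 * i.e. the encoder is assumed not to scale.
	 */
	if (ret == 0) {
		fmt.type = VIDEO_BUF_TYPE_INPUT;
		ret = video_set_format(encoder_dev, &fmt);
	}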

This has been tested using the STM32N6-DK and the JPEG codec, leading to the following pipeline:
IMX335 -> CSI/DCMIPP -> JPEG -> UVC

compiled via:

west build -p -b stm32n6570_dk//sb --shield st_b_cams_imx_mb1854 samples/subsys/usb/uvc/ -DCONFIG_USE_STM32_MW_ISP=y -DFILE_SUFFIX=jpeg

It has also been tested with the VENC codec, leading to the following pipeline:
IMX335 -> CSI/DCMIPP -> VENC -> UVC

compiled via:

west build -p -b stm32n6570_dk//sb --shield st_b_cams_imx_mb1854 samples/subsys/usb/uvc/ -DCONFIG_USE_STM32_MW_ISP=y -DFILE_SUFFIX=venc
The stream can be displayed on a Linux machine via the following gst-launch command:
gst-launch-1.0 v4l2src device=/dev/videoXX ! 'video/x-h264,width=1920,height=1080' ! h264parse ! avdec_h264 ! videoconvert ! autovideosink

@erwango
Member

erwango commented Oct 13, 2025

Moving the assignee to the Video subsystem maintainer, as it seems more appropriate given the content of the RFC.

@josuah josuah added the DNM This PR should not be merged (Do Not Merge) label Oct 14, 2025
@josuah
Contributor

josuah commented Oct 14, 2025

Adding a DNM flag for the PR dependencies.
Please, anyone, feel free to remove it once the dependencies are merged and this PR has been rebased.

josuah
josuah previously approved these changes Oct 14, 2025
Contributor

@josuah josuah left a comment

Thanks for this submission. It makes sense to me as it is, and AFAICT the FIXMEs are for planning future infrastructure of the video area, not for this particular video sample.

Comment on lines 250 to 369
	ret = video_stream_start(video_dev, VIDEO_BUF_TYPE_OUTPUT);
	if (ret != 0) {
		LOG_ERR("Failed to start %s", video_dev->name);
		return ret;
	}
	ret = video_stream_start(uvc_src_dev, VIDEO_BUF_TYPE_INPUT);
	if (ret != 0) {
		LOG_ERR("Failed to start %s", uvc_src_dev->name);
		return ret;
	}
#endif
Contributor


How about using videoenc_dev for both here?

Then if a videoscaler_dev is introduced, this becomes another #if DT_HAS_CHOSEN(zephyr_videoscaler) that uses the associated videoscaler_dev for both input/output?
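
For illustration, that suggestion could look roughly like this (videoenc_dev is the name used in this thread, not the sample's current variable; which queue is INPUT vs OUTPUT for the M2M device is left as in the surrounding code):

#if DT_HAS_CHOSEN(zephyr_videoenc)
	/* Start both queues of the M2M encoder device. */
	ret = video_stream_start(videoenc_dev, VIDEO_BUF_TYPE_INPUT);
	if (ret != 0) {
		LOG_ERR("Failed to start %s", videoenc_dev->name);
		return ret;
	}

	ret = video_stream_start(videoenc_dev, VIDEO_BUF_TYPE_OUTPUT);
	if (ret != 0) {
		LOG_ERR("Failed to start %s", videoenc_dev->name);
		return ret;
	}
#endif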

Contributor


You have to find a better way to handle it; using another #if DT_HAS_CHOSEN() is not acceptable to me.

Author


reworked

#if DT_HAS_CHOSEN(zephyr_videoenc)
	vbuf_enc_in = &(struct video_buffer){.type = VIDEO_BUF_TYPE_OUTPUT};

	if (video_dequeue(video_dev, &vbuf_enc_in, K_NO_WAIT) == 0) {
Contributor


This works well for this sample but this is where things will start to become difficult.

One way to solve this may be to have a configuration table at the top:

/* Each stage's _dst variable is assumed to start out pointing at that stage's
 * own device (as video_dst does with video_dev), so *prev always names the last
 * device added to the chain: each new stage reads it as its source, overwrites
 * it to record the upstream device's destination, then moves prev to its own
 * _dst slot.
 */
video_dst = video_dev;
prev = &video_dst;

#if DT_HAS_CHOSEN(zephyr_videoisp)
videoisp_src = *prev;
*prev = videoisp_dev;
prev = &videoisp_dst;
#endif

#if DT_HAS_CHOSEN(zephyr_videoscaler)
videoscaler_src = *prev;
*prev = videoscaler_dev;
prev = &videoscaler_dst;
#endif

#if DT_HAS_CHOSEN(zephyr_videoenc)
videoenc_src = *prev;
*prev = videoenc_dev;
prev = &videoenc_dst;
#endif

videousb_src = *prev;
*prev = videousb_dev;

Then in each block, it is possible to just use _src and _dst.

Of course a component-based system is better, but this would be a different sample for libMP. :]
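
For illustration, one such block written against the generic names from the table above, following the pattern of the snippet quoted earlier (purely a sketch):

#if DT_HAS_CHOSEN(zephyr_videoenc)
	vbuf = &(struct video_buffer){.type = VIDEO_BUF_TYPE_OUTPUT};

	if (video_dequeue(videoenc_src, &vbuf, K_NO_WAIT) == 0) {
		/* hand the captured buffer over to the encoder input queue */
		vbuf->type = VIDEO_BUF_TYPE_INPUT;
		video_enqueue(videoenc_dev, vbuf);
	}
#endif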

Contributor


This is just for discussion and not necessarily to implement in this PR, though, as there is only one M2M device.

@erwango erwango added this to the v4.3.0 milestone Oct 15, 2025
@avolmat-st avolmat-st force-pushed the pr-uvc-videoenc-support branch from 75eec12 to 3edf0b3 Compare October 19, 2025 15:47
ngphibang
ngphibang previously approved these changes Oct 23, 2025
Contributor

@josuah josuah left a comment


I think I caught 2 small issues, and found a few things that you might be curious about but which are not causing issues.

Thanks, getting close!

CONFIG_VIDEO_BUFFER_POOL_NUM_MAX=2
CONFIG_VIDEO_BUFFER_POOL_SZ_MAX=24576
CONFIG_VIDEO_LOG_LEVEL_WRN=y
CONFIG_VIDEO_ENCODER_H264=y
Contributor

@jfischer-no jfischer-no Oct 23, 2025


I see the only significant difference from prj.conf is CONFIG_VIDEO_ENCODER_H264=y and
CONFIG_VIDEO_ENCODER_JPEG=y; if so, then documenting it and passing it to the build command is the right way.

Also please add

  sample.subsys.usb.uvc.encoder.h264:
    depends_on:
      - usbd
    tags: usb video
    extra_configs:
      - CONFIG_VIDEO_ENCODER_H264=y
    extra_args:
      - EXTRA_DTC_OVERLAY_FILE="app_h264enc.overlay"
    filter: dt_chosen_enabled("zephyr,camera")
    integration_platforms:
      - fancy_stm32n6_board

to samples/subsys/usb/uvc/sample.yaml, and a similar entry for JPEG.

Author


Tests via sample.yaml added. I didn't update the documentation yet; I will add a commit for that.

@avolmat-st
Author

Sorry, I messed up the rebase on the previous push, which left the PR #93192 commits included in this PR. That is fixed now: only commits from PR #94562 and the commits from this PR are there. Once PR #94562 is merged, I'll rebase again to keep only the real commits from this PR.

@avolmat-st avolmat-st force-pushed the pr-uvc-videoenc-support branch from dd47e3c to da92d84 Compare October 23, 2025 20:10
Comment on lines +460 to +462
/* Rough estimate for the worst case */
fmt->pitch = 0;
fmt->size = fmt->width * fmt->height * 2;
Contributor


Exactly the same code as in the previous case; could it be a fallthrough?

Author


I originally put this separately since, with a better estimate, this will most probably end up different from JPEG. But yes, since it is currently the same, maybe I could just merge the two cases.

Contributor


Yes, it's way too much for H.264 (equivalent to a raw uncompressed image frame). I think for H.264 it depends on a number of parameters like profile, bitrate, etc., which cannot be passed to this function (where we only have width / height).

Alain Volmat added 11 commits October 23, 2025 22:50
Addition of a helper function which takes care of dequeuing from
a source video device and queuing into a sink video device (a sketch
of such a helper follows the commit list below).

Signed-off-by: Alain Volmat <[email protected]>
Replace the video_dequeue / video_enqueue buffer exchange code with
the video_transfer_buffer helper function.

Signed-off-by: Alain Volmat <[email protected]>
In preparation for the introduction of video encoder support,
add an indirection for handling the buffers of the UVC
source device. Currently this is only video_dev; however, it
can also be an encoder device once an encoder is introduced
between the video capture device and the UVC device.

Signed-off-by: Alain Volmat <[email protected]>
Allow creating a pipeline as follows:
   camera receiver -> encoder -> uvc

If the chosen zephyr,videoenc is available, the sample will pipe
the camera receiver to the encoder and then to the UVC device
instead of piping the camera receiver directly to the UVC device.

The current implementation has several points hardcoded for the
time being:
1. the intermediate pixel format between the camera receiver and
   the encoder is set to NV12. This shouldn't be hardcoded and should
   instead be discovered as a commonly supported format of the
   encoder / video dev.
2. the encoder device is assumed NOT to perform any resolution
   change, so the encoder output resolution is directly based on the
   camera receiver resolution. The formats exposed by UVC are
   therefore the encoder output pixel format combined with the
   camera receiver resolutions.

Signed-off-by: Alain Volmat <[email protected]>
Add a rough estimate of the worst case H264 output size.

Signed-off-by: Alain Volmat <[email protected]>
This commit prepares the introduction of the UVC Frame Based support by
using struct uvc_frame_descriptor as the parameter of most of the UVC
functions. struct uvc_frame_descriptor contains the common fields for
all supported frame types; depending on the DescriptorSubtype,
the pointer is then cast to the correct struct definition.

Signed-off-by: Alain Volmat <[email protected]>
The frame_based descriptors differ from the frame descriptors
in that there is no dwMaxVideoFrameBufferSize field.
To handle that, add a new uvc_frame_based_discrete_descriptor
structure to be used to fill in the proper information in the
frame descriptor. In addition, a new format descriptor
is also added for frame based transfers.

Signed-off-by: Alain Volmat <[email protected]>
Add proper checks of the return values of video_enqueue / video_dequeue.

Signed-off-by: Alain Volmat <[email protected]>
Add overlay files in order to enable usage of
the encoder in the UVC sample.
This works with platforms defining the node labels
    zephyr_jpegenc
    zephyr_h264enc

The mode can be selected by using -DFILE_SUFFIX="jpegenc" or
-DFILE_SUFFIX="h264enc" when building the sample, while
also adding -DCONFIG_VIDEO_ENCODER_JPEG or
-DCONFIG_VIDEO_ENCODER_H264 on the command line.

Signed-off-by: Alain Volmat <[email protected]>
Add zephyr_h264enc and zephyr_jpegenc labels on nodes in order to
be able to use the VENC and JPEG codecs from samples.

Signed-off-by: Alain Volmat <[email protected]>
Add entries in sample.yaml to enable the h264enc / jpegenc
UVC-based tests on the stm32n6570_dk/stm32n657xx/sb platform.

Signed-off-by: Alain Volmat <[email protected]>
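
As referenced in the first commit message above, a minimal sketch of what such a video_transfer_buffer helper could look like (the actual signature and naming in the PR may differ; #include <zephyr/drivers/video.h> is assumed):

static int video_transfer_buffer(const struct device *src_dev, const struct device *sink_dev,
				 enum video_buf_type src_type, enum video_buf_type sink_type)
{
	struct video_buffer *vbuf = &(struct video_buffer){.type = src_type};
	int ret;

	/* Take a completed buffer from the source device... */
	ret = video_dequeue(src_dev, &vbuf, K_NO_WAIT);
	if (ret != 0) {
		return ret;
	}

	/* ...and hand it over to the sink device's queue. */
	vbuf->type = sink_type;

	return video_enqueue(sink_dev, vbuf);
}
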
@jfischer-no jfischer-no removed the DNM This PR should not be merged (Do Not Merge) label Oct 23, 2025
@avolmat-st avolmat-st force-pushed the pr-uvc-videoenc-support branch from da92d84 to 1ab716d Compare October 23, 2025 21:53
@zephyrbot zephyrbot requested a review from ngphibang October 23, 2025 21:54
@avolmat-st
Author

Had to push a new version to correct an issue in the app_add_filtered_formats function when iterating over the caps of the encoder & camera receiver. The code was slightly reworked to avoid a potential access to a wrong index of the caps table while still properly looking at the resolutions exposed by the camera receiver. Thank you @josuah for finding the issue and working together to figure out an acceptable solution.
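
For context, a rough sketch of the kind of caps iteration being discussed, assuming a zero-terminated format_caps array returned by video_get_caps() (function and variable names are illustrative, not the sample's exact code):

	struct video_caps cam_caps = {.type = VIDEO_BUF_TYPE_OUTPUT};
	struct video_caps enc_caps = {.type = VIDEO_BUF_TYPE_OUTPUT};

	if (video_get_caps(video_dev, &cam_caps) != 0 ||
	    video_get_caps(encoder_dev, &enc_caps) != 0) {
		return -EIO;
	}

	/* For each encoder output format, expose the camera receiver resolutions,
	 * iterating each table only up to its own terminating entry.
	 */
	for (int i = 0; enc_caps.format_caps[i].pixelformat != 0; i++) {
		for (int j = 0; cam_caps.format_caps[j].pixelformat != 0; j++) {
			/* add (encoder pixelformat, camera width/height) to the UVC format list */
		}
	}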

Contributor

@josuah josuah left a comment


Thanks for finding a simple solution that works for all currently supported hardware!

Contributor


This looks like nothing, but it saves a lot of variable definitions in the samples 👍


Labels

area: Boards/SoCs area: Samples Samples area: USB Universal Serial Bus area: Video Video subsystem platform: STM32 ST Micro STM32 Release Notes To be mentioned in the release notes


7 participants