Reading camera RGB and depth buffers #756
-
Take a look at the vsgscreenshot example in vsgExamples; it's probably closest to what you want to do.
-
Hi Bryan,
It's currently more complicated than with OpenGL/OSG, but still perfectly
possible. Vulkan is lower level than OpenGL, so we have to explicitly
allocate staging buffer memory to copy into and then transfer the data, and
because Vulkan is multi-threaded, one also needs to use barriers to avoid
synchronization issues.
As well as vsgscreenshot, which Tim mentioned and which copies the
window's depth and colour buffers, there is vsgheadless, which does the
same thing for an offscreen framebuffer.
https://github.com/vsg-dev/vsgExamples/blob/master/examples/app/vsgheadless/vsgheadless.cpp
There is common functionality between vsgscreenshot and vsgheadless, so it
may make sense to come up with some convenience functions for creating the
required commands.
Cheers,
Robert.
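The staging-buffer readback Robert describes can be sketched in raw Vulkan as follows. This is only an illustration, not VSG's actual helper code: the function name `recordColourReadback` is made up, and it assumes the image was last written as a colour attachment and that `stagingBuffer` is bound to host-visible memory. See the vsgscreenshot and vsgheadless sources for the real, complete implementations.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Record commands that copy a rendered colour image into a host-visible
// staging buffer. Layout transitions and barriers do the synchronization
// that OpenGL's glReadPixels handled implicitly.
void recordColourReadback(VkCommandBuffer cmd, VkImage srcImage,
                          VkBuffer stagingBuffer,
                          uint32_t width, uint32_t height)
{
    // 1. Barrier: wait for rendering to finish, then transition the image
    //    to a layout the transfer engine can read from.
    VkImageMemoryBarrier toTransfer{};
    toTransfer.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    toTransfer.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    toTransfer.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT;
    toTransfer.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    toTransfer.newLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
    toTransfer.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    toTransfer.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    toTransfer.image = srcImage;
    toTransfer.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &toTransfer);

    // 2. Copy the whole image into the staging buffer.
    VkBufferImageCopy region{};
    region.imageSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
    region.imageExtent = { width, height, 1 };

    vkCmdCopyImageToBuffer(cmd, srcImage,
                           VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                           stagingBuffer, 1, &region);

    // 3. Barrier: make the transfer write visible to host reads
    //    (i.e. to a later vkMapMemory on the staging allocation).
    VkBufferMemoryBarrier toHost{};
    toHost.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
    toHost.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
    toHost.dstAccessMask = VK_ACCESS_HOST_READ_BIT;
    toHost.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    toHost.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    toHost.buffer = stagingBuffer;
    toHost.size = VK_WHOLE_SIZE;

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         VK_PIPELINE_STAGE_HOST_BIT,
                         0, 0, nullptr, 1, &toHost, 0, nullptr);
}
```

After submitting the command buffer and waiting on a fence, the pixel data can be read with vkMapMemory on the staging buffer's memory. A depth readback follows the same pattern with the depth aspect and layouts substituted.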
Answer selected by baryn
-
Hi everyone,
I have been using OSG for several years now, especially for emulating sensors such as cameras, depth cameras, and so on. To do so, I used the RGB and depth buffers generated at each frame. I would like to move to VSG, but I am wondering whether it is possible to do the same. In OSG, I did something like this:
- Set up the camera and image
- Get the data
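For context, the OSG pattern referred to by the two steps above usually looks something like the sketch below. It is a reconstruction, not the poster's actual code; the function name `attachSensorBuffers` is hypothetical, and it assumes the camera renders to an FBO.

```cpp
#include <osg/Camera>
#include <osg/Image>
#include <osg/ref_ptr>

// Attach images to a camera's colour and depth buffers; OSG then fills
// them on the CPU side after each frame rendered into the FBO.
void attachSensorBuffers(osg::Camera* camera,
                         osg::ref_ptr<osg::Image>& colourImage,
                         osg::ref_ptr<osg::Image>& depthImage,
                         int width, int height)
{
    colourImage = new osg::Image;
    colourImage->allocateImage(width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE);

    depthImage = new osg::Image;
    depthImage->allocateImage(width, height, 1, GL_DEPTH_COMPONENT, GL_FLOAT);

    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    camera->attach(osg::Camera::COLOR_BUFFER, colourImage.get());
    camera->attach(osg::Camera::DEPTH_BUFFER, depthImage.get());

    // After each frame, colourImage->data() holds RGBA pixels and
    // depthImage->data() holds float depth values.
}
```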
I had a look at the sources but could not find any clues. Is this feasible in VSG, or is it part of future developments?
Thank you