Following a discussion with @Wumpf, we would like to experiment with a Renderer that is capable of raytracing through discrete volumes (i.e. an Amanatides & Woo kinda thing).
This renderer should ideally be capable of using either a 3D texture + bounding box or a 2D texture (interpreted as depth) + bounding box as input and render that in a spatial view as needed.
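For reference, here's a rough sketch of the kind of Amanatides & Woo traversal I have in mind (CPU-side Rust for readability; the real thing would live in a shader, and the function name/signature here are made up):

```rust
/// Hypothetical sketch of Amanatides & Woo 3D-DDA traversal: step a ray
/// through a voxel grid, visiting each cell it crosses exactly once.
/// Assumes `origin` is in grid coordinates and starts inside the grid,
/// and that `dir` is non-zero on every axis (for brevity).
fn traverse_voxels(
    origin: [f32; 3],
    dir: [f32; 3],
    grid_dims: [i32; 3],
    mut visit: impl FnMut([i32; 3]) -> bool, // return false to stop (e.g. on a hit)
) {
    let mut voxel = [0i32; 3]; // current voxel coordinates
    let mut step = [0i32; 3]; // +1 or -1 per axis, depending on ray direction
    let mut t_max = [0f32; 3]; // t at which the ray crosses the next boundary, per axis
    let mut t_delta = [0f32; 3]; // t it takes to cross one full voxel, per axis

    for i in 0..3 {
        voxel[i] = (origin[i].floor() as i32).clamp(0, grid_dims[i] - 1);
        step[i] = if dir[i] > 0.0 { 1 } else { -1 };
        let next_boundary = (voxel[i] + step[i].max(0)) as f32;
        t_max[i] = (next_boundary - origin[i]) / dir[i];
        t_delta[i] = (1.0 / dir[i]).abs();
    }

    loop {
        if !visit(voxel) {
            return; // caller found a hit
        }
        // Advance along whichever axis crosses its next voxel boundary first.
        let axis = if t_max[0] < t_max[1] {
            if t_max[0] < t_max[2] { 0 } else { 2 }
        } else if t_max[1] < t_max[2] {
            1
        } else {
            2
        };
        voxel[axis] += step[axis];
        if voxel[axis] < 0 || voxel[axis] >= grid_dims[axis] {
            return; // ray left the grid
        }
        t_max[axis] += t_delta[axis];
    }
}
```

The nice property is that each voxel along the ray is visited exactly once, so the per-ray cost is linear in the number of cells crossed rather than in the volume's total size.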
Our first real use-case would be to try and render depth maps in spatial views.
Today, the only way at our disposal is to project the depth map back into 3D on the client side, build a point cloud out of that, and upload the result to Rerun (see the nyud example).
Of course this is pretty costly and doesn't scale nicely with resolution: a 4K depth map results in an 8M+ point cloud every frame 😰
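For context, the client-side workaround boils down to a pinhole back-projection along these lines (a hedged sketch, assuming a row-major depth image in meters and intrinsics `fx, fy, cx, cy`; the names are mine, not the nyud example's):

```rust
/// Hypothetical sketch of the current client-side workaround: back-project
/// a depth image into a 3D point cloud using a pinhole camera model.
fn depth_to_point_cloud(
    depth: &[f32], // row-major, in meters; <= 0.0 means invalid
    width: usize,
    height: usize,
    (fx, fy, cx, cy): (f32, f32, f32, f32), // pinhole intrinsics
) -> Vec<[f32; 3]> {
    let mut points = Vec::with_capacity(width * height);
    for v in 0..height {
        for u in 0..width {
            let z = depth[v * width + u];
            if z <= 0.0 {
                continue; // skip missing/invalid depth
            }
            // Invert the pinhole projection: x = (u - cx) * z / fx, etc.
            points.push([
                (u as f32 - cx) * z / fx,
                (v as f32 - cy) * z / fy,
                z,
            ]);
        }
    }
    points
}
```

At 3840×2160 that's up to ~8.3M points to rebuild and re-upload every single frame, which is where the "8M+" figure above comes from.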
Here's a sneak peek of the first experiments on that front.
As it stands right now it is completely unusable in real conditions, and the math barely makes sense yet... but it does show that this can look both fairly fancy and much more "readable" than a point cloud!
(video: 23-02-19_17.51.49.patched.mp4)