
Treat camera projected primitives in 3D space more like 2D objects #1025

Open
Wumpf opened this issue Feb 1, 2023 · 2 comments
Labels
enhancement New feature or request 🔺 re_renderer affects re_renderer itself

Comments


Wumpf commented Feb 1, 2023

Consider a camera projection (typically perspective, but orthographic shouldn't make a difference) in 3D space.
[image]
Today, the renderer renders all shapes that are part of the camera projection (these might not only be images but also lines, points, or even complex meshes) just like anything else in the scene, i.e. the user is expected to set the respective projection matrix as the world matrix for all of these objects.
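
As a rough illustration of what that means in practice (all names below are hypothetical, this is not the actual re_renderer API), the world matrix of such a primitive is essentially the visualized camera's pose combined with the inverse of its projection:

```rust
use glam::Mat4;

/// Hypothetical sketch (not existing re_renderer API): compute the world matrix
/// for a primitive that conceptually lives in the image space of a camera that
/// is itself visualized inside the 3D scene.
fn world_from_projected_primitive(
    world_from_camera: Mat4,    // pose of the visualized camera in the scene
    image_from_camera: Mat4,    // its projection matrix
    image_from_primitive: Mat4, // where the primitive sits in image space
) -> Mat4 {
    // "Un-project" out of the visualized camera's image space and then apply the
    // camera's pose, so that the primitive shows up inside the drawn frustum.
    world_from_camera * image_from_camera.inverse() * image_from_primitive
}
```

Everything downstream (view-space billboarding of points & lines, auto sizing) still references the viewer's main camera though, which is what the tasks below are about.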

This works nicely for the most part, but there are some things we need to fix (the actual tasks of this issue!):

  • any "viewspace generated geometry" needs to behave like it is being looked with the camera in question. Points & lines which today follow along the Eye/main camera should actually follow their camera, making them appear flat on the surface instead
  • auto sizes need to be treated like they live in the realm their camera
  • ...TODO other?

Open questions:

  • how should we communicate these things to the renderer in a somewhat unified manner? Depending on depth handling, not all objects need the same amount of knowledge
  • what about the remaining depth? I.e. what happens when there are 3D objects under the frustum that would have yielded a depth buffer?
    • There are a bunch of options; maybe we should expose them (see the sketch after this list)?
      • flatten them onto the plane, i.e. ignore their depth for everything but depth offsetting (!), making it look the same as in a 2D space view
        • primitive depth offsetting only goes so far for larger ranges
      • use the "virtual depth buffer depth", i.e. objects are distorted within the confines of the camera frustum
      • use view space depth, making it behave more like one would expect from a depth map that someone placed into 3D
  • Should there be clipping to the frustum? If so, probably as an option.
    • is there a way to achieve this without expensive discard instructions in the shader?
  • ...TODO other?
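
If we do end up exposing these depth handling choices, a hedged sketch (names made up, none of this is existing re_renderer API) of what a per-camera or per-draw-data setting could look like:

```rust
/// Hypothetical sketch of how the depth handling options above could be exposed.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum ProjectedDepthMode {
    /// Flatten everything onto the projection plane; depth is only used for
    /// depth offsetting, mimicking what a 2D space view looks like.
    Flattened,
    /// Keep the "virtual depth buffer depth", distorting objects within the
    /// confines of the camera frustum.
    VirtualDepthBuffer,
    /// Use view space depth, behaving like a depth map placed into 3D.
    ViewSpaceDepth,
}

/// Whether primitives should be clipped to the camera frustum.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum FrustumClipping {
    Disabled,
    /// Likely requires discard in the shader unless we find a cheaper trick.
    Enabled,
}
```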

Note that the seemingly straightforward solution would be to have a literal 2D surface that we render in 3D space. There are a whole lot of drawbacks to this as well though:

  • a 2D surface would have limited resolution, meaning we're losing a lot of quality (there is no "good enough" resolution in this resampling problem!)
  • needs another render-to-texture pass, which is something our renderer doesn't have either!
  • very hard to handle "depth after projection", if we get a literal depth buffer at all

Wumpf commented May 5, 2023

Since we're also hitting issues in compositing pure 2D views, we need a more radical approach:

  • allow defining an arbitrary number of RenderCameras which are added to the ViewBuilder (see the sketch after this list)
    • each camera has its own draw data objects; they are no longer added to the ViewBuilder directly
    • each camera defines how it wants to be composited with the screen. One option is to project what it sees onto a rectangle, just like how we render 2D scenes in 3D today
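
A rough sketch of how that could look on the API surface (all types and methods below are hypothetical; neither this `RenderCamera` nor this `ViewBuilder` shape exist in re_renderer today):

```rust
use glam::Mat4;

/// Hypothetical sketch of the proposed setup; names are made up.
pub struct RenderCamera {
    /// View & projection this camera uses for its own primitives.
    pub view_from_world: Mat4,
    pub projection_from_view: Mat4,
    /// How this camera's output gets composited with the screen.
    pub compositing: CameraCompositing,
    /// Draw data is now owned per camera instead of per ViewBuilder.
    pub draw_data: Vec<DrawDataHandle>,
}

pub enum CameraCompositing {
    /// The "main" camera: covers the whole viewport.
    Fullscreen,
    /// Project what the camera sees onto a rectangle placed in the 3D scene,
    /// like today's 2D-in-3D rendering.
    OntoRectangle { world_from_rect: Mat4 },
}

/// Stand-in for whatever handle re_renderer would use for per-camera draw data.
pub struct DrawDataHandle;

pub struct ViewBuilder {
    cameras: Vec<RenderCamera>,
}

impl ViewBuilder {
    /// Cameras (with their draw data) are queued on the ViewBuilder instead of
    /// queueing draw data on it directly.
    pub fn queue_camera(&mut self, camera: RenderCamera) {
        self.cameras.push(camera);
    }
}
```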

There are quite a few open questions on how exactly the 3D->2D compositing works. Since there can be many 2D views in a 3D scene, we need to avoid allocating new render targets for each of them, so we should try to render them directly in the main color pass rather than repeating the currently defined render phases.

The major challenge here is that we'd like to use a different depth range in each 2D view. We can restart the depth buffer, but we'd need to stencil out the actually visible part of the 2D plane in the 3D scene. A literal stencil buffer would solve this nicely but conflicts with our current depth buffer setup.
Otherwise, we might need to manipulate the depth buffer directly based on the 3D depth buffer. This is feasible but likely to run into compatibility issues (treat depth as color and vice versa?) and potentially slow, as we need to handle MSAA samples on both depth buffers.
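
For reference, a stencil-based variant would roughly mean switching the main pass to a combined depth-stencil format, along these lines in plain wgpu (a sketch assuming a reverse-z depth buffer, not our current pipeline setup):

```rust
// Sketch only: two depth-stencil configurations for a stencil-based approach.

/// Pass 1: when drawing the 2D view's rectangle into the 3D scene, mark the
/// visible pixels in the stencil buffer.
fn rect_depth_stencil_state() -> wgpu::DepthStencilState {
    wgpu::DepthStencilState {
        format: wgpu::TextureFormat::Depth24PlusStencil8,
        depth_write_enabled: true,
        depth_compare: wgpu::CompareFunction::Greater, // assuming reverse-z
        stencil: wgpu::StencilState {
            front: wgpu::StencilFaceState {
                compare: wgpu::CompareFunction::Always,
                fail_op: wgpu::StencilOperation::Keep,
                depth_fail_op: wgpu::StencilOperation::Keep,
                pass_op: wgpu::StencilOperation::Replace, // write the reference value
            },
            back: wgpu::StencilFaceState::IGNORE,
            read_mask: 0xff,
            write_mask: 0xff,
        },
        bias: wgpu::DepthBiasState::default(),
    }
}

/// Pass 2: the 2D view's own content only passes where the stencil reference
/// matches, i.e. where its rectangle is actually visible in the 3D scene.
fn content_depth_stencil_state() -> wgpu::DepthStencilState {
    wgpu::DepthStencilState {
        format: wgpu::TextureFormat::Depth24PlusStencil8,
        depth_write_enabled: true,
        depth_compare: wgpu::CompareFunction::Greater,
        stencil: wgpu::StencilState {
            front: wgpu::StencilFaceState {
                compare: wgpu::CompareFunction::Equal,
                fail_op: wgpu::StencilOperation::Keep,
                depth_fail_op: wgpu::StencilOperation::Keep,
                pass_op: wgpu::StencilOperation::Keep,
            },
            back: wgpu::StencilFaceState::IGNORE,
            read_mask: 0xff,
            write_mask: 0x00,
        },
        bias: wgpu::DepthBiasState::default(),
    }
}
```

The stencil reference value would then be set per 2D view via `RenderPass::set_stencil_reference`; requiring a combined depth-stencil format is exactly the conflict with the current depth buffer setup mentioned above.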

As long as we stick with depth offsetting, we don't need to worry too much about the above depth buffering issues. We still need to define the dual camera setup as described in the original post though. It seems advantageous to still use an explicit camera setup, only that our compositing abilities would be more limited, forcing the use of the same color & depth buffer throughout the entire process (apart maybe from hard clears?).


Wumpf commented Sep 13, 2023

Via https://discord.com/channels/1062300748202921994/1151053465989165076/1151091958970843136

One slightly different representation of 2D in 3D is in the form of a projective texture:
[image]

This seems to be another case that is best solved with a render-to-texture indirection.
Maybe this is the path forward, paired with special optimizations for the many-images case?
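
For context, the core of projective texturing is just running world positions through the projector camera's view-projection and using the result as texture coordinates. A minimal CPU-side sketch with glam (the real thing would live in a shader; the UV convention here is an assumption):

```rust
use glam::{Mat4, Vec2, Vec3, Vec4Swizzles};

/// Sketch: compute the UV at which a projective texture is sampled for a given
/// world position. `image_from_world` is the projector's projection * view.
/// Returns `None` when the point lies behind the projector.
fn projective_uv(image_from_world: Mat4, world_pos: Vec3) -> Option<Vec2> {
    let clip = image_from_world * world_pos.extend(1.0);
    if clip.w <= 0.0 {
        return None; // behind the projector
    }
    let ndc = clip.xyz() / clip.w;
    // Map NDC x/y from [-1, 1] to [0, 1] texture coordinates (flipping y).
    Some(Vec2::new(ndc.x * 0.5 + 0.5, 0.5 - ndc.y * 0.5))
}
```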
