# Treat camera projected primitives in 3D space more like 2D objects #1025
## Comments
Since we're also hitting issues in compositing pure 2D views, we need a more radical approach:

There are quite a few open questions on how the 3D->2D rendering works exactly. Since there can be many 2D views in a 3D scene, we need to avoid allocating a new render target for each of them, so we should try to render them directly in the main color pass rather than repeating the currently defined render phases. The major challenge is that we'd like to use a different depth range in each 2D view. We can restart the depth buffer, but we'd then need to stencil out the actually visible part of the 2D plane in the 3D scene. A literal stencil buffer would solve this nicely, but it conflicts with our current depth buffer setup.

As long as we stick with depth offsetting, we don't need to worry too much about the depth buffering issues above. We still need to define the dual camera setup as described in the original post, though. It seems advantageous to still use an explicit camera setup, only that our compositing abilities would be more limited, forcing the use of the same color & depth buffer for the entire process (apart maybe from hard clears?).
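As an illustration of the "render directly in the main color pass, one depth slice per view" idea, here is a hedged wgpu-flavored Rust sketch — `View2D`, `composite_2d_views`, and `draw_view_contents` are invented for this sketch and are not existing re_renderer API:

```rust
/// A 2D view embedded in the 3D scene (hypothetical type for this sketch).
struct View2D {
    /// Screen-space rectangle covered by the projected view: x, y, w, h.
    rect: [f32; 4],
}

/// Stand-in for the view's actual draw calls (images, lines, meshes, ...).
fn draw_view_contents(_pass: &mut wgpu::RenderPass<'_>, _view: &View2D) {}

/// Draws every 2D view inside the single main color pass, sharing the same
/// color & depth attachments but giving each view a disjoint slice of the
/// depth range instead of allocating a render target per view.
fn composite_2d_views(pass: &mut wgpu::RenderPass<'_>, views: &[View2D]) {
    let slice = 1.0 / views.len().max(1) as f32;
    for (i, view) in views.iter().enumerate() {
        // Remap this view's NDC depth into [i*slice, (i+1)*slice): its
        // primitives depth-sort among themselves but never against the
        // contents of other 2D views.
        let (min_depth, max_depth) = (i as f32 * slice, (i as f32 + 1.0) * slice);
        let [x, y, w, h] = view.rect;
        pass.set_viewport(x, y, w, h, min_depth, max_depth);
        // Crude stand-in for the stencil test discussed above; a scissor
        // rectangle cannot express partial occlusion by 3D geometry.
        pass.set_scissor_rect(x as u32, y as u32, w as u32, h as u32);
        draw_view_contents(pass, view);
    }
}
```

Note that this sketch deliberately sidesteps the comment's hardest point: how the 2D content should depth-test against the 3D scene already present in the shared depth buffer.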
Via https://discord.com/channels/1062300748202921994/1151053465989165076/1151091958970843136

One slightly different representation of 2D in 3D is in the form of a projective texture: this seems to be another case that is best solved with a render-to-texture indirection.
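To sketch what the projective-texture variant could look like on the CPU side (glam-based; `projective_uv` is an invented helper, and the clip-space conventions are an assumption): the 2D view is rendered to a texture once, and sampling it from 3D reduces to projecting world positions through the 2D camera and remapping clip space to UVs:

```rust
use glam::{Mat4, Vec2, Vec3};

/// Maps a world-space position to UV coordinates in the projector's texture.
/// Returns `None` for points outside the projector's frustum.
/// Illustrative only; assumes a wgpu-style 0..1 NDC depth range.
fn projective_uv(clip_from_world: Mat4, world_pos: Vec3) -> Option<Vec2> {
    // Projection plus perspective divide: the result is in NDC.
    let ndc = clip_from_world.project_point3(world_pos);
    if ndc.x.abs() > 1.0 || ndc.y.abs() > 1.0 || !(0.0..=1.0).contains(&ndc.z) {
        return None; // Not covered by the 2D view.
    }
    // NDC -> texture UV, flipping y since texture space is y-down.
    Some(Vec2::new(ndc.x * 0.5 + 0.5, 0.5 - ndc.y * 0.5))
}
```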
## Original post

Consider a camera projection (typically perspective, but orthographic shouldn't make a difference) in 3D space.

Today, the renderer renders all shapes that are part of the camera projection (there might not only be images but also lines, points, or even complex meshes) just like anything else in the scene. That is, the user is expected to set the respective projection matrix as the world matrix for all these objects.
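For illustration, a minimal glam-based sketch of one way such a world matrix could be assembled — the function and argument names are assumptions for this sketch, not the renderer's actual API. Geometry authored in the camera's projection (clip) space is mapped back through the inverse projection and then through the camera's pose:

```rust
use glam::Mat4;

/// World matrix for primitives that belong to an embedded camera, assuming
/// they are authored in that camera's projection (clip) space.
/// Hypothetical helper for illustration only.
fn world_from_projected(world_from_camera: Mat4, projection: Mat4) -> Mat4 {
    // clip space -> camera space -> world space
    world_from_camera * projection.inverse()
}
```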
This setup works nicely for the most part, but there are some things we need to fix (the actual tasks of this issue!):

- … `Eye`/main camera should actually follow their camera, making them appear flat on the surface instead

Open questions:

- … `discard` instructions in the shader? (see the shader sketch after this list)

Note that the seemingly straightforward solution would be to have a literal 2D surface that we render in 3D space. There are a whole lot of drawbacks with this as well: …
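On the `discard` question, a hedged sketch of what shader-side clipping could look like — WGSL embedded as a Rust string for wgpu; `uv_in_view`, `fs_main`, and `make_clip_shader` are invented names, not existing re_renderer code. Fragments whose interpolated UV falls outside the 2D view's [0, 1] square are discarded, emulating a stencil without a stencil buffer:

```rust
/// WGSL fragment stage that clips to the 2D view's bounds via `discard`.
/// Sketch only: real shading would replace the placeholder return value.
const CLIP_TO_VIEW_FS: &str = r#"
@fragment
fn fs_main(@location(0) uv_in_view: vec2<f32>) -> @location(0) vec4<f32> {
    // Outside the unit UV square means outside the 2D view's frustum.
    // `discard` rejects the fragment, at the cost of still running the
    // fragment shader for rejected pixels (unlike a real stencil test).
    if uv_in_view.x < 0.0 || uv_in_view.x > 1.0 ||
       uv_in_view.y < 0.0 || uv_in_view.y > 1.0 {
        discard;
    }
    return vec4<f32>(uv_in_view, 0.0, 1.0); // placeholder color
}
"#;

/// Compiles the sketch above into a shader module (hypothetical helper).
fn make_clip_shader(device: &wgpu::Device) -> wgpu::ShaderModule {
    device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("2d-view clip shader (sketch)"),
        source: wgpu::ShaderSource::Wgsl(CLIP_TO_VIEW_FS.into()),
    })
}
```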