
Splatfacto new POV image quality #2863

Open
pierremerriaux-leddartech opened this issue Feb 1, 2024 · 17 comments
@pierremerriaux-leddartech (Contributor)

Hi,
It's not really a bug, more a question about Splatfacto image quality and parameters; if this is not the right place, just let me know.
I am working on reconstructing driving scenes from PandaSet. The camera poses lie along the car's trajectory. When I render an image from this trajectory (train or eval set), the quality is very good.
Below is an example of one image from the eval set:
[image]

But if I move the target vehicle and the ego camera a bit (1 m to the left and 0.5 m up), keeping the camera direction very close to the original one:
[image]
The quality decreases very quickly; it is very sensitive to the viewing direction.
I tried reducing the SH degree from 3 to 1 to avoid too much overfitting, with no real improvement.
Granted, driving-scene images are sparser than a video sequence orbiting a single object. But a few months ago I did the same scene with nerfacto: the quality from the initial poses was lower, but far less sensitive to new POVs. Below is a short video of a nerfacto reconstruction from 5 cameras (front, side, and front-side) on the vehicle:

2023-09-21_234748.mp4

I improved my nerfacto results by using camera pose optimization, so I tried the same for Splatfacto by cherry-picking the viewmat gradient backward from the gsplat branch https://github.com/nerfstudio-project/gsplat/tree/vickie/camera-grads. So far, it hasn't helped.
Similarly, if I train with multiple cameras (or just a side camera in place of the front one), the quality is much less impressive. Below is an example with 5 cameras and camera optimization (which seems to have little influence).
Left view:
[image]
Front view:
[image]

Do you have any ideas about what I should try, firstly to improve new-POV synthesis and secondly to improve multi-camera reconstruction?
And has anyone worked on camera pose optimization for Splatfacto?

And just for fun, below is an example of what we can do with objects and multiple Splatfacto instances:

render_scene037_added_4cars.mp4

Thanks for your input. Next, I will implement a depth loss from lidar to see if it helps.
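The lidar depth supervision mentioned here can be sketched as a masked L1 term. A minimal numpy sketch, assuming the lidar sweep has already been projected into a per-pixel depth map with a validity mask (names and shapes are illustrative, not nerfstudio's API):

```python
import numpy as np

def lidar_depth_loss(rendered_depth, lidar_depth, valid_mask):
    """L1 loss between rendered depth and sparse lidar depth.

    Only pixels where the projected lidar actually provides a
    measurement (valid_mask == True) contribute to the loss.
    """
    diff = np.abs(rendered_depth - lidar_depth)
    n_valid = max(int(valid_mask.sum()), 1)  # avoid division by zero
    return float((diff * valid_mask).sum() / n_valid)
```

In a real training loop this term would be added to the photometric loss with a small weight, since lidar coverage is sparse compared to the image.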

@jb-ye (Collaborator) commented Feb 2, 2024

How did you calculate the poses for those cameras?

@pierremerriaux-leddartech (Contributor, Author)

Hi @jb-ye, I got them directly from the dataset. Cameras, lidar point clouds, and objects are provided in world coordinates.

@jb-ye (Collaborator) commented Feb 2, 2024

> Hi @jb-ye, I got them directly from the dataset. Cameras, lidar point clouds, and objects are provided in world coordinates.

How do you validate the accuracy of those data? NeRF/Gaussian-splatting models demand a much higher standard of pose accuracy than most autonomous-robot stacks. It probably wouldn't work at all with those pre-computed poses.

@pierremerriaux-leddartech (Contributor, Author) commented Feb 3, 2024

Hi @jb-ye, thanks for your message.
I have no real way to validate the camera-pose accuracy. I only validated the lidar-frame accumulation in the world frame, and it was pretty good. With nerfacto and camera optimization enabled, it worked pretty well on the same sequence. I tested camera optimization with Splatfacto, but saw no real improvement. I mainly have two questions:

  • Reconstruction with one camera (front) is very good along the initial trajectory, but if I shift it by 1 m the quality degrades quickly.
  • Reconstruction from more than one camera (front and left, for example) gives very bad quality.

I ran this experiment on camera pose optimization: nerfstudio-project/gsplat#119

@jb-ye (Collaborator) commented Feb 4, 2024

(1) I don't think camera optimization would work with Gaussian splatting by just back-propagating gradients; it requires some non-trivial innovation.
(2) The fact that shifting by 1 m causes a significant quality decrease indicates the poses are not sufficiently accurate.

@kerrj (Collaborator) commented Feb 5, 2024

A couple of things you can try:

  1. Run COLMAP to re-estimate the poses, as a sort of upper bound on quality.
  2. Export the poses from a trained nerfacto model into Splatfacto.

We're working on camera backprop in gsplat, but I'm not sure when it will be finished. One thing you could do is use the PyTorch implementation of project_gaussians from this PR; it is slower than the CUDA version but back-propagates gradients through the camera matrix. We haven't tested it much for 3D pose optimization, though, and I'd expect some work on the nerfstudio side to make pose optimization behave well (optimizer parameter tuning, maybe a warmup on the Gaussians, etc.).
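For suggestion 1, converting a COLMAP world-to-camera pose into the camera-to-world, OpenGL-convention matrix that transforms.json expects can be sketched as follows. This is a minimal numpy version of the per-camera step (nerfstudio's converter additionally re-orients the world axes, which is omitted here); function names are illustrative:

```python
import numpy as np

def qvec2rotmat(q):
    # COLMAP stores quaternions as (w, x, y, z)
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*w*z,     2*x*z + 2*w*y],
        [2*x*y + 2*w*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*w*x],
        [2*x*z - 2*w*y,     2*y*z + 2*w*x,     1 - 2*x*x - 2*y*y],
    ])

def colmap_to_nerfstudio(qvec, tvec):
    """COLMAP world-to-camera pose -> camera-to-world OpenGL pose."""
    w2c = np.eye(4)
    w2c[:3, :3] = qvec2rotmat(qvec)
    w2c[:3, 3] = tvec
    c2w = np.linalg.inv(w2c)
    c2w[0:3, 1:3] *= -1  # OpenCV (x right, y down, z fwd) -> OpenGL (y up, z back)
    return c2w
```

Getting this convention flip wrong is one of the most common causes of "my externally computed poses don't work" reports.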

@pierremerriaux-leddartech (Contributor, Author)

Hi @kerrj and @jb-ye,
Thanks for your answers and your help.

  • When you say camera optimization "requires some non-trivial innovation", what do you have in mind?
  • To test camera optimization, do you think it is better to use the PyTorch implementation or this branch? The second option doesn't solve my quality issue with multiple cameras, but it does seem able to converge to the right camera poses.
  • We are working on optimizing the camera poses with COLMAP; we will keep you posted.
  • I also exported the Gaussian means, to see the Gaussian density:
    [image]

Thanks

@lxzbg commented Feb 7, 2024

> (1) I don't think camera optimization would work with Gaussian splatting by just back-propagating gradients; it requires some non-trivial innovation. (2) The fact that shifting by 1 m causes a significant quality decrease indicates the poses are not sufficiently accurate.

@jb-ye, regarding "camera opt will require some non-trivial innovation": I'm very interested in this question, can you tell me more about it?
I thought camera optimization didn't work because, unlike nerfacto's pixel-level training, 3DGS trains at the image level.

@kerrj (Collaborator) commented Feb 7, 2024

I haven't tested the PyTorch implementation inside Splatfacto for camera optimization yet, but I'd be interested in what happens if you try! The gradients should be correct for camera optimization, just significantly slower than the CUDA version.

Also, for all of these changes, you redefined the camera optimizer inside splatfacto.py and used apply_to_camera inside get_outputs, right?
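For context, the pose-correction step being discussed amounts to composing a small learned SE(3) delta with each camera-to-world matrix before rendering. A minimal numpy sketch of the idea (nerfstudio's actual CameraOptimizer keeps the deltas as trainable torch parameters indexed by camera id; names here are illustrative):

```python
import numpy as np

def apply_pose_correction(c2w, delta_rot, delta_trans):
    """Compose a learned SE(3) correction with a camera-to-world pose.

    delta_rot: (3, 3) rotation correction, delta_trans: (3,) translation
    correction, both expressed in the camera's local frame.
    """
    correction = np.eye(4)
    correction[:3, :3] = delta_rot
    correction[:3, 3] = delta_trans
    # Right-multiplication applies the correction in the camera frame.
    return c2w @ correction
```

During training, gradients from the rendering loss would flow into delta_rot and delta_trans, which is exactly why the projection itself must be differentiable with respect to the view matrix.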

@pierremerriaux-leddartech (Contributor, Author)

Hi @kerrj,
Sure, I reinserted apply_to_camera and the rest, and I also displayed the camera-pose evolution during training.
I will keep you posted when we test the PyTorch implementation.
Thanks

@jb-ye (Collaborator) commented Feb 8, 2024

> (1) I don't think camera optimization would work with Gaussian splatting by just back-propagating gradients; it requires some non-trivial innovation. (2) The fact that shifting by 1 m causes a significant quality decrease indicates the poses are not sufficiently accurate.

> @jb-ye, regarding "camera opt will require some non-trivial innovation": I'm very interested in this question, can you tell me more about it? I thought camera optimization didn't work because, unlike nerfacto's pixel-level training, 3DGS trains at the image level.

You are right: 3DGS operates per image, so the gradient doesn't reflect cross-frame consistency. That's why I said it is non-trivial work and needs some fresh ideas.

@MartinEthier commented Feb 9, 2024

> A couple of things you can try:
>
>   1. Run COLMAP to re-estimate the poses, as a sort of upper bound on quality.
>   2. Export the poses from a trained nerfacto model into Splatfacto.
>
> We're working on camera backprop in gsplat, but I'm not sure when it will be finished. One thing you could do is use the PyTorch implementation of project_gaussians from this PR; it is slower than the CUDA version but back-propagates gradients through the camera matrix. We haven't tested it much for 3D pose optimization, though, and I'd expect some work on the nerfstudio side to make pose optimization behave well (optimizer parameter tuning, maybe a warmup on the Gaussians, etc.).

@kerrj I am trying to set up a proper evaluation while using pose optimization on my dataset. The poses come from a SLAM system, so they're not as accurate as COLMAP's. Before trying to implement test-time pose optimization, I figured a simpler idea would be to do what you suggested: train nerfacto with pose optimization on a merged train-and-eval dataset, export the poses, and then train and evaluate models on the exported poses without pose optimization. However, when I train a model on the exported poses without pose optimization, I get worse train and eval performance than when I just train on the original dataset without pose optimization, which is not what I expected. Do you have any ideas on how to get this working? It also seems the final optimized poses differ between models: I ran pose optimization with nerfacto and with nerfacto-big, and the camera_opt_translation and camera_opt_rotation values they converged to differ by about 0.1 to 0.2.
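One caveat when comparing optimized poses across runs: each run can converge in a different global gauge (the whole scene slightly rotated, translated, or rescaled), so camera centers should first be aligned with a least-squares similarity transform before measuring per-camera differences. A sketch of the standard Umeyama alignment (illustrative, not a nerfstudio utility):

```python
import numpy as np

def umeyama_align(src, dst):
    """Least-squares similarity transform (s, R, t) with dst ≈ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding camera centers from two
    pose-optimization runs.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)          # cross-covariance of centered sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                    # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

After alignment, the residual per-camera distances give a gauge-free measure of how differently two runs placed the cameras.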

@Nplace-su

@pierremerriaux-leddartech Hi, I wonder how you added objects to the scene in your last video. Is it an implementation of Street Gaussians?

@pierremerriaux-leddartech (Contributor, Author)

@Nplace-su, yes, we took inspiration from Street Gaussians.

@jb-ye (Collaborator) commented Feb 19, 2024

@MartinEthier It is possible for pose optimization to make things worse. Things you can try:

(1) Decrease the learning rate for the poses, and experiment a bit more with the learning-rate schedule.
(2) Pose optimization may not always be deterministic or convergent, but we know there is only one global optimum for the poses. One therefore has to do a determinism/convergence check, and that is, to me, a non-trivial task.
(3) What you observe with nerfacto and nerfacto-big simply shows that this is not a robust technique; one has to use it with caution.
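For (1), pose parameters are usually given a schedule that decays log-linearly from an initial to a final learning rate over training. A minimal sketch of that schedule shape (the specific values are illustrative defaults, not a recommendation):

```python
import math

def pose_lr(step, lr_init=6e-4, lr_final=6e-6, max_steps=30000):
    """Log-linear (exponential) decay from lr_init to lr_final over max_steps."""
    t = min(max(step / max_steps, 0.0), 1.0)
    return math.exp((1.0 - t) * math.log(lr_init) + t * math.log(lr_final))
```

Lowering lr_init (and widening the gap to lr_final) makes the pose corrections more conservative, which is often what inaccurate SLAM-derived poses need to avoid diverging early in training.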

@li199603

> > (1) I don't think camera optimization would work with Gaussian splatting by just back-propagating gradients; it requires some non-trivial innovation. (2) The fact that shifting by 1 m causes a significant quality decrease indicates the poses are not sufficiently accurate.
>
> @jb-ye, regarding "camera opt will require some non-trivial innovation": I'm very interested in this question, can you tell me more about it? I thought camera optimization didn't work because, unlike nerfacto's pixel-level training, 3DGS trains at the image level.
>
> You are right: 3DGS operates per image, so the gradient doesn't reflect cross-frame consistency. That's why I said it is non-trivial work and needs some fresh ideas.

3DGS always tries its best to fit the input images, even when the camera poses are inaccurate. In other words, 3DGS constantly creates local optima for the camera optimization to fall into.

@karthik101200

> @Nplace-su, yes, we took inspiration from Street Gaussians.

Hi, a little late to the party. I have lidar odometry and the camera-to-lidar transform in my ROS TF tree, which on paper should give better localization than COLMAP. I save these poses into transforms.json after converting to the OpenGL convention, to bypass COLMAP, but the results are much, much worse. Splatfacto gives some output with very bad depth, but nerfacto (or any other nerfstudio NeRF model) doesn't run at all. Do you see a way to debug this issue? It's a custom dataset that I am generating from a rosbag.
Thanks in advance
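One way to debug pose and convention problems like this is to project the lidar points through the exact poses written to transforms.json and overlay them on the images: if the points don't line up with scene edges, the extrinsics or the axis convention are wrong. A minimal numpy sketch under OpenGL conventions (function and parameter names are illustrative):

```python
import numpy as np

def project_lidar(points_w, c2w_opengl, fx, fy, cx, cy):
    """Project world-frame lidar points into an OpenGL-convention camera.

    Returns pixel coordinates (u, v) and a mask of points in front of
    the camera (OpenGL cameras look down -z, with y up).
    """
    w2c = np.linalg.inv(c2w_opengl)
    pts_c = (w2c[:3, :3] @ points_w.T).T + w2c[:3, 3]
    in_front = pts_c[:, 2] < 0          # -z is the viewing direction
    z = -pts_c[:, 2]
    u = fx * pts_c[:, 0] / z + cx
    v = fy * (-pts_c[:, 1]) / z + cy    # flip y: OpenGL y-up -> image y-down
    return np.stack([u, v], 1), in_front
```

If the overlay is mirrored or rotated by 90 degrees, the c2w matrices are still in the ROS/OpenCV convention rather than OpenGL; if it drifts along the trajectory, the odometry itself is the problem.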
