
Unsatisfactory reconstruction effect on Tank and Temples dataset #23

Open · YuhsiHu opened this issue Mar 11, 2021 · 16 comments

@YuhsiHu commented Mar 11, 2021

Firstly, thank you for your great work and excellent code.
The pre-trained model performs perfectly on the DTU dataset. However, it cannot reconstruct other datasets, such as Tanks and Temples.
I resized the images in Tanks and Temples from 1920×1080 to 1600×1200.
The parameters I set are: --dataset=general_eval --batch_size=1 --testpath=$TESTPATH --testlist=$TESTLIST --loadckpt $CKPT_FILE --outdir $save_results_dir --interval_scale 1.06 --max_h=2048 --max_w=2048

The results look like this:
[screenshot]

Could you please help me understand how I can get the results in your paper?

@gxd1994 (Collaborator) commented Mar 11, 2021 via email

Hi, if you resize the image, the intrinsic matrix also needs to be rescaled.

@YuhsiHu (Author) commented Mar 17, 2021

Hi, thank you for your answer!
This time I didn't change the size of the pictures; I used the original size, 1920×1080.
The result is slightly better than before, but it is still not as good as the paper shows.
[screenshot]
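gxd1994's advice above (rescale the intrinsic matrix whenever the images are resized) can be sketched like this. It is a minimal sketch with made-up focal and principal-point values; `rescale_intrinsics` is a hypothetical helper, not a function from this repo:

```python
import numpy as np

def rescale_intrinsics(K, orig_wh, new_wh):
    """Scale a 3x3 intrinsic matrix after a plain image resize:
    fx and cx scale with width, fy and cy with height."""
    sx = new_wh[0] / orig_wh[0]
    sy = new_wh[1] / orig_wh[1]
    K = K.astype(float).copy()
    K[0, 0] *= sx  # fx
    K[0, 2] *= sx  # cx
    K[1, 1] *= sy  # fy
    K[1, 2] *= sy  # cy
    return K

# Made-up intrinsics for a 1920x1080 camera
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
# The resize discussed above: 1920x1080 -> 1600x1200
K_new = rescale_intrinsics(K, (1920, 1080), (1600, 1200))
print(K_new)
```

Note that a 1920×1080 → 1600×1200 resize changes the aspect ratio, so fx and fy end up scaled by different factors; that distortion alone can hurt reconstruction.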

@agenthong commented

Just don't edit the code, or use the original repo.

@YuhsiHu (Author) commented Apr 7, 2021

> Just don't edit the code, or use the original repo.

This is the result of the original code and the pre-trained model, which differs slightly from the paper. Besides, MVSNet would not have so many outstanding variants if it were only a toy that nobody wanted to improve.

@agenthong commented

> This is the result of the original code and the pre-trained model, which differs slightly from the paper.

I can reproduce the result with the original code.
[screenshot]

@YuhsiHu (Author) commented Apr 7, 2021

> I can reproduce the result with the original code.

It works well on the DTU dataset, but some results on Tanks and Temples are still not satisfactory.

@wln19 commented Apr 4, 2022

Hello, have you finally solved the problem?

@wangchengze001 commented

@wln19 Hello, did you get a good reconstruction on Tanks and Temples? I am facing the same problem.

@wangchengze001 commented

Hi, excuse me, did you solve the problem? I have met the same issue.

@YuhsiHu (Author) commented Apr 21, 2022

I think maybe the network was fine-tuned on the Tanks and Temples dataset, or some parameters differ from the DTU configuration.

@xiaomingHUST commented

> I think maybe the network was fine-tuned on the Tanks and Temples dataset, or some parameters differ from the DTU configuration.

Hi, when you test on the Tanks and Temples dataset, do you use the fusion method that is offered? In my case, the Gipuma fusion method needs much more than 48 GB of memory, which is strange.
Could you please check the resources the fusion step needs?
Thanks a lot.

@YuhsiHu (Author) commented Jun 15, 2022

> In my case, the Gipuma fusion method needs much more than 48 GB of memory, which is strange.

Yes, I used the fusion method that is offered. The Gipuma fusion method needs a GPU, and the image size of this dataset is large.
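For a rough sense of why fusion memory grows with image size, here is an illustrative back-of-the-envelope calculation. The channel count and view count are assumptions for the sketch, not measurements of Gipuma:

```python
# Illustrative estimate of a fusion working set (assumed numbers, not
# measured from Gipuma): per-view maps held in memory simultaneously.
def fusion_map_bytes(width, height, num_views, channels=4, bytes_per_val=4):
    # channels=4 is an assumption (e.g. depth + confidence + 2 aux maps),
    # bytes_per_val=4 assumes float32 storage.
    return width * height * channels * bytes_per_val * num_views

gib = fusion_map_bytes(1920, 1080, 300) / 1024**3
print(f"~{gib:.1f} GiB for 300 views at 1920x1080")
```

Even with these modest assumptions the working set is an order of magnitude above a DTU-sized run (1600×1200, ~50 views per scan), so large spikes on Tanks and Temples are plausible.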

@xiaomingHUST commented

Thanks for your reply. One more question: when you test on the Tanks and Temples dataset, which camera file do you use?
I just downloaded the preprocessed data from the MVSNet project, but the depth range of my point cloud model is strange. In another issue, someone mentioned the short-depth-range data; do you use that file, and how do you get it? Thanks again.
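For what it's worth, the camera files in the MVSNet preprocessed data follow a simple text layout (an `extrinsic` 4×4 block, an `intrinsic` 3×3 block, then a depth line). The parser below is a sketch based on that assumed layout; double-check it against your own cam files before relying on it:

```python
import numpy as np

def load_cam(path):
    # Sketch of a parser for an MVSNet-style cam.txt. Assumed layout:
    #   extrinsic                       (keyword)
    #   <16 numbers>                    (4x4 world-to-camera matrix)
    #   intrinsic                       (keyword)
    #   <9 numbers>                     (3x3 K matrix)
    #   depth_min depth_interval [depth_num depth_max]
    with open(path) as f:
        words = f.read().split()
    extrinsic = np.array(words[1:17], dtype=float).reshape(4, 4)
    intrinsic = np.array(words[18:27], dtype=float).reshape(3, 3)
    depth_params = [float(w) for w in words[27:]]  # depth range info
    return extrinsic, intrinsic, depth_params

# Tiny synthetic example (made-up numbers):
sample = (
    "extrinsic\n"
    "1 0 0 0\n0 1 0 0\n0 0 1 0\n0 0 0 1\n\n"
    "intrinsic\n"
    "1200 0 960\n0 1200 540\n0 0 1\n\n"
    "425.0 2.5\n"
)
with open("cam_demo.txt", "w") as f:
    f.write(sample)
E, K, depth = load_cam("cam_demo.txt")
print(K[0, 0], depth)
```

Printing `depth_params` for a few views is a quick way to check whether the depth range in your camera files matches the scene scale.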

@YuhsiHu (Author) commented Jun 15, 2022

I downloaded the dataset from the original MVSNet repo. You can follow the instructions in its README.md.

@xiaomingHUST commented

> I downloaded the dataset from the original MVSNet repo. You can follow the instructions in its README.md.

OK, I will have a try. Thanks!

@zhao-you-fei commented

@gxd1994 @YuhsiHu @xiaomingHUST @wangchengze001 @agenthong
I tested the T&T dataset following the CasMVSNet pipeline and got the results below; how can I fix this effectively? filter-method is the default `normal`.
Final fused point cloud:
[screenshot]
Point cloud in the ply_local folder:
[screenshot]
Depth map:
[screenshot]
