Last month, someone emailed me about this problem. I investigated it and confirmed that our code produces a different score than the one reported in the paper.
If you want to use NeX as a baseline comparison, you can report the score from retraining, measure it directly from the provided results in the dataset directory, or take it directly from the paper; any of these is fine with me.
Here is a table showing how the scores drift on the Crest and Trex scenes.
I ran `python train.py -scene ${PATH_TO_SCENE} -model_dir ${MODEL_TO_SAVE_CHECKPOINT} -http-cv2resize` on the flower scene of LLFF using four RTX 2080 Ti GPUs, but I did not reproduce the scores reported in the paper.
Measurement Result
===================================================
name                   PSNR       SSIM      LPIPS
images_IMG_2962.JPG    25.110090  0.873111  0.24526712
images_IMG_2970.JPG    26.380335  0.898152  0.20776317
images_IMG_2978.JPG    25.021733  0.861492  0.22302648
images_IMG_2986.JPG    28.067660  0.920080  0.17454995
images_IMG_2994.JPG    28.821845  0.926312  0.17679015
---------------------------------------------------
mean                   26.680332  0.895829  0.205479
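For reference, the summary row is just the unweighted mean of the per-image scores. Below is a minimal sketch of that check, together with a standard PSNR definition for float images; the `psnr` helper here is a generic illustration, not the exact evaluation code used in this repository.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    # Peak signal-to-noise ratio between two float images in [0, max_val].
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Averaging the per-image PSNR values from the table above.
scores = [25.110090, 26.380335, 25.021733, 28.067660, 28.821845]
mean_psnr = sum(scores) / len(scores)
print(f"{mean_psnr:.6f}")  # matches the reported mean of ~26.680332
```

The same unweighted averaging applies to the SSIM and LPIPS columns; differences in image resizing (e.g. the `cv2resize` option) or evaluation image sets can shift these means, which may explain part of the gap from the paper.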