PyTorch implementation of the paper "IBRNet: Learning Multi-View Image-Based Rendering", CVPR 2021.
IBRNet: Learning Multi-View Image-Based Rendering
Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, Thomas Funkhouser
CVPR 2021
Clone this repo with submodules:
git clone --recurse-submodules https://github.com/googleinterns/IBRNet
cd IBRNet/
The code is tested with Python 3.7, PyTorch 1.5, and CUDA 10.2. We recommend using Anaconda to make sure that all dependencies are in place. To create an Anaconda environment:
conda env create -f environment.yml
conda activate ibrnet
├──data/
├──ibrnet_collected_1/
├──ibrnet_collected_2/
├──real_iconic_noface/
├──spaces_dataset/
├──RealEstate10K-subset/
├──google_scanned_objects/
Please first cd data/, and then download the datasets into data/ following the instructions below. The organization of the datasets should be the same as above.
(a) Our captured scenes
We captured 67 forward-facing scenes (each scene contains 20-60 images). To download our data ibrnet_collected.zip (4.1G) for training, run:
gdown https://drive.google.com/uc?id=1dZZChihfSt9iIzcQICojLziPvX1vejkp
unzip ibrnet_collected.zip
P.S. We've captured some more scenes in ibrnet_collected_more.zip, but we did not use them for training. Feel free to download them if you would like more scenes for your task; you won't need them to reproduce our results.
(b) LLFF released scenes
Download and process real_iconic_noface.zip (6.6G) using the following commands:
# download
gdown https://drive.google.com/uc?id=1m6AaHg-NEH3VW3t0Zk9E9WcNp4ZPNopl
unzip real_iconic_noface.zip
# [IMPORTANT] remove scenes that appear in the test set
cd real_iconic_noface/
rm -rf data2_fernvlsb data2_hugetrike data2_trexsanta data3_orchid data5_leafscene data5_lotr data5_redflower
cd ../
(c) Spaces Dataset
Download the Spaces dataset by running:
git clone https://github.com/augmentedperception/spaces_dataset
(d) RealEstate10K
The full RealEstate10K dataset is very large and can be difficult to download, so we provide a subset of 200 RealEstate10K training scenes. In our experiments, we found that using more scenes from RealEstate10K provides only marginal improvement. To download our camera files (2MB):
gdown https://drive.google.com/uc?id=1IgJIeCPPZ8UZ529rN8dw9ihNi1E9K0hL
unzip RealEstate10K_train_cameras_200.zip -d RealEstate10K-subset
Besides the camera files, you also need to download the corresponding video frames from YouTube. You can download the frames (29G) by running the following commands. The script uses ffmpeg to extract frames, so please make sure you have ffmpeg installed.
git clone https://github.com/qianqianwang68/RealEstate10K_Downloader
cd RealEstate10K_Downloader
python generate_dataset.py train
cd ../
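The downloader script takes care of fetching the YouTube videos and reading the camera files. As a rough sketch of the per-frame step it relies on (the function name below is ours, and we assume the timestamps in the camera files are given in microseconds), a single frame at a given timestamp can be extracted with ffmpeg from Python like this:

import subprocess

def extract_frame(video_path, timestamp_us, out_path):
    # Convert the (assumed) microsecond timestamp to seconds.
    seconds = timestamp_us / 1_000_000.0
    # Placing -ss after -i gives slower but frame-accurate seeking.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-ss", f"{seconds:.6f}",
         "-frames:v", "1", out_path],
        check=True,
    )

extract_frame("video.mp4", 3_033_333, "frame_000000.png")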
(e) Google Scanned Objects
Google Scanned Objects contains 1032 diffuse objects with various shapes and appearances. We use gaps to render these objects for training. Each object is rendered at 512 × 512 pixels from viewpoints on a quarter of the sphere, with 250 views per object. To download our renderings (7.5GB), run:
gdown https://drive.google.com/uc?id=1tKHhH-L1viCvTuBO1xg--B_ioK7JUrrE
unzip google_scanned_objects_renderings.zip
The mapping between our renderings and the public Google Scanned Objects can be found in this spreadsheet.
├──data/
├──deepvoxels/
├──nerf_synthetic/
├──nerf_llff_data/
The evaluation datasets include the DeepVoxels synthetic dataset, the NeRF realistic 360 dataset, and the real forward-facing dataset. To download all three datasets (6.7G), run the following command under the data/ directory:
bash download_eval_data.sh
First, download our pretrained model to the project root directory:
gdown https://drive.google.com/uc?id=1wNkZkVQGx7rFksnX7uVX3NazrbjqaIgU
unzip pretrained_model.zip
You can use eval/eval.py to evaluate the pretrained model. For example, to obtain the PSNR, SSIM, and LPIPS on the fern scene in the real forward-facing dataset, first specify your paths in configs/eval_llff.txt and then run:
cd eval/
python eval.py --config ../configs/eval_llff.txt
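eval.py reports PSNR, SSIM, and LPIPS for you. If you just want to sanity-check a single rendered frame against its ground truth, a minimal PSNR computation looks like the sketch below (the file names are placeholders):

import numpy as np
import imageio

def psnr(pred, gt):
    # Peak signal-to-noise ratio in dB for images scaled to [0, 1].
    mse = np.mean((pred - gt) ** 2)
    return -10.0 * np.log10(mse)

pred = imageio.imread("rendered_fern_view.png").astype(np.float32) / 255.0
gt = imageio.imread("ground_truth_fern_view.png").astype(np.float32) / 255.0
print(f"PSNR: {psnr(pred, gt):.2f} dB")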
You can use render_llff_video.py to render videos of smooth camera paths for the real forward-facing scenes. For example, first specify your paths in configs/eval_llff.txt and then run:
cd eval/
python render_llff_video.py --config ../configs/eval_llff.txt
You can also capture your own data of forward-facing scenes and synthesize novel views using our method. Please follow the instructions from LLFF on how to capture and process the images.
We strongly recommend training the model with multiple GPUs:
# this example uses 8 GPUs (nproc_per_node=8)
python -m torch.distributed.launch --nproc_per_node=8 train.py --config configs/pretrain.txt
Alternatively, you can train with a single GPU by setting distributed=False in configs/pretrain.txt and running:
python train.py --config configs/pretrain.txt
To fine-tune on a specific scene (for example, fern) using the pretrained model, run:
# this example uses 2 GPUs (nproc_per_node=2)
python -m torch.distributed.launch --nproc_per_node=2 train.py --config configs/finetune_llff.txt
- Our current implementation is not well optimized for inference speed: rendering a 1000x800 image can take from 30 s to over a minute depending on the GPU model. To reduce inference time, increase the chunk size so that GPU memory is fully utilized. You can also decrease the number of input source views, at the cost of some rendering quality.
- If you want to create and train on your own datasets, you can implement your own Dataset class following our examples in ibrnet/data_loaders/. You can verify the camera poses using data_verifier.py in ibrnet/data_loaders/. A minimal skeleton of such a Dataset class is sketched after these notes.
- Since the evaluation datasets are either object-centric or forward-facing scenes, our provided view selection methods are very simple (based on either viewpoints or camera locations). If you want to evaluate our method on new scenes with other kinds of camera distributions, you might need to implement your own view selection method to identify the most effective source views; a simple camera-location heuristic is also sketched after these notes.
- If you have any questions, you can contact [email protected].
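For reference, here is a minimal skeleton of a custom Dataset class as mentioned in the notes above. The class name, arguments, and dictionary keys are placeholders; the exact fields the training loop expects are defined by the existing loaders in ibrnet/data_loaders/.

from torch.utils.data import Dataset

class MyScenesDataset(Dataset):
    # Skeleton only: fill in loading logic following ibrnet/data_loaders/.
    def __init__(self, root_dir, num_source_views=10):
        self.root_dir = root_dir
        self.num_source_views = num_source_views
        self.metadata = []  # per-view image paths, intrinsics, and poses

    def __len__(self):
        return len(self.metadata)

    def __getitem__(self, idx):
        # Load the target view plus its selected source views here, e.g. as
        # {'rgb': ..., 'camera': ..., 'src_rgbs': ..., 'src_cameras': ...}.
        raise NotImplementedError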
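Likewise, the camera-location heuristic mentioned above can be as simple as picking the source cameras closest to the target camera. The sketch below is only an illustration with made-up function and argument names, not the repository's implementation.

import numpy as np

def select_nearest_views(target_pose, source_poses, num_select=10):
    # target_pose: (4, 4) camera-to-world matrix of the view to render.
    # source_poses: (N, 4, 4) camera-to-world matrices of candidate source views.
    # Returns indices of the num_select source cameras closest to the target.
    target_center = target_pose[:3, 3]
    source_centers = source_poses[:, :3, 3]
    dists = np.linalg.norm(source_centers - target_center, axis=-1)
    return np.argsort(dists)[:num_select]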
@inproceedings{wang2021ibrnet,
author = {Wang, Qianqian and Wang, Zhicheng and Genova, Kyle and Srinivasan, Pratul and Zhou, Howard and Barron, Jonathan T. and Martin-Brualla, Ricardo and Snavely, Noah and Funkhouser, Thomas},
title = {IBRNet: Learning Multi-View Image-Based Rendering},
booktitle = {CVPR},
year = {2021}
}