
ReFiNe: Recursive Field Networks for Cross-Modal Multi-Scene Representation

This repository contains the PyTorch implementation of our paper, ReFiNe.

Sergey Zakharov · Katherine Liu · Adrien Gaidon · Rares Ambrus
SIGGRAPH, 2024

Installation

To set up the environment with Docker, build the image:

make docker-build

Once the image is built, start an interactive session:

make docker-interactive

You can then train and evaluate the method inside this session.

Generate Training Data

To replicate the workflow of our pipeline, first download three objects from the HB dataset, place them in the demo/ folder, and unzip them. To generate the ground-truth (GT) files, run:

python -m data.db_generate --config configs/config.yaml
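The generation step is driven entirely by configs/config.yaml. The actual schema is defined by the repository; the fragment below is only a hypothetical illustration of the kind of fields such a config holds (all key names here are invented):

```yaml
# Hypothetical illustration only -- the real configs/config.yaml in the
# repository defines its own key names and structure.
data:
  root: demo/        # where the unzipped HB objects were placed
  num_objects: 3     # the three demo objects downloaded above
```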

Training and Inference

The configs/config.yaml file stores default parameters for training and evaluation and points to the three provided objects. To start training, run:

python train.py --config configs/config.yaml
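Both train.py and visualize.py take their settings through a --config flag, as in the commands above. A minimal sketch of that CLI pattern (only the flag name is taken from this README; the rest is illustrative):

```python
import argparse

# Minimal sketch of the --config command-line pattern used by the
# scripts in this repository (train.py, visualize.py). Only the
# --config flag is taken from the commands above.
parser = argparse.ArgumentParser(description="ReFiNe training (sketch)")
parser.add_argument("--config", default="configs/config.yaml",
                    help="path to the YAML config with training/eval parameters")

# Parse an explicit argument list instead of sys.argv for demonstration.
args = parser.parse_args(["--config", "configs/config.yaml"])
print(args.config)
```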

To visualize the trained model using Open3D, run:

python visualize.py --path_net log/demo

This command extracts a dense point cloud from each decoded neural field and visualizes them sequentially. Press 9 to show normals, 1 to show RGB colors, - and + to decrease or increase the point size, and q to proceed to the next object.

Additionally, you can specify the lod_inc parameter to apply an increment on top of the default Level of Detail (LoD) to further densify the output point cloud. By default, this parameter is set to 1.

python visualize.py --config configs/config.yaml --lod_inc 1
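To get a feel for how lod_inc densifies the output: in an octree-style representation the grid resolution doubles per level, so the number of surface voxels, and hence sampled points, grows by roughly 4x per LoD increment. This 4x surface-scaling factor is an assumption about octree-based fields in general, not a measured property of ReFiNe:

```python
# Back-of-the-envelope estimate of how lod_inc densifies the output.
# Assumption (not from the paper): surface points lie on a 2D manifold
# inside a 3D octree, so doubling the grid resolution (one LoD step)
# multiplies the number of surface voxels, and thus points, by ~4.
def approx_points(base_points: int, lod_inc: int) -> int:
    """Estimated point count after raising the LoD by lod_inc levels."""
    return base_points * 4 ** lod_inc

print(approx_points(100_000, 1))  # one extra level: ~4x the points
```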

Pre-trained Models

We provide pre-trained models on various datasets:

| Dataset (GB) | # Objects | Latent | Size (MB) | Link |
| --- | --- | --- | --- | --- |
| Thingi32 (0.47) | 32 | 64 | 3.4 | model |
| ShapeNet150 (0.63) | 150 | 96 | 4.1 | model |
| HB (0.53) | 33 | 64 | 4 | model |
| BOP (0.91) | 201 | 512 | 92.8 | model |
| GSO (13.6) | 1024 | 512 | 94.5 | model |
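The table makes the compression explicit: for example, Thingi32 occupies 0.47 GB on disk while its trained model is 3.4 MB, roughly a 140x reduction. A quick calculation over all rows (assuming 1 GB = 1024 MB):

```python
# Compression ratios implied by the table above: raw dataset size (GB)
# versus trained model size (MB), assuming 1 GB = 1024 MB.
datasets = {
    "Thingi32":    (0.47, 3.4),
    "ShapeNet150": (0.63, 4.1),
    "HB":          (0.53, 4.0),
    "BOP":         (0.91, 92.8),
    "GSO":         (13.6, 94.5),
}
ratios = {name: gb * 1024 / mb for name, (gb, mb) in datasets.items()}
for name, r in ratios.items():
    print(f"{name}: ~{r:.0f}x smaller than the raw data")
```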

To visualize a pre-trained model, download it into the pretrained/ folder and run:

python visualize.py --config pretrained/[model]/config.yaml

Acknowledgements

We used functions from NVIDIA's Kaolin and Kaolin Wisp libraries, as well as from Open3D, in our implementation.

Reference

@inproceedings{refine,
    title={ReFiNe: Recursive Field Networks for Cross-Modal Multi-Scene Representation},
    author={Sergey Zakharov and Katherine Liu and Adrien Gaidon and Rares Ambrus},
    booktitle={SIGGRAPH},
    year={2024}
}

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
