Retinal fundus photography is essential for diagnosing and monitoring retinal diseases. However, systemic imperfections and operator/patient-related factors can hinder the acquisition of high-quality retinal images. Previous efforts in retinal image enhancement primarily relied on GANs, which are limited by the trade-off between training stability and output diversity. In contrast, the Schrödinger Bridge (SB) offers a more stable solution by utilizing Optimal Transport (OT) theory to model a stochastic differential equation (SDE) between two arbitrary distributions. This allows SB to effectively transform low-quality retinal images into their high-quality counterparts. In this work, we leverage the SB framework to propose an image-to-image translation pipeline for retinal image enhancement. Additionally, previous methods often fail to capture fine structural details, such as blood vessels. To address this, we enhance our pipeline by introducing Dynamic Snake Convolution, whose tortuous receptive field can better preserve tubular structures. We name the resulting retinal fundus image enhancement framework the Context-aware Unpaired Neural Schrödinger Bridge (CUNSB-RFIE). To the best of our knowledge, this is the first endeavor to use the SB approach for retinal image enhancement. Experimental results on a large-scale dataset demonstrate the advantage of the proposed method over several state-of-the-art supervised and unsupervised methods in terms of image quality and performance on downstream tasks.
[IPMI 2023] OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing
To get a local copy up and running, follow these steps.
- Clone the repo:
  ```shell
  git clone https://github.com/Retinal-Research/CUNSB-RFIE.git
  ```
- Create a Python environment and install the required libraries:
  ```shell
  conda env create -f environment.yml
  ```
The original EyeQ dataset can be downloaded by following the instructions provided here. The synthetic degraded images were generated using the algorithms described here.
The pre-trained weights for the EyeQ dataset can be found in the ./pretrained directory.
To train from scratch on a custom dataset, first create a directory named `datasets` to store your data. For an A-to-B translation, organize the images into phase/domain subfolders named `<phase><domain>` (e.g., `trainA`, `testB`, `valB`), where `A` is the source domain and `B` is the target domain. Once organized, start training with:
```shell
bash run_train.sh
```
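For reference, the phase/domain layout described above can be scaffolded as follows. This is a minimal sketch: the exact set of subfolders the scripts expect is an assumption extrapolated from the `trainA`, `testB`, `valB` examples, so adjust it to match your train/val/test split.

```shell
# Create the dataset root with one subfolder per phase (train/val/test)
# and per domain (A = low-quality source, B = high-quality target).
mkdir -p datasets/trainA datasets/trainB \
         datasets/valA   datasets/valB \
         datasets/testA  datasets/testB

# Then copy your images into the matching folders, e.g.:
#   datasets/trainA/img_0001.png   (degraded fundus image)
#   datasets/trainB/img_0042.png   (high-quality fundus image)
```

Note that for unpaired training the A and B folders do not need to contain corresponding image pairs.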
To test on your custom dataset, run the testing script:
```shell
bash run_test.sh
```
All arguments for training and testing are defined in the `options` folder and in `./models/sb_model.py`.
- UNSB: https://github.com/cyclomon/UNSB
- DSCNet: https://github.com/yaoleiqi/dscnet
- EyeQ: https://github.com/HzFu/EyeQ
- CofeNet: https://github.com/joanshen0508/Fundus-correction-cofe-Net