
Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement (under review at WACV 2025)

Retinal fundus photography offers a non-invasive way to diagnose and monitor a variety of retinal diseases, but it is prone to inherent quality glitches arising from systemic imperfections or operator/patient-related factors. High-quality retinal images are crucial for accurate diagnoses and automated analyses. Fundus image enhancement is typically formulated as a distribution alignment problem: finding a one-to-one mapping between a low-quality image and its high-quality counterpart. This paper proposes a context-informed optimal transport (OT) learning framework for tackling unpaired fundus image enhancement. In contrast to standard generative image enhancement methods, which struggle with contextual information (e.g., over-tampered local structures and unwanted artifacts), the proposed context-aware OT learning paradigm better preserves local structures and minimizes unwanted artifacts. Leveraging deep contextual features, we derive the proposed context-aware OT objective from the earth mover's distance and show that it enjoys a solid theoretical guarantee.

Experimental results on a large-scale dataset demonstrate the superiority of the proposed method over several state-of-the-art supervised and unsupervised methods in terms of signal-to-noise ratio, structural similarity index, and performance on two downstream tasks. By enhancing image quality and downstream-task performance, the proposed method shows potential for advancing the utility of retinal fundus image-driven pipelines in routine clinical practice.
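For readers skimming the repo, the core objective is an earth mover's distance (EMD) computed over deep contextual features rather than raw pixels. A standard discrete formulation (our notation; see the paper for the exact objective and its theoretical guarantee) is:

$$
\mathrm{EMD}(\mu,\nu)=\min_{T\in\Pi(a,b)}\sum_{i,j}T_{ij}\,C_{ij},\qquad
C_{ij}=1-\frac{\langle\phi(x_i),\phi(y_j)\rangle}{\lVert\phi(x_i)\rVert\,\lVert\phi(y_j)\rVert},
$$

where $\Pi(a,b)$ is the set of transport plans with marginals $a$ and $b$, and $\phi(\cdot)$ denotes contextual features extracted by a pretrained network (here, VGG-19).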

Training code for the baseline models:

Link to the paper

Data Pre-Processing

To train our model, we used the publicly available EyeQ dataset, which can be downloaded from here. We adapted the degradation technique described in "Modeling and Enhancing Low-quality Retinal Fundus Images" [IEEE TMI, 2021]. Code for the degradation is available here.
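For orientation, preparing the synthetic low-quality counterparts might look like the sketch below. The `degrade_image` import is a hypothetical stand-in for the entry point of the degradation code linked above, and the directory names are assumptions; adapt both to your local setup.

```python
from pathlib import Path
from PIL import Image

# Hypothetical stand-in for the degradation code linked above;
# replace with the actual entry point of that repository.
from degradation import degrade_image

SRC = Path("EyeQ/high_quality")   # high-quality EyeQ images (path assumed)
DST = Path("EyeQ/degraded")       # synthetic low-quality counterparts
DST.mkdir(parents=True, exist_ok=True)

for img_path in SRC.glob("*.jpeg"):
    img = Image.open(img_path).convert("RGB")
    low = degrade_image(img)      # apply the TMI-2021-style degradations
    low.save(DST / img_path.name)
```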

Pictorial representation of our approach

Contextual OT figure

Installing modules for Contextual loss

This work is inspired by "The Contextual Loss for Image Transformation with Non-Aligned Data" [ECCV 2018]. Please clone that repo to use its pretrained VGG-19 for extracting contextual embeddings.
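As a rough sketch of what the contextual embedding step involves, the snippet below builds a cosine-distance cost matrix from VGG-19 features. It uses torchvision's stock ImageNet weights and an assumed layer cutoff rather than the exact checkpoint and layers used in this repo, so treat those choices as placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Stock ImageNet VGG-19 as an illustrative substitute for the
# pretrained network shipped with the contextual-loss repo.
features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:27].eval()
for p in features.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def contextual_cost(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Cosine-distance cost matrix between spatial features of two images.

    x, y: (1, 3, H, W) tensors, ImageNet-normalized. The cutoff at layer 27
    (around the relu4_* block) is an assumption; match it to the paper.
    """
    fx = features(x).flatten(2).squeeze(0).T   # (Nx, C) spatial feature vectors
    fy = features(y).flatten(2).squeeze(0).T   # (Ny, C)
    fx = F.normalize(fx, dim=1)
    fy = F.normalize(fy, dim=1)
    return 1.0 - fx @ fy.T                     # (Nx, Ny) costs in [0, 2]
```

Feeding this cost matrix, with uniform marginals, to an exact OT solver (e.g., `ot.emd2` from the POT library) yields an earth-mover's-style distance between the two feature distributions.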

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

Installation

1. Clone the repo

   ```sh
   git clone https://github.com/Retinal-Research/Contextual-OT.git
   ```

2. Create a Python environment and install the required libraries by running

   ```sh
   pip install -r requirements.txt
   ```

Inference using Pre-trained weights

The pre-trained weights can be downloaded from here. Place the weights in ./Experiment/ in the current directory, and verify the path to the weights in the test.py script. After making these changes, run test.py to save the enhanced images in the ./results folder.
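If you prefer to script the inference step yourself, a minimal sketch is below. `Generator`, the checkpoint filename, and the input directory are hypothetical placeholders; match them to what test.py actually defines in this repo.

```python
import torch
from pathlib import Path
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

from model import Generator  # hypothetical: use the network class from this repo

WEIGHTS = Path("./Experiment/model.pth")  # filename assumed; match your download

net = Generator()
net.load_state_dict(torch.load(WEIGHTS, map_location="cpu"))
net.eval()

to_tensor = transforms.ToTensor()
out_dir = Path("./results")
out_dir.mkdir(exist_ok=True)

with torch.no_grad():
    for img_path in Path("./test_images").glob("*.jpeg"):  # input dir assumed
        x = to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0)
        y = net(x).clamp(0, 1)                 # enhanced image in [0, 1]
        save_image(y, out_dir / img_path.name)
```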

Training our model

Visual results


Downstream Task results

