# EnlightenGAN-inference

A very minimalistic wrapper for EnlightenGAN inference. It uses carefully converted pretrained weights (with preprocessing baked in) from the original repo and requires only onnxruntime as the inference engine.

## Installation

```shell
pip install git+https://github.com/arsenyinfo/EnlightenGAN-inference
```

## Usage

```python
from enlighten_inference import EnlightenOnnxModel
import cv2

img = cv2.imread('/path/to/image.jpg')

# By default, CUDAExecutionProvider is used
model = EnlightenOnnxModel()
# However, you can set the provider priority explicitly, e.g.:
model = EnlightenOnnxModel(providers=["CPUExecutionProvider"])

processed = model.predict(img)
```
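A common follow-up is to compare the enhanced output against the input. The sketch below assumes `predict` returns an HxWx3 `uint8` image array matching the input shape (an assumption, not verified against the library); the dummy arrays stand in for `cv2.imread(...)` and `model.predict(img)`:

```python
import numpy as np

# Hypothetical stand-ins: in practice `img` comes from cv2.imread and
# `processed` from model.predict(img); both are assumed HxWx3 uint8 arrays.
img = np.zeros((64, 64, 3), dtype=np.uint8)
processed = np.full((64, 64, 3), 255, dtype=np.uint8)

# Side-by-side comparison: input on the left, enhanced result on the right.
comparison = np.hstack([img, processed])

# With real images you would save it via cv2.imwrite('comparison.jpg', comparison)
print(comparison.shape)  # → (64, 128, 3)
```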