This is the official PyTorch implementation of our paper:
Elevating Flow-Guided Video Inpainting with Reference Generation, AAAI 2025
Suhwan Cho, Seoung Wug Oh, Sangyoun Lee, Joon-Young Lee
Link: [arXiv]
You can also find other related papers at awesome-video-inpainting.
demo.mp4
Existing video inpainting (VI) approaches face challenges due to the inherent ambiguity between known-content propagation and new-content generation. To address this, we propose a robust VI framework that integrates a large generative model to decouple these two tasks. To further improve how pixels are propagated across frames, we introduce an advanced pixel propagation protocol named one-shot pulling. Furthermore, we present the HQVI benchmark, a dataset specifically designed to evaluate VI performance in diverse and realistic scenarios.
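The exact one-shot pulling protocol is described in the paper, but at its core, flow-guided pixel propagation amounts to backward-warping a reference frame into the target frame with a dense optical flow field. Below is a minimal illustrative sketch in PyTorch; the function name `pull_pixels` and its interface are hypothetical and not taken from this repository:

```python
import torch
import torch.nn.functional as F

def pull_pixels(reference, flow):
    """Warp `reference` toward the target frame in a single pass
    (backward warping via grid_sample).

    reference: (B, C, H, W) source frame to pull pixels from
    flow:      (B, 2, H, W) flow from target to reference, in pixels
    """
    b, _, h, w = flow.shape
    # Base sampling grid in pixel coordinates, (1, 2, H, W)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)
    # Each target pixel samples the reference at (grid + flow)
    coords = grid + flow
    # Normalize coordinates to [-1, 1] as grid_sample expects
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(reference, norm_grid, align_corners=True)
```

With a zero flow field, the warp is an identity mapping, which is a handy sanity check when wiring flow-guided propagation into a pipeline.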