# Survey: Image Mixing and Deleting for Data Augmentation
This repo supplements our paper: https://arxiv.org/abs/2106.07085

Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian
We intend to regularly update this repo with new papers. If you see any paper missing here, please create an issue or PR.
Neural networks are prone to overfitting and memorizing patterns in the training data. To avoid overfitting and to improve generalization, the literature offers various methods such as dropout, regularization, and label smoothing. One such method is data augmentation, which introduces different kinds of corruption into the data to keep the model from overfitting and memorizing its patterns. A sub-area of data augmentation is image mixing and deleting: this type of augmentation either deletes image regions or mixes two images, hiding or confusing particular characteristics of the images and forcing the network to emphasize the overall structure of the object in an image. Models trained with these techniques generalize better than models trained without them, and they gain the added benefit of robustness against image corruption. Owing to its low computational cost and recent success, researchers have proposed many image mixing and deleting techniques. We provide an in-depth survey of these techniques and categorize them by their most distinguishing features. We begin with some fundamental relevant concepts, then cover the essentials of each category: its strengths and limitations, working mechanism, basic formulations, and applications. Finally, we discuss the general challenges and recommend possible future research directions for image mixing and deleting data augmentation techniques.
## Cut and Delete
- Improved regularization of convolutional neural networks with cutout, arXiv, 2017. [Paper] [Code]
- Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization, ICCV, 2017. [Paper] [Code]
- Random erasing data augmentation, AAAI, 2020. [Paper] [Code]
- Gridmask data augmentation, arXiv, 2020. [Paper] [Code]
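The methods above share a simple core: occlude part of the training image so the network cannot rely on any single discriminative region. Below is a minimal NumPy sketch of this Cutout-style deletion, as an illustration only; the papers differ in how the mask is sized and placed (Random Erasing samples patch aspect ratios, GridMask drops a regular grid of cells), and the function name `cutout` and the fixed square patch are assumptions of this sketch, not any paper's reference implementation.

```python
import numpy as np

def cutout(image: np.ndarray, size: int = 16, rng=None) -> np.ndarray:
    """Zero out one random square patch of an H x W x C image.

    The patch center may fall near a border, in which case the
    visible part of the patch is clipped to the image bounds.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = rng.integers(h), rng.integers(w)            # random patch center
    y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = image.copy()
    out[y0:y1, x0:x1] = 0                                # delete the region
    return out
```

In practice this is applied per image at training time, typically after standard flips and crops, and the label is left unchanged.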
## Cut and Mix
- Ricap: Random image cropping and patching data augmentation for deep CNNs, ACML, 2018. [Paper] [Code]
- Improved mixed example data augmentation, WACV, 2019. [Paper] [Code]
- Cutmix: Regularization strategy to train strong classifiers with localizable features, ICCV, 2019. [Paper] [Code]
- Resizemix: Mixing data with preserved object information and true labels, arXiv, 2020. [Paper]
- Attentive cutmix: An enhanced data augmentation approach for deep learning based image classification, ICASSP, 2020. [Paper]
- Saliencymix: A saliency guided data augmentation strategy for better regularization, ICLR, 2021. [Paper] [Code]
- Keepaugment: A simple information-preserving data augmentation approach, CVPR, 2021. [Paper] [Code]
- Milking cowmask for semi-supervised image classification, VISIGRAPP, 2022. [Paper] [Code]
- Recursivemix: Mixed learning with history, NeurIPS, 2022. [Paper] [Code]
- Lumix: Improving mixup by better modelling label uncertainty, arXiv, 2022. [Paper]
- Saliency grafting: Innocuous attribution-guided mixup with calibrated label mixing, AAAI, 2022. [Paper]
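Cut-and-mix methods fill the removed region with content from a second image and mix the labels in proportion to the pasted area. Below is a hedged NumPy sketch of the basic CutMix-style formulation (box area tied to a Beta-sampled ratio, label weights corrected to the actual pasted area after clipping); the saliency- and attention-guided variants above choose the region more carefully, but the bookkeeping is the same. Names and defaults here are illustrative, not taken from any official code.

```python
import numpy as np

def cutmix(img_a, img_b, label_a, label_b, alpha=1.0, rng=None):
    """Paste a random box from img_b into img_a and mix the one-hot
    labels in proportion to the pasted area. Images must share H x W."""
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)                  # target mixing ratio
    rh = int(h * np.sqrt(1 - lam))                # box covers ~(1 - lam)
    rw = int(w * np.sqrt(1 - lam))                # of the image area
    cy, cx = rng.integers(h), rng.integers(w)     # random box center
    y0, y1 = np.clip([cy - rh // 2, cy + rh // 2], 0, h)
    x0, x1 = np.clip([cx - rw // 2, cx + rw // 2], 0, w)
    out = img_a.copy()
    out[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]       # paste region from img_b
    lam = 1 - (y1 - y0) * (x1 - x0) / (h * w)     # correct for border clipping
    return out, lam * label_a + (1 - lam) * label_b
```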
## Mix and Up
- Data augmentation by pairing samples for images classification, arXiv, 2018. [Paper]
- mixup: Beyond empirical risk minimization, ICLR, 2018. [Paper] [Code]
- Manifold mixup: Better representations by interpolating hidden states, ICML, 2019. [Paper] [Code]
- Remix: Rebalanced mixup, ECCV, 2020. [Paper]
- Augmix: A simple data processing method to improve robustness and uncertainty, ICLR, 2020. [Paper] [Code]
- Smoothmix: A simple yet effective data augmentation to train robust classifiers, CVPRW, 2020. [Paper]
- Puzzle mix: Exploiting saliency and local statistics for optimal mixup, ICML, 2020. [Paper] [Code]
- Co-mixup: Saliency guided joint mixup with supermodular diversity, ICLR, 2021. [Paper] [Code]
- Supermix: Supervising the mixing data augmentation, CVPR, 2021. [Paper] [Code]
- Stylemix: Separating content and style for enhanced data augmentation, CVPR, 2021. [Paper] [Code]
- Automix: Unveiling the power of mixup for stronger classifiers, ECCV, 2022. [Paper] [Code]
- Noisy feature mixup, ICLR, 2022. [Paper] [Code]
- Pixmix: Dreamlike pictures comprehensively improve safety measures, CVPR, 2022. [Paper] [Code]
- Alignmixup: Improving representations by interpolating aligned features, CVPR, 2022. [Paper] [Code]
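Mix-and-up methods skip the spatial cut entirely and interpolate whole images, along with their labels (Manifold Mixup does the same in hidden feature space). Below is a minimal sketch of the basic mixup formulation, x = lam * x_a + (1 - lam) * x_b with matching label interpolation; it assumes float images and one-hot labels, and the function name is an assumption of the sketch.

```python
import numpy as np

def mixup(img_a, img_b, label_a, label_b, alpha=0.2, rng=None):
    """Convex combination of two images and their one-hot labels.

    alpha controls the Beta distribution: small alpha keeps most
    samples close to one of the two originals.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                  # lambda ~ Beta(alpha, alpha)
    image = lam * img_a + (1 - lam) * img_b       # pixel-wise interpolation
    label = lam * label_a + (1 - lam) * label_b   # label interpolation
    return image, label
```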
## Applications
### Fine-Grained Image Recognition
- Attribute mix: semantic data augmentation for fine grained recognition, VCIP, 2020. [Paper]
- Intra-class part swapping for fine-grained image classification, WACV, 2021. [Paper]
- Snapmix: Semantically proportional mixing for augmenting fine-grained data, AAAI, 2021. [Paper] [Code]
### Object Detection
- A-fast-rcnn: Hard positive generation via adversary for object detection, CVPR, 2017. [Paper] [Code]
- Cut, paste and learn: Surprisingly easy synthesis for instance detection, ICCV, 2017. [Paper] [Code]
- Modeling visual context is key to augmenting object detection datasets, ECCV, 2018. [Paper] [Code]
- Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework, CVPR, 2021. [Paper]
### Vision Transformers
- Transmix: Attend to mix for vision transformers, CVPR, 2022. [Paper] [Code]
- Tokenmix: Rethinking image mixing for data augmentation in vision transformers, ECCV, 2022. [Paper] [Code]
- Tokenmixup: Efficient attention-guided token-level data augmentation for transformers, NeurIPS, 2022. [Paper] [Code]
### Self-Supervised Learning
- Improving contrastive learning by visualizing feature transformation, ICCV, 2021. [Paper] [Code]
- i-mix: A domain-agnostic strategy for contrastive representation learning, ICLR, 2021. [Paper] [Code]
- A simple data mixing prior for improving self-supervised learning, CVPR, 2022. [Paper] [Code]
### Semi-Supervised Learning
- Mixmatch: A holistic approach to semi-supervised learning, NeurIPS, 2019. [Paper] [Code]
- Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring, ICLR, 2020. [Paper] [Code]
- Dividemix: Learning with noisy labels as semi-supervised learning, ICLR, 2020. [Paper] [Code]
- Focalmix: Semi-supervised learning for 3d medical image detection, CVPR, 2020. [Paper]
- Fixmatch: Simplifying semi-supervised learning with consistency and confidence, NeurIPS, 2020. [Paper] [Code]
### Unsupervised Domain Adaptation
- Virtual mixup training for unsupervised domain adaptation, arXiv, 2019. [Paper] [Code]
- Improve unsupervised domain adaptation with mixup training, arXiv, 2020. [Paper]
- Fixbi: Bridging domain spaces for unsupervised domain adaptation, CVPR, 2021. [Paper] [Code]
### Unsupervised Representation Learning
- Center-wise local image mixture for contrastive representation learning, BMVC, 2021. [Paper]
- Un-mix: Rethinking image mixtures for unsupervised visual representation learning, AAAI, 2022. [Paper] [Code]
### Adversarial Robustness
- Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy, AISec@CCS, 2019. [Paper] [Code]
- Adversarial vertex mixup: Toward better adversarially robust generalization, CVPR, 2020. [Paper] [Code]
- Strong data augmentation sanitizes poisoning and backdoor attacks without an accuracy tradeoff, ICASSP, 2021. [Paper]
- Dp-instahide: Provably defusing poisoning and backdoor attacks with differentially private data augmentations, arXiv, 2021. [Paper]
### Privacy-Preserving Learning
- Instahide: Instance-hiding schemes for private distributed learning, ICML, 2020. [Paper] [Code]
- Datamix: Efficient privacy-preserving edge-cloud inference, ECCV, 2020. [Paper]
- Xor mixup: Privacy-preserving data augmentation for one-shot federated learning, arXiv, 2020. [Paper] [Code]
### Point Clouds
- Pointmixup: Augmentation for point clouds, ECCV, 2020. [Paper] [Code]
- Regularization strategy for point cloud via rigidly mixed sample, CVPR, 2021. [Paper] [Code]
- Pointcutmix: Regularization strategy for point cloud classification, Neurocomputing, 2022. [Paper] [Code]
## Citation
If you find this survey useful in your research, please cite the paper:

    @article{naveed2021survey,
      title={Survey: Image mixing and deleting for data augmentation},
      author={Naveed, Humza and Anwar, Saeed and Hayat, Munawar and Javed, Kashif and Mian, Ajmal},
      journal={arXiv preprint arXiv:2106.07085},
      url={https://arxiv.org/abs/2106.07085},
      year={2021}
    }