Uni-Perceiver

This repository contains training code (pre-training, fine-tuning, and prompt-tuning), evaluation code, and pretrained models for the following papers:

Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks, CVPR 2022.

Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs, NeurIPS 2022.

Introduction

Uni-Perceiver is a generalist model (a generic perception model) that can process a variety of modalities and tasks with unified modeling and shared parameters. Different perception tasks are cast as the same formulation: finding the maximum-likelihood target for each input through the similarity of their representations. Uni-Perceiver is pre-trained on several uni-modal and multi-modal tasks and evaluated on a variety of downstream tasks, including novel tasks that did not appear in the pre-training stage. Thanks to the unified formulation, it can perform zero-shot inference on novel tasks, and it reaches performance close to or on par with state-of-the-art (SOTA) results via prompt tuning or fine-tuning.

[Figure: Uni-Perceiver overview]
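
As a concrete illustration of this formulation, the sketch below encodes an input and a set of candidate targets into a shared representation space, scores candidates by cosine similarity, and picks the maximum-likelihood one. It is a minimal sketch, not the repository's actual API; `encoder`, `inputs`, `candidate_targets`, and `tau` are hypothetical names.

```python
# Illustrative sketch of the unified formulation (hypothetical names, not the
# repository's API): encode the input and each candidate target into one
# space, score by similarity, and take the maximum-likelihood candidate.
import torch
import torch.nn.functional as F

def predict(encoder, inputs, candidate_targets, tau: float = 0.07):
    x = F.normalize(encoder(inputs), dim=-1)                # input representation, (d,)
    y = F.normalize(
        torch.stack([encoder(t) for t in candidate_targets]), dim=-1)  # (N, d)
    logits = y @ x / tau                 # similarities as unnormalized log-likelihoods, (N,)
    return logits.softmax(dim=-1).argmax()  # index of the maximum-likelihood target
```

In this view, classification candidates are class names, retrieval candidates are gallery items, and captioning decodes its target sequence under the same similarity scoring.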

In Uni-Perceiver-MoE, we found that interference among different tasks and modalities can degrade the performance of generalist models on some tasks compared with task-specialized models. We introduce Conditional Mixture-of-Experts (Conditional MoEs) to mitigate such interference. By incorporating the proposed Conditional MoEs, Uni-Perceiver-MoE effectively mitigates the interference across tasks and modalities, and achieves state-of-the-art results on a series of downstream tasks via prompt tuning on 1% of downstream data. Moreover, introducing Conditional MoEs preserves the generalist model's ability to conduct zero-shot inference on new tasks.

[Figure: Uni-Perceiver-MoE overview]
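
To make the idea concrete, here is a minimal sketch of a conditional MoE layer: the gate sees only a condition embedding (e.g., a task or modality id), not the token content. This is an illustrative sketch, not the repository's implementation; it assumes a dense, softmax-weighted mixture for brevity where sparse routing would be used in practice, and `ConditionalMoE` and all parameter names are hypothetical.

```python
# Minimal Conditional MoE sketch (illustrative, not the repository's code):
# experts are weighted by a gate conditioned on a task/modality embedding,
# so routing depends on what is being processed rather than on each token.
import torch
import torch.nn as nn

class ConditionalMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int, num_conditions: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.cond_embed = nn.Embedding(num_conditions, dim)  # one embedding per task/modality
        self.gate = nn.Linear(dim, num_experts)              # condition -> expert weights

    def forward(self, tokens: torch.Tensor, cond_id: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, dim); cond_id: (batch,) integer task/modality index
        weights = self.gate(self.cond_embed(cond_id)).softmax(dim=-1)  # (batch, E)
        expert_out = torch.stack([e(tokens) for e in self.experts])    # (E, batch, seq, dim)
        return torch.einsum("ebsd,be->bsd", expert_out, weights)       # per-sample mixture
```

For example, `ConditionalMoE(dim=768, num_experts=4, num_conditions=8)` applied to a `(2, 16, 768)` token batch with `cond_id = torch.tensor([0, 3])` mixes experts per sample according to its task id.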

Main Results and Pretrained Models

Base Models

Evaluation splits for both the Base and Large tables: ImageNet-1k uses the ILSVRC 2012 val split, MSCOCO the Karpathy test split, Flickr30k its test split, Kinetics-400 its test-dev split, and MSVD its val split.

| Model | ImageNet-1k Acc@1 | MSCOCO Caption BLEU-4 | Flickr30k Caption BLEU-4 | MSCOCO Retrieval R@1 i2t | MSCOCO Retrieval R@1 t2i | Flickr30k Retrieval R@1 i2t | Flickr30k Retrieval R@1 t2i | Kinetics-400 Acc@1 | MSVD Caption BLEU-4 | MSVD Retrieval R@1 v2t | MSVD Retrieval R@1 t2v |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Uni-Perceiver-BASE w/o Tuning | 79.2 | 32.0 | 14.7 | 64.9 | 50.7 | 82.3 | 71.1 | 74.5 | 22.6 | 50.3 | 38.7 |
| Uni-Perceiver-BASE PT (1%) | 80.9 | 35.5 | 30.2 | 68.4 | 51.9 | 91.0 | 76.0 | 74.8 | 59.5 | 62.7 | 43.8 |
| Uni-Perceiver-BASE FT (100%) | 84.0 | 36.4 | 31.2 | 69.8 | 53.9 | 92.7 | 77.5 | 77.7 | 63.3 | 62.8 | 45.8 |
| Uni-Perceiver-MoE-BASE w/o Tuning | 80.3 | 33.2 | 15.9 | 64.6 | 51.6 | 82.1 | 75.8 | 76.8 | 23.4 | 52.8 | 40.0 |
| Uni-Perceiver-MoE-BASE PT (1%) | 82.0 | 36.8 | 30.7 | 68.9 | 52.6 | 91.3 | 78.5 | 77.2 | 60.0 | 65.6 | 45.3 |
| Uni-Perceiver-MoE-BASE FT (100%) | 84.5 | 37.3 | 32.4 | 70.5 | 54.1 | 93.6 | 79.8 | 79.3 | 65.4 | 65.0 | 47.8 |

Large Models

| Model | ImageNet-1k Acc@1 | MSCOCO Caption BLEU-4 | Flickr30k Caption BLEU-4 | MSCOCO Retrieval R@1 i2t | MSCOCO Retrieval R@1 t2i | Flickr30k Retrieval R@1 i2t | Flickr30k Retrieval R@1 t2i | Kinetics-400 Acc@1 | MSVD Caption BLEU-4 | MSVD Retrieval R@1 v2t | MSVD Retrieval R@1 t2v |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Uni-Perceiver-LARGE w/o Tuning | 82.7 | 35.3 | 15.1 | 67.8 | 54.1 | 83.7 | 74.2 | 79.5 | 24.7 | 45.4 | 34.2 |
| Uni-Perceiver-LARGE PT (1%) | 84.2 | 38.6 | 32.9 | 73.3 | 56.2 | 92.1 | 80.0 | 80.0 | 67.2 | 65.5 | 48.6 |
| Uni-Perceiver-LARGE FT (100%) | 86.2 | 39.2 | 35.5 | 74.4 | 57.9 | 94.7 | 82.1 | 81.9 | 68.3 | 65.2 | 50.8 |
| Uni-Perceiver-MoE-LARGE w/o Tuning | 83.4 | 35.5 | 15.8 | 67.9 | 55.3 | 83.6 | 75.9 | 82.1 | 24.6 | 45.7 | 41.9 |
| Uni-Perceiver-MoE-LARGE PT (1%) | 84.9 | 39.3 | 33.7 | 73.3 | 57.1 | 92.4 | 80.6 | 83.0 | 67.6 | 66.4 | 50.3 |
| Uni-Perceiver-MoE-LARGE FT (100%) | 86.4 | 40.5 | 36.2 | 74.7 | 58.3 | 94.1 | 83.7 | 84.2 | 68.9 | 67.6 | 52.3 |
  • "w/o Tuning" denotes zero-shot evaluation, "PT (1%)" prompt tuning on 1% of the downstream training data, and "FT (100%)" fine-tuning on the full downstream training data.
  • The numbers are slightly better than those reported in the original Uni-Perceiver paper; they come from the reproduced version of Uni-Perceiver used as the baseline of Uni-Perceiver-MoE.
  • The image resolution for all tasks is 224×224.
  • See OtherResults.md for results on more tasks and datasets.

Usage

Requirements

  • Linux, CUDA >= 10.1, GCC >= 5.4

  • Python >= 3.7

  • PyTorch >= 1.8.0

  • Java >= 1.8 (for caption task evaluation)
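
A quick way to sanity-check these requirements before installing is to print the relevant versions. The snippet below is illustrative and not part of the repository; it assumes `torch` is importable and `java` is on your PATH.

```python
# Illustrative environment check (not part of the repository): print the
# versions that the requirements above ask for.
import subprocess
import sys

import torch

print("Python :", sys.version.split()[0])            # want >= 3.7
print("PyTorch:", torch.__version__)                 # want >= 1.8.0
print("CUDA   :", torch.version.cuda,                # want >= 10.1
      "| available:", torch.cuda.is_available())
subprocess.run(["java", "-version"])                 # want Java >= 1.8 (caption evaluation)
```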

Installation

```bash
git clone https://github.com/fundamentalvision/Uni-Perceiver
cd Uni-Perceiver
pip install -r requirements.txt
```

Data

See prepare_data.md.

Pre-trained Model Weights

See checkpoints.md.

Pre-training

See pretraining.md.

Fine-tuning

See finetuning.md.

Prompt-tuning

See prompt_tuning.md.

Inference

See inference.md.

TODO

  • release more pretrained models

    • Uni-Perceiver Tiny model
    • Uni-Perceiver Small model
    • Uni-Perceiver Huge model
  • support more datasets and tasks

License

Uni-Perceiver is licensed under the Apache-2.0 License.



Citing Uni-Perceiver

If you find Uni-Perceiver useful in your research, please consider giving a star ⭐ and citing:

```bibtex
@article{zhu2021uni,
  title={Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks},
  author={Zhu, Xizhou and Zhu, Jinguo and Li, Hao and Wu, Xiaoshi and Wang, Xiaogang and Li, Hongsheng and Wang, Xiaohua and Dai, Jifeng},
  journal={arXiv preprint arXiv:2112.01522},
  year={2021}
}

@article{zhu2022uni,
  title={Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs},
  author={Zhu, Jinguo and Zhu, Xizhou and Wang, Wenhai and Wang, Xiaohua and Li, Hongsheng and Wang, Xiaogang and Dai, Jifeng},
  journal={arXiv preprint arXiv:2206.04674},
  year={2022}
}
```

Acknowledgements

Many thanks to the following codebases, which helped us a lot in building this codebase:
