Jiabo Ye*, Anwen Hu*, Haiyang Xu†, Qinghao Ye, Ming Yan†, Yuhao Dan, Chenlin Zhao, Guohai Xu, Chenliang Li, Junfeng Tian, Qian Qi, Ji Zhang, Fei Huang
DAMO Academy, Alibaba Group
*Equal Contribution; † Corresponding Author
- An OCR-free end-to-end multimodal large language model.
- Applicable to various document-related scenarios.
- Capable of free-form question-answering and multi-round interaction.
Coming soon
- Online Demo on ModelScope.
- Online Demo on HuggingFace.
- Source code.
- Instruction Training Data.
mPLUG-DocOwl follows the training and inference code of UReader.
The evaluation dataset DocLLM can be found in ./DocLLM.
If you find this work useful, please consider giving this repository a star and citing our papers as follows:
@misc{ye2023ureader,
      title={UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model},
      author={Jiabo Ye and Anwen Hu and Haiyang Xu and Qinghao Ye and Ming Yan and Guohai Xu and Chenliang Li and Junfeng Tian and Qi Qian and Ji Zhang and Qin Jin and Liang He and Xin Alex Lin and Fei Huang},
      year={2023},
      eprint={2310.05126},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{ye2023mplugdocowl,
      title={mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding},
      author={Jiabo Ye and Anwen Hu and Haiyang Xu and Qinghao Ye and Ming Yan and Yuhao Dan and Chenlin Zhao and Guohai Xu and Chenliang Li and Junfeng Tian and Qian Qi and Ji Zhang and Fei Huang},
      year={2023},
      eprint={2307.02499},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}