Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
In this paper, we present an open-set object detector, called Grounding DINO, by marrying Transformer-based detector DINO with grounded pre-training, which can detect arbitrary objects with human inputs such as category names or referring expressions. The key solution of open-set object detection is introducing language to a closed-set detector for open-set concept generalization. To effectively fuse language and vision modalities, we conceptually divide a closed-set detector into three phases and propose a tight fusion solution, which includes a feature enhancer, a language-guided query selection, and a cross-modality decoder for cross-modality fusion. While previous works mainly evaluate open-set object detection on novel categories, we propose to also perform evaluations on referring expression comprehension for objects specified with attributes. Grounding DINO performs remarkably well on all three settings, including benchmarks on COCO, LVIS, ODinW, and RefCOCO/+/g. Grounding DINO achieves a 52.5 AP on the COCO detection zero-shot transfer benchmark, i.e., without any training data from COCO. It sets a new record on the ODinW zero-shot benchmark with a mean 26.1 AP.
```shell
cd $MMDETROOT

# source installation
pip install -r requirements/multimodal.txt

# or mim installation
mim install mmdet[multimodal]
```
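To verify that the multimodal dependencies are available, a quick import check can help (transformers is the key extra dependency pulled in above):

```python
import mmdet
import transformers

# Both imports should succeed after installing the multimodal extras.
print(mmdet.__version__, transformers.__version__)
```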
Grounding DINO uses BERT as its language model, which requires access to https://huggingface.co/. If you encounter connection errors because the network is inaccessible, you can download the required files on a machine with internet access and save them locally, then set the `lang_model_name` field in the config to the local path. Please refer to the following code:
```python
from transformers import AutoTokenizer, BertConfig, BertModel

# Download the BERT config, weights, and tokenizer from Hugging Face
config = BertConfig.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", add_pooling_layer=False, config=config)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Save all three locally; point `lang_model_name` in the config at this directory
config.save_pretrained("your path/bert-base-uncased")
model.save_pretrained("your path/bert-base-uncased")
tokenizer.save_pretrained("your path/bert-base-uncased")
```
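As a quick sanity check that the local copy is complete, it can be loaded back by path. A minimal sketch, where `your path/bert-base-uncased` is the placeholder directory used above:

```python
from transformers import AutoTokenizer, BertModel

# Load entirely from the local directory saved above; no network access needed.
local_path = "your path/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(local_path)
model = BertModel.from_pretrained(local_path, add_pooling_layer=False)
print(model.config.hidden_size)  # 768 for bert-base-uncased
```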
```shell
cd $MMDETROOT

wget https://download.openmmlab.com/mmdetection/v3.0/grounding_dino/groundingdino_swint_ogc_mmdet-822d7e9d.pth

python demo/image_demo.py \
    demo/demo.jpg \
    configs/grounding_dino/grounding_dino_swin-t_pretrain_obj365_goldg_cap4m.py \
    --weights groundingdino_swint_ogc_mmdet-822d7e9d.pth \
    --texts 'bench . car .'
```
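The same demo can be driven from Python. Below is a minimal sketch using MMDetection's `DetInferencer`, assuming it forwards the `texts` prompt the way `demo/image_demo.py` does; the `out_dir` value is illustrative:

```python
from mmdet.apis import DetInferencer

# Build the inferencer from the same config and converted weights as above.
inferencer = DetInferencer(
    model='configs/grounding_dino/grounding_dino_swin-t_pretrain_obj365_goldg_cap4m.py',
    weights='groundingdino_swint_ogc_mmdet-822d7e9d.pth')

# Categories in the text prompt are separated by ' . ', matching the CLI usage.
inferencer('demo/demo.jpg', texts='bench . car .', out_dir='outputs/')
```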
| Model | Backbone | Style | COCO mAP | Official COCO mAP | Pre-Train Data | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Grounding DINO-T | Swin-T | Zero-shot | 48.5 | 48.4 | O365,GoldG,Cap4M | config | model |
| Grounding DINO-T | Swin-T | Finetune | 58.1(+0.9) | 57.2 | O365,GoldG,Cap4M | config | model \| log |
| Grounding DINO-B | Swin-B | Zero-shot | 56.9 | 56.7 | COCO,O365,GoldG,Cap4M,OpenImage,ODinW-35,RefCOCO | config | model |
| Grounding DINO-B | Swin-B | Finetune | 59.7 | | COCO,O365,GoldG,Cap4M,OpenImage,ODinW-35,RefCOCO | config | model \| log |
| Grounding DINO-R50 | R50 | Scratch | 48.9(+0.8) | 48.1 | | config | model \| log |
Note:
- The zero-shot weights are converted from the official weights using the conversion script; we have not retrained the models for the time being.
- Finetune refers to fine-tuning on the COCO 2017 dataset. The R50 model is trained with 8 NVIDIA GeForce RTX 3090 GPUs, while the remaining models are trained with 16. GPU memory usage is approximately 8.5 GB.
- Our performance is higher than the official model for two reasons: we modified the initialization strategy and introduced a log scaler.
To facilitate fine-tuning on custom datasets, we use a simple cat dataset as an example, as shown in the following steps.
```shell
cd mmdetection
wget https://download.openmmlab.com/mmyolo/data/cat_dataset.zip
unzip cat_dataset.zip -d data/cat/
```
The cat dataset is a single-category dataset with 144 images that has already been converted to COCO format.
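To sanity-check the converted annotations, a small pycocotools sketch can be used; the annotation filename `annotations/trainval.json` below is an assumption about the archive layout:

```python
from pycocotools.coco import COCO

# Inspect the converted COCO-format annotations.
coco = COCO('data/cat/annotations/trainval.json')
print(coco.loadCats(coco.getCatIds()))  # expect the single 'cat' category
print(len(coco.getImgIds()))            # number of images in this split
```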
Because the cat dataset is small and simple, we train for 20 epochs on 8 GPUs, scale the learning rate accordingly, and do not train the language model, fine-tuning only the visual model.
Details of the configuration can be found in `grounding_dino_swin-t_finetune_8xb2_20e_cat.py`.
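For orientation, here is a sketch of the kind of overrides such a config contains; the field names follow MMDetection conventions, and the exact values below are illustrative rather than copied from the shipped file:

```python
# Illustrative sketch of grounding_dino_swin-t_finetune_8xb2_20e_cat.py;
# values here are assumptions, not the shipped config.
_base_ = 'grounding_dino_swin-t_pretrain_obj365_goldg_cap4m.py'

data_root = 'data/cat/'
class_name = ('cat', )
metainfo = dict(classes=class_name)

model = dict(bbox_head=dict(num_classes=len(class_name)))

# Freeze the language model and lower the backbone LR; only the visual
# side is fine-tuned, with the base LR scaled for the 8x2 batch setting.
optim_wrapper = dict(
    optimizer=dict(lr=0.0001),
    paramwise_cfg=dict(
        custom_keys={
            'backbone': dict(lr_mult=0.1),
            'language_model': dict(lr_mult=0.0),
        }))

# Keep only the checkpoint that scores best on the evaluation metric.
default_hooks = dict(checkpoint=dict(save_best='auto', max_keep_ckpts=1))
```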
Because Grounding DINO is an open-set detection model, it can perform detection and be evaluated on the cat dataset even without being trained on it.
The single image visualization is as follows:
```shell
cd mmdetection

python demo/image_demo.py data/cat/images/IMG_20211205_120756.jpg \
    configs/grounding_dino/grounding_dino_swin-t_finetune_8xb2_20e_cat.py \
    --weights https://download.openmmlab.com/mmdetection/v3.0/grounding_dino/groundingdino_swint_ogc_mmdet-822d7e9d.pth \
    --texts cat.
```
Evaluation on the test dataset with a single GPU is as follows (a value of -1.000 means the dataset contains no objects in that area range):
```shell
python tools/test.py configs/grounding_dino/grounding_dino_swin-t_finetune_8xb2_20e_cat.py \
    https://download.openmmlab.com/mmdetection/v3.0/grounding_dino/groundingdino_swint_ogc_mmdet-822d7e9d.pth
```
```text
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.867
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.931
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.867
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.903
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.907
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.907
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.907
```
Fine-tuning on 8 GPUs is then launched with:

```shell
./tools/dist_train.sh configs/grounding_dino/grounding_dino_swin-t_finetune_8xb2_20e_cat.py 8 --work-dir cat_work_dir
```
The model will be saved based on the best performance on the test set. The performance of the best model (at epoch 16) is as follows:
```text
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.905
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.923
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.905
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.927
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.937
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.937
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.937
```
We can see that after fine-tuning, the bbox mAP on the cat dataset increases from 86.7 to 90.5.
If we do single image inference visualization again, the result is as follows:
```shell
cd mmdetection

python demo/image_demo.py data/cat/images/IMG_20211205_120756.jpg \
    configs/grounding_dino/grounding_dino_swin-t_finetune_8xb2_20e_cat.py \
    --weights cat_work_dir/best_coco_bbox_mAP_epoch_16.pth \
    --texts cat.
```