-
Hi,
-
It seems that in AID's paper, the models are first pre-trained on COCO for 210 epochs with the default settings and then fine-tuned with AID for another 210 epochs. The authors report that this training strategy brings some gains, but we have not fully verified this ourselves.
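For reference, here is a minimal sketch of what that two-stage schedule could look like. It assumes an MMDetection 2.x-style config; the file names, checkpoint path, and option values are placeholders for illustration, not the authors' actual settings.

```python
# Hypothetical two-stage schedule; keys follow MMDetection 2.x conventions and
# all file/checkpoint names below are placeholders, not the authors' config.

# Stage 1 (e.g. stage1_coco_210e.py): plain COCO pre-training, default settings.
runner = dict(type='EpochBasedRunner', max_epochs=210)
load_from = None  # usual initialisation (e.g. ImageNet-pretrained backbone)

# Stage 2 (e.g. stage2_aid_210e.py): fine-tune with AID for another 210 epochs,
# initialising from the stage-1 checkpoint.
runner = dict(type='EpochBasedRunner', max_epochs=210)
load_from = 'work_dirs/stage1_coco_210e/epoch_210.pth'  # assumed stage-1 checkpoint path
```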
-
Oh, thanks. I will try both; once I have results I will paste them here.