## Pretrained Model Weights and Configurations

| Model | Config | Log |
| --- | --- | --- |
| FastMaskVim-B.ckpt | FastMaskVim-B.yaml | FastMaskVim-B.csv |
| FastMaskVim-L.ckpt | FastMaskVim-L.yaml | FastMaskVim-L.csv |
| FastMaskVim-H.ckpt | FastMaskVim-H.yaml | FastMaskVim-H.csv |
| Vim-B.ckpt | Vim-B.yaml | Vim-B.csv |
| Vim-L.ckpt | Vim-L.yaml | Vim-L.csv |

Notes:

- For reproducibility, make sure the overall effective batch size stays at 4096 across all GPUs/nodes; the `accum_iter` flag can be used for gradient accumulation when fewer GPUs are available.
- `trainer/global_step` in the log files counts gradient steps at an effective batch size of 4096.
- Set the `imagenet_train_dir_path` flag in `datasets_mae.py` to your ImageNet training directory.
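The batch-size arithmetic behind the `accum_iter` note can be sketched as follows. This is a minimal illustration, not code from the repository; the helper name is invented, and it assumes the usual convention that the effective batch size equals per-GPU batch × number of GPUs × accumulation steps.

```python
def required_accum_iter(target_batch: int, per_gpu_batch: int, num_gpus: int) -> int:
    """Return the accum_iter value so that
    per_gpu_batch * num_gpus * accum_iter == target_batch.

    Illustrative helper only; not part of the repository.
    """
    per_step = per_gpu_batch * num_gpus
    if target_batch % per_step != 0:
        raise ValueError("target batch size must be divisible by per-step batch")
    return target_batch // per_step

# e.g. 8 GPUs at 128 images each need accum_iter = 4 to reach 4096
print(required_accum_iter(4096, 128, 8))  # -> 4
```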

## Finetuned Model Weights and Configurations

| Model | Top-1 Acc. | Config | Log |
| --- | --- | --- | --- |
| FastVim-B.ckpt | 83.0 | FastVim-B.yaml | FastVim-B.csv |
| FastVim-L.ckpt | 84.9 | FastVim-L.yaml | FastVim-L.csv |
| FastVim-H.ckpt | 86.1 | FastVim-H.yaml | FastVim-H.csv |
| FastVim-H_448.ckpt | 86.7 | FastVim-H_448.yaml | FastVim-H_448.csv |
| Vim-B.ckpt | 83.3 | Vim-B.yaml | Vim-B.csv |
| Vim-L.ckpt | 85.1 | Vim-L.yaml | Vim-L.csv |

Notes:

- For reproducibility, make sure the overall effective batch size stays at 1024 across all GPUs/nodes; the `accum_iter` flag can be used for gradient accumulation when fewer GPUs are available.
- `trainer/global_step` in the log files counts gradient steps at an effective batch size of 1024.
- Set the `imagenet_train_dir_path` and `imagenet_val_dir_path` flags in `datasets_finetune.py` to your ImageNet train and validation directories.
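When reading the CSV logs, `trainer/global_step` can be converted to approximate epochs. The sketch below is a convenience, not repository code; it assumes the standard ImageNet-1k train split of 1,281,167 images and an integer (drop-last style) number of steps per epoch.

```python
# Assumed constants: standard ImageNet-1k train split size and the
# fine-tuning effective batch size stated in the notes above.
IMAGENET_TRAIN_IMAGES = 1_281_167
EFFECTIVE_BATCH_SIZE = 1024

# Integer division mirrors a drop-last dataloader; adjust if yours differs.
steps_per_epoch = IMAGENET_TRAIN_IMAGES // EFFECTIVE_BATCH_SIZE

def step_to_epoch(global_step: int) -> float:
    """Map a trainer/global_step value from the logs to a fractional epoch."""
    return global_step / steps_per_epoch
```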