Official PyTorch Implementation for "Learning to Learn from APIs: Black-Box Data-Free Meta-Learning" (ICML 2023)
For our paper accepted by ICML 2023, please refer to the [camera-ready] version (forthcoming) or the arXiv version with the supplementary materials.
Requirements:
- python == 3.8.15
- torch == 1.13.1
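A quick way to confirm your environment matches these pins (a minimal sketch; the `torch` check degrades gracefully when PyTorch is not installed yet):

```python
import sys

# Sanity-check the version pins listed above.
assert sys.version_info >= (3, 8), "Python >= 3.8 is required"

try:
    import torch  # torch == 1.13.1 is the tested version
    print("torch", torch.__version__)
except ImportError:
    print("PyTorch missing; install with: pip install torch==1.13.1")
```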
Datasets:
- CIFAR-FS:
  - Please manually download the CIFAR-FS dataset from here to obtain "cifar100.zip".
  - Unzip "cifar100.zip". The directory structure is as follows:

        cifar100
        ├─meta_train
        │  ├─apple (label_directory)
        │  │  └─***.png (image_file)
        │  └─...
        ├─meta_val
        │  ├─...
        │  └─...
        └─meta_test
           ├─...
           └─...

  - Place it in "./DFL2Ldata/".
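To confirm the unzipped layout before moving on, a small sanity check can help. This is an illustrative sketch only; it assumes the `meta_train`/`meta_val`/`meta_test` directory names shown in the tree above.

```python
import os

def check_layout(root="./DFL2Ldata/cifar100"):
    """Return the split directories that are present under the dataset root.

    Assumes the meta_train/meta_val/meta_test names shown in the tree
    above; adjust the list if your unzipped names differ.
    """
    expected = ["meta_train", "meta_val", "meta_test"]
    return [s for s in expected if os.path.isdir(os.path.join(root, s))]

if __name__ == "__main__":
    found = check_layout()
    missing = {"meta_train", "meta_val", "meta_test"} - set(found)
    print("found:", found, "missing:", sorted(missing))
```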
- Mini-ImageNet: Please manually download it here. Unzip it and then place it in "./DFL2Ldata/".
- CUB: Please manually download it here. Unzip it and then place it in "./DFL2Ldata/".
Pre-trained models:
- You can pre-train the models following the instructions below (Step 3).
Quick start:
1. Make sure that the root directory is "./BiDf-MKD".
2. Prepare the dataset files.
- For CIFAR-FS:
python ./write_cifar100_filelist.py
After running, you will obtain "meta_train.csv", "meta_val.csv", and "meta_test.csv" files under "./DFL2Ldata/cifar100/split/".
- For MiniImageNet:
python ./write_miniimagenet_filelist.py
After running, you will obtain "meta_train.csv", "meta_val.csv", and "meta_test.csv" files under "./DFL2Ldata/Miniimagenet/split/".
- For CUB:
python ./write_CUB_filelist.py
After running, you will obtain "meta_train.csv", "meta_val.csv", and "meta_test.csv" files under "./DFL2Ldata/CUB_200_2011/split/".
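As an illustration of what these split files contain, the sketch below writes one row per image under a split directory. This is not the repo's actual implementation: the `write_*_filelist.py` scripts define the real CSV format, and the `filename`/`label` column names here are assumptions.

```python
import csv
import os

def write_split_csv(split_dir, out_csv):
    """Write one 'filename,label' row per image found under split_dir.

    Illustrative sketch: each subdirectory of split_dir is treated as a
    class label, matching the label-directory layout shown earlier.
    """
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "label"])  # assumed header
        for label in sorted(os.listdir(split_dir)):
            label_dir = os.path.join(split_dir, label)
            if not os.path.isdir(label_dir):
                continue
            for name in sorted(os.listdir(label_dir)):
                writer.writerow([name, label])
```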
3. Prepare the pre-trained models.
bash ./scriptskd/pretrain.sh
Some options you may change:

| Option | Help |
| --- | --- |
| --dataset | cifar100/miniimagenet/cub |
| --pre_backbone | conv4/resnet10/resnet18 |
4. Meta training.
- For API-SS scenario:
bash ./scriptskd/ss.sh
- For API-SH scenario:
bash ./scriptskd/sh.sh
- For API-MH scenario:
bash ./scriptskd/mh.sh
Some options you may change:

| Option | Help |
| --- | --- |
| --dataset | cifar100/miniimagenet/cub for API-SS and API-SH; mix for API-MH |
| --num_sup_train | 1 for 1-shot, 5 for 5-shot |
| --backbone | conv4, the architecture of the meta model |
| --pre_backbone | conv4/resnet10/resnet18 for API-SS; mix for API-SH and API-MH |
| --q | number of inference queries for zeroth-order gradient estimation |
| --numsplit | batch count for parallel inference; reduce it for lower memory cost |
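The `--q` option controls how many forward queries the zeroth-order gradient estimator spends per update. As background, a minimal two-point, random-direction estimator can be sketched as below; this is an illustrative sketch of the general technique, not the repo's actual implementation, and `loss_fn` stands in for a black-box API that only returns loss values (no backpropagation).

```python
import torch

def zeroth_order_grad(loss_fn, w, q=20, mu=1e-3):
    """Estimate d loss / d w using q two-point forward queries.

    Each query probes a random direction u and uses the finite
    difference (loss(w + mu*u) - loss(w)) / mu as a directional
    derivative estimate; averaging scale*u over q probes approximates
    the gradient without any backward pass through loss_fn.
    """
    base = loss_fn(w)
    grad = torch.zeros_like(w)
    for _ in range(q):
        u = torch.randn_like(w)  # random probe direction
        grad += (loss_fn(w + mu * u) - base) / mu * u
    return grad / q

# Usage: on a smooth function the estimate tracks the true gradient
# (here loss = sum(w^2), whose true gradient is 2*w).
w = torch.tensor([1.0, -2.0])
est = zeroth_order_grad(lambda x: (x ** 2).sum(), w, q=500)
```

Larger `q` lowers the variance of the estimate at the cost of proportionally more forward (API) calls, which is the trade-off the option exposes.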
If you find this code useful for your research, please consider citing our paper.
    @inproceedings{hu2023learning,
      title={Learning to Learn from APIs: Black-Box Data-Free Meta-Learning},
      author={Hu, Zixuan and Shen, Li and Wang, Zhenyi and Wu, Baoyuan and Yuan, Chun and Tao, Dacheng},
      booktitle={International Conference on Machine Learning},
      year={2023}
    }
Some code is inspired by CMI.
- Zixuan Hu: [email protected]