The torchvision model zoo provides a number of implementations of state-of-the-art architectures, but most of them are defined and tuned for ImageNet. Using them on other datasets is usually straightforward, yet sometimes the models need manual adjustment.
Unfortunately, none of the PyTorch repositories with ResNets on CIFAR10 provides an implementation that matches the original paper. If you simply run torchvision's models on CIFAR10, you get a model that differs in the number of layers and parameters, which makes a direct comparison with the paper's ResNets impossible. The purpose of this repo is to provide a valid PyTorch implementation of the ResNets for CIFAR10 as described in the original paper. The following models are provided:
Name | # layers | # params | Test err (paper) | Test err (this impl.) |
---|---|---|---|---|
ResNet20 | 20 | 0.27M | 8.75% | 8.27% |
ResNet32 | 32 | 0.46M | 7.51% | 7.37% |
ResNet44 | 44 | 0.66M | 7.17% | 6.90% |
ResNet56 | 56 | 0.85M | 6.97% | 6.61% |
ResNet110 | 110 | 1.7M | 6.43% | 6.32% |
ResNet1202 | 1202 | 19.4M | 7.93% | 6.18% |
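For reference, the paper's CIFAR10 ResNets stack 6n+2 weighted layers: an initial 3x3 convolution, three stages of 2n basic blocks with 16, 32, and 64 filters on 32x32 inputs, and a final linear classifier (n=3 gives ResNet20, n=5 gives ResNet32, and so on). The sketch below only illustrates this construction with hypothetical names (`BasicBlock`, `make_cifar_resnet`); it is not the code in this repo.

```python
# Minimal sketch of the CIFAR-style ResNet from the paper (He et al., 2015,
# section 4.2): three stages of {16, 32, 64} filters with option-A
# (zero-padding) shortcuts. Names are illustrative, not this repo's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    """Two 3x3 convs with an identity (option-A) shortcut."""

    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = stride != 1 or in_planes != planes
        self.pad = planes - in_planes

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        shortcut = x
        if self.downsample:
            # Option A: subsample spatially and zero-pad the extra channels.
            shortcut = F.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, 0, self.pad))
        return F.relu(out + shortcut)


def make_cifar_resnet(n, num_classes=10):
    """Build a 6n+2-layer ResNet (n=3 -> ResNet20, n=5 -> ResNet32, ...)."""
    layers = [nn.Conv2d(3, 16, 3, padding=1, bias=False), nn.BatchNorm2d(16), nn.ReLU()]
    in_planes = 16
    for planes, stride in [(16, 1), (32, 2), (64, 2)]:
        for i in range(n):
            layers.append(BasicBlock(in_planes, planes, stride if i == 0 else 1))
            in_planes = planes
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)]
    return nn.Sequential(*layers)


model = make_cifar_resnet(3)  # ResNet20
print(sum(p.numel() for p in model.parameters()))  # ~0.27M, matching the table
```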
The implementation matches the description in the original paper and achieves comparable or better test error. To reproduce the results, clone the repository and run the training script:
    git clone https://github.com/akamaster/pytorch_resnet_cifar10
    cd pytorch_resnet_cifar10
    chmod +x run.sh && ./run.sh
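After cloning, the architectures can also be used directly from Python. The snippet below sketches a typical forward pass; the import path `resnet.resnet20` is an assumption about this repo's layout, so check `resnet.py` for the actual constructor names.

```python
# Sketch: instantiate one of the CIFAR10 ResNets and run a forward pass.
# The import path (resnet.resnet20) is an assumption, not a documented API.
import torch
from resnet import resnet20

model = resnet20()
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 32, 32))  # a CIFAR10-sized input batch
print(logits.shape)  # expected: torch.Size([1, 10])
```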
This implementation follows the paper in a straightforward manner, with a few caveats. First, training in the paper uses a 45k/5k train/validation split of the training data and selects the best-performing model based on validation performance; this implementation does not do any validation, so keep that in mind if you need to compare your ResNet results head-to-head with the original paper. Second, if you want to train ResNet1202, note that you need about 16GB of GPU memory.
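If you do want to follow the paper's validation protocol (the first caveat above), a 45k/5k split can be layered on top of the standard torchvision CIFAR10 dataset. The sketch below is one way to do it; the normalization statistics are the commonly used CIFAR10 values, not taken from this repo.

```python
# Sketch: reproduce the paper's 45k/5k train/validation split of the CIFAR10
# training data. This repo itself trains on all 50k training images.
import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    # Commonly used CIFAR10 channel statistics (assumption, not from this repo).
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
full_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
train_set, val_set = torch.utils.data.random_split(
    full_train, [45000, 5000], generator=torch.Generator().manual_seed(0))
```

Model selection would then be based on accuracy on `val_set`, as in the paper.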
In summary, the test errors obtained with this implementation are:

- ResNet20, 8.27% err
- ResNet32, 7.37% err
- ResNet44, 6.90% err
- ResNet56, 6.61% err
- ResNet110, 6.32% err
- ResNet1202, 6.18% err
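To compare your own checkpoints against these numbers, the test error is simply the top-1 error over the 10k CIFAR10 test images. A minimal evaluation sketch, independent of this repo's training script, looks like this:

```python
# Sketch: compute percent test error of a trained model on the CIFAR10 test
# set, for comparison with the numbers above. `model` is any trained network.
import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    # Commonly used CIFAR10 channel statistics (assumption, not from this repo).
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)
loader = torch.utils.data.DataLoader(test_set, batch_size=256)


def test_error(model, device="cpu"):
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            wrong += (preds != labels).sum().item()
            total += labels.size(0)
    return 100.0 * wrong / total  # percent test error
```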
If you find this implementation useful and use it in your production or academic work, please cite or mention this page and its author, Yerlan Idelbayev.