Adding EfficientNetV2 architecture #5450

Merged (18 commits) on Mar 2, 2022
14 changes: 13 additions & 1 deletion docs/source/models.rst
@@ -38,7 +38,7 @@ architectures for image classification:
- `ResNeXt`_
- `Wide ResNet`_
- `MNASNet`_
- `EfficientNet`_
- `EfficientNet`_ v1 & v2
- `RegNet`_
- `VisionTransformer`_
- `ConvNeXt`_
@@ -70,6 +70,9 @@ You can construct a model with random weights by calling its constructor:
efficientnet_b5 = models.efficientnet_b5()
efficientnet_b6 = models.efficientnet_b6()
efficientnet_b7 = models.efficientnet_b7()
efficientnet_v2_s = models.efficientnet_v2_s()
efficientnet_v2_m = models.efficientnet_v2_m()
efficientnet_v2_l = models.efficientnet_v2_l()
regnet_y_400mf = models.regnet_y_400mf()
regnet_y_800mf = models.regnet_y_800mf()
regnet_y_1_6gf = models.regnet_y_1_6gf()
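
As a quick sanity check of the new constructors, here is a minimal sketch (assuming a torchvision build that includes this PR) that builds the Small variant with random weights and runs a dummy forward pass:

```python
import torch
from torchvision import models

# EfficientNetV2-S with randomly initialized weights (no download).
model = models.efficientnet_v2_s()
model.eval()

# Dummy batch of one 3-channel image; the default head outputs 1000 ImageNet logits.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```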
@@ -122,6 +125,9 @@ These can be constructed by passing ``pretrained=True``:
efficientnet_b5 = models.efficientnet_b5(pretrained=True)
efficientnet_b6 = models.efficientnet_b6(pretrained=True)
efficientnet_b7 = models.efficientnet_b7(pretrained=True)
efficientnet_v2_s = models.efficientnet_v2_s(pretrained=True)
efficientnet_v2_m = models.efficientnet_v2_m(pretrained=True)
efficientnet_v2_l = models.efficientnet_v2_l(pretrained=True)
regnet_y_400mf = models.regnet_y_400mf(pretrained=True)
regnet_y_800mf = models.regnet_y_800mf(pretrained=True)
regnet_y_1_6gf = models.regnet_y_1_6gf(pretrained=True)
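
And a correspondingly minimal sketch for the pretrained path (again assuming the weights added by this PR are published and downloadable):

```python
import torch
from torchvision import models

# Downloads the ImageNet-pretrained EfficientNetV2-S checkpoint on first use.
model = models.efficientnet_v2_s(pretrained=True)
model.eval()  # disable dropout / stochastic depth for inference

with torch.no_grad():
    out = model(torch.randn(1, 3, 384, 384))  # 384 matches the documented eval size
print(out.softmax(dim=-1).argmax(dim=-1))  # predicted ImageNet class index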
@@ -238,6 +244,9 @@ EfficientNet-B4 83.384 96.594
EfficientNet-B5 83.444 96.628
EfficientNet-B6 84.008 96.916
EfficientNet-B7 84.122 96.908
EfficientNetV2-s 84.228 96.878
EfficientNetV2-m 85.112 97.156
EfficientNetV2-l 85.810 97.792
regnet_x_400mf 72.834 90.950
regnet_x_800mf 75.212 92.348
regnet_x_1_6gf 77.040 93.440
@@ -439,6 +448,9 @@ EfficientNet
efficientnet_b5
efficientnet_b6
efficientnet_b7
efficientnet_v2_s
efficientnet_v2_m
efficientnet_v2_l

RegNet
------------
3 changes: 3 additions & 0 deletions hubconf.py
@@ -13,6 +13,9 @@
efficientnet_b5,
efficientnet_b6,
efficientnet_b7,
efficientnet_v2_s,
efficientnet_v2_m,
efficientnet_v2_l,
)
from torchvision.models.googlenet import googlenet
from torchvision.models.inception import inception_v3
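
Because the new builders are re-exported in hubconf.py, they should also be reachable through torch.hub. A hedged sketch follows; the default branch is assumed here, and in practice you would pin a release tag:

```python
import torch

# Resolve the efficientnet_v2_s entry point exported by pytorch/vision's hubconf.py.
model = torch.hub.load("pytorch/vision", "efficientnet_v2_s", pretrained=True)
model.eval()
```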
22 changes: 21 additions & 1 deletion references/classification/README.md
@@ -88,7 +88,7 @@ Then we averaged the parameters of the last 3 checkpoints that improved the Acc@
and [#3354](https://github.com/pytorch/vision/pull/3354) for details.


### EfficientNet
### EfficientNet-V1

The weights of the B0-B4 variants are ported from Ross Wightman's [timm repo](https://github.com/rwightman/pytorch-image-models/blob/01cb46a9a50e3ba4be167965b5764e9702f09b30/timm/models/efficientnet.py#L95-L108).

@@ -114,6 +114,26 @@ torchrun --nproc_per_node=8 train.py --model efficientnet_b7 --interpolation bic
--val-resize-size 600 --val-crop-size 600 --train-crop-size 600 --test-only --pretrained
```


### EfficientNet-V2
```
torchrun --nproc_per_node=8 train.py \
--model $MODEL --batch-size 128 --lr 0.5 --lr-scheduler cosineannealinglr \
--lr-warmup-epochs 5 --lr-warmup-method linear --auto-augment ta_wide --epochs 600 --random-erase 0.1 \
--label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 --weight-decay 0.00002 --norm-weight-decay 0.0 \
--train-crop-size $TRAIN_SIZE --model-ema --val-crop-size $EVAL_SIZE --val-resize-size $EVAL_SIZE \
--ra-sampler --ra-reps 4
```
Here `$MODEL` is either `efficientnet_v2_s` or `efficientnet_v2_m`.
Note that the Small variant was trained with a `$TRAIN_SIZE` of `300` and an `$EVAL_SIZE` of `384`, while the Medium variant used `384` and `480` respectively.
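
As a rough illustration of what those sizes mean at evaluation time, here is a hedged Python sketch for the Small variant. The transform pipeline below is an assumption modeled on the sizes above (standard ImageNet normalization, default interpolation), not necessarily the exact preprocessing baked into the released weights:

```python
import torch
from torchvision import models, transforms

# $EVAL_SIZE for efficientnet_v2_s is 384: resize and center-crop to that resolution.
preprocess = transforms.Compose([
    transforms.Resize(384),
    transforms.CenterCrop(384),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.efficientnet_v2_s(pretrained=True).eval()
# `img` would be a PIL image supplied by the caller (hypothetical):
# logits = model(preprocess(img).unsqueeze(0))
```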

Note that the above command corresponds to training on a single node with 8 GPUs.
To generate the pre-trained weights, we trained on 4 nodes, each with 8 GPUs (32 GPUs in total),
and used `--batch-size 32`.

The weights of the Large variant are ported from the original paper's reference implementation rather than trained from scratch. See the `EfficientNet_V2_L_Weights` entry for their exact preprocessing transforms.


### RegNet

#### Small models