[CodeCamp2023-339] New Version of config: Adapting Vision Transformer Algorithm #1727

Conversation
Codecov Report

Patch coverage:

Additional details and impacted files

@@ Coverage Diff @@
##              dev    #1727      +/-   ##
==========================================
- Coverage   68.16%   65.38%   -2.78%
==========================================
  Files         295      358      +63
  Lines       23372    25989    +2617
  Branches     3713     4134     +421
==========================================
+ Hits        15932    16994    +1062
- Misses       6880     8377    +1497
- Partials      560      618      +58

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
mmpretrain/configs/vision_transformer/vit_base_p16_32xb128_mae_in1k.py
Hi @zeyuanyin, we'd like to express our appreciation for your valuable contributions to mmpretrain. Your efforts have significantly helped to improve the project's quality. If you're on WeChat, we'd also love for you to join our community there. Just add our assistant using the WeChat ID: openmmlabwx. When sending the friend request, remember to include the remark "mmsig + Github ID". Thanks again for your awesome contribution, and we're excited to have you as part of our community!
ViT algorithm config files are added to adapt to the new version of config. Based on open-mmlab/OpenMMLabCamp#339
Motivation
MMEngine establishes a renovated configuration infrastructure that allows configuration files to be tailored to project requirements.
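As a hedged illustration of that new style (not the actual content of this PR's files), a new-version MMEngine config is a plain Python module that pulls in base settings through `read_base()` and overrides fields with ordinary Python assignments. The base module path and the field values below are assumptions made only for this sketch:

```python
# Hypothetical sketch of a new-style (pure Python) MMEngine config file.
# The base import path and the overridden fields are assumptions, not the
# actual content of vit_base_p16_32xb128_mae_in1k.py.
from mmengine.config import read_base

with read_base():
    # Reuse a base config by importing it like a normal Python module.
    from .._base_.default_runtime import *  # noqa: F401,F403

# Override or add settings with plain Python assignments.
model = dict(
    type='ImageClassifier',
    backbone=dict(type='VisionTransformer', arch='base', patch_size=16),
    head=dict(type='VisionTransformerClsHead', num_classes=1000),
)

optim_wrapper = dict(optimizer=dict(type='AdamW', lr=1e-3, weight_decay=0.05))
```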
Modification

ViT algorithm config files are added under the new version of config, as sketched below.
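The new-style files are still consumed through the usual MMEngine entry points. A minimal sketch, where the config path is the one touched in this PR and the rest is illustrative:

```python
# Minimal sketch: loading a new-style config with MMEngine.
# Config.fromfile() parses both the old dict-style and the new pure-Python
# configs; inspecting cfg.model afterwards is just an example.
from mmengine.config import Config

cfg = Config.fromfile(
    'mmpretrain/configs/vision_transformer/vit_base_p16_32xb128_mae_in1k.py')
print(cfg.model)  # the parsed configuration behaves like a nested dict
```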
Checklist
Before PR:
After PR: