
[CodeCamp2023-339] New Version of config Adapting Vision Transformer Algorithm #1727

Merged: 18 commits merged into open-mmlab:dev on Aug 2, 2023

Conversation

zeyuanyin (Contributor)

ViT algorithm config files are added to adapt to the new version of the config system.

Based on open-mmlab/OpenMMLabCamp#339

Motivation

MMEngine establishes a renovated configuration infrastructure that enables us to tailor configuration files to our requirements.

Modification

  • Add 9 ViT config Python files in mmpretrain/configs/vision_transformer/
  • Add the corresponding model, dataset, and schedule config Python files in mmpretrain/configs/_base_/
  • All added config files follow the new config format.
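As a rough illustration of what "the new version of config" means here (not taken from this PR's diff; the imported module names below are hypothetical placeholders), MMEngine's new-style config is a plain Python file that composes its base configs with `read_base()` instead of the old `_base_ = [...]` string list:

```python
# Sketch of an MMEngine new-style (pure-Python) config file.
# The base modules imported below are illustrative placeholders,
# not the exact files added in this PR.
from mmengine.config import read_base

with read_base():
    # Base configs are pulled in via ordinary Python imports,
    # replacing the old `_base_ = ['../_base_/...py', ...]` list.
    from .._base_.models.vit_base_p16 import *
    from .._base_.datasets.imagenet_bs64_pil_resize import *
    from .._base_.schedules.imagenet_bs4096_adamw import *
    from .._base_.default_runtime import *

# Inherited fields are plain Python objects, so overrides are
# ordinary assignments or dict updates rather than string keys.
model.update(head=dict(num_classes=1000))
```

One advantage of this style is that IDEs can jump to the base definitions and flag typos statically, which string-path `_base_` lists could not offer.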

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests, and the case that caused the bug has been added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.


CLAassistant commented Jul 26, 2023

CLA assistant check
All committers have signed the CLA.

@zeyuanyin changed the title from "[Enhancement] New Version of config Adapting Vision Transformer Algorithm" to "[CodeCamp2023-339] New Version of config Adapting Vision Transformer Algorithm" on Jul 27, 2023

codecov bot commented Jul 28, 2023

Codecov Report

Patch coverage: 33.17% and project coverage change: -2.78% ⚠️

Comparison is base (f9dcae2) 68.16% compared to head (c33efa7) 65.38%.
Report is 64 commits behind head on dev.

❗ Current head c33efa7 differs from the pull request's most recent head 7840fb5. Consider uploading reports for commit 7840fb5 to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##              dev    #1727      +/-   ##
==========================================
- Coverage   68.16%   65.38%   -2.78%     
==========================================
  Files         295      358      +63     
  Lines       23372    25989    +2617     
  Branches     3713     4134     +421     
==========================================
+ Hits        15932    16994    +1062     
- Misses       6880     8377    +1497     
- Partials      560      618      +58     
Flag: unittests | Coverage: 65.38% <33.17%> | Δ: -2.78% ⬇️

Flags with carried forward coverage won't be shown.

Files Changed Coverage Δ
configs/_base_/datasets/imagenet_bs128_mbv3.py 100.00% <ø> (ø)
configs/_base_/datasets/imagenet_bs32.py 100.00% <ø> (ø)
...onfigs/_base_/datasets/imagenet_bs32_pil_resize.py 100.00% <ø> (ø)
configs/_base_/datasets/imagenet_bs64_swin_224.py 100.00% <ø> (ø)
configs/_base_/datasets/imagenet_bs64_swin_384.py 100.00% <ø> (ø)
mmpretrain/apis/feature_extractor.py 37.50% <0.00%> (ø)
mmpretrain/apis/image_caption.py 30.64% <0.00%> (ø)
mmpretrain/apis/image_retrieval.py 21.42% <0.00%> (ø)
mmpretrain/apis/visual_grounding.py 27.53% <0.00%> (ø)
mmpretrain/apis/visual_question_answering.py 25.67% <0.00%> (ø)
... and 32 more

... and 65 files with indirect coverage changes

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@fangyixiao18 changed the base branch from main to dev on Jul 31, 2023
@fangyixiao18 merged commit 2fb52ee into open-mmlab:dev on Aug 2, 2023
5 of 6 checks passed
@OpenMMLab-Assistant-004

Hi @zeyuanyin,

We'd like to express our appreciation for your valuable contributions to mmpretrain. Your efforts have significantly helped enhance the project's quality.
It is our pleasure to invite you to join our community through the Discord Special Interest Group (SIG) channel. This is a great place to share your experiences, discuss ideas, and connect with other like-minded people. To become part of the SIG channel, send a message to the moderator, OpenMMLab, briefly introduce yourself, and mention your open-source contributions in the #introductions channel. Our team will gladly facilitate your entry. We eagerly await your presence. Please follow this link to join us: https://discord.gg/UjgXkPWNqA.

If you're on WeChat, we'd also love for you to join our community there. Just add our assistant using the WeChat ID: openmmlabwx. When sending the friend request, remember to include the remark "mmsig + Github ID".

Thanks again for your awesome contribution, and we're excited to have you as part of our community!
