Conversation

@ydshieh
Collaborator

@ydshieh ydshieh commented Dec 2, 2022

What does this PR do?

These are vision models, and they are not encoder-decoder models themselves (unlike some text models such as Bart).

Furthermore, the current default value for this attribute (specified in each config class's __init__) is False, which is the same as the default in PretrainedConfig. So we can simply remove it from the parameters and rely on **kwargs in the call to super().__init__().
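A minimal sketch of the pattern described above, using hypothetical stand-in classes (BaseConfig and VisionConfig are illustrative, not the actual transformers code): the base config already defaults is_encoder_decoder to False, so a subclass need not declare it as an explicit parameter.

```python
class BaseConfig:
    """Stands in for PretrainedConfig: consumes common attributes from **kwargs."""

    def __init__(self, **kwargs):
        # The shared default lives here, so subclasses don't repeat it.
        self.is_encoder_decoder = kwargs.pop("is_encoder_decoder", False)


class VisionConfig(BaseConfig):
    """Stands in for a vision model config after the cleanup."""

    def __init__(self, hidden_size=768, **kwargs):
        # is_encoder_decoder, if passed, falls through **kwargs to the base class.
        super().__init__(**kwargs)
        self.hidden_size = hidden_size


config = VisionConfig()
assert config.is_encoder_decoder is False  # same default as before the cleanup

# Callers can still override the attribute via kwargs when needed:
assert VisionConfig(is_encoder_decoder=True).is_encoder_decoder is True
```

The behavior is unchanged for every caller; the subclass signature just stops repeating a default that the parent class already provides.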

@ydshieh ydshieh requested review from NielsRogge and sgugger December 2, 2022 17:11
@HuggingFaceDocBuilderDev
HuggingFaceDocBuilderDev commented Dec 2, 2022

The documentation is not available anymore as the PR was closed or merged.

@ydshieh ydshieh marked this pull request as draft December 2, 2022 17:56
@ydshieh ydshieh force-pushed the cleanup_config_attrs branch from 8f8bafd to f90ba87 Compare December 2, 2022 17:58
@ydshieh ydshieh marked this pull request as ready for review December 2, 2022 18:26
@ydshieh ydshieh marked this pull request as draft December 2, 2022 18:28
@ydshieh ydshieh marked this pull request as ready for review December 2, 2022 18:55
Collaborator

@sgugger sgugger left a comment

Thanks for the cleanup!

@ydshieh ydshieh merged commit 9ffbed2 into main Dec 5, 2022
@ydshieh ydshieh deleted the cleanup_config_attrs branch December 5, 2022 14:12
mpierrau pushed a commit to mpierrau/transformers that referenced this pull request Dec 15, 2022
* Remove is_encoder_decoder from some vision models

* cleanup more

* cleanup more

Co-authored-by: ydshieh <[email protected]>