Conversation

Contributor

@eenzeenee commented Oct 2, 2023

What does this PR do?

Adds resources of CLIP according to this issue

Part of #20055

Before submitting

  • [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@stevhliu, @jungnerd, @wonhyeongseo may you please review this PR?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

Contributor

@jungnerd commented Oct 2, 2023

LGTM! Thanks for adding resources for CLIP.
By the way, please change Part of #20555 to Part of #20055, since #20055 is the issue for the Model resources contribution.

@eenzeenee
Contributor Author

Thank you for reviewing!! I fixed it!

wonhyeongseo

This comment was marked as outdated.

Member

@stevhliu left a comment

Thanks for your contribution! 🤗

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

Contributor

@NielsRogge left a comment

Thanks for contributing resources! This is very helpful; however, several of the resources here feel a bit random:

  • Assigning the "text-to-image" pipeline tag to CLIP is a bit weird, since CLIP is only a text encoder + vision encoder model; it can't generate images from text. It is, however, used as a building block in models like Stable Diffusion to condition the model on text prompts.
  • The "deploy" section includes several resources that are not about deployment. Deployment is about optimizing the model for inference, using tools like ONNX, 🤗 Optimum, quantization, etc.
  • The "inference" section includes a notebook about... explainability, which would make more sense in a separate "explainability" section. Inference is about showcasing predictions with the model.
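To make the inference point concrete: a CLIP prediction boils down to embedding an image and a set of candidate captions into a shared space, then softmaxing the scaled cosine similarities. A minimal sketch with toy, hand-picked vectors standing in for the real encoder outputs (not actual CLIP embeddings):

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def softmax(xs):
    # Numerically stable softmax.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy embeddings standing in for CLIP's image/text encoder outputs.
image_emb = [0.9, 0.1, 0.0]
text_embs = {
    "a photo of a cat": [0.8, 0.2, 0.1],
    "a photo of a dog": [0.1, 0.9, 0.2],
}

# Scale similarities (CLIP uses a learned logit scale, roughly 100 at convergence),
# then softmax over the candidate captions to get zero-shot "probabilities".
logits = [100.0 * cosine(image_emb, e) for e in text_embs.values()]
probs = dict(zip(text_embs, softmax(logits)))
```

With the real model, the `CLIPModel` class in 🤗 Transformers returns these caption scores directly as `logits_per_image`; the toy vectors here exist only to show the scoring step.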

Some relevant resources for CLIP include:

Co-authored-by: Steven Liu <[email protected]>
Member

@stevhliu left a comment

Thanks for fixing! Can you also include the training script and blog post @NielsRogge linked to?

Co-authored-by: Steven Liu <[email protected]>
Contributor Author

@eenzeenee commented Oct 6, 2023

Thanks for fixing! Can you also include the training script and blog post @NielsRogge linked to?

Sorry for the late reply. They seem to appear on lines 86 and 87. Do you want me to change the description to something else?

Member

@stevhliu commented Oct 6, 2023

Do you want me to change the description to something else?

Should be good then! 👍

@stevhliu stevhliu requested a review from NielsRogge October 6, 2023 19:35
Member

@stevhliu left a comment

A few more minor nits, sorry! 😅

@stevhliu stevhliu merged commit d6e5b02 into huggingface:main Oct 13, 2023