
[Inference Client] fix zero_shot_image_classification's parameters #2665

Merged 2 commits into main on Nov 18, 2024

Conversation

hanouticelina (Contributor)

This PR fixes InferenceClient.zero_shot_image_classification to accept candidate_labels as a parameter rather than an input. See this internal slack message for more context.
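As a rough sketch (not the actual huggingface_hub internals; the helper name `build_payload` is hypothetical), the corrected request shape puts the base64-encoded image as the sole input and nests `candidate_labels` under `parameters`:

```python
import base64


def build_payload(image_bytes: bytes, candidate_labels: list) -> dict:
    """Illustrative only: the image is the request's sole input
    (base64-encoded so it fits in a JSON body), while candidate_labels
    is sent as a parameter rather than bundled into the input."""
    return {
        "inputs": base64.b64encode(image_bytes).decode("utf-8"),
        "parameters": {"candidate_labels": candidate_labels},
    }


payload = build_payload(b"\x89PNG...", ["cat", "dog"])
```

The key point of the fix is that the labels no longer travel inside the input object, so the server-side parameter validation sees them where it expects them.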

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@Wauplin (Contributor) left a comment

Thanks for fixing!

I assume we never had a test for that task? (Maybe not worth it? Well, https://huggingface.co/openai/clip-vit-base-patch32 is relevant and warm on Inference, so we can add a test.)

@hanouticelina
hanouticelina (Contributor, Author)

> I assume we never had a test for that task?

Actually, we have a test here, but we did not see the error since it returned the recorded response from the cassette.

@Wauplin
Contributor

Wauplin commented Nov 18, 2024

Yes indeed. These tests are definitely not very satisfying, as we've not caught the error... 😕 I don't know exactly how we can update them; maybe we should just get rid of the VCR at some point and make sure the InferenceAPI is able to handle the load from the tests (with cache + rate limits for the CI). Anyway, that's another topic.

I've tracked down the problem. The VCR test is correct and the client was working up to v0.25.0 included. Release v0.26.0 introduced 2 issues:

  • since PR 2561, candidateLabels is sent in the payload, which is fixed by this PR
  • since PR 2601, the input image is not sent as a base64 input anymore. This has to be fixed by setting payload = _prepare_payload(image, parameters=parameters, expect_binary=True). @hanouticelina can you fix that in this PR?

These two errors are just oversights and should have been caught by the CI if the tests were a bit better^^
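To make the second point concrete, here is a minimal mock of what a `_prepare_payload`-style helper with `expect_binary=True` could do. This is an illustrative re-implementation under assumptions, not the real huggingface_hub code: the flag forces raw bytes to be base64-encoded so they can be embedded in a JSON body next to the parameters.

```python
import base64


def prepare_payload(inputs, parameters=None, expect_binary=False):
    """Illustrative mock of a payload-preparation helper.

    With expect_binary=True, the input must be raw bytes and gets
    base64-encoded; without it, the input is passed through as-is.
    Parameters set to None are dropped from the request body.
    """
    if expect_binary:
        if not isinstance(inputs, bytes):
            raise ValueError("expect_binary=True requires bytes input")
        inputs = base64.b64encode(inputs).decode("utf-8")
    payload = {"inputs": inputs}
    if parameters:
        sent = {k: v for k, v in parameters.items() if v is not None}
        if sent:
            payload["parameters"] = sent
    return payload
```

Under this sketch, forgetting `expect_binary=True` would send the raw bytes object straight into the JSON payload, which matches the failure described above.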

@hanouticelina
Contributor Author

> These tests are definitely not very satisfying as we've not caught the error... 😕 Dunno exactly how we can update them, maybe we should just get rid of the VCR at some point and make sure the InferenceAPI is able to handle the load from the test (with cache + rate limits for the CI). Anyway, that's another topic.

Agree that we should rework the InferenceClient tests, as it's too manual and not that simple to add/update tests. I'll definitely look into this later.

> since PR 2601, the input image is not sent as a base64 input anymore.

Actually, it was the case; see huggingface_hub/inference/_client.py#L2917. However, I forgot to pass expect_binary=True here. Thanks for pointing this out! It's fixed in 3ba7c2e.

@hanouticelina hanouticelina merged commit 4ce7bf1 into main Nov 18, 2024
17 checks passed
@hanouticelina hanouticelina deleted the fix-zero-shot-image-classification branch November 18, 2024 13:33
3 participants