🚨 Add Blip2ForImageTextRetrieval #29261
Conversation
cc @NielsRogge and @younesbelkada if one of you wants to review once @jpizarrom makes the CIs go green!
Hi, what could I do to make the CIs go green? Shall I just merge upstream/main into my branch, or rebase onto it?
@jpizarrom It's preferable for you to rebase onto main. To see how to make the CIs green, you'll need to click on the failing checks and look at the error details.
Force-pushed from 0e82065 to 9aa9a15
amyeroberts left a comment
Thanks for adding this! Overall looks great, just a few small comments.
Once they're addressed we can move the checkpoints to be under the Salesforce org.
I don't think it's necessary to add a separate method here. We can just make text_config optional in from_vision_qformer_text_config
from_vision_qformer_configs was removed
Autocasting and typing should be handled outside of the model definition
```diff
-        if self.device != torch.device("cpu"):
-            with torch.cuda.amp.autocast(dtype=torch.float16):
-                vision_outputs = self.vision_model(
-                    pixel_values=pixel_values,
-                    output_attentions=output_attentions,
-                    output_hidden_states=output_hidden_states,
-                    return_dict=return_dict,
-                )
-        else:
-            vision_outputs = self.vision_model(
-                pixel_values=pixel_values,
-                output_attentions=output_attentions,
-                output_hidden_states=output_hidden_states,
-                return_dict=return_dict,
-            )
+        vision_outputs = self.vision_model(
+            pixel_values=pixel_values,
+            output_attentions=output_attentions,
+            output_hidden_states=output_hidden_states,
+            return_dict=return_dict,
+        )
```
This was done because in the original model the autocast was applied only to the vision layers; I don't yet know how to do this in a different way.
it was removed, as discussed in #29261 (comment)
Instead of using this config argument to conditionally create and call this layer, I'd suggest calling self.embeddings only if input_ids is not None
```diff
-        if config.use_qformer_text_input:
-            self.embeddings = Blip2TextEmbeddings(config)
+        self.embeddings = Blip2TextEmbeddings(config)
```
When this layer is created _always_, I get this type of error and don't know how to fix it.
Some BLIP-2 models do not use these BERT-based embeddings; they use OPT or Flan-T5 to create the query_embeds. Maybe I could try to refactor the code to move the Blip2TextEmbeddings outside of Blip2QFormerModel and always pass query_embeds. What do you think?
```
FAILED tests/models/blip_2/test_modeling_blip_2.py::Blip2ForConditionalGenerationDecoderOnlyTest::test_training_gradient_checkpointing - AssertionError: False is not true : qformer.embeddings.word_embeddings.weight in Blip2ForConditionalGeneration has no gradient!
FAILED tests/models/blip_2/test_modeling_blip_2.py::Blip2ModelTest::test_training_gradient_checkpointing - AssertionError: False is not true : qformer.embeddings.word_embeddings.weight in Blip2ForConditionalGeneration has no gradient!
```
I did a refactor: the embeddings were removed from Blip2QFormerModel and placed into Blip2ForImageTextRetrieval and Blip2TextModelWithProjection, but to do so I needed to add a query_length param to Blip2QFormerModel.forward.
```diff
-        if self.config.use_qformer_text_input:
+        if input_ids is not None:
```
this is outdated, because embeddings were removed from Blip2QFormerModel
Is this even necessary?
Should not be necessary indeed given that modeling code is by default in torch.float32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Force-pushed from 05327aa to da0cc83
```python
        if self.device != torch.device("cpu"):
            with torch.cuda.amp.autocast(dtype=torch.float16):
```
As far as I can tell we don't add torch.cuda.amp.autocast code to modeling files, they are just in float32 by default. This was discussed on the original BLIP-2 model addition PR from what I remember. It's up to users to call something like torch.cuda.amp.autocast themselves if they wish to load the model in a different precision than the default one (cc @younesbelkada).
Hence in the conversion script I cast both the original weights and my BLIP-2 implementation to float32 in order to verify the conversion.
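To make that convention concrete, here is a minimal, generic sketch (plain PyTorch, not the BLIP-2 modeling code) of a caller applying autocast around a float32 model:

```python
import torch

# Modeling code stays in float32; the caller opts into mixed precision.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.GELU(),
    torch.nn.Linear(32, 8),
).cuda()
x = torch.randn(4, 16, device="cuda")

with torch.cuda.amp.autocast(dtype=torch.float16):
    y = model(x)  # matmul-heavy ops run in fp16 under autocast

print(y.dtype)                             # torch.float16
print(next(model.parameters()).dtype)      # torch.float32 -- the weights are untouched
```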
OK, so this means that I need to remove maybe_autocast from https://github.com/NielsRogge/LAVIS/blob/blip2_float32/lavis/models/blip2_models/blip2_image_text_matching.py#L57-L58, right?
It was removed; a PR on your fork was opened to also remove the autocast from the ITM model: NielsRogge/LAVIS#1
```python
@dataclass
class Blip2ImageTextMatchingModelOutput(ModelOutput):
```
Not sure if feasible, but it'd be nice to match the output class of CLIP, which is also an image-text matching model. It consists of the following keys:
- loss
- logits_per_image (this I assume is the itm_score)
- logits_per_text (this I assume is the itm_score transposed)
- and some other keys which are CLIP-specific.
Making sure that Blip2ForImageTextRetrieval matches this would allow it to be added to the zero-shot image classification pipeline, which relies on the logits_per_image output key.
Otherwise we will have a hard time adding BLIP-2 support to the zero-shot image classification pipeline.
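For reference, a rough sketch of an output class exposing the keys listed above (only loss, logits_per_image and logits_per_text come from this discussion; the embedding fields mirror CLIPOutput and are assumptions about the final class):

```python
from dataclasses import dataclass
from typing import Optional

import torch
from transformers.utils import ModelOutput


@dataclass
class Blip2ImageTextMatchingModelOutput(ModelOutput):
    """Sketch of a CLIP-style output for image-text matching."""

    loss: Optional[torch.FloatTensor] = None
    logits_per_image: torch.FloatTensor = None  # image-to-text matching scores
    logits_per_text: torch.FloatTensor = None   # transpose of logits_per_image
    text_embeds: torch.FloatTensor = None       # CLIP-style extras, kept for API parity
    image_embeds: torch.FloatTensor = None
```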
Hi @NielsRogge, I updated the output to match the CLIP output, but this PR is not being updated with my latest commits.
NielsRogge left a comment
Thanks for your work! However, I would request some changes in order to make BLIP-2 compatible with the zero-shot image classification pipeline.
```diff
         input_ids: Optional[torch.FloatTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
         query_embeds: Optional[torch.FloatTensor] = None,
-        past_key_values_length: int = 0,
```
past_key_values are not used I assume
it was removed. thanks
```python
    test_attention_outputs = False
    test_torchscript = False

    # TODO: Fix the failed tests
```
Not on this PR; I don't believe it is related to the changes in this PR, since Blip2ForConditionalGeneration fails there, but I will verify that.
I wanted to make the test pass and leave a comment about this, as I saw similar comments on other models.
This is the error I was getting; it also occurs on the main branch:
```
FAILED tests/models/blip_2/test_modeling_blip_2.py::Blip2ModelTest::test_pipeline_visual_question_answering_fp16 - RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
```
@jpizarrom Could you open an issue to track this to make sure this isn't lost?
Refer to this comment on why the test fails. You're probably running this on CPU.
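A minimal repro of that failure mode, assuming the fp16 pipeline test ends up running LayerNorm in half precision on CPU (the exact behaviour depends on the PyTorch version):

```python
import torch

layer_norm = torch.nn.LayerNorm(8).half()   # fp16 weights
x = torch.randn(2, 8, dtype=torch.float16)  # fp16 input, on CPU

try:
    layer_norm(x)
except RuntimeError as e:
    # On older CPU builds of PyTorch this raises:
    # "LayerNormKernelImpl" not implemented for 'Half'
    print(e)

# The same call succeeds on a CUDA device, where half-precision LayerNorm kernels exist.
if torch.cuda.is_available():
    layer_norm.to("cuda")(x.to("cuda"))
```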
```diff
     base_model_prefix = "blip"
     supports_gradient_checkpointing = True
-    _no_split_modules = ["Blip2Attention", "T5Block", "OPTDecoderLayer"]
+    _no_split_modules = [
```
@NielsRogge I updated _no_split_modules, I hope this will fix the slow multi-GPU tests that were failing.
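For context, `_no_split_modules` tells `accelerate`'s `device_map="auto"` placement which module classes must stay whole on a single device. A hypothetical sketch of the pattern (the added entries are assumptions for illustration, not necessarily the merged list):

```python
from transformers import Blip2Config, PreTrainedModel


class Blip2PreTrainedModel(PreTrainedModel):
    config_class = Blip2Config
    base_model_prefix = "blip"
    supports_gradient_checkpointing = True
    # Each listed class is kept on one device when sharding across GPUs.
    _no_split_modules = [
        "Blip2Attention",       # pre-existing entries
        "T5Block",
        "OPTDecoderLayer",
        "Blip2QFormerLayer",    # assumed additions for the new retrieval models
        "Blip2TextEmbeddings",
    ]
```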
Hi @amyeroberts, now the slow tests are passing. Please let me know if I need to make any more changes.
Please excuse me as I am not the OP; however, I have an inquiry about this feature as I would really like to be able to use it. Generally, how much time does it take for a feature, once merged into the main branch, to become available in the next release?
@jpizarrom Was an issue created to track the failing test, cf. this comment: #29261 (comment)
Not yet, I can do it, but I don't have my computer with me until the first week of September. |
@amyeroberts @jpizarrom Thanks a lot for adding this feature! I've noticed that the relevant model weights hosted here are still missing some license information. Do you need a dedicated ticket for that or is this post enough?
Hi, I am not sure; other BLIP-2 models like Salesforce/blip2-opt-2.7b-coco show MIT, but the LAVIS repo has a BSD 3-Clause license.
Is there any update on the license?
Hi, thanks for the ping, I'll reach out to the authors to ask them to add a license tag.
Do I understand correctly that the license for https://huggingface.co/Salesforce/blip2-itm-vit-g has been set to MIT now?
Yes, the authors have added it.
* add Blip2ForImageTextRetrieval
* use one line and remove unnecessary space in tests

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* use value from the config, rather than hardcoded
* change order of params in Blip2QFormerModel.forward
* update docstring
* fix style
* update test_inference_opt
* move embeddings out of Blip2QFormerModel
* remove from_vision_qformer_configs
* remove autocast float16 in Blip2QFormerModel
* rename fields into vision_projection, text_projection, use_image_text_matching_head
* use CLIPOutput for Blip2ImageTextMatchingModelOutput
* remove past_key_values_length from Blip2TextEmbeddings
* fix small typo in the CLIPOutput docstring
* add Blip2ForImageTextRetrieval to Zero Shot Image Classification mapping
* update docstring and add require_torch_fp16
* rollback test_inference_opt
* use use_image_text_matching_head=True in convert
* skip test_model_get_set_embeddings
* fix create_rename_keys error on new itm fields
* revert to do scale after dot product between "query" and "key"
* fix ValueError on convert script for blip2-opt-2.7b
* update org of paths to Salesforce
* add is_pipeline_test_to_skip for VisualQuestionAnsweringPipelineTests
* [run_slow] blip_2
* removed Blip2ForImageTextRetrieval from IGNORE_NON_AUTO_CONFIGURED
* fix docstring of Blip2ImageTextMatchingModelOutput
* [run_slow] blip_2
* fix multi-gpu tests
* [run_slow] blip_2
* [run_slow] blip_2

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
What does this PR do?
Add Blip2ForImageTextRetrieval, Blip2TextModelWithProjection and Blip2VisionModelWithProjection models to be able to get image-text matching scores and to extract text, image and multimodal features.
Fixes part of #25300 and #25245.
This is a continuation of #25612; I tried to apply most of the feedback received in that PR.
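A usage sketch of the new image-text matching head (the checkpoint name and the use_image_text_matching_head flag come from this thread and the commit log; the output field names follow the CLIP-style output discussed above, so treat the details as assumptions):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Blip2ForImageTextRetrieval

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained("Salesforce/blip2-itm-vit-g")
model = Blip2ForImageTextRetrieval.from_pretrained("Salesforce/blip2-itm-vit-g").to(device)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "two cats sleeping on a couch"

inputs = processor(images=image, text=text, return_tensors="pt").to(device)

with torch.no_grad():
    # ITM head: matched / not-matched logits for the image-text pair
    itm_out = model(**inputs, use_image_text_matching_head=True)
    itm_probs = torch.softmax(itm_out.logits_per_image, dim=1)

    # Contrastive head: CLIP-style similarity scores
    itc_out = model(**inputs, use_image_text_matching_head=False)

print(itm_probs)
print(itc_out.logits_per_image)
```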
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @amyeroberts