🚨 Refactor DETR to updated standards#41549
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
…sks, vision input embeds and query embeds
| if not isinstance(line, str): | ||
| line = line.decode() |
line was a str when I tried to use this, not sure why! I can open a separate PR for it though
Ah yes I can try to remove this, maybe it's not an issue anymore. Thanks for the reminder 😁
no worries haha :D can be removed for sure?
| if pixel_values is None and inputs_embeds is None: | ||
| raise ValueError("You have to specify either pixel_values or inputs_embeds") | ||
| if inputs_embeds is None: | ||
| batch_size, num_channels, height, width = pixel_values.shape | ||
| device = pixel_values.device | ||
| if pixel_mask is None: | ||
| pixel_mask = torch.ones(((batch_size, height, width)), device=device) | ||
| vision_features = self.backbone(pixel_values, pixel_mask) | ||
| feature_map, mask = vision_features[-1] | ||
| # Apply 1x1 conv to map (N, C, H, W) -> (N, d_model, H, W), then flatten to (N, HW, d_model) | ||
| # (feature map and position embeddings are flattened and permuted to (batch_size, sequence_length, hidden_size)) | ||
| projected_feature_map = self.input_projection(feature_map) | ||
| flattened_features = projected_feature_map.flatten(2).permute(0, 2, 1) | ||
| spatial_position_embeddings = ( | ||
| self.position_embedding(shape=feature_map.shape, device=device, dtype=pixel_values.dtype, mask=mask) | ||
| .flatten(2) | ||
| .permute(0, 2, 1) | ||
| ) | ||
| flattened_mask = mask.flatten(1) | ||
| else: | ||
| batch_size = inputs_embeds.shape[0] | ||
| device = inputs_embeds.device | ||
| flattened_features = inputs_embeds | ||
| # When using inputs_embeds, we need to infer spatial dimensions for position embeddings | ||
| # Assume square feature map | ||
| seq_len = inputs_embeds.shape[1] | ||
| feat_dim = int(seq_len**0.5) | ||
| # Create position embeddings for the inferred spatial size | ||
| spatial_position_embeddings = ( | ||
| self.position_embedding( | ||
| shape=torch.Size([batch_size, self.config.d_model, feat_dim, feat_dim]), | ||
| device=device, | ||
| dtype=inputs_embeds.dtype, | ||
| ) | ||
| .flatten(2) | ||
| .permute(0, 2, 1) | ||
| ) | ||
| # If a pixel_mask is provided with inputs_embeds, interpolate it to feat_dim, then flatten. | ||
| if pixel_mask is not None: | ||
| mask = nn.functional.interpolate(pixel_mask[None].float(), size=(feat_dim, feat_dim)).to(torch.bool)[0] | ||
| flattened_mask = mask.flatten(1) | ||
| else: | ||
| # If no mask provided, assume all positions are valid | ||
| flattened_mask = torch.ones((batch_size, seq_len), device=device, dtype=torch.long) |
Now truly supports passing inputs_embeds instead of silently ignoring it
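The square-map inference in the `else` branch above can be sketched standalone; `infer_spatial_size` is a hypothetical helper for illustration, not part of the PR (the PR code does not validate that seq_len is a perfect square — that check is my addition):

```python
# Hypothetical helper showing how the branch above infers the spatial side
# length from flattened inputs_embeds of shape (batch, seq_len, d_model),
# assuming a square feature map. The perfect-square check is an addition
# for clarity; the actual PR code simply truncates.
def infer_spatial_size(seq_len: int) -> int:
    side = int(seq_len**0.5)
    if side * side != seq_len:
        raise ValueError(f"seq_len={seq_len} is not a perfect square; cannot assume a square feature map")
    return side
```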
| if decoder_inputs_embeds is not None: | ||
| queries = decoder_inputs_embeds | ||
| else: | ||
| queries = torch.zeros_like(object_queries_position_embeddings) |
Same, truly supports decoder_inputs_embeds as input
| attention_mask=None, | ||
| object_queries=object_queries, | ||
| query_position_embeddings=query_position_embeddings, | ||
| attention_mask=decoder_attention_mask, |
Supports masking of queries (as advertised)
| @@ -967,65 +960,36 @@ def forward( | |||
| intermediate = () if self.config.auxiliary_loss else None | |||
| # decoder layers | |||
| all_hidden_states = () if output_hidden_states else None | |||
| all_self_attns = () if output_attentions else None | |||
| all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None | |||
| for idx, decoder_layer in enumerate(self.layers): | |||
| # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) | |||
| if output_hidden_states: | |||
| all_hidden_states += (hidden_states,) | |||
| if self.training: | |||
| dropout_probability = torch.rand([]) | |||
| if dropout_probability < self.layerdrop: | |||
| continue | |||
| layer_outputs = decoder_layer( | |||
| hidden_states = decoder_layer( | |||
| hidden_states, | |||
| combined_attention_mask, | |||
| object_queries, | |||
| query_position_embeddings, | |||
| attention_mask, | |||
| spatial_position_embeddings, | |||
| object_queries_position_embeddings, | |||
| encoder_hidden_states, # as a positional argument for gradient checkpointing | |||
| encoder_attention_mask=encoder_attention_mask, | |||
| output_attentions=output_attentions, | |||
| **kwargs, | |||
Truly supports attention mask on vision features (it was always None before)
Hello @molbap @ArthurZucker!
| ): | ||
| if use_attention_mask: | ||
| self.skipTest( | ||
| "This test uses attention masks which are not compatible with DETR. Skipping when use_attention_mask is True." |
Hmm, why tho? Are the attention masks perhaps 3D instead?
It's more that _test_eager_matches_sdpa_inference is not adapted to the vision space (+object queries here). It tries to add a "decoder_input_ids" to the inputs, plus the seqlen created for the dummy masks was wrong. Seeing as the function is already quite cluttered and difficult to read, I figured trying to add support for vision models directly there would not be ideal. We can either override the tests in this model specifically, or try to have a more general test for vision models. Another option would be to parameterize the tests by providing how to find the correct seqlen and input names.
I would love some help on this!
I see, is this specific to detr or will we encounter it more for other models in the vision family? It's best to not skip too much if it comes down the line. Depending on how many are affected by this, we either should
- Fix the base test, e.g. with parametrization, splitting the test a bit (more models with similar problems)
- Overwrite the test and make specific changes (low amount of models with similar problems)
The problem is with the test's base design indeed. It will lead to more skipped tests down the line because the division encoder/encoder-decoder/decoder isn't that clearly made. The amount of models with similar problems isn't "low" imo.
Yes I think it will increase too with us fixing the attention masks for vision models, so we definitely need to improve the base test
Thanks for the review @vasqu! I standardized attention and masking following your advice :)
vasqu
left a comment
Looking good from my side, amazing work! Just left some smaller comments but nothing crazy
| _can_record_outputs = { | ||
| "hidden_states": DetrEncoderLayer, | ||
| "attentions": OutputRecorder(DetrSelfAttention, layer_name="self_attn", index=1), |
Do we need the explicit output recorder, iirc DetrSelfAttention should work fine in itself
same question here out of curiosity :D
No indeed I can remove it :)
molbap
left a comment
Looks niiiice
For the unhappy CI, let's throw the Check Copies away!
| "qwen2_5_vl", | ||
| "videollava", | ||
| "vipllava", | ||
| "detr", |
I'm not sure, do we need to add this here?
Yes, that's what made me go crazy haha, otherwise _checkpoint_conversion_mapping doesn't work.
Note that this is temporary and will be replaced by the new way to convert weights on the fly that @ArthurZucker and @Cyrilvallez are working on.
| def __init__(self, config: DetrConfig): | ||
| super().__init__() | ||
| self.embed_dim = config.d_model | ||
| self.hidden_size = config.d_model |
won't that break BC? (at least on the attribute names)
In what way? If users access it directly? In any case I think we really need to standardize these types of variable names, it might be worth slightly breaking BC imo
yeah in case of non-config access. I agree I prefer to standardize
| if self.training: | ||
| dropout_probability = torch.rand([]) | ||
| if dropout_probability < self.layerdrop: | ||
| continue |
not exactly the typical dropout interface, we can maybe take the occasion to update it?
Yes 😫, I was scared of breaking BC in that case, but maybe it's not so important. It would be great to get rid of non-standard dropout elsewhere as well really
I think it's ok to break it in here, it does not affect inference and clearly it would be an improvement to get rid of it haha
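For reference, a minimal sketch of the LayerDrop pattern under discussion, detached from the model (names illustrative, plain-Python stand-ins for layers):

```python
import random

def apply_layerdrop(layers, hidden_states, layerdrop, training):
    # LayerDrop (https://huggingface.co/papers/1909.11556): during training,
    # skip each layer independently with probability `layerdrop`;
    # at inference every layer always runs.
    for layer in layers:
        if training and random.random() < layerdrop:
            continue
        hidden_states = layer(hidden_states)
    return hidden_states
```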
| def freeze_backbone(self): | ||
| for name, param in self.backbone.conv_encoder.model.named_parameters(): | ||
| for _, param in self.backbone.model.named_parameters(): | ||
| param.requires_grad_(False) | ||
| def unfreeze_backbone(self): | ||
| for name, param in self.backbone.conv_encoder.model.named_parameters(): | ||
| for _, param in self.backbone.model.named_parameters(): | ||
| param.requires_grad_(True) |
these methods should really be user-side responsibilities 😨 I would be pro-removal! We can always communicate on it
Yes agreed, we could start a deprecation cycle, or just remove it for v5. It's present in several other vision models
Just asked @merveenoyan, who's an avid finetuner and is not using these methods anymore; I think they were good initially but they're ok to go now. Agreed it's out of scope for the current PR, will create another to remove all of it (cc @ariG23498 as we chatted on finetuning too)
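If the helpers go away, the user-side replacement is tiny; a minimal sketch, assuming the backbone is an ordinary `nn.Module` (the helper name here is illustrative, not an API):

```python
import torch
from torch import nn

def set_backbone_trainable(backbone: nn.Module, trainable: bool) -> None:
    # User-side equivalent of freeze_backbone/unfreeze_backbone:
    # flip requires_grad on every backbone parameter.
    for param in backbone.parameters():
        param.requires_grad_(trainable)

backbone = nn.Linear(4, 4)  # stand-in for a real conv backbone
set_backbone_trainable(backbone, False)  # freeze
```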
| def forward(self, q, k, mask: Optional[torch.Tensor] = None): | ||
| q = self.q_linear(q) | ||
| k = nn.functional.conv2d(k, self.k_linear.weight.unsqueeze(-1).unsqueeze(-1), self.k_linear.bias) | ||
| queries_per_head = q.view(q.shape[0], q.shape[1], self.num_heads, self.hidden_dim // self.num_heads) |
My nit here: if we can update the single-letter variable names a bit, that'd be great!
Yes I think we could even try to refactor this to use the standard attention module and only take the attention weights! It could be interesting to compare the performance of eager attention vs this implementation (conv2d instead of linear for key proj, and no multiplication by value) vs other attention impl.
ahah that's a tough one to benchmark but indeed sounds good, LMK if you want to do that in this PR or move to another
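The conv2d-for-key-projection trick in the quoted code relies on a 1x1 conv being pointwise-equivalent to a linear layer over channels; a quick sanity check of that equivalence (shapes illustrative):

```python
import torch
from torch import nn

# A 1x1 conv2d over (N, C, H, W) applies the same affine map at every
# spatial position as a linear layer over the channel dimension.
torch.manual_seed(0)
linear = nn.Linear(4, 8)
x = torch.randn(2, 4, 3, 3)

# Reshape the linear weight (8, 4) into a conv kernel (8, 4, 1, 1),
# exactly as in the quoted forward.
via_conv = nn.functional.conv2d(x, linear.weight.unsqueeze(-1).unsqueeze(-1), linear.bias)
# Reference: move channels last, apply the linear layer, move channels back.
via_linear = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
```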
vasqu
left a comment
Talked with @molbap internally and I think we agree that it doesn't make sense to force this merge just to split refactoring again. Let's aim for quality in this refactor
We will probably merge the model PR as is and add this to this refactor after merge. Otherwise, we will suffer on both sides - crunch time on the model PR and less quality on the refactor (e.g. another set of TODOs)
I've added a few smaller comments meanwhile
| batch_size, num_queries, self.n_heads, self.n_levels * self.n_points | ||
| ) | ||
| attention_weights = F.softmax(attention_weights, -1).view( | ||
| attention_weights = softmax(attention_weights, -1).view( |
This is a bit weird, would like to not have a direct import
Agreed, but I have issues with torch functional and torchvision functional aliases colliding in modular. I have this PR to fix it #43263, I'll change back when it's merged
| hidden_states = inputs_embeds | ||
| encoder_states = () if output_hidden_states else None |
Hmm, can we not add this to _can_record_outputs?
I haven't managed to get something clean that works for now, the issue is this line:
encoder_states = encoder_states + (hidden_states[enc_ind],)
So the encoder_states/hidden_states cannot automatically be recorded. I'll see if some refactoring of the code can fix this
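One possible workaround while automatic recording can't capture indexed intermediates: stash per-layer outputs with forward hooks. A minimal sketch under that assumption — this is not the library's recording mechanism, just an illustration of the hook approach:

```python
import torch
from torch import nn

records = []

def record_output(module, inputs, output):
    # Forward hook: stash each layer's output as it is produced,
    # without touching the module's forward code.
    records.append(output)

layer = nn.Identity()  # stand-in for an encoder layer
handle = layer.register_forward_hook(record_output)
layer(torch.ones(2, 3))
handle.remove()
```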
| # https://github.com/lyuwenyu/RT-DETR/blob/94f5e16708329d2f2716426868ec89aa774af016/rtdetr_pytorch/src/zoo/rtdetr/rtdetr_decoder.py#L412 | ||
| sources = [] | ||
| for level, source in enumerate(encoder_outputs[0]): | ||
| for level, source in enumerate(encoder_outputs.last_hidden_state): |
We should force return_dict=True then for the encoder
That would mean we need to force return_dict=True every time we want to access a named parameter of a submodule output? It doesn't look like that's what we do in the library. From my understanding, return_dict=False is only applied to the top-module output, and the submodules use return_dict=True by default
Here, we pop return_dict in the top module call:
| @@ -1,15 +1,53 @@ | |||
| import math | |||
btw, let's bring back the license header where it's missing
Indeed thanks for the heads up!
View the CircleCI Test Summary for this PR: https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=41549&sha=7821c4
| class RTDetrV2AIFILayer(nn.Module): | ||
| """ | ||
| AIFI (Attention-based Intra-scale Feature Interaction) layer used in RT-DETR hybrid encoder. | ||
| """ |
nice, so this can be reused in other derived models?
Yes that's the idea! Also it allows for automatically capturing hidden_states
| x_max = x_coords_masked.flatten(start_dim=-2).max(dim=-1).values + 1 | ||
| x_min = ( | ||
| torch.where(mask, x_coords_masked, torch.tensor(1e8, device=mask.device, dtype=dtype)) | ||
| torch.where(mask, x_coords_masked, torch.tensor(torch.finfo(dtype).max)) |
Note: This was causing overflow issues in float16
Cc @zhang-prog @molbap
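The overflow is easy to reproduce: float16's largest finite value is 65504, so the old hard-coded 1e8 sentinel saturates to inf, while `torch.finfo(dtype).max` stays finite:

```python
import torch

# 1e8 exceeds the float16 range (max finite value is 65504), so the old
# sentinel became inf and broke the subsequent min reduction;
# finfo.max is the largest representable finite value for the dtype.
bad_sentinel = torch.tensor(1e8, dtype=torch.float16)
safe_sentinel = torch.tensor(torch.finfo(torch.float16).max, dtype=torch.float16)
```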
What does this PR do?
This PR aims at refactoring DETR as part of an effort to standardize vision models in the library, in the same vein as #41546.
Expect to see many more PRs like this for vision models as we approach v5!