diff --git a/docs/source/en/model_doc/biogpt.md b/docs/source/en/model_doc/biogpt.md index 9a664fa288f3..a3ace15a9a3c 100644 --- a/docs/source/en/model_doc/biogpt.md +++ b/docs/source/en/model_doc/biogpt.md @@ -121,7 +121,6 @@ print(output) - Pad inputs on the right because BioGPT uses absolute position embeddings. - BioGPT can reuse previously computed key-value attention pairs. Access this feature with the [past_key_values](https://huggingface.co/docs/transformers/main/en/model_doc/biogpt#transformers.BioGptModel.forward.past_key_values) parameter in [`BioGPTModel.forward`]. -- The `head_mask` argument is ignored when using an attention implementation other than "eager". If you want to use `head_mask`, make sure `attn_implementation="eager"`). ```py from transformers import AutoModelForCausalLM diff --git a/docs/source/en/model_doc/data2vec.md b/docs/source/en/model_doc/data2vec.md index 5c12b2f69dbb..a3845f3c0ff6 100644 --- a/docs/source/en/model_doc/data2vec.md +++ b/docs/source/en/model_doc/data2vec.md @@ -53,7 +53,6 @@ The original code for vision can be found [here](https://github.com/facebookrese - For Data2VecAudio, preprocessing is identical to [`Wav2Vec2Model`], including feature extraction - For Data2VecText, preprocessing is identical to [`RobertaModel`], including tokenization. - For Data2VecVision, preprocessing is identical to [`BeitModel`], including feature extraction. -- The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` ### Using Scaled Dot Product Attention (SDPA) diff --git a/docs/source/en/model_doc/gpt_bigcode.md b/docs/source/en/model_doc/gpt_bigcode.md index e837f2a08f52..fec23ad0f14c 100644 --- a/docs/source/en/model_doc/gpt_bigcode.md +++ b/docs/source/en/model_doc/gpt_bigcode.md @@ -49,9 +49,6 @@ The main differences compared to GPT2. You can read more about the optimizations in the [original pull request](https://github.com/huggingface/transformers/pull/22575) -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` - ## Combining Starcoder and Flash Attention 2 First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. diff --git a/docs/source/en/model_doc/hubert.md b/docs/source/en/model_doc/hubert.md index 5a072214406c..6d25b998d08c 100644 --- a/docs/source/en/model_doc/hubert.md +++ b/docs/source/en/model_doc/hubert.md @@ -114,11 +114,6 @@ print(transcription[0]) ## Notes - HuBERT models expect raw audio input as a 1D float array sampled at 16kHz. -- If you want to use a `head_mask`, use the model with `attn_implementation="eager"`. - - ```python - model = HubertModel.from_pretrained("facebook/hubert-base-ls960", attn_implementation="eager") - ``` ## HubertConfig diff --git a/docs/source/en/model_doc/m2m_100.md b/docs/source/en/model_doc/m2m_100.md index f9ac7e5ebe92..842ba115bc85 100644 --- a/docs/source/en/model_doc/m2m_100.md +++ b/docs/source/en/model_doc/m2m_100.md @@ -51,9 +51,6 @@ multilingual it expects the sequences in a certain format: A special language id source and target text. 
The source text format is `[lang_code] X [eos]`, where `lang_code` is source language id for source text and target language id for target text, with `X` being the source or target text. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` - The [`M2M100Tokenizer`] depends on `sentencepiece` so be sure to install it before running the examples. To install `sentencepiece` run `pip install sentencepiece`. diff --git a/docs/source/en/model_doc/mbart.md b/docs/source/en/model_doc/mbart.md index eca017320375..93b74d7b31b8 100644 --- a/docs/source/en/model_doc/mbart.md +++ b/docs/source/en/model_doc/mbart.md @@ -34,9 +34,6 @@ You can find all the original mBART checkpoints under the [AI at Meta](https://h > [!TIP] > Click on the mBART models in the right sidebar for more examples of applying mBART to different language tasks. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` - The example below demonstrates how to translate text with [`Pipeline`] or the [`AutoModel`] class. diff --git a/docs/source/en/model_doc/musicgen.md b/docs/source/en/model_doc/musicgen.md index 0ec3cb200d1e..c7c5efbc6e0c 100644 --- a/docs/source/en/model_doc/musicgen.md +++ b/docs/source/en/model_doc/musicgen.md @@ -63,9 +63,6 @@ python src/transformers/models/musicgen/convert_musicgen_transformers.py \ --checkpoint small --pytorch_dump_folder /output/path --safe_serialization ``` -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` - ## Generation MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly diff --git a/docs/source/en/model_doc/musicgen_melody.md b/docs/source/en/model_doc/musicgen_melody.md index 9379cfe8bf0b..ff670ef85297 100644 --- a/docs/source/en/model_doc/musicgen_melody.md +++ b/docs/source/en/model_doc/musicgen_melody.md @@ -43,9 +43,6 @@ There are two key differences with MusicGen: 1. The audio prompt is used here as a conditional signal for the generated audio sample, whereas it's used for audio continuation in [MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen). 2. Conditional text and audio signals are concatenated to the decoder's hidden states instead of being used as a cross-attention signal, as in MusicGen. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` - ## Generation MusicGen Melody is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly better results than greedy, thus we encourage sampling mode to be used where possible. Sampling is enabled by default, and can be explicitly specified by setting `do_sample=True` in the call to [`MusicgenMelodyForConditionalGeneration.generate`], or by overriding the model's generation config (see below). 
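(Reviewer note on the MusicGen / MusicGen Melody hunks above.) Since both docs now lead with the sampling-vs-greedy discussion, a minimal sketch of sampling-based generation may help; it assumes the `facebook/musicgen-melody` checkpoint and a text-only prompt, and the `guidance_scale`/`max_new_tokens` values are illustrative, not requirements:

```py
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")
# do_sample=True enables the sampling mode described above; do_sample=False falls back to greedy decoding
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```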
diff --git a/docs/source/en/model_doc/opt.md b/docs/source/en/model_doc/opt.md index 7c65689594e4..4b321f2d680b 100644 --- a/docs/source/en/model_doc/opt.md +++ b/docs/source/en/model_doc/opt.md @@ -101,8 +101,6 @@ tokenizer.batch_decode(generated_ids)[0] - OPT adds an `EOS` token `` to the beginning of every prompt. -- The `head_mask` argument is ignored if the attention implementation isn't `"eager"`. Set `attn_implementation="eager"` to enable the `head_mask`. - ## Resources - Refer to this [notebook](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing) for an example of fine-tuning OPT with PEFT, bitsandbytes, and Transformers. diff --git a/docs/source/en/model_doc/qwen2_audio.md b/docs/source/en/model_doc/qwen2_audio.md index 9b9dd43a919d..ae4cdd815e91 100644 --- a/docs/source/en/model_doc/qwen2_audio.md +++ b/docs/source/en/model_doc/qwen2_audio.md @@ -40,9 +40,6 @@ The abstract from the paper is the following: `Qwen2-Audio-7B` and `Qwen2-Audio-7B-Instruct` can be found on the [Huggingface Hub](https://huggingface.co/Qwen) -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` - ### Inference ```python diff --git a/docs/source/en/model_doc/sew.md b/docs/source/en/model_doc/sew.md index b52849f7c351..1f6bbfd219e2 100644 --- a/docs/source/en/model_doc/sew.md +++ b/docs/source/en/model_doc/sew.md @@ -47,9 +47,6 @@ This model was contributed by [anton-l](https://huggingface.co/anton-l). - SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` - ## Resources - [Audio classification task guide](../tasks/audio_classification) diff --git a/docs/source/en/model_doc/unispeech-sat.md b/docs/source/en/model_doc/unispeech-sat.md index 308155bbfe21..04c758e2e061 100644 --- a/docs/source/en/model_doc/unispeech-sat.md +++ b/docs/source/en/model_doc/unispeech-sat.md @@ -55,8 +55,6 @@ found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT). decoded using [`Wav2Vec2CTCTokenizer`]. - UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` ## Resources diff --git a/docs/source/en/model_doc/unispeech.md b/docs/source/en/model_doc/unispeech.md index 98348b560db7..ef4062c52464 100644 --- a/docs/source/en/model_doc/unispeech.md +++ b/docs/source/en/model_doc/unispeech.md @@ -50,8 +50,6 @@ found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech). - UniSpeech model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". 
If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` ## Resources diff --git a/docs/source/en/model_doc/wav2vec2.md b/docs/source/en/model_doc/wav2vec2.md index 1f5f4a905767..4db7bacc8c6a 100644 --- a/docs/source/en/model_doc/wav2vec2.md +++ b/docs/source/en/model_doc/wav2vec2.md @@ -48,8 +48,6 @@ Note: Meta (FAIR) released a new version of [Wav2Vec2-BERT 2.0](https://huggingf - Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` ## Using Flash Attention 2 diff --git a/docs/source/en/model_doc/whisper.md b/docs/source/en/model_doc/whisper.md index 5e19e870bddc..2c4ef6257eeb 100644 --- a/docs/source/en/model_doc/whisper.md +++ b/docs/source/en/model_doc/whisper.md @@ -29,8 +29,6 @@ rendered properly in your Markdown viewer. You can find all the original Whisper checkpoints under the [Whisper](https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013) collection. -> [!NOTE] -> The `head_mask` argument is ignored when using all attention implementation other than "eager". If you have a `head_mask` and want it to have effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")` > [!TIP] > Click on the Whisper models in the right sidebar for more examples of how to apply Whisper to different audio tasks. diff --git a/docs/source/en/perf_infer_gpu_one.md b/docs/source/en/perf_infer_gpu_one.md index ed6c2b4a8d1a..874cf2084e95 100644 --- a/docs/source/en/perf_infer_gpu_one.md +++ b/docs/source/en/perf_infer_gpu_one.md @@ -175,7 +175,7 @@ There are three supported implementations available. - [xFormers](https://github.com/facebookresearch/xformers) or Memory-Efficient Attention is able to support models with the fp32 torch type. - C++ implementation of scaled dot product attention -SDPA is used by default for PyTorch v2.1.1. and greater when an implementation is available. You could explicitly enable SDPA by setting `attn_implementation="sdpa"` in [`~PreTrainedModel.from_pretrained`] though. Certain attention parameters, such as `head_mask` and `output_attentions=True`, are unsupported and returns a warning that Transformers will fall back to the (slower) eager implementation. +SDPA is used by default for PyTorch v2.1.1 and greater when an implementation is available. You can also explicitly enable SDPA by setting `attn_implementation="sdpa"` in [`~PreTrainedModel.from_pretrained`]. Certain attention parameters, such as `output_attentions=True`, are unsupported and return a warning that Transformers will fall back to the (slower) eager implementation. Refer to the [AttentionInterface](./attention_interface) guide to learn how to change the attention implementation after loading a model. diff --git a/examples/modular-transformers/modeling_dummy_bert.py b/examples/modular-transformers/modeling_dummy_bert.py index 9df092f73e6e..07beed462032 100644 --- a/examples/modular-transformers/modeling_dummy_bert.py +++ b/examples/modular-transformers/modeling_dummy_bert.py @@ -4,24 +4,29 @@ # the file from the modular. If any change should be done, please apply the change to the # modular_dummy_bert.py file directly.
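(Reviewer note on the perf_infer_gpu_one.md hunk above.) A minimal sketch of explicitly opting into SDPA; the checkpoint name is only an illustrative example:

```py
from transformers import AutoModelForCausalLM

# Explicitly request SDPA (already the default for PyTorch >= 2.1.1 when available).
# Unsupported options such as output_attentions=True emit a warning and fall back
# to the slower eager implementation.
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2", attn_implementation="sdpa")
```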
One of our CI enforces this. # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 -import math -from typing import Optional, Union +from typing import Callable, Optional, Union import torch from torch import nn from ...activations import ACT2FN -from ...cache_utils import Cache, DynamicCache, EncoderDecoderCache -from ...modeling_attn_mask_utils import _prepare_4d_attention_mask_for_sdpa, _prepare_4d_causal_attention_mask_for_sdpa +from ...cache_utils import Cache, EncoderDecoderCache +from ...masking_utils import create_causal_mask +from ...modeling_attn_mask_utils import _prepare_4d_attention_mask, _prepare_4d_attention_mask_for_sdpa from ...modeling_layers import GradientCheckpointingLayer from ...modeling_outputs import BaseModelOutputWithPastAndCrossAttentions, BaseModelOutputWithPoolingAndCrossAttentions -from ...modeling_utils import PreTrainedModel +from ...modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel +from ...processing_utils import Unpack from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import auto_docstring, logging -from ...utils.deprecation import deprecate_kwarg +from ...utils import TransformersKwargs, auto_docstring, is_torch_flex_attn_available, logging +from ...utils.generic import check_model_inputs from .configuration_dummy_bert import DummyBertConfig +if is_torch_flex_attn_available(): + from ...integrations.flex_attention import make_flex_block_causal_mask + + logger = logging.get_logger(__name__) @@ -58,7 +63,7 @@ def forward( else: input_shape = inputs_embeds.size()[:-1] - seq_length = input_shape[1] + batch_size, seq_length = input_shape if position_ids is None: position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] @@ -68,9 +73,10 @@ def forward( # issue #5664 if token_type_ids is None: if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length) - token_type_ids = buffered_token_type_ids_expanded + # NOTE: we assume position_ids have either bsz == 1 (broadcastable) or bsz == the effective batch size (input_shape[0]) + buffered_token_type_ids = self.token_type_ids.expand(position_ids.shape[0], -1) + buffered_token_type_ids = torch.gather(buffered_token_type_ids, dim=1, index=position_ids) + token_type_ids = buffered_token_type_ids.expand(batch_size, seq_length) else: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) @@ -87,18 +93,74 @@ def forward( return embeddings +def eager_attention_forward( + module: nn.Module, + query: torch.Tensor, + key: torch.Tensor, + value: torch.Tensor, + attention_mask: Optional[torch.Tensor], + scaling: Optional[float] = None, + dropout: float = 0.0, + use_cache: Optional[bool] = None, + **kwargs: Unpack[TransformersKwargs], +): + if scaling is None: + scaling = query.size(-1) ** -0.5 + + # Take the dot product between "query" and "key" to get the raw attention scores.
+ attn_weights = torch.matmul(query, key.transpose(2, 3)) + + # Relative positional embeddings + if module.position_embedding_type == "relative_key" or module.position_embedding_type == "relative_key_query": + query_length, key_length = query.shape[2], key.shape[2] + if use_cache: + position_ids_l = torch.tensor(key_length - 1, dtype=torch.long, device=query.device).view(-1, 1) + else: + position_ids_l = torch.arange(query_length, dtype=torch.long, device=query.device).view(-1, 1) + position_ids_r = torch.arange(key_length, dtype=torch.long, device=query.device).view(1, -1) + distance = position_ids_l - position_ids_r + + positional_embedding = module.distance_embedding(distance + module.max_position_embeddings - 1) + positional_embedding = positional_embedding.to(dtype=query.dtype) # fp16 compatibility + + if module.position_embedding_type == "relative_key": + relative_position_scores = torch.einsum("bhld,lrd->bhlr", query, positional_embedding) + attn_weights = attn_weights + relative_position_scores + elif module.position_embedding_type == "relative_key_query": + relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query, positional_embedding) + relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key, positional_embedding) + attn_weights = attn_weights + relative_position_scores_query + relative_position_scores_key + + # Scaling is shifted in case of embeddings being relative + attn_weights = attn_weights * scaling + + if attention_mask is not None and attention_mask.ndim == 4: + attention_mask = attention_mask[:, :, :, : key.shape[-2]] + attn_weights = attn_weights + attention_mask + + attn_weights = nn.functional.softmax(attn_weights, dim=-1) + attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) + + attn_output = torch.matmul(attn_weights, value) + attn_output = attn_output.transpose(1, 2).contiguous() + + return attn_output, attn_weights + + class DummyBertSelfAttention(nn.Module): - def __init__(self, config, position_embedding_type=None, layer_idx=None): + def __init__(self, config, position_embedding_type=None, is_causal=False, layer_idx=None): super().__init__() if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): raise ValueError( f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " f"heads ({config.num_attention_heads})" ) + self.config = config self.num_attention_heads = config.num_attention_heads self.attention_head_size = int(config.hidden_size / config.num_attention_heads) self.all_head_size = self.num_attention_heads * self.attention_head_size + self.scaling = self.attention_head_size**-0.5 self.query = nn.Linear(config.hidden_size, self.all_head_size) self.key = nn.Linear(config.hidden_size, self.all_head_size) @@ -113,215 +175,152 @@ def __init__(self, config, position_embedding_type=None, layer_idx=None): self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) self.is_decoder = config.is_decoder + self.is_causal = is_causal self.layer_idx = layer_idx - @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58") def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Cache] = None, - output_attentions: Optional[bool] = False, + past_key_value: Optional[Cache] = None, cache_position: 
Optional[torch.Tensor] = None, + **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: - batch_size, seq_length, _ = hidden_states.shape - query_layer = self.query(hidden_states) - query_layer = query_layer.view(batch_size, -1, self.num_attention_heads, self.attention_head_size).transpose( - 1, 2 - ) - - is_updated = False - is_cross_attention = encoder_hidden_states is not None - if past_key_values is not None: - if isinstance(past_key_values, EncoderDecoderCache): - is_updated = past_key_values.is_updated.get(self.layer_idx) - if is_cross_attention: - # after the first generated id, we can subsequently re-use all key/value_layer from cache - curr_past_key_value = past_key_values.cross_attention_cache - else: - curr_past_key_value = past_key_values.self_attention_cache - else: - curr_past_key_value = past_key_values - - current_states = encoder_hidden_states if is_cross_attention else hidden_states - if is_cross_attention and past_key_values is not None and is_updated: - # reuse k,v, cross_attentions - key_layer = curr_past_key_value.layers[self.layer_idx].keys - value_layer = curr_past_key_value.layers[self.layer_idx].values - else: - key_layer = self.key(current_states) - key_layer = key_layer.view(batch_size, -1, self.num_attention_heads, self.attention_head_size).transpose( - 1, 2 + input_shape = hidden_states.shape[:-1] + hidden_shape = (*input_shape, -1, self.attention_head_size) + + # get all proj + query_layer = self.query(hidden_states).view(*hidden_shape).transpose(1, 2) + key_layer = self.key(hidden_states).view(*hidden_shape).transpose(1, 2) + value_layer = self.value(hidden_states).view(*hidden_shape).transpose(1, 2) + + if past_key_value is not None: + # decoder-only dummy_bert can have a simple dynamic cache for example + current_past_key_value = past_key_value + if isinstance(past_key_value, EncoderDecoderCache): + current_past_key_value = past_key_value.self_attention_cache + + # save all key/value_layer to cache to be re-used for fast auto-regressive generation + key_layer, value_layer = current_past_key_value.update( + key_layer, + value_layer, + self.layer_idx, + {"cache_position": cache_position}, ) - value_layer = self.value(current_states) - value_layer = value_layer.view( - batch_size, -1, self.num_attention_heads, self.attention_head_size - ).transpose(1, 2) - - if past_key_values is not None: - # save all key/value_layer to cache to be re-used for fast auto-regressive generation - cache_position = cache_position if not is_cross_attention else None - key_layer, value_layer = curr_past_key_value.update( - key_layer, value_layer, self.layer_idx, {"cache_position": cache_position} - ) - # set flag that curr layer for cross-attn is already updated so we can re-use in subsequent calls - if is_cross_attention and isinstance(past_key_values, EncoderDecoderCache): - past_key_values.is_updated[self.layer_idx] = True - # Take the dot product between "query" and "key" to get the raw attention scores. 
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - query_length, key_length = query_layer.shape[2], key_layer.shape[2] - if past_key_values is not None: - position_ids_l = torch.tensor(key_length - 1, dtype=torch.long, device=hidden_states.device).view( - -1, 1 + attention_interface: Callable = eager_attention_forward + if self.config._attn_implementation != "eager": + if self.position_embedding_type != "absolute": + raise ValueError( + f"You are using {self.config._attn_implementation} as the attention implementation. However, non-absolute " + 'positional embeddings cannot work with it. Please load the model with `attn_implementation="eager"`.' ) - else: - position_ids_l = torch.arange(query_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(key_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in DummyBertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) + attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation] - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper.
- attention_probs = self.dropout(attention_probs) + attn_output, attn_weights = attention_interface( + self, + query_layer, + key_layer, + value_layer, + attention_mask, + dropout=0.0 if not self.training else self.dropout.p, + scaling=self.scaling, + # only relevant for non-absolute positional embeddings + use_cache=past_key_value is not None, + **kwargs, + ) + attn_output = attn_output.reshape(*input_shape, -1).contiguous() + return attn_output, attn_weights - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) +class DummyBertCrossAttention(nn.Module): + def __init__(self, config, position_embedding_type=None, is_causal=False, layer_idx=None): + super().__init__() + if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): + raise ValueError( + f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " + f"heads ({config.num_attention_heads})" + ) + self.config = config - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) + self.num_attention_heads = config.num_attention_heads + self.attention_head_size = int(config.hidden_size / config.num_attention_heads) + self.all_head_size = self.num_attention_heads * self.attention_head_size + self.scaling = self.attention_head_size**-0.5 - return context_layer, attention_probs + self.query = nn.Linear(config.hidden_size, self.all_head_size) + self.key = nn.Linear(config.hidden_size, self.all_head_size) + self.value = nn.Linear(config.hidden_size, self.all_head_size) + self.dropout = nn.Dropout(config.attention_probs_dropout_prob) + self.position_embedding_type = position_embedding_type or getattr( + config, "position_embedding_type", "absolute" + ) + if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": + self.max_position_embeddings = config.max_position_embeddings + self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) -class DummyBertSdpaSelfAttention(DummyBertSelfAttention): - def __init__(self, config, position_embedding_type=None, layer_idx=None): - super().__init__(config, position_embedding_type=position_embedding_type, layer_idx=layer_idx) - self.dropout_prob = config.attention_probs_dropout_prob + self.is_causal = is_causal + self.layer_idx = layer_idx - # Adapted from DummyBertSelfAttention - @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58") def forward( self, hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Cache] = None, - output_attentions: Optional[bool] = False, - cache_position: Optional[torch.Tensor] = None, + attention_mask: Optional[torch.FloatTensor] = None, + past_key_value: Optional[EncoderDecoderCache] = None, + **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: - if self.position_embedding_type != "absolute" or output_attentions or head_mask is not None: - # TODO: Improve this warning with e.g. `model.config._attn_implementation = "manual"` once implemented.
- logger.warning_once( - "DummyBertSdpaSelfAttention is used but `torch.nn.functional.scaled_dot_product_attention` does not support " - "non-absolute `position_embedding_type` or `output_attentions=True` or `head_mask`. Falling back to " - "the manual attention implementation, but specifying the manual implementation will be required from " - "Transformers version v5.0.0 onwards. This warning can be removed using the argument " - '`attn_implementation="eager"` when loading the model.' - ) - return super().forward( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - past_key_values, - output_attentions, - cache_position, - ) - - bsz, tgt_len, _ = hidden_states.size() + # determine input shapes + bsz, tgt_len = hidden_states.shape[:-1] + src_len = encoder_hidden_states.shape[1] - query_layer = ( - self.query(hidden_states).view(bsz, -1, self.num_attention_heads, self.attention_head_size).transpose(1, 2) - ) + q_input_shape = (bsz, tgt_len, -1, self.attention_head_size) + kv_input_shape = (bsz, src_len, -1, self.attention_head_size) - is_updated = False - is_cross_attention = encoder_hidden_states is not None - current_states = encoder_hidden_states if is_cross_attention else hidden_states - if past_key_values is not None: - if isinstance(past_key_values, EncoderDecoderCache): - is_updated = past_key_values.is_updated.get(self.layer_idx) - if is_cross_attention: - # after the first generated id, we can subsequently re-use all key/value_states from cache - curr_past_key_value = past_key_values.cross_attention_cache - else: - curr_past_key_value = past_key_values.self_attention_cache - else: - curr_past_key_value = past_key_values + # get query proj + query_layer = self.query(hidden_states).view(*q_input_shape).transpose(1, 2) - current_states = encoder_hidden_states if is_cross_attention else hidden_states - if is_cross_attention and past_key_values is not None and is_updated: + is_updated = past_key_value.is_updated.get(self.layer_idx) if past_key_value is not None else False + if past_key_value is not None and is_updated: # reuse k,v, cross_attentions - key_layer = curr_past_key_value.layers[self.layer_idx].keys - value_layer = curr_past_key_value.layers[self.layer_idx].values + key_layer = past_key_value.cross_attention_cache.layers[self.layer_idx].keys + value_layer = past_key_value.cross_attention_cache.layers[self.layer_idx].values else: - key_layer = ( - self.key(current_states) - .view(bsz, -1, self.num_attention_heads, self.attention_head_size) - .transpose(1, 2) - ) - value_layer = ( - self.value(current_states) - .view(bsz, -1, self.num_attention_heads, self.attention_head_size) - .transpose(1, 2) - ) + key_layer = self.key(encoder_hidden_states).view(*kv_input_shape).transpose(1, 2) + value_layer = self.value(encoder_hidden_states).view(*kv_input_shape).transpose(1, 2) - if past_key_values is not None: - # save all key/value_layer to cache to be re-used for fast auto-regressive generation - cache_position = cache_position if not is_cross_attention else None - key_layer, value_layer = curr_past_key_value.update( - key_layer, value_layer, self.layer_idx, {"cache_position": cache_position} + if past_key_value is not None: + # save all states to the cache + key_layer, value_layer = past_key_value.cross_attention_cache.update( + key_layer, value_layer, self.layer_idx ) # set flag that curr layer for cross-attn is already updated so we can re-use in subsequent calls - if is_cross_attention and isinstance(past_key_values, EncoderDecoderCache): - 
past_key_values.is_updated[self.layer_idx] = True + past_key_value.is_updated[self.layer_idx] = True - # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment - # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling. - # The tgt_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create - # a causal mask in case tgt_len == 1. - is_causal = self.is_decoder and not is_cross_attention and attention_mask is None and tgt_len > 1 + attention_interface: Callable = eager_attention_forward + if self.config._attn_implementation != "eager": + if self.position_embedding_type != "absolute": + raise ValueError( + f"You are using {self.config._attn_implementation} as the attention implementation. However, non-absolute " + 'positional embeddings cannot work with it. Please load the model with `attn_implementation="eager"`.' + ) + attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation] - attn_output = torch.nn.functional.scaled_dot_product_attention( + attn_output, attn_weights = attention_interface( + self, query_layer, key_layer, value_layer, - attn_mask=attention_mask, - dropout_p=self.dropout_prob if self.training else 0.0, - is_causal=is_causal, + attention_mask, + dropout=0.0 if not self.training else self.dropout.p, + scaling=self.scaling, + # only relevant for non-absolute positional embeddings + use_cache=past_key_value is not None, + **kwargs, ) - - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, tgt_len, self.all_head_size) - - return attn_output, None + attn_output = attn_output.reshape(bsz, tgt_len, -1).contiguous() + return attn_output, attn_weights class DummyBertSelfOutput(nn.Module): @@ -338,19 +337,15 @@ def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> to return hidden_states -DUMMY_BERT_SELF_ATTENTION_CLASSES = { - "eager": DummyBertSelfAttention, - "sdpa": DummyBertSdpaSelfAttention, -} - - class DummyBertAttention(nn.Module): - def __init__(self, config, position_embedding_type=None, layer_idx=None): + def __init__( + self, config, position_embedding_type=None, is_causal=False, layer_idx=None, is_cross_attention=False + ): super().__init__() - self.self = DUMMY_BERT_SELF_ATTENTION_CLASSES[config._attn_implementation]( - config, - position_embedding_type=position_embedding_type, - layer_idx=layer_idx, + self.is_cross_attention = is_cross_attention + attention_class = DummyBertCrossAttention if is_cross_attention else DummyBertSelfAttention + self.self = attention_class( + config, position_embedding_type=position_embedding_type, is_causal=is_causal, layer_idx=layer_idx ) self.output = DummyBertSelfOutput(config) self.pruned_heads = set() @@ -373,29 +368,27 @@ def prune_heads(self, heads): self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58") def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Cache] = None, - output_attentions: Optional[bool] = False, + encoder_attention_mask: Optional[torch.FloatTensor] = None, + past_key_value: Optional[Cache] = None,
cache_position: Optional[torch.Tensor] = None, + **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: - self_outputs = self.self( + attention_mask = attention_mask if not self.is_cross_attention else encoder_attention_mask + attention_output, attn_weights = self.self( hidden_states, - attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, - past_key_values=past_key_values, - output_attentions=output_attentions, + attention_mask=attention_mask, + past_key_value=past_key_value, cache_position=cache_position, + **kwargs, ) - attention_output = self.output(self_outputs[0], hidden_states) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs + attention_output = self.output(attention_output, hidden_states) + return attention_output, attn_weights class DummyBertIntermediate(nn.Module): @@ -432,38 +425,40 @@ def __init__(self, config, layer_idx=None): super().__init__() self.chunk_size_feed_forward = config.chunk_size_feed_forward self.seq_len_dim = 1 - self.attention = DummyBertAttention(config, layer_idx=layer_idx) + self.attention = DummyBertAttention(config, is_causal=config.is_decoder, layer_idx=layer_idx) self.is_decoder = config.is_decoder self.add_cross_attention = config.add_cross_attention if self.add_cross_attention: if not self.is_decoder: raise ValueError(f"{self} should be used as a decoder model if cross attention is added") - self.crossattention = DummyBertAttention(config, position_embedding_type="absolute", layer_idx=layer_idx) + self.crossattention = DummyBertAttention( + config, + position_embedding_type="absolute", + is_causal=False, + layer_idx=layer_idx, + is_cross_attention=True, + ) self.intermediate = DummyBertIntermediate(config) self.output = DummyBertOutput(config) - @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58") def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Cache] = None, - output_attentions: Optional[bool] = False, + past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, + **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: - self_attention_outputs = self.attention( + self_attention_output, _ = self.attention( hidden_states, - attention_mask=attention_mask, - head_mask=head_mask, - output_attentions=output_attentions, - past_key_values=past_key_values, + attention_mask, + past_key_value=past_key_value, cache_position=cache_position, + **kwargs, ) - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights + attention_output = self_attention_output if self.is_decoder and encoder_hidden_states is not None: if not hasattr(self, "crossattention"): @@ -472,24 +467,20 @@ def forward( " by setting `config.add_cross_attention=True`" ) - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask=encoder_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - past_key_values=past_key_values, - output_attentions=output_attentions, - cache_position=cache_position, + cross_attention_output, _ = self.crossattention( + self_attention_output, + None, # attention_mask + encoder_hidden_states, + encoder_attention_mask, + 
past_key_value=past_key_value, + **kwargs, ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights + attention_output = cross_attention_output layer_output = apply_chunking_to_forward( self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output ) - outputs = (layer_output,) + outputs - - return outputs + return layer_output def feed_forward_chunk(self, attention_output): intermediate_output = self.intermediate(attention_output) @@ -498,92 +489,36 @@ def feed_forward_chunk(self, attention_output): class DummyBertEncoder(nn.Module): - def __init__(self, config, layer_idx=None): + def __init__(self, config): super().__init__() self.config = config self.layer = nn.ModuleList([DummyBertLayer(config, layer_idx=i) for i in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, + past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = False, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, cache_position: Optional[torch.Tensor] = None, + **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - if use_cache and self.config.is_decoder and past_key_values is None: - past_key_values = EncoderDecoderCache(DynamicCache(config=self.config), DynamicCache(config=self.config)) - - if use_cache and self.config.is_decoder and isinstance(past_key_values, tuple): - logger.warning_once( - "Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.58.0. " - "You should pass an instance of `EncoderDecoderCache` instead, e.g. " - "`past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`." 
- ) - past_key_values = EncoderDecoderCache.from_legacy_cache(past_key_values) - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module( + hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - output_attentions=output_attentions, + past_key_value=past_key_values, cache_position=cache_position, + **kwargs, ) - hidden_states = layer_outputs[0] - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - past_key_values, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - ] - if v is not None - ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, - past_key_values=past_key_values, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, + past_key_values=past_key_values if use_cache else None, ) @@ -644,10 +579,18 @@ def forward(self, hidden_states): @auto_docstring class DummyBertPreTrainedModel(PreTrainedModel): - config: DummyBertConfig + config_class = DummyBertConfig base_model_prefix = "dummy_bert" supports_gradient_checkpointing = True + _supports_flash_attn = True _supports_sdpa = True + _supports_flex_attn = True + _supports_attention_backend = True + _can_record_outputs = { + "hidden_states": DummyBertLayer, + "attentions": DummyBertSelfAttention, + "cross_attentions": DummyBertCrossAttention, + } def _init_weights(self, module): """Initialize the weights""" @@ -688,13 +631,13 @@ def __init__(self, config, add_pooling_layer=True): """ super().__init__(config) self.config = config + self.gradient_checkpointing = False self.embeddings = DummyBertEmbeddings(config) self.encoder = DummyBertEncoder(config) self.pooler = DummyBertPooler(config) if add_pooling_layer else None - self.attn_implementation = config._attn_implementation self.position_embedding_type = config.position_embedding_type # Initialize weights and apply final processing @@ -714,6 +657,7 @@ class PreTrainedModel for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) + @check_model_inputs @auto_docstring def forward( self, @@ -721,7 +665,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -731,46 +674,37 @@ def forward( output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.Tensor] = None, + **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if 
output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if self.config.is_decoder: use_cache = use_cache if use_cache is not None else self.config.use_cache else: use_cache = False - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask) - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - past_key_values_length = 0 - if past_key_values is not None: - past_key_values_length = ( - past_key_values[0][0].shape[-2] - if not isinstance(past_key_values, Cache) - else past_key_values.get_seq_length() + return_legacy_cache = False + if use_cache and not isinstance(past_key_values, Cache): + logger.warning_once( + "Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.58.0. " + "You should pass an instance of `EncoderDecoderCache` instead, e.g. " + "`past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`." ) + return_legacy_cache = True + past_key_values = EncoderDecoderCache.from_legacy_cache(past_key_values) - if token_type_ids is None: - if hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) + if (input_ids is None) ^ (inputs_embeds is not None): + raise ValueError("You must specify exactly one of input_ids or inputs_embeds") + + if input_ids is not None: + device = input_ids.device + input_shape = input_ids.shape + else: + device = inputs_embeds.device + input_shape = inputs_embeds.shape[:-1] + + seq_length = input_shape[1] + past_key_values_length = past_key_values.get_seq_length() if past_key_values is not None else 0 + if cache_position is None: + cache_position = torch.arange(past_key_values_length, past_key_values_length + seq_length, device=device) embedding_output = self.embeddings( input_ids=input_ids, @@ -780,86 +714,138 @@ def forward( past_key_values_length=past_key_values_length, ) - if attention_mask is None: - attention_mask = torch.ones((batch_size, seq_length + past_key_values_length), device=device) - - use_sdpa_attention_masks = ( - self.attn_implementation == "sdpa" - and self.position_embedding_type == "absolute" - and head_mask is None - and not output_attentions + attention_mask, encoder_attention_mask = self._create_attention_masks( + input_shape=input_shape, + attention_mask=attention_mask, + encoder_attention_mask=encoder_attention_mask, + embedding_output=embedding_output, + encoder_hidden_states=encoder_hidden_states, + cache_position=cache_position, + past_key_values=past_key_values, ) - # Expand the attention mask - if use_sdpa_attention_masks and attention_mask.dim() == 2: - # Expand the attention mask for SDPA. 
- # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len] - if self.config.is_decoder: - extended_attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( - attention_mask, - input_shape, - embedding_output, - past_key_values_length, - ) - else: - extended_attention_mask = _prepare_4d_attention_mask_for_sdpa( - attention_mask, embedding_output.dtype, tgt_len=seq_length - ) - else: - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - - if use_sdpa_attention_masks and encoder_attention_mask.dim() == 2: - # Expand the attention mask for SDPA. - # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len] - encoder_extended_attention_mask = _prepare_4d_attention_mask_for_sdpa( - encoder_attention_mask, embedding_output.dtype, tgt_len=seq_length - ) - else: - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, + attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, + encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, cache_position=cache_position, + position_ids=position_ids, + **kwargs, ) - sequence_output = encoder_outputs[0] + sequence_output = encoder_outputs.last_hidden_state pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] + if return_legacy_cache: + encoder_outputs.past_key_values = encoder_outputs.past_key_values.to_legacy_cache() return BaseModelOutputWithPoolingAndCrossAttentions( last_hidden_state=sequence_output, pooler_output=pooled_output, past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, ) + + def _create_attention_masks( + self, + input_shape, + attention_mask, + encoder_attention_mask, + embedding_output, + encoder_hidden_states, + cache_position, + past_key_values, + ): + if attention_mask is not None and attention_mask.dim() == 2: + if self.config.is_decoder: + attention_mask = create_causal_mask( + config=self.config, + 
input_embeds=embedding_output, + attention_mask=attention_mask, + cache_position=cache_position, + past_key_values=past_key_values, + ) + else: + attention_mask = self._update_full_mask( + attention_mask, + embedding_output, + ) + elif attention_mask is not None and attention_mask.dim() == 3: + if "flash" in self.config._attn_implementation or self.config._attn_implementation == "flex_attention": + raise ValueError( + "Passing attention mask with a 3D/4D shape does not work with type " + f"{self.config._attn_implementation} - please use either `sdpa` or `eager` instead." + ) + attention_mask = self.get_extended_attention_mask(attention_mask, input_shape) + + if encoder_attention_mask is not None: + if encoder_attention_mask.dim() == 2: + encoder_attention_mask = self._update_cross_attn_mask( + encoder_hidden_states, + encoder_attention_mask, + embedding_output.shape[:2], + embedding_output, + ) + else: + if "flash" in self.config._attn_implementation or self.config._attn_implementation == "flex_attention": + raise ValueError( + "Passing attention mask with a 3D/4D shape does not work with type " + f"{self.config._attn_implementation} - please use either `sdpa` or `eager` instead." + ) + encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask) + + return attention_mask, encoder_attention_mask + + def _update_full_mask( + self, + attention_mask: Union[torch.Tensor, None], + inputs_embeds: torch.Tensor, + ): + if attention_mask is not None: + if "flash" in self.config._attn_implementation: + attention_mask = attention_mask if 0 in attention_mask else None + elif self.config._attn_implementation == "sdpa": + # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] + attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) + elif self.config._attn_implementation == "flex_attention": + if isinstance(attention_mask, torch.Tensor): + attention_mask = make_flex_block_causal_mask(attention_mask, is_causal=False) + else: + # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] + attention_mask = _prepare_4d_attention_mask(attention_mask, inputs_embeds.dtype) + + return attention_mask + + def _update_cross_attn_mask( + self, + encoder_hidden_states: Union[torch.Tensor, None], + encoder_attention_mask: Union[torch.Tensor, None], + input_shape: torch.Size, + inputs_embeds: torch.Tensor, + ): + # expand encoder attention mask + if encoder_hidden_states is not None and encoder_attention_mask is not None: + if "flash" in self.config._attn_implementation: + encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None + elif self.config._attn_implementation == "sdpa": + # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] + encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( + encoder_attention_mask, + inputs_embeds.dtype, + tgt_len=input_shape[-1], + ) + elif self.config._attn_implementation == "flex_attention": + if isinstance(encoder_attention_mask, torch.Tensor): + encoder_attention_mask = make_flex_block_causal_mask( + encoder_attention_mask, + query_length=input_shape[-1], + is_causal=False, + ) + else: + # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] + encoder_attention_mask = _prepare_4d_attention_mask( + encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1] + ) + + return encoder_attention_mask diff --git a/examples/modular-transformers/modeling_roberta.py b/examples/modular-transformers/modeling_roberta.py index 2ae39a555892..427e8f8d1572 100644 --- 
a/examples/modular-transformers/modeling_roberta.py +++ b/examples/modular-transformers/modeling_roberta.py @@ -4,24 +4,29 @@ # the file from the modular. If any change should be done, please apply the change to the # modular_roberta.py file directly. One of our CI enforces this. # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 -import math -from typing import Optional, Union +from typing import Callable, Optional, Union import torch import torch.nn as nn from ...activations import ACT2FN -from ...cache_utils import Cache, DynamicCache, EncoderDecoderCache -from ...modeling_attn_mask_utils import _prepare_4d_attention_mask_for_sdpa, _prepare_4d_causal_attention_mask_for_sdpa +from ...cache_utils import Cache, EncoderDecoderCache +from ...masking_utils import create_causal_mask +from ...modeling_attn_mask_utils import _prepare_4d_attention_mask, _prepare_4d_attention_mask_for_sdpa from ...modeling_layers import GradientCheckpointingLayer from ...modeling_outputs import BaseModelOutputWithPastAndCrossAttentions, BaseModelOutputWithPoolingAndCrossAttentions -from ...modeling_utils import PreTrainedModel +from ...modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel +from ...processing_utils import Unpack from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import auto_docstring, logging -from ...utils.deprecation import deprecate_kwarg +from ...utils import TransformersKwargs, auto_docstring, is_torch_flex_attn_available, logging +from ...utils.generic import check_model_inputs from .configuration_roberta import RobertaConfig +if is_torch_flex_attn_available(): + from ...integrations.flex_attention import make_flex_block_causal_mask + + logger = logging.get_logger(__name__) @@ -61,7 +66,7 @@ def forward( else: input_shape = inputs_embeds.size()[:-1] - seq_length = input_shape[1] + batch_size, seq_length = input_shape if position_ids is None: position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] @@ -71,9 +76,10 @@ def forward( # issue #5664 if token_type_ids is None: if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length) - token_type_ids = buffered_token_type_ids_expanded + # NOTE: we assume position_ids have either bsz == 1 (broadcastable) or bsz == the effective batch size (input_shape[0]) + buffered_token_type_ids = self.token_type_ids.expand(position_ids.shape[0], -1) + buffered_token_type_ids = torch.gather(buffered_token_type_ids, dim=1, index=position_ids) + token_type_ids = buffered_token_type_ids.expand(batch_size, seq_length) else: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) @@ -90,18 +96,74 @@ def forward( return embeddings +def eager_attention_forward( + module: nn.Module, + query: torch.Tensor, + key: torch.Tensor, + value: torch.Tensor, + attention_mask: Optional[torch.Tensor], + scaling: Optional[float] = None, + dropout: float = 0.0, + use_cache: Optional[bool] = None, + **kwargs: Unpack[TransformersKwargs], +): + if scaling is None: + scaling = query.size(-1) ** -0.5 + + # Take the dot product between "query" and "key" to get the raw attention scores.
+    attn_weights = torch.matmul(query, key.transpose(2, 3))
+
+    # Relative positional embeddings
+    if module.position_embedding_type == "relative_key" or module.position_embedding_type == "relative_key_query":
+        query_length, key_length = query.shape[2], key.shape[2]
+        if use_cache:
+            position_ids_l = torch.tensor(key_length - 1, dtype=torch.long, device=query.device).view(-1, 1)
+        else:
+            position_ids_l = torch.arange(query_length, dtype=torch.long, device=query.device).view(-1, 1)
+        position_ids_r = torch.arange(key_length, dtype=torch.long, device=query.device).view(1, -1)
+        distance = position_ids_l - position_ids_r
+
+        positional_embedding = module.distance_embedding(distance + module.max_position_embeddings - 1)
+        positional_embedding = positional_embedding.to(dtype=query.dtype)  # fp16 compatibility
+
+        if module.position_embedding_type == "relative_key":
+            relative_position_scores = torch.einsum("bhld,lrd->bhlr", query, positional_embedding)
+            attn_weights = attn_weights + relative_position_scores
+        elif module.position_embedding_type == "relative_key_query":
+            relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query, positional_embedding)
+            relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key, positional_embedding)
+            attn_weights = attn_weights + relative_position_scores_query + relative_position_scores_key
+
+    # Scaling is shifted in case of embeddings being relative
+    attn_weights = attn_weights * scaling
+
+    if attention_mask is not None and attention_mask.ndim == 4:
+        attention_mask = attention_mask[:, :, :, : key.shape[-2]]
+        attn_weights = attn_weights + attention_mask
+
+    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
+    attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
+
+    attn_output = torch.matmul(attn_weights, value)
+    attn_output = attn_output.transpose(1, 2).contiguous()
+
+    return attn_output, attn_weights
+
+
 class RobertaSelfAttention(nn.Module):
-    def __init__(self, config, position_embedding_type=None, layer_idx=None):
+    def __init__(self, config, position_embedding_type=None, is_causal=False, layer_idx=None):
         super().__init__()
         if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
             raise ValueError(
                 f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
                 f"heads ({config.num_attention_heads})"
             )
+        self.config = config
 
         self.num_attention_heads = config.num_attention_heads
         self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
         self.all_head_size = self.num_attention_heads * self.attention_head_size
+        self.scaling = self.attention_head_size**-0.5
 
         self.query = nn.Linear(config.hidden_size, self.all_head_size)
         self.key = nn.Linear(config.hidden_size, self.all_head_size)
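A quick shape check of the relative-position einsum used in `eager_attention_forward` above may help while reviewing; the dimensions below are made up for illustration:

```python
# "bhld,lrd->bhlr": contract query [b, h, l, d] with a distance-embedding table
# [l, r, d] to get one extra score per (query position l, key position r).
import torch

b, h, l, d = 2, 4, 5, 8
query = torch.randn(b, h, l, d)
positional_embedding = torch.randn(l, l, d)  # one embedding per (l, r) distance pair
relative_position_scores = torch.einsum("bhld,lrd->bhlr", query, positional_embedding)
print(relative_position_scores.shape)  # torch.Size([2, 4, 5, 5]) -- same as attn_weights
```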
@@ -116,215 +178,152 @@ def __init__(self, config, position_embedding_type=None, layer_idx=None):
             self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)
 
         self.is_decoder = config.is_decoder
+        self.is_causal = is_causal
         self.layer_idx = layer_idx
 
-    @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
     def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
-        encoder_hidden_states: Optional[torch.FloatTensor] = None,
-        past_key_values: Optional[Cache] = None,
-        output_attentions: Optional[bool] = False,
+        past_key_value: Optional[Cache] = None,
         cache_position: Optional[torch.Tensor] = None,
+        **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
-        batch_size, seq_length, _ = hidden_states.shape
-        query_layer = self.query(hidden_states)
-        query_layer = query_layer.view(batch_size, -1, self.num_attention_heads, self.attention_head_size).transpose(
-            1, 2
-        )
-
-        is_updated = False
-        is_cross_attention = encoder_hidden_states is not None
-        if past_key_values is not None:
-            if isinstance(past_key_values, EncoderDecoderCache):
-                is_updated = past_key_values.is_updated.get(self.layer_idx)
-                if is_cross_attention:
-                    # after the first generated id, we can subsequently re-use all key/value_layer from cache
-                    curr_past_key_value = past_key_values.cross_attention_cache
-                else:
-                    curr_past_key_value = past_key_values.self_attention_cache
-            else:
-                curr_past_key_value = past_key_values
-
-        current_states = encoder_hidden_states if is_cross_attention else hidden_states
-        if is_cross_attention and past_key_values is not None and is_updated:
-            # reuse k,v, cross_attentions
-            key_layer = curr_past_key_value.layers[self.layer_idx].keys
-            value_layer = curr_past_key_value.layers[self.layer_idx].values
-        else:
-            key_layer = self.key(current_states)
-            key_layer = key_layer.view(batch_size, -1, self.num_attention_heads, self.attention_head_size).transpose(
-                1, 2
+        input_shape = hidden_states.shape[:-1]
+        hidden_shape = (*input_shape, -1, self.attention_head_size)
+
+        # get all proj
+        query_layer = self.query(hidden_states).view(*hidden_shape).transpose(1, 2)
+        key_layer = self.key(hidden_states).view(*hidden_shape).transpose(1, 2)
+        value_layer = self.value(hidden_states).view(*hidden_shape).transpose(1, 2)
+
+        if past_key_value is not None:
+            # decoder-only roberta can have a simple dynamic cache for example
+            current_past_key_value = past_key_value
+            if isinstance(past_key_value, EncoderDecoderCache):
+                current_past_key_value = past_key_value.self_attention_cache
+
+            # save all key/value_layer to cache to be re-used for fast auto-regressive generation
+            key_layer, value_layer = current_past_key_value.update(
+                key_layer,
+                value_layer,
+                self.layer_idx,
+                {"cache_position": cache_position},
             )
-            value_layer = self.value(current_states)
-            value_layer = value_layer.view(
-                batch_size, -1, self.num_attention_heads, self.attention_head_size
-            ).transpose(1, 2)
-
-            if past_key_values is not None:
-                # save all key/value_layer to cache to be re-used for fast auto-regressive generation
-                cache_position = cache_position if not is_cross_attention else None
-                key_layer, value_layer = curr_past_key_value.update(
-                    key_layer, value_layer, self.layer_idx, {"cache_position": cache_position}
-                )
-                # set flag that curr layer for cross-attn is already updated so we can re-use in subsequent calls
-                if is_cross_attention and isinstance(past_key_values, EncoderDecoderCache):
-                    past_key_values.is_updated[self.layer_idx] = True
 
-        # Take the dot product between "query" and "key" to get the raw attention scores.
-        attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
-
-        if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
-            query_length, key_length = query_layer.shape[2], key_layer.shape[2]
-            if past_key_values is not None:
-                position_ids_l = torch.tensor(key_length - 1, dtype=torch.long, device=hidden_states.device).view(
-                    -1, 1
+        attention_interface: Callable = eager_attention_forward
+        if self.config._attn_implementation != "eager":
+            if self.position_embedding_type != "absolute":
+                raise ValueError(
+                    f"You are using {self.config._attn_implementation} as attention type. However, non-absolute "
+                    'positional embeddings cannot work with it. Please load the model with `attn_implementation="eager"`.'
                 )
-            else:
-                position_ids_l = torch.arange(query_length, dtype=torch.long, device=hidden_states.device).view(-1, 1)
-            position_ids_r = torch.arange(key_length, dtype=torch.long, device=hidden_states.device).view(1, -1)
-            distance = position_ids_l - position_ids_r
-
-            positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
-            positional_embedding = positional_embedding.to(dtype=query_layer.dtype)  # fp16 compatibility
-
-            if self.position_embedding_type == "relative_key":
-                relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
-                attention_scores = attention_scores + relative_position_scores
-            elif self.position_embedding_type == "relative_key_query":
-                relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
-                relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
-                attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key
-
-        attention_scores = attention_scores / math.sqrt(self.attention_head_size)
-        if attention_mask is not None:
-            # Apply the attention mask is (precomputed for all layers in RobertaModel forward() function)
-            attention_scores = attention_scores + attention_mask
-
-        # Normalize the attention scores to probabilities.
-        attention_probs = nn.functional.softmax(attention_scores, dim=-1)
+            attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
 
-        # This is actually dropping out entire tokens to attend to, which might
-        # seem a bit unusual, but is taken from the original Transformer paper.
-        attention_probs = self.dropout(attention_probs)
+        attn_output, attn_weights = attention_interface(
+            self,
+            query_layer,
+            key_layer,
+            value_layer,
+            attention_mask,
+            dropout=0.0 if not self.training else self.dropout.p,
+            scaling=self.scaling,
+            # only relevant for non-absolute positional embeddings
+            use_cache=past_key_value is not None,
+            **kwargs,
+        )
+        attn_output = attn_output.reshape(*input_shape, -1).contiguous()
+        return attn_output, attn_weights
 
-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
 
-        context_layer = torch.matmul(attention_probs, value_layer)
+class RobertaCrossAttention(nn.Module):
+    def __init__(self, config, position_embedding_type=None, is_causal=False, layer_idx=None):
+        super().__init__()
+        if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
+            raise ValueError(
+                f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
+                f"heads ({config.num_attention_heads})"
+            )
+        self.config = config
 
-        context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
-        new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
-        context_layer = context_layer.view(new_context_layer_shape)
+        self.num_attention_heads = config.num_attention_heads
+        self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
+        self.all_head_size = self.num_attention_heads * self.attention_head_size
+        self.scaling = self.attention_head_size**-0.5
 
-        return context_layer, attention_probs
+        self.query = nn.Linear(config.hidden_size, self.all_head_size)
+        self.key = nn.Linear(config.hidden_size, self.all_head_size)
+        self.value = nn.Linear(config.hidden_size, self.all_head_size)
+        self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
+        self.position_embedding_type = position_embedding_type or getattr(
             config, "position_embedding_type", "absolute"
         )
         if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
             self.max_position_embeddings = config.max_position_embeddings
             self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)
 
-class RobertaSdpaSelfAttention(RobertaSelfAttention):
-    def __init__(self, config, position_embedding_type=None, layer_idx=None):
-        super().__init__(config, position_embedding_type=position_embedding_type, layer_idx=layer_idx)
-        self.dropout_prob = config.attention_probs_dropout_prob
+        self.is_causal = is_causal
+        self.layer_idx = layer_idx
 
-    # Adapted from RobertaSelfAttention
-    @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
     def forward(
         self,
         hidden_states: torch.Tensor,
-        attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
-        past_key_values: Optional[Cache] = None,
-        output_attentions: Optional[bool] = False,
-        cache_position: Optional[torch.Tensor] = None,
+        attention_mask: Optional[torch.FloatTensor] = None,
+        past_key_value: Optional[EncoderDecoderCache] = None,
+        **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
-        if self.position_embedding_type != "absolute" or output_attentions or head_mask is not None:
-            # TODO: Improve this warning with e.g. `model.config._attn_implementation = "manual"` once implemented.
-            logger.warning_once(
-                "RobertaSdpaSelfAttention is used but `torch.nn.functional.scaled_dot_product_attention` does not support "
-                "non-absolute `position_embedding_type` or `output_attentions=True` or `head_mask`. Falling back to "
-                "the manual attention implementation, but specifying the manual implementation will be required from "
-                "Transformers version v5.0.0 onwards. This warning can be removed using the argument "
-                '`attn_implementation="eager"` when loading the model.'
-            )
-            return super().forward(
-                hidden_states,
-                attention_mask,
-                head_mask,
-                encoder_hidden_states,
-                past_key_values,
-                output_attentions,
-                cache_position,
-            )
-
-        bsz, tgt_len, _ = hidden_states.size()
+        # determine input shapes
+        bsz, tgt_len = hidden_states.shape[:-1]
+        src_len = encoder_hidden_states.shape[1]
 
-        query_layer = (
-            self.query(hidden_states).view(bsz, -1, self.num_attention_heads, self.attention_head_size).transpose(1, 2)
-        )
+        q_input_shape = (bsz, tgt_len, -1, self.attention_head_size)
+        kv_input_shape = (bsz, src_len, -1, self.attention_head_size)
 
-        is_updated = False
-        is_cross_attention = encoder_hidden_states is not None
-        current_states = encoder_hidden_states if is_cross_attention else hidden_states
-        if past_key_values is not None:
-            if isinstance(past_key_values, EncoderDecoderCache):
-                is_updated = past_key_values.is_updated.get(self.layer_idx)
-                if is_cross_attention:
-                    # after the first generated id, we can subsequently re-use all key/value_states from cache
-                    curr_past_key_value = past_key_values.cross_attention_cache
-                else:
-                    curr_past_key_value = past_key_values.self_attention_cache
-            else:
-                curr_past_key_value = past_key_values
+        # get query proj
+        query_layer = self.query(hidden_states).view(*q_input_shape).transpose(1, 2)
 
-        current_states = encoder_hidden_states if is_cross_attention else hidden_states
-        if is_cross_attention and past_key_values is not None and is_updated:
+        is_updated = past_key_value.is_updated.get(self.layer_idx) if past_key_value is not None else False
+        if past_key_value is not None and is_updated:
             # reuse k,v, cross_attentions
-            key_layer = curr_past_key_value.layers[self.layer_idx].keys
-            value_layer = curr_past_key_value.layers[self.layer_idx].values
+            key_layer = past_key_value.cross_attention_cache.layers[self.layer_idx].keys
+            value_layer = past_key_value.cross_attention_cache.layers[self.layer_idx].values
         else:
-            key_layer = (
-                self.key(current_states)
-                .view(bsz, -1, self.num_attention_heads, self.attention_head_size)
-                .transpose(1, 2)
-            )
-            value_layer = (
-                self.value(current_states)
-                .view(bsz, -1, self.num_attention_heads, self.attention_head_size)
-                .transpose(1, 2)
-            )
+            key_layer = self.key(encoder_hidden_states).view(*kv_input_shape).transpose(1, 2)
+            value_layer = self.value(encoder_hidden_states).view(*kv_input_shape).transpose(1, 2)
 
-            if past_key_values is not None:
-                # save all key/value_layer to cache to be re-used for fast auto-regressive generation
-                cache_position = cache_position if not is_cross_attention else None
-                key_layer, value_layer = curr_past_key_value.update(
-                    key_layer, value_layer, self.layer_idx, {"cache_position": cache_position}
+            if past_key_value is not None:
+                # save all states to the cache
+                key_layer, value_layer = past_key_value.cross_attention_cache.update(
+                    key_layer, value_layer, self.layer_idx
                 )
                 # set flag that curr layer for cross-attn is already updated so we can re-use in subsequent calls
-                if is_cross_attention and isinstance(past_key_values, EncoderDecoderCache):
-                    past_key_values.is_updated[self.layer_idx] = True
+                past_key_value.is_updated[self.layer_idx] = True
 
-        # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
-        # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
-        # The tgt_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create
-        # a causal mask in case tgt_len == 1.
-        is_causal = self.is_decoder and not is_cross_attention and attention_mask is None and tgt_len > 1
+        attention_interface: Callable = eager_attention_forward
+        if self.config._attn_implementation != "eager":
+            if self.position_embedding_type != "absolute":
+                raise ValueError(
+                    f"You are using {self.config._attn_implementation} as attention type. However, non-absolute "
+                    'positional embeddings cannot work with it. Please load the model with `attn_implementation="eager"`.'
+                )
+            attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
 
-        attn_output = torch.nn.functional.scaled_dot_product_attention(
+        attn_output, attn_weights = attention_interface(
+            self,
             query_layer,
             key_layer,
             value_layer,
-            attn_mask=attention_mask,
-            dropout_p=self.dropout_prob if self.training else 0.0,
-            is_causal=is_causal,
+            attention_mask,
+            dropout=0.0 if not self.training else self.dropout.p,
+            scaling=self.scaling,
+            # only relevant for non-absolute positional embeddings
+            use_cache=past_key_value is not None,
+            **kwargs,
         )
-
-        attn_output = attn_output.transpose(1, 2)
-        attn_output = attn_output.reshape(bsz, tgt_len, self.all_head_size)
-
-        return attn_output, None
+        attn_output = attn_output.reshape(bsz, tgt_len, -1).contiguous()
+        return attn_output, attn_weights
 
 
 class RobertaSelfOutput(nn.Module):
@@ -341,19 +340,15 @@ def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> to
         return hidden_states
 
 
-ROBERTA_SELF_ATTENTION_CLASSES = {
-    "eager": RobertaSelfAttention,
-    "sdpa": RobertaSdpaSelfAttention,
-}
-
-
 class RobertaAttention(nn.Module):
-    def __init__(self, config, position_embedding_type=None, layer_idx=None):
+    def __init__(
+        self, config, position_embedding_type=None, is_causal=False, layer_idx=None, is_cross_attention=False
+    ):
         super().__init__()
-        self.self = ROBERTA_SELF_ATTENTION_CLASSES[config._attn_implementation](
-            config,
-            position_embedding_type=position_embedding_type,
-            layer_idx=layer_idx,
+        self.is_cross_attention = is_cross_attention
+        attention_class = RobertaCrossAttention if is_cross_attention else RobertaSelfAttention
+        self.self = attention_class(
+            config, position_embedding_type=position_embedding_type, is_causal=is_causal, layer_idx=layer_idx
         )
         self.output = RobertaSelfOutput(config)
         self.pruned_heads = set()
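The cross-attention path introduced above leans on `EncoderDecoderCache`: keys/values are projected once from `encoder_hidden_states`, stored, and flagged through `is_updated` for reuse. A sketch of that lifecycle, assuming the cache API matches the calls made in this patch:

```python
import torch
from transformers.cache_utils import DynamicCache, EncoderDecoderCache

cache = EncoderDecoderCache(DynamicCache(), DynamicCache())
key = torch.randn(1, 4, 6, 16)  # [bsz, heads, src_len, head_dim]
value = torch.randn(1, 4, 6, 16)

# first decoding step: project encoder states, store them, mark the layer updated
cache.cross_attention_cache.update(key, value, layer_idx=0)
cache.is_updated[0] = True

# later steps: reuse the stored tensors instead of re-projecting
reused = cache.cross_attention_cache.layers[0].keys
print(reused.shape)  # torch.Size([1, 4, 6, 16])
```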
@@ -376,29 +371,27 @@ def prune_heads(self, heads):
         self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
         self.pruned_heads = self.pruned_heads.union(heads)
 
-    @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
     def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
-        past_key_values: Optional[Cache] = None,
-        output_attentions: Optional[bool] = False,
+        encoder_attention_mask: Optional[torch.FloatTensor] = None,
+        past_key_value: Optional[Cache] = None,
         cache_position: Optional[torch.Tensor] = None,
+        **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
-        self_outputs = self.self(
+        attention_mask = attention_mask if not self.is_cross_attention else encoder_attention_mask
+        attention_output, attn_weights = self.self(
             hidden_states,
-            attention_mask=attention_mask,
-            head_mask=head_mask,
             encoder_hidden_states=encoder_hidden_states,
-            past_key_values=past_key_values,
-            output_attentions=output_attentions,
+            attention_mask=attention_mask,
+            past_key_value=past_key_value,
             cache_position=cache_position,
+            **kwargs,
         )
-        attention_output = self.output(self_outputs[0], hidden_states)
-        outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
-        return outputs
+        attention_output = self.output(attention_output, hidden_states)
+        return attention_output, attn_weights
 
 
 class RobertaIntermediate(nn.Module):
@@ -435,38 +428,40 @@ def __init__(self, config, layer_idx=None):
         super().__init__()
         self.chunk_size_feed_forward = config.chunk_size_feed_forward
         self.seq_len_dim = 1
-        self.attention = RobertaAttention(config, layer_idx=layer_idx)
+        self.attention = RobertaAttention(config, is_causal=config.is_decoder, layer_idx=layer_idx)
         self.is_decoder = config.is_decoder
         self.add_cross_attention = config.add_cross_attention
         if self.add_cross_attention:
             if not self.is_decoder:
                 raise ValueError(f"{self} should be used as a decoder model if cross attention is added")
-            self.crossattention = RobertaAttention(config, position_embedding_type="absolute", layer_idx=layer_idx)
+            self.crossattention = RobertaAttention(
+                config,
+                position_embedding_type="absolute",
+                is_causal=False,
+                layer_idx=layer_idx,
+                is_cross_attention=True,
+            )
         self.intermediate = RobertaIntermediate(config)
         self.output = RobertaOutput(config)
 
-    @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
     def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
-        past_key_values: Optional[Cache] = None,
-        output_attentions: Optional[bool] = False,
+        past_key_value: Optional[Cache] = None,
         cache_position: Optional[torch.Tensor] = None,
+        **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
-        self_attention_outputs = self.attention(
+        self_attention_output, _ = self.attention(
             hidden_states,
-            attention_mask=attention_mask,
-            head_mask=head_mask,
-            output_attentions=output_attentions,
-            past_key_values=past_key_values,
+            attention_mask,
+            past_key_value=past_key_value,
             cache_position=cache_position,
+            **kwargs,
         )
-        attention_output = self_attention_outputs[0]
-        outputs = self_attention_outputs[1:]  # add self attentions if we output attention weights
+        attention_output = self_attention_output
 
         if self.is_decoder and encoder_hidden_states is not None:
             if not hasattr(self, "crossattention"):
@@ -475,24 +470,20 @@ def forward(
                     " by setting `config.add_cross_attention=True`"
                 )
 
-            cross_attention_outputs = self.crossattention(
-                attention_output,
-                attention_mask=encoder_attention_mask,
-                head_mask=head_mask,
-                encoder_hidden_states=encoder_hidden_states,
-                past_key_values=past_key_values,
-                output_attentions=output_attentions,
-                cache_position=cache_position,
+            cross_attention_output, _ = self.crossattention(
+                self_attention_output,
+                None,  # attention_mask
+                encoder_hidden_states,
+                encoder_attention_mask,
+                past_key_value=past_key_value,
+                **kwargs,
             )
-            attention_output = cross_attention_outputs[0]
-            outputs = outputs + cross_attention_outputs[1:]  # add cross attentions if we output attention weights
+            attention_output = cross_attention_output
 
         layer_output = apply_chunking_to_forward(
             self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
         )
-        outputs = (layer_output,) + outputs
-
-        return outputs
+        return layer_output
 
     def feed_forward_chunk(self, attention_output):
         intermediate_output = self.intermediate(attention_output)
@@ -501,92 +492,36 @@ def feed_forward_chunk(self, attention_output):
 
 
 class RobertaEncoder(nn.Module):
-    def __init__(self, config, layer_idx=None):
+    def __init__(self, config):
         super().__init__()
         self.config = config
         self.layer = nn.ModuleList([RobertaLayer(config, layer_idx=i) for i in range(config.num_hidden_layers)])
-        self.gradient_checkpointing = False
 
     def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
-        past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None,
+        past_key_values: Optional[Cache] = None,
         use_cache: Optional[bool] = None,
-        output_attentions: Optional[bool] = False,
-        output_hidden_states: Optional[bool] = False,
-        return_dict: Optional[bool] = True,
         cache_position: Optional[torch.Tensor] = None,
+        **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
-        all_hidden_states = () if output_hidden_states else None
-        all_self_attentions = () if output_attentions else None
-        all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
-
-        if self.gradient_checkpointing and self.training:
-            if use_cache:
-                logger.warning_once(
-                    "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
-                )
-                use_cache = False
-
-        if use_cache and self.config.is_decoder and past_key_values is None:
-            past_key_values = EncoderDecoderCache(DynamicCache(config=self.config), DynamicCache(config=self.config))
-
-        if use_cache and self.config.is_decoder and isinstance(past_key_values, tuple):
-            logger.warning_once(
-                "Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.58.0. "
-                "You should pass an instance of `EncoderDecoderCache` instead, e.g. "
-                "`past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`."
-            )
-            past_key_values = EncoderDecoderCache.from_legacy_cache(past_key_values)
-
         for i, layer_module in enumerate(self.layer):
-            if output_hidden_states:
-                all_hidden_states = all_hidden_states + (hidden_states,)
-
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
-            layer_outputs = layer_module(
+            hidden_states = layer_module(
                 hidden_states,
                 attention_mask,
-                layer_head_mask,
                 encoder_hidden_states,  # as a positional argument for gradient checkpointing
                 encoder_attention_mask=encoder_attention_mask,
-                past_key_values=past_key_values,
-                output_attentions=output_attentions,
+                past_key_value=past_key_values,
                 cache_position=cache_position,
+                **kwargs,
             )
 
-            hidden_states = layer_outputs[0]
-            if output_attentions:
-                all_self_attentions = all_self_attentions + (layer_outputs[1],)
-                if self.config.add_cross_attention:
-                    all_cross_attentions = all_cross_attentions + (layer_outputs[2],)
-
-        if output_hidden_states:
-            all_hidden_states = all_hidden_states + (hidden_states,)
-
-        if not return_dict:
-            return tuple(
-                v
-                for v in [
-                    hidden_states,
-                    past_key_values,
-                    all_hidden_states,
-                    all_self_attentions,
-                    all_cross_attentions,
-                ]
-                if v is not None
-            )
         return BaseModelOutputWithPastAndCrossAttentions(
             last_hidden_state=hidden_states,
-            past_key_values=past_key_values,
-            hidden_states=all_hidden_states,
-            attentions=all_self_attentions,
-            cross_attentions=all_cross_attentions,
+            past_key_values=past_key_values if use_cache else None,
         )
@@ -647,10 +582,18 @@ def forward(self, hidden_states):
 @auto_docstring
 class RobertaPreTrainedModel(PreTrainedModel):
-    config: RobertaConfig
+    config_class = RobertaConfig
     base_model_prefix = "roberta"
     supports_gradient_checkpointing = True
+    _supports_flash_attn = True
     _supports_sdpa = True
+    _supports_flex_attn = True
+    _supports_attention_backend = True
+    _can_record_outputs = {
+        "hidden_states": RobertaLayer,
+        "attentions": RobertaSelfAttention,
+        "cross_attentions": RobertaCrossAttention,
+    }
 
     def _init_weights(self, module):
         """Initialize the weights"""
@@ -691,13 +634,13 @@ def __init__(self, config, add_pooling_layer=True):
         """
         super().__init__(config)
         self.config = config
+        self.gradient_checkpointing = False
 
         self.embeddings = RobertaEmbeddings(config)
         self.encoder = RobertaEncoder(config)
 
         self.pooler = RobertaPooler(config) if add_pooling_layer else None
 
-        self.attn_implementation = config._attn_implementation
         self.position_embedding_type = config.position_embedding_type
 
         # Initialize weights and apply final processing
@@ -717,6 +660,7 @@ class PreTrainedModel
         for layer, heads in heads_to_prune.items():
             self.encoder.layer[layer].attention.prune_heads(heads)
 
+    @check_model_inputs
     @auto_docstring
     def forward(
         self,
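With `check_model_inputs` applied to the model's `forward`, per-layer tuples are no longer assembled by hand in the encoder loop; `_can_record_outputs` declares which modules to hook when the caller asks for them. End-user behavior should stay as before (usage sketch against a public checkpoint):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
model = AutoModel.from_pretrained("FacebookAI/roberta-base", attn_implementation="eager")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model(**inputs, output_attentions=True, output_hidden_states=True)
print(len(outputs.attentions), outputs.attentions[0].shape)  # one [bsz, heads, seq, seq] per layer
```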
@@ -724,56 +668,43 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
-        past_key_values: Optional[list[torch.FloatTensor]] = None,
+        past_key_values: Optional[Union[list[torch.FloatTensor], Cache]] = None,
         use_cache: Optional[bool] = None,
-        output_attentions: Optional[bool] = None,
-        output_hidden_states: Optional[bool] = None,
-        return_dict: Optional[bool] = None,
         cache_position: Optional[torch.Tensor] = None,
+        **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
-        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
-        output_hidden_states = (
-            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
-        )
-        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
         if self.config.is_decoder:
             use_cache = use_cache if use_cache is not None else self.config.use_cache
         else:
             use_cache = False
 
-        if input_ids is not None and inputs_embeds is not None:
-            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
-        elif input_ids is not None:
-            self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
-            input_shape = input_ids.size()
-        elif inputs_embeds is not None:
-            input_shape = inputs_embeds.size()[:-1]
-        else:
-            raise ValueError("You have to specify either input_ids or inputs_embeds")
-
-        batch_size, seq_length = input_shape
-        device = input_ids.device if input_ids is not None else inputs_embeds.device
-
-        past_key_values_length = 0
-        if past_key_values is not None:
-            past_key_values_length = (
-                past_key_values[0][0].shape[-2]
-                if not isinstance(past_key_values, Cache)
-                else past_key_values.get_seq_length()
+        return_legacy_cache = False
+        if use_cache and not isinstance(past_key_values, Cache):
+            logger.warning_once(
+                "Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.58.0. "
+                "You should pass an instance of `EncoderDecoderCache` instead, e.g. "
+                "`past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`."
             )
+            return_legacy_cache = True
+            past_key_values = EncoderDecoderCache.from_legacy_cache(past_key_values)
 
-        if token_type_ids is None:
-            if hasattr(self.embeddings, "token_type_ids"):
-                buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
-                buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
-                token_type_ids = buffered_token_type_ids_expanded
-            else:
-                token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
+        if (input_ids is None) ^ (inputs_embeds is not None):
+            raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
+
+        if input_ids is not None:
+            device = input_ids.device
+            input_shape = input_ids.shape
+        else:
+            device = inputs_embeds.device
+            input_shape = inputs_embeds.shape[:-1]
+
+        seq_length = input_shape[1]
+        past_key_values_length = past_key_values.get_seq_length() if past_key_values is not None else 0
+        if cache_position is None:
+            cache_position = torch.arange(past_key_values_length, past_key_values_length + seq_length, device=device)
 
         embedding_output = self.embeddings(
             input_ids=input_ids,
@@ -783,86 +714,138 @@ def forward(
             past_key_values_length=past_key_values_length,
         )
 
-        if attention_mask is None:
-            attention_mask = torch.ones((batch_size, seq_length + past_key_values_length), device=device)
-
-        use_sdpa_attention_masks = (
-            self.attn_implementation == "sdpa"
-            and self.position_embedding_type == "absolute"
-            and head_mask is None
-            and not output_attentions
+        attention_mask, encoder_attention_mask = self._create_attention_masks(
+            input_shape=input_shape,
+            attention_mask=attention_mask,
+            encoder_attention_mask=encoder_attention_mask,
+            embedding_output=embedding_output,
+            encoder_hidden_states=encoder_hidden_states,
+            cache_position=cache_position,
+            past_key_values=past_key_values,
         )
 
-        # Expand the attention mask
-        if use_sdpa_attention_masks and attention_mask.dim() == 2:
-            # Expand the attention mask for SDPA.
-            # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len]
-            if self.config.is_decoder:
-                extended_attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
-                    attention_mask,
-                    input_shape,
-                    embedding_output,
-                    past_key_values_length,
-                )
-            else:
-                extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
-                    attention_mask, embedding_output.dtype, tgt_len=seq_length
-                )
-        else:
-            # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
-            # ourselves in which case we just need to make it broadcastable to all heads.
-            extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape)
-
-        # If a 2D or 3D attention mask is provided for the cross-attention
-        # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
-        if self.config.is_decoder and encoder_hidden_states is not None:
-            encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
-            encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
-            if encoder_attention_mask is None:
-                encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
-
-            if use_sdpa_attention_masks and encoder_attention_mask.dim() == 2:
-                # Expand the attention mask for SDPA.
-                # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len]
-                encoder_extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
-                    encoder_attention_mask, embedding_output.dtype, tgt_len=seq_length
-                )
-            else:
-                encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
-        else:
-            encoder_extended_attention_mask = None
-
-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         encoder_outputs = self.encoder(
             embedding_output,
-            attention_mask=extended_attention_mask,
-            head_mask=head_mask,
+            attention_mask=attention_mask,
             encoder_hidden_states=encoder_hidden_states,
-            encoder_attention_mask=encoder_extended_attention_mask,
+            encoder_attention_mask=encoder_attention_mask,
             past_key_values=past_key_values,
             use_cache=use_cache,
-            output_attentions=output_attentions,
-            output_hidden_states=output_hidden_states,
-            return_dict=return_dict,
             cache_position=cache_position,
+            position_ids=position_ids,
+            **kwargs,
         )
-        sequence_output = encoder_outputs[0]
+        sequence_output = encoder_outputs.last_hidden_state
         pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
 
-        if not return_dict:
-            return (sequence_output, pooled_output) + encoder_outputs[1:]
+        if return_legacy_cache:
+            encoder_outputs.past_key_values = encoder_outputs.past_key_values.to_legacy_cache()
 
         return BaseModelOutputWithPoolingAndCrossAttentions(
             last_hidden_state=sequence_output,
             pooler_output=pooled_output,
             past_key_values=encoder_outputs.past_key_values,
-            hidden_states=encoder_outputs.hidden_states,
-            attentions=encoder_outputs.attentions,
-            cross_attentions=encoder_outputs.cross_attentions,
         )
+
+    def _create_attention_masks(
+        self,
+        input_shape,
+        attention_mask,
+        encoder_attention_mask,
+        embedding_output,
+        encoder_hidden_states,
+        cache_position,
+        past_key_values,
+    ):
+        if attention_mask is not None and attention_mask.dim() == 2:
+            if self.config.is_decoder:
+                attention_mask = create_causal_mask(
+                    config=self.config,
+                    input_embeds=embedding_output,
+                    attention_mask=attention_mask,
+                    cache_position=cache_position,
+                    past_key_values=past_key_values,
+                )
+            else:
+                attention_mask = self._update_full_mask(
+                    attention_mask,
+                    embedding_output,
+                )
+        elif attention_mask is not None and attention_mask.dim() == 3:
+            if "flash" in self.config._attn_implementation or self.config._attn_implementation == "flex_attention":
+                raise ValueError(
+                    "Passing attention mask with a 3D/4D shape does not work with type "
+                    f"{self.config._attn_implementation} - please use either `sdpa` or `eager` instead."
+                )
+            attention_mask = self.get_extended_attention_mask(attention_mask, input_shape)
+
+        if encoder_attention_mask is not None:
+            if encoder_attention_mask.dim() == 2:
+                encoder_attention_mask = self._update_cross_attn_mask(
+                    encoder_hidden_states,
+                    encoder_attention_mask,
+                    embedding_output.shape[:2],
+                    embedding_output,
+                )
+            else:
+                if "flash" in self.config._attn_implementation or self.config._attn_implementation == "flex_attention":
+                    raise ValueError(
+                        "Passing attention mask with a 3D/4D shape does not work with type "
+                        f"{self.config._attn_implementation} - please use either `sdpa` or `eager` instead."
+                    )
+                encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask)
+
+        return attention_mask, encoder_attention_mask
+
+    def _update_full_mask(
+        self,
+        attention_mask: Union[torch.Tensor, None],
+        inputs_embeds: torch.Tensor,
+    ):
+        if attention_mask is not None:
+            if "flash" in self.config._attn_implementation:
+                attention_mask = attention_mask if 0 in attention_mask else None
+            elif self.config._attn_implementation == "sdpa":
+                # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+                attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
+            elif self.config._attn_implementation == "flex_attention":
+                if isinstance(attention_mask, torch.Tensor):
+                    attention_mask = make_flex_block_causal_mask(attention_mask, is_causal=False)
+            else:
+                # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+                attention_mask = _prepare_4d_attention_mask(attention_mask, inputs_embeds.dtype)
+
+        return attention_mask
+
+    def _update_cross_attn_mask(
+        self,
+        encoder_hidden_states: Union[torch.Tensor, None],
+        encoder_attention_mask: Union[torch.Tensor, None],
+        input_shape: torch.Size,
+        inputs_embeds: torch.Tensor,
+    ):
+        # expand encoder attention mask
+        if encoder_hidden_states is not None and encoder_attention_mask is not None:
+            if "flash" in self.config._attn_implementation:
+                encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None
+            elif self.config._attn_implementation == "sdpa":
+                # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+                encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
+                    encoder_attention_mask,
+                    inputs_embeds.dtype,
+                    tgt_len=input_shape[-1],
+                )
+            elif self.config._attn_implementation == "flex_attention":
+                if isinstance(encoder_attention_mask, torch.Tensor):
+                    encoder_attention_mask = make_flex_block_causal_mask(
+                        encoder_attention_mask,
+                        query_length=input_shape[-1],
+                        is_causal=False,
+                    )
+            else:
+                # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+                encoder_attention_mask = _prepare_4d_attention_mask(
+                    encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]
+                )
+
+        return encoder_attention_mask
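For the `flex_attention` branch above, `make_flex_block_causal_mask(..., is_causal=False)` builds a padding-only block mask. A rough equivalent straight on top of PyTorch's flex-attention utilities (a sketch, assuming torch >= 2.5 and a sequence padded to the 128-wide block size):

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask

seq_len = 128
keep = torch.ones(1, seq_len, dtype=torch.bool)
keep[:, 120:] = False  # last eight positions are padding


def mask_mod(b, h, q_idx, kv_idx):
    return keep[b, kv_idx]  # non-causal: only padded keys are hidden


block_mask = create_block_mask(mask_mod, B=1, H=None, Q_LEN=seq_len, KV_LEN=seq_len, device="cpu")
print(block_mask.shape)
```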
diff --git a/examples/modular-transformers/modular_dummy_bert.py b/examples/modular-transformers/modular_dummy_bert.py
index fb7440228d8c..592508752843 100644
--- a/examples/modular-transformers/modular_dummy_bert.py
+++ b/examples/modular-transformers/modular_dummy_bert.py
@@ -5,6 +5,8 @@
 from transformers.models.bert.modeling_bert import BertModel
 
 from ...modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions
+from ...processing_utils import Unpack
+from ...utils import TransformersKwargs
 
 
 class DummyBertModel(BertModel):
@@ -14,7 +16,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
@@ -24,5 +25,6 @@ def forward(
         output_hidden_states: Optional[bool] = None,
         return_dict: Optional[bool] = None,
         cache_position: Optional[torch.Tensor] = None,
+        **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
-        return super().forward(input_ids)
+        return super().forward(input_ids, **kwargs)
diff --git a/src/transformers/integrations/flash_attention.py b/src/transformers/integrations/flash_attention.py
index 552d89bac2f6..c5592114fc5c 100644
--- a/src/transformers/integrations/flash_attention.py
+++ b/src/transformers/integrations/flash_attention.py
@@ -23,9 +23,9 @@ def flash_attention_forward(
     softcap: Optional[float] = None,
     **kwargs,
 ) -> tuple[torch.Tensor, None]:
-    if kwargs.get("output_attentions", False) or kwargs.get("head_mask") is not None:
+    if kwargs.get("output_attentions", False):
         logger.warning_once(
-            "`flash_attention_2` does not support `output_attentions=True` or `head_mask`."
+            "`flash_attention_2` does not support `output_attentions=True`."
             " Please set your attention to `eager` if you want any of these features."
         )
diff --git a/src/transformers/integrations/flex_attention.py b/src/transformers/integrations/flex_attention.py
index ee947808d894..2ccbad24261c 100644
--- a/src/transformers/integrations/flex_attention.py
+++ b/src/transformers/integrations/flex_attention.py
@@ -240,15 +240,9 @@ def flex_attention_forward(
     attention_mask: Union[torch.Tensor, "BlockMask"],
     scaling: Optional[float] = None,
     softcap: Optional[float] = None,
-    head_mask: Optional[torch.Tensor] = None,
     s_aux: Optional[torch.Tensor] = None,
     **kwargs,
 ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
-    if head_mask is not None:
-        logger.warning_once(
-            "`flex_attention` does not support `head_mask`. Please set your attention to `eager` if you want this feature."
-        )
-
     if kwargs.get("dropout", 0.0) > 0:
         raise ValueError(
             "`flex_attention` does not support `dropout`. Please use it with inference"
@@ -270,8 +264,6 @@ def score_mod(score, batch_idx, head_idx, q_idx, kv_idx):
             score = softcap * torch.tanh(score / softcap)
         if score_mask is not None:
             score = score + score_mask[batch_idx][0][q_idx][kv_idx]
-        if head_mask is not None:
-            score = score + head_mask[batch_idx][head_idx][0][0]
 
     # Note: attention sinks cannot be correctly implemented in score_mod
     # because it requires operating on the full attention matrix before softmax.
     # ==> this is done after flex attention
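The `score_mod` above no longer folds in a head mask; the softcap branch it keeps is easy to sanity-check in isolation (toy values, not the library function):

```python
import torch

softcap = 20.0
raw_scores = torch.tensor([1.0, 50.0, 500.0])
capped = softcap * torch.tanh(raw_scores / softcap)
print(capped)  # roughly tensor([ 1.0, 19.7, 20.0]): logits saturate at +/- softcap
```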
diff --git a/src/transformers/integrations/sdpa_attention.py b/src/transformers/integrations/sdpa_attention.py
index f6c6f2785c3f..301243b3fbfd 100644
--- a/src/transformers/integrations/sdpa_attention.py
+++ b/src/transformers/integrations/sdpa_attention.py
@@ -51,9 +51,9 @@ def sdpa_attention_forward(
     is_causal: Optional[bool] = None,
     **kwargs,
 ) -> tuple[torch.Tensor, None]:
-    if kwargs.get("output_attentions", False) or kwargs.get("head_mask") is not None:
+    if kwargs.get("output_attentions", False):
         logger.warning_once(
-            "`sdpa` attention does not support `output_attentions=True` or `head_mask`."
+            "`sdpa` attention does not support `output_attentions=True`."
             " Please set your attention to `eager` if you want any of these features."
         )
     sdpa_kwargs = {}
diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
index 3853c369cd49..1c57072a0c72 100644
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -1587,44 +1587,6 @@ def get_extended_attention_mask(
         extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(dtype).min
         return extended_attention_mask
 
-    def get_head_mask(
-        self, head_mask: Optional[Tensor], num_hidden_layers: int, is_attention_chunked: bool = False
-    ) -> Tensor:
-        """
-        Prepare the head mask if needed.
-
-        Args:
-            head_mask (`torch.Tensor` with shape `[num_heads]` or `[num_hidden_layers x num_heads]`, *optional*):
-                The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard).
-            num_hidden_layers (`int`):
-                The number of hidden layers in the model.
-            is_attention_chunked (`bool`, *optional*, defaults to `False`):
-                Whether or not the attentions scores are computed by chunks or not.
-
-        Returns:
-            `torch.Tensor` with shape `[num_hidden_layers x batch x num_heads x seq_length x seq_length]` or list with
-            `[None]` for each layer.
-        """
-        if head_mask is not None:
-            head_mask = self._convert_head_mask_to_5d(head_mask, num_hidden_layers)
-            if is_attention_chunked is True:
-                head_mask = head_mask.unsqueeze(-1)
-        else:
-            head_mask = [None] * num_hidden_layers
-
-        return head_mask
-
-    def _convert_head_mask_to_5d(self, head_mask, num_hidden_layers):
-        """-> [num_hidden_layers x batch x num_heads x seq_length x seq_length]"""
-        if head_mask.dim() == 1:
-            head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
-            head_mask = head_mask.expand(num_hidden_layers, -1, -1, -1, -1)
-        elif head_mask.dim() == 2:
-            head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1)  # We can specify head_mask for each layer
-        assert head_mask.dim() == 5, f"head_mask.dim != 5, instead {head_mask.dim()}"
-        head_mask = head_mask.to(dtype=self.dtype)  # switch to float if need + fp16 compatibility
-        return head_mask
-
     def num_parameters(self, only_trainable: bool = False, exclude_embeddings: bool = False) -> int:
         """
         Get number of (optionally, trainable or non-embeddings) parameters in the module.
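For reference while reviewing the deletion: the removed `_convert_head_mask_to_5d` broadcast a `[num_heads]` mask up to a `[num_hidden_layers, batch, num_heads, seq, seq]`-compatible shape. A sketch of that now-removed behavior:

```python
import torch

num_hidden_layers = 12
head_mask = torch.tensor([1.0, 0.0, 1.0, 1.0])  # [num_heads]: drop head 1 everywhere

five_d = head_mask[None, None, :, None, None].expand(num_hidden_layers, -1, -1, -1, -1)
print(five_d.shape)  # torch.Size([12, 1, 4, 1, 1]) -> broadcasts over batch, q_len, k_len
```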
diff --git a/src/transformers/models/albert/modeling_albert.py b/src/transformers/models/albert/modeling_albert.py
index 31caa335bb64..a1dfa5e2fc9c 100755
--- a/src/transformers/models/albert/modeling_albert.py
+++ b/src/transformers/models/albert/modeling_albert.py
@@ -125,7 +125,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     use_cache: Optional[bool] = None,
     **kwargs: Unpack[TransformersKwargs],
 ):
@@ -166,9 +165,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
 
-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask
-
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
 
@@ -231,7 +227,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor, torch.Tensor]:
         input_shape = hidden_states.shape[:-1]
@@ -259,7 +254,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.attention_dropout.p,
             scaling=self.scaling,
-            head_mask=head_mask,
             # only relevant for non-absolute positional embeddings
             use_cache=False,
             **kwargs,
@@ -291,10 +285,9 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor, torch.Tensor]:
-        attention_output, _ = self.attention(hidden_states, attention_mask, head_mask, **kwargs)
+        attention_output, _ = self.attention(hidden_states, attention_mask, **kwargs)
 
         ffn_output = apply_chunking_to_forward(
             self.ff_chunk,
@@ -322,11 +315,10 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[Union[torch.Tensor, tuple[torch.Tensor]], ...]:
         for layer_index, albert_layer in enumerate(self.albert_layers):
-            hidden_states = albert_layer(hidden_states, attention_mask, head_mask[layer_index], **kwargs)
+            hidden_states = albert_layer(hidden_states, attention_mask, **kwargs)
 
         return hidden_states
 
@@ -342,24 +334,17 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[BaseModelOutput, tuple]:
         hidden_states = self.embedding_hidden_mapping_in(hidden_states)
 
-        head_mask = [None] * self.config.num_hidden_layers if head_mask is None else head_mask
-
         for i in range(self.config.num_hidden_layers):
-            # Number of layers in a hidden group
-            layers_per_group = int(self.config.num_hidden_layers / self.config.num_hidden_groups)
-
             # Index of the hidden group
             group_idx = int(i / (self.config.num_hidden_layers / self.config.num_hidden_groups))
 
             hidden_states = self.albert_layer_groups[group_idx](
                 hidden_states,
                 attention_mask,
-                head_mask[group_idx * layers_per_group : (group_idx + 1) * layers_per_group],
                 **kwargs,
             )
 
@@ -480,7 +465,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[BaseModelOutputWithPooling, tuple]:
@@ -493,12 +477,9 @@ def forward(
 
         attention_mask = self._update_full_mask(attention_mask, embedding_output)
 
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask,
-            head_mask=head_mask,
             position_ids=position_ids,
             **kwargs,
         )
@@ -522,8 +503,6 @@ def _update_full_mask(
             if "flash" in self.config._attn_implementation:
                 attention_mask = attention_mask if 0 in attention_mask else None
             elif self.config._attn_implementation == "sdpa":
-                # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-                # the manual implementation that requires a 4D causal mask in all cases.
                 # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
                 attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
             elif self.config._attn_implementation == "flex_attention":
@@ -572,7 +551,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         sentence_order_label: Optional[torch.LongTensor] = None,
@@ -609,7 +587,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -710,7 +687,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -755,7 +731,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -804,7 +779,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -820,7 +794,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -888,7 +861,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -902,7 +874,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -946,7 +917,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -957,7 +927,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1016,7 +985,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -1067,7 +1035,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
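The ALBERT changes keep the layer-group arithmetic while dropping the per-group `head_mask` slice; the surviving index expression can be checked directly (assumed config values):

```python
num_hidden_layers, num_hidden_groups = 12, 2
group_idx = [int(i / (num_hidden_layers / num_hidden_groups)) for i in range(num_hidden_layers)]
print(group_idx)  # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]: layers share weights within a group
```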
diff --git a/src/transformers/models/align/modeling_align.py b/src/transformers/models/align/modeling_align.py
index 839856b92119..f55c84b47176 100644
--- a/src/transformers/models/align/modeling_align.py
+++ b/src/transformers/models/align/modeling_align.py
@@ -576,7 +576,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: float,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
@@ -587,9 +586,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
 
-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
     return attn_output, attn_weights
@@ -621,7 +617,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
@@ -644,7 +639,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.attention_dropout,
             scaling=self.scaling,
-            head_mask=head_mask,
             **kwargs,
         )
@@ -697,14 +691,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_outputs = self.self(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -757,14 +749,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -796,7 +786,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         output_hidden_states: Optional[bool] = False,
         return_dict: Optional[bool] = True,
@@ -809,12 +798,9 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)
 
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states=hidden_states,
                 attention_mask=attention_mask,
-                head_mask=layer_head_mask,
                 output_attentions=output_attentions,
                 **kwargs,
             )
@@ -914,7 +900,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -970,13 +955,6 @@ def forward(
         # ourselves in which case we just need to make it broadcastable to all heads.
         extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
 
-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         embedding_output = self.embeddings(
             input_ids=input_ids,
             position_ids=position_ids,
@@ -986,7 +964,6 @@ def forward(
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=extended_attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             return_dict=True,
@@ -1130,7 +1107,6 @@ def get_text_features(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
     ) -> torch.FloatTensor:
         r"""
@@ -1156,7 +1132,6 @@ def get_text_features(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
         )
         last_hidden_state = text_outputs[0][:, 0, :]
@@ -1202,7 +1177,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         return_loss: Optional[bool] = None,
         output_attentions: Optional[bool] = None,
@@ -1253,7 +1227,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
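After this change `get_text_features` no longer accepts `head_mask`; the call otherwise stays as before (usage sketch against the public checkpoint):

```python
import torch
from transformers import AlignModel, AlignProcessor

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")

inputs = processor(text=["a photo of a cat"], return_tensors="pt")
with torch.no_grad():
    text_features = model.get_text_features(**inputs)
print(text_features.shape)  # [batch, hidden]: CLS-token projection
```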
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -333,13 +328,11 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: self_outputs = self.self( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, ) attention_output = self.output(self_outputs[0], hidden_states) @@ -392,14 +385,12 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, **kwargs, ) -> tuple[torch.Tensor]: self_attention_outputs = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, **kwargs, ) @@ -432,7 +423,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, @@ -445,12 +435,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=layer_head_mask, output_attentions=output_attentions, **kwargs, ) @@ -1028,7 +1015,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1068,9 +1054,6 @@ def forward( # ourselves in which case we just need to make it broadcastable to all heads. 
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -1080,7 +1063,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=True, @@ -1123,7 +1105,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -1154,7 +1135,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py b/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py index c445fbb0e36d..bc9415b70e1f 100644 --- a/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py +++ b/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py @@ -149,9 +149,7 @@ def __init__(self, config: ASTConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -168,7 +166,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -224,8 +222,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -274,9 +272,9 @@ def __init__(self, config: ASTConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = 
self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -299,10 +297,9 @@ def __init__(self, config: ASTConfig): self.layer = nn.ModuleList([ASTLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -371,7 +368,6 @@ class PreTrainedModel def forward( self, input_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: r""" @@ -388,16 +384,9 @@ def forward( if input_values is None: raise ValueError("You have to specify input_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(input_values) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) @@ -442,7 +431,6 @@ def __init__(self, config: ASTConfig) -> None: def forward( self, input_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> SequenceClassifierOutput: @@ -459,9 +447,7 @@ def forward( config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
""" - outputs: BaseModelOutputWithPooling = self.audio_spectrogram_transformer( - input_values, head_mask=head_mask, **kwargs - ) + outputs: BaseModelOutputWithPooling = self.audio_spectrogram_transformer(input_values, **kwargs) pooled_output = outputs.pooler_output logits = self.classifier(pooled_output) diff --git a/src/transformers/models/autoformer/modeling_autoformer.py b/src/transformers/models/autoformer/modeling_autoformer.py index fc1f57aec0e7..9e583b0b8187 100644 --- a/src/transformers/models/autoformer/modeling_autoformer.py +++ b/src/transformers/models/autoformer/modeling_autoformer.py @@ -453,7 +453,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: @@ -541,15 +540,6 @@ def forward( attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, channel) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, channel) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. @@ -652,7 +642,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, output_attentions: Optional[bool] = False, ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: """ @@ -660,8 +649,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
@@ -670,7 +657,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -755,8 +741,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -771,10 +755,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -790,7 +770,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -809,7 +788,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -876,8 +854,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -922,7 +898,6 @@ def __init__(self, config: AutoformerConfig): def forward( self, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -937,12 +912,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -976,14 +945,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -1000,7 +961,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -1056,8 +1016,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -1088,19 +1046,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1176,15 +1121,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1199,8 +1135,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1428,9 +1362,6 @@ def forward( future_values: Optional[torch.Tensor] = None, future_time_features: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, output_hidden_states: Optional[bool] = None, @@ -1499,11 +1430,6 @@ def forward( Transformer requires to provide additional features. The Autoformer only learns additional embeddings for `static_categorical_features`. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of @@ -1563,7 +1489,6 @@ def forward( ) encoder_outputs = self.encoder( inputs_embeds=enc_input, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1612,8 +1537,6 @@ def forward( inputs_embeds=decoder_input, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1696,9 +1619,6 @@ def forward( future_time_features: Optional[torch.Tensor] = None, future_observed_mask: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, output_hidden_states: Optional[bool] = None, @@ -1774,11 +1694,6 @@ def forward( - 0 for values that are **missing** (i.e. NaNs that were replaced by zeros). This mask is used to filter out missing values for the final loss calculation. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of @@ -1882,9 +1797,6 @@ def forward( future_values=future_values, future_time_features=future_time_features, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/bark/modeling_bark.py b/src/transformers/models/bark/modeling_bark.py index d895f95b9fc9..e4e4cb5cdea2 100644 --- a/src/transformers/models/bark/modeling_bark.py +++ b/src/transformers/models/bark/modeling_bark.py @@ -118,7 +118,7 @@ def _merge_heads(self, tensor, num_heads, attn_head_size): return tensor - def _attn(self, query, key, value, attention_mask=None, head_mask=None): + def _attn(self, query, key, value, attention_mask=None): # unlike GPTNeo's SelfAttention, divide by the square root of the dimension of the query and the key attn_weights = torch.matmul(query, key.transpose(-1, -2)) * (1.0 / math.sqrt(self.head_dim)) @@ -139,10 +139,6 @@ def _attn(self, query, key, value, attention_mask=None, head_mask=None): attn_weights = attn_weights.to(value.dtype) attn_weights = self.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - # (batch, num_heads, seq_len, seq_len) x (batch, num_heads, seq_len, attn_head_size) # -> (batch, num_heads, seq_len, attn_head_size) attn_output = torch.matmul(attn_weights, value) @@ -154,7 +150,6 @@ def forward( hidden_states, attention_mask=None, past_key_values=None, - head_mask=None, use_cache=False, output_attentions=False, cache_position=None, @@ -169,7 +164,7 @@ def forward( if past_key_values is not None: key, value = past_key_values.update(key, value, self.layer_idx, {"cache_position": cache_position}) - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) + attn_output, attn_weights = self._attn(query, key, value, attention_mask) attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim) attn_output = self.out_proj(attn_output) @@ -217,7 +212,6 @@ def forward( hidden_states, attention_mask=None, past_key_values=None, - head_mask=None, use_cache=False, output_attentions=False, cache_position=None, @@ -298,7 +292,6 @@ def forward( hidden_states, past_key_values=None, attention_mask=None, - head_mask=None, use_cache=False, output_attentions=False, cache_position=None, @@ -309,7 +302,6 @@ def forward( intermediary_hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -438,7 +430,6 @@ def forward( past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.LongTensor] = None, input_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -525,12 +516,6 @@ def forward( # from_seq_length is 1 to easily broadcast attention_mask = _prepare_4d_attention_mask(attention_mask, input_embeds.dtype, tgt_len=1) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - 
# attention_probs has shape bsz x num_heads x N x N - # head_mask has shape num_layers x batch x num_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - hidden_states = self.drop(input_embeds + position_embeds) output_shape = input_shape + (hidden_states.size(-1),) @@ -545,7 +530,6 @@ def forward( hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask[i], use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -1071,7 +1055,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.LongTensor] = None, input_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1143,8 +1126,6 @@ def forward( # from_seq_length is 1 to easily broadcast attention_mask = _prepare_4d_attention_mask(attention_mask, input_embeds.dtype, tgt_len=1) - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - hidden_states = self.drop(input_embeds + position_embeds) output_shape = input_shape + (hidden_states.size(-1),) @@ -1158,7 +1139,6 @@ def forward( outputs = block( hidden_states, attention_mask=attention_mask, - head_mask=head_mask[i], output_attentions=output_attentions, ) diff --git a/src/transformers/models/bart/modeling_bart.py b/src/transformers/models/bart/modeling_bart.py index 97e736520fe6..720e562b3817 100755 --- a/src/transformers/models/bart/modeling_bart.py +++ b/src/transformers/models/bart/modeling_bart.py @@ -124,7 +124,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -136,9 +135,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -195,7 +191,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -264,7 +259,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -298,7 +292,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, output_attentions: Optional[bool] = False, ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: """ @@ -306,8 +299,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. 
output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -316,7 +307,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -384,8 +374,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -400,10 +388,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -419,7 +403,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -436,7 +419,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -533,8 +515,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -690,8 +670,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -773,7 +751,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -796,12 +773,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -850,14 +821,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -874,7 +837,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -935,8 +897,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -973,19 +933,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). 
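Since the BART signatures above no longer take `head_mask`, `decoder_head_mask`, or `cross_attn_head_mask`, downstream callers simply drop those keywords. A caller-side sketch (the checkpoint name is illustrative):

```py
# Caller-side view of the signature change (checkpoint name is illustrative).
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello world", return_tensors="pt")

# previously: model(**inputs, head_mask=..., decoder_head_mask=..., cross_attn_head_mask=...)
# now the head-mask keywords are gone from the forward signature:
outputs = model(**inputs)  # decoder_input_ids are created from input_ids automatically
```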
@@ -1101,15 +1048,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1124,8 +1062,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1204,9 +1140,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1239,12 +1172,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_bart._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
""" # different to other models, Bart automatically creates decoder_input_ids from # input_ids if no decoder_input_ids are provided @@ -1271,7 +1198,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1291,8 +1217,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1370,9 +1294,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1406,12 +1327,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_bart._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1475,9 +1390,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1546,9 +1458,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1581,12 +1490,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_bart._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). @@ -1605,9 +1508,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1691,9 +1591,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1727,12 +1624,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_bart._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict if start_positions is not None and end_positions is not None: @@ -1743,9 +1634,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1853,8 +1741,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1865,11 +1751,6 @@ def forward( cache_position: Optional[torch.LongTensor] = None, ) -> Union[tuple, CausalLMOutputWithCrossAttentions]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
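The BEiT hunks just below narrow the SDPA fallback warning: with `head_mask` gone, requesting attention maps is the one remaining reason to force the eager path. A usage sketch under that assumption (checkpoint name is illustrative):

```py
# Requesting attention maps still requires the eager implementation
# (see the narrowed BeitSdpaSelfAttention warning below). Checkpoint is illustrative.
import torch
from transformers import BeitModel

model = BeitModel.from_pretrained("microsoft/beit-base-patch16-224", attn_implementation="eager")
pixel_values = torch.randn(1, 3, 224, 224)
outputs = model(pixel_values, output_attentions=True)
print(len(outputs.attentions), outputs.attentions[0].shape)  # one map per layer
```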
Tokens with indices set to `-100` are ignored @@ -1904,8 +1785,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/beit/modeling_beit.py b/src/transformers/models/beit/modeling_beit.py index 9b6e7f1cd1a6..0728600795d2 100755 --- a/src/transformers/models/beit/modeling_beit.py +++ b/src/transformers/models/beit/modeling_beit.py @@ -257,7 +257,6 @@ def __init__(self, config: BeitConfig, window_size: Optional[tuple] = None) -> N def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, relative_position_bias: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, @@ -304,10 +303,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -323,22 +318,20 @@ class BeitSdpaSelfAttention(BeitSelfAttention): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, relative_position_bias: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, resolution: Optional[tuple[int]] = None, ) -> Union[tuple[torch.Tensor], tuple[torch.Tensor, torch.Tensor]]: - if output_attentions or head_mask is not None: + if output_attentions: logger.warning_once( "`BeitSdpaSelfAttention` is used but `torch.nn.functional.scaled_dot_product_attention` does not " - "support `output_attentions=True` or `head_mask`. Falling back to the manual attention implementation, " + "support `output_attentions=True`. Falling back to the manual attention implementation, " "but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. " 'This warning can be removed using the argument `attn_implementation="eager"` when loading the model.' 
) return super().forward( hidden_states=hidden_states, - head_mask=head_mask, output_attentions=output_attentions, relative_position_bias=relative_position_bias, interpolate_pos_encoding=interpolate_pos_encoding, @@ -445,14 +438,13 @@ def prune_heads(self, heads): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, relative_position_bias: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, resolution: Optional[tuple[int]] = None, ) -> Union[tuple[torch.Tensor], tuple[torch.Tensor, torch.Tensor]]: self_outputs = self.attention( - hidden_states, head_mask, output_attentions, relative_position_bias, interpolate_pos_encoding, resolution + hidden_states, output_attentions, relative_position_bias, interpolate_pos_encoding, resolution ) attention_output = self.output(self_outputs[0], hidden_states) @@ -514,7 +506,6 @@ def __init__(self, config: BeitConfig, window_size: Optional[tuple] = None, drop def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, relative_position_bias: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, @@ -522,7 +513,6 @@ def forward( ) -> Union[tuple[torch.Tensor], tuple[torch.Tensor, torch.Tensor]]: self_attention_outputs = self.attention( self.layernorm_before(hidden_states), # in BEiT, layernorm is applied before self-attention - head_mask, output_attentions=output_attentions, relative_position_bias=relative_position_bias, interpolate_pos_encoding=interpolate_pos_encoding, @@ -663,7 +653,6 @@ def __init__(self, config: BeitConfig, window_size: Optional[tuple] = None) -> N def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, interpolate_pos_encoding: bool = False, @@ -686,11 +675,8 @@ def forward( else: relative_position_bias = None - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, - head_mask=layer_head_mask, output_attentions=output_attentions, relative_position_bias=relative_position_bias, interpolate_pos_encoding=interpolate_pos_encoding, @@ -788,7 +774,6 @@ def forward( self, pixel_values: torch.Tensor, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, @@ -804,19 +789,11 @@ def forward( ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output, _ = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos) resolution = pixel_values.shape[2:] encoder_outputs = self.encoder( embedding_output, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, resolution=resolution, @@ -888,7 +865,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: 
Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -932,7 +908,6 @@ def forward( outputs = self.beit( pixel_values, bool_masked_pos=bool_masked_pos, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -983,7 +958,6 @@ def __init__(self, config: BeitConfig) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -999,7 +973,6 @@ def forward( return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.beit( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1318,7 +1291,6 @@ def compute_loss(self, logits, auxiliary_logits, labels): def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1358,7 +1330,6 @@ def forward( outputs = self.beit( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=True, # we need the intermediate hidden states interpolate_pos_encoding=interpolate_pos_encoding, diff --git a/src/transformers/models/bert/modeling_bert.py b/src/transformers/models/bert/modeling_bert.py index 384e34351ea7..1689da04aa52 100755 --- a/src/transformers/models/bert/modeling_bert.py +++ b/src/transformers/models/bert/modeling_bert.py @@ -126,7 +126,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -167,9 +166,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -211,7 +207,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -255,7 +250,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -299,7 +293,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -347,7 +340,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is 
not None, **kwargs, @@ -405,7 +397,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -417,7 +408,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -480,7 +470,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -490,7 +479,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -507,7 +495,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -536,7 +523,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -545,12 +531,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -764,7 +747,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -821,17 +803,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -912,8 +886,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in 
all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -938,8 +910,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -995,7 +965,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, next_sentence_label: Optional[torch.Tensor] = None, @@ -1034,7 +1003,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1094,7 +1062,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1118,7 +1085,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1180,7 +1146,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1198,7 +1163,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1269,7 +1233,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1314,7 +1277,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1367,7 +1329,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1383,7 +1344,6 @@ def forward( 
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1448,7 +1408,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -1500,7 +1459,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1549,7 +1507,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -1563,7 +1520,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1607,7 +1563,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         start_positions: Optional[torch.Tensor] = None,
         end_positions: Optional[torch.Tensor] = None,
@@ -1618,7 +1573,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
diff --git a/src/transformers/models/bert_generation/modeling_bert_generation.py b/src/transformers/models/bert_generation/modeling_bert_generation.py
index 8966adc1eb26..12aee8a014b3 100755
--- a/src/transformers/models/bert_generation/modeling_bert_generation.py
+++ b/src/transformers/models/bert_generation/modeling_bert_generation.py
@@ -70,7 +70,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     use_cache: Optional[bool] = None,
     **kwargs: Unpack[TransformersKwargs],
 ):
@@ -111,9 +110,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask
-
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -156,7 +152,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[Cache] = None,
         cache_position: Optional[torch.Tensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -200,7 +195,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.dropout.p,
             scaling=self.scaling,
-            head_mask=head_mask,
             # only for relevant for non-absolute positional embeddings
             use_cache=past_key_value is not None,
             **kwargs,
@@ -245,7 +239,6 @@ def forward(
         hidden_states: torch.Tensor,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[EncoderDecoderCache] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
@@ -293,7 +286,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.dropout.p,
             scaling=self.scaling,
-            head_mask=head_mask,
             # only for relevant for non-absolute positional embeddings
             use_cache=past_key_value is not None,
             **kwargs,
@@ -338,7 +330,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[Cache] = None,
@@ -350,7 +341,6 @@ def forward(
             hidden_states,
             encoder_hidden_states=encoder_hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             past_key_value=past_key_value,
             cache_position=cache_position,
             **kwargs,
@@ -416,7 +406,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[Cache] = None,
@@ -426,7 +415,6 @@ def forward(
         self_attention_output, _ = self.attention(
             hidden_states,
             attention_mask,
-            head_mask,
             past_key_value=past_key_value,
             cache_position=cache_position,
             **kwargs,
@@ -443,7 +431,6 @@ def forward(
             cross_attention_output, _ = self.crossattention(
                 self_attention_output,
                 None,  # attention_mask
-                head_mask,
                 encoder_hidden_states,
                 encoder_attention_mask,
                 past_key_value=past_key_value,
@@ -474,7 +461,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
@@ -483,12 +469,9 @@ def forward(
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
         for i, layer_module in enumerate(self.layer):
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             hidden_states = layer_module(
                 hidden_states,
                 attention_mask,
-                layer_head_mask,
                 encoder_hidden_states,  # as a positional argument for gradient checkpointing
                 encoder_attention_mask=encoder_attention_mask,
                 past_key_value=past_key_values,
@@ -624,7 +607,6 @@ def forward(
         input_ids: Optional[torch.Tensor] = None,
         attention_mask: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
@@ -680,17 +662,9 @@ def forward(
             past_key_values=past_key_values,
         )

-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
             past_key_values=past_key_values,
@@ -770,8 +744,6 @@ def _update_full_mask(
         if "flash" in self.config._attn_implementation:
             attention_mask = attention_mask if 0 in attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
         elif self.config._attn_implementation == "flex_attention":
@@ -796,8 +768,6 @@ def _update_cross_attn_mask(
         if "flash" in self.config._attn_implementation:
             encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
                 encoder_attention_mask,
@@ -874,7 +844,6 @@ def forward(
         input_ids: Optional[torch.Tensor] = None,
         attention_mask: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
@@ -915,7 +884,6 @@ def forward(
             input_ids,
             attention_mask=attention_mask,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
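Across the BERT-family files above, the eager attention path loses its per-head modulation hook. For readers tracking the behavior being retired, here is a minimal, self-contained sketch of what the deleted branch did; the shapes follow the comments in the removed code, and all names are illustrative rather than part of any remaining API:

```py
import torch

# attn_weights is [bsz, n_heads, q_len, k_len]; head_mask is [n_heads]
# with 1.0 = keep the head and 0.0 = zero it out.
bsz, n_heads, q_len, k_len = 2, 12, 16, 16
attn_weights = torch.softmax(torch.randn(bsz, n_heads, q_len, k_len), dim=-1)

head_mask = torch.ones(n_heads)
head_mask[3] = 0.0  # silence head 3

# The deleted eager-path branch, in spirit: broadcast over batch and
# sequence, zeroing the masked heads' attention probabilities.
attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
```

With the branch gone, an all-ones mask and no mask are indistinguishable, which is why the argument can be dropped end to end.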
diff --git a/src/transformers/models/big_bird/modeling_big_bird.py b/src/transformers/models/big_bird/modeling_big_bird.py
index 6658235c2e03..f774c61c5964 100755
--- a/src/transformers/models/big_bird/modeling_big_bird.py
+++ b/src/transformers/models/big_bird/modeling_big_bird.py
@@ -158,7 +158,6 @@ def forward(
         self,
         hidden_states,
         attention_mask=None,
-        head_mask=None,
         encoder_hidden_states=None,
         encoder_attention_mask=None,
         past_key_values=None,
@@ -214,10 +213,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -1180,7 +1175,6 @@ def forward(
         self,
         hidden_states,
         attention_mask=None,
-        head_mask=None,
         encoder_hidden_states=None,
         encoder_attention_mask=None,
         past_key_values=None,
@@ -1204,7 +1198,6 @@ def forward(
             self_outputs = self.self(
                 hidden_states,
                 attention_mask=attention_mask,
-                head_mask=head_mask,
                 encoder_hidden_states=encoder_hidden_states,
                 encoder_attention_mask=encoder_attention_mask,
                 past_key_values=past_key_values,
@@ -1290,7 +1283,6 @@ def forward(
         self,
         hidden_states,
         attention_mask=None,
-        head_mask=None,
         encoder_hidden_states=None,
         encoder_attention_mask=None,
         band_mask=None,
@@ -1305,7 +1297,6 @@ def forward(
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
             past_key_values=past_key_values,
@@ -1330,7 +1321,6 @@ def forward(
             cross_attention_outputs = self.crossattention(
                 attention_output,
                 attention_mask=encoder_attention_mask,
-                head_mask=head_mask,
                 encoder_hidden_states=encoder_hidden_states,
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
@@ -1378,7 +1368,6 @@ def forward(
         self,
         hidden_states,
         attention_mask=None,
-        head_mask=None,
         encoder_hidden_states=None,
         encoder_attention_mask=None,
         past_key_values=None,
@@ -1417,12 +1406,9 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)

-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states,
                 attention_mask,
-                layer_head_mask,
                 encoder_hidden_states,
                 encoder_attention_mask,
                 band_mask,
@@ -1680,7 +1666,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -1804,13 +1789,6 @@ def forward(
         else:
             encoder_extended_attention_mask = None

-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         embedding_output = self.embeddings(
             input_ids=input_ids,
             position_ids=position_ids,
@@ -1822,7 +1800,6 @@ def forward(
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=extended_attention_mask,
-            head_mask=head_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_extended_attention_mask,
             past_key_values=past_key_values,
@@ -1965,7 +1942,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.FloatTensor] = None,
         next_sentence_label: Optional[torch.LongTensor] = None,
@@ -2008,7 +1984,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -2073,7 +2048,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -2136,7 +2110,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
@@ -2214,7 +2187,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -2240,7 +2212,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
@@ -2327,7 +2298,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -2382,7 +2352,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -2446,7 +2415,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -2501,7 +2469,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -2554,7 +2521,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -2572,7 +2538,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -2646,7 +2611,6 @@ def forward(
         question_lengths: Optional[torch.LongTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -2718,7 +2682,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
diff --git a/src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py b/src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py
index e36e4b06dbef..9ad5b5772b7d 100755
--- a/src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py
+++ b/src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py
@@ -135,7 +135,6 @@ def forward(
         self,
         hidden_states,
         attention_mask=None,
-        head_mask=None,
         encoder_hidden_states=None,
         encoder_attention_mask=None,
         past_key_values=None,
@@ -191,10 +190,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -1144,7 +1139,6 @@ def forward(
         self,
         hidden_states,
         attention_mask=None,
-        head_mask=None,
         output_attentions=False,
         band_mask=None,
         from_mask=None,
@@ -1152,14 +1146,10 @@ def forward(
         from_blocked_mask=None,
         to_blocked_mask=None,
     ):
-        # Expand dims to enable multiplication in the self-attention module
-        head_mask = head_mask.reshape(1, -1, 1, 1) if head_mask is not None else None
-
         if self.attention_type == "original_full":
             self_outputs = self.self(
                 hidden_states,
                 attention_mask,
-                head_mask,
                 output_attentions=output_attentions,
             )
         else:
@@ -1181,7 +1171,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     if scaling is None:
@@ -1193,9 +1182,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -1253,7 +1239,6 @@ def forward(
         key_value_states: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         cache_position: Optional[torch.Tensor] = None,
         # TODO: we need a refactor so that the different attention modules can get their specific kwargs
@@ -1322,7 +1307,6 @@ def forward(
             dropout=0.0 if not self.training else self.dropout,
             scaling=self.scaling,
             output_attentions=output_attentions,
-            head_mask=layer_head_mask,
             **kwargs,
         )
@@ -1350,7 +1334,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: torch.Tensor,
-        layer_head_mask: torch.Tensor,
         band_mask=None,
         from_mask=None,
         to_mask=None,
@@ -1373,7 +1356,6 @@ def forward(
         self_attention_outputs = self.self_attn(
             hidden_states=hidden_states,
             attention_mask=attention_mask,
-            head_mask=layer_head_mask,
             output_attentions=output_attentions,
             band_mask=band_mask,
             from_mask=from_mask,
@@ -1458,8 +1440,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_layer_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
         use_cache: Optional[bool] = True,
@@ -1474,10 +1454,6 @@ def forward(
                 cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
             encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
-            cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of
-                size `(decoder_attention_heads,)`.
             past_key_values (`Cache`): cached past key and value projection states
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
@@ -1494,7 +1470,6 @@ def forward(
             hidden_states=hidden_states,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
             cache_position=cache_position,
         )
@@ -1511,7 +1486,6 @@ def forward(
             hidden_states=hidden_states,
             key_value_states=encoder_hidden_states,
             attention_mask=encoder_attention_mask,
-            layer_head_mask=cross_attn_layer_head_mask,
             past_key_values=past_key_values,
             output_attentions=output_attentions,
         )
@@ -1741,8 +1715,6 @@ def _update_cross_attn_mask(
         if "flash" in self.config._attn_implementation:
             encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
                 encoder_attention_mask,
@@ -1811,7 +1783,6 @@ def forward(
         self,
         input_ids: Optional[torch.Tensor] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -1922,14 +1893,6 @@ def forward(
         encoder_states = () if output_hidden_states else None
         all_attentions = () if output_attentions else None

-        # check if head_mask has a correct number of layers specified if desired
-        if head_mask is not None:
-            if head_mask.size()[0] != len(self.layers):
-                raise ValueError(
-                    f"The head_mask should be specified for {len(self.layers)} layers, but it is for"
-                    f" {head_mask.size()[0]}."
-                )
-
         for idx, encoder_layer in enumerate(self.layers):
             if output_hidden_states:
                 encoder_states = encoder_states + (hidden_states,)
@@ -1946,7 +1909,6 @@ def forward(
             layer_outputs = encoder_layer(
                 hidden_states,
                 attention_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
                 band_mask=band_mask,
                 from_mask=from_mask,
                 to_mask=to_mask,
@@ -2094,8 +2056,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
        past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         use_cache: Optional[bool] = None,
@@ -2132,19 +2092,6 @@ def forward(
                 - 0 for tokens that are **masked**.

                 [What are attention masks?](../glossary#attention-mask)
-            head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-                Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
-
-                - 1 indicates the head is **not masked**,
-                - 0 indicates the head is **masked**.
-
-            cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-                Mask to nullify selected heads of the cross-attention modules in decoder to avoid performing
-                cross-attention on hidden heads. Mask values selected in `[0, 1]`:
-
-                - 1 indicates the head is **not masked**,
-                - 0 indicates the head is **masked**.
-
             past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
                 It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache).
@@ -2257,14 +2204,6 @@ def forward(
         all_self_attns = () if output_attentions else None
         all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None

-        # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
-        for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
-            if attn_mask is not None:
-                if attn_mask.size()[0] != len(self.layers):
-                    raise ValueError(
-                        f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for"
-                        f" {head_mask.size()[0]}."
-                    )
         for idx, decoder_layer in enumerate(self.layers):
             # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description)
             if output_hidden_states:
@@ -2279,8 +2218,6 @@ def forward(
                 attention_mask,
                 encoder_hidden_states,  # as a positional argument for gradient checkpointing
                 encoder_attention_mask=encoder_attention_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
-                cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None),
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
                 use_cache=use_cache,
@@ -2357,9 +2294,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[list[torch.FloatTensor]] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
@@ -2381,11 +2315,6 @@ def forward(
             If you want to change padding behavior, you should read
             [`modeling_bigbird_pegasus._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the
             paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy.
-        decoder_head_mask (`torch.Tensor` of shape `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         """
         # different to other models, BigBirdPegasus automatically creates decoder_input_ids from
         # input_ids if no decoder_input_ids are provided
@@ -2412,7 +2341,6 @@ def forward(
             encoder_outputs = self.encoder(
                 input_ids=input_ids,
                 attention_mask=attention_mask,
-                head_mask=head_mask,
                 inputs_embeds=inputs_embeds,
                 output_attentions=output_attentions,
                 output_hidden_states=output_hidden_states,
@@ -2432,8 +2360,6 @@ def forward(
             attention_mask=decoder_attention_mask,
             encoder_hidden_states=encoder_outputs[0],
             encoder_attention_mask=attention_mask,
-            head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=decoder_inputs_embeds,
             use_cache=use_cache,
@@ -2513,9 +2439,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[list[torch.FloatTensor]] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
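The encoder and decoder loops deleted above shared one dispatch pattern: validate that the mask supplies one row per layer, then slice row `idx` into layer `idx`. A condensed, self-contained sketch of that removed plumbing follows; the `layers` list and sizes are stand-ins, not anything that remains in the library:

```py
import torch

num_layers, num_heads = 12, 16
layers = [object()] * num_layers               # stand-in for self.layers
head_mask = torch.ones(num_layers, num_heads)  # [num_layers, num_heads]

# Removed validation: one mask row per layer, or fail loudly.
if head_mask.size()[0] != len(layers):
    raise ValueError(
        f"The head_mask should be specified for {len(layers)} layers, but it is for {head_mask.size()[0]}."
    )

# Removed dispatch: each layer received only its own slice.
for idx, _layer in enumerate(layers):
    layer_head_mask = head_mask[idx] if head_mask is not None else None
```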
@@ -2538,11 +2461,6 @@ def forward(
             If you want to change padding behavior, you should read
             [`modeling_bigbird_pegasus._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the
             paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy.
-        decoder_head_mask (`torch.Tensor` of shape `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
@@ -2589,9 +2507,6 @@ def forward(
             decoder_input_ids=decoder_input_ids,
             encoder_outputs=encoder_outputs,
             decoder_attention_mask=decoder_attention_mask,
-            head_mask=head_mask,
-            decoder_head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             decoder_inputs_embeds=decoder_inputs_embeds,
@@ -2660,9 +2575,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[list[torch.FloatTensor]] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
@@ -2684,11 +2596,6 @@ def forward(
             If you want to change padding behavior, you should read
             [`modeling_bigbird_pegasus._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the
             paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy.
-        decoder_head_mask (`torch.Tensor` of shape `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
             Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
             config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
@@ -2707,9 +2614,6 @@ def forward(
             attention_mask=attention_mask,
             decoder_input_ids=decoder_input_ids,
             decoder_attention_mask=decoder_attention_mask,
-            head_mask=head_mask,
-            decoder_head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             encoder_outputs=encoder_outputs,
             inputs_embeds=inputs_embeds,
             decoder_inputs_embeds=decoder_inputs_embeds,
@@ -2793,9 +2697,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[list[torch.FloatTensor]] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -2818,11 +2719,6 @@ def forward(
             If you want to change padding behavior, you should read
             [`modeling_bigbird_pegasus._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the
             paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy.
-        decoder_head_mask (`torch.Tensor` of shape `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         """
         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
         if start_positions is not None and end_positions is not None:
@@ -2833,9 +2729,6 @@ def forward(
             attention_mask=attention_mask,
             decoder_input_ids=decoder_input_ids,
             decoder_attention_mask=decoder_attention_mask,
-            head_mask=head_mask,
-            decoder_head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             encoder_outputs=encoder_outputs,
             inputs_embeds=inputs_embeds,
             decoder_inputs_embeds=decoder_inputs_embeds,
@@ -2939,8 +2832,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -2951,11 +2842,6 @@ def forward(
         cache_position: Optional[torch.LongTensor] = None,
     ) -> Union[tuple, CausalLMOutputWithCrossAttentions]:
         r"""
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
@@ -2989,8 +2875,6 @@ def forward(
             attention_mask=attention_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
-            head_mask=head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
diff --git a/src/transformers/models/biogpt/modeling_biogpt.py b/src/transformers/models/biogpt/modeling_biogpt.py
index 348bf2707584..1ff6eddea256 100755
--- a/src/transformers/models/biogpt/modeling_biogpt.py
+++ b/src/transformers/models/biogpt/modeling_biogpt.py
@@ -101,7 +101,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     if scaling is None:
@@ -113,9 +112,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -172,7 +168,6 @@ def forward(
         key_value_states: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         cache_position: Optional[torch.Tensor] = None,
         # TODO: we need a refactor so that the different attention modules can get their specific kwargs
@@ -241,7 +236,6 @@ def forward(
             dropout=0.0 if not self.training else self.dropout,
             scaling=self.scaling,
             output_attentions=output_attentions,
-            head_mask=layer_head_mask,
             **kwargs,
         )
@@ -280,7 +274,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
         use_cache: Optional[bool] = True,
@@ -293,8 +286,6 @@ def forward(
             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
             past_key_values (`Cache`): cached past key and value projection states
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
@@ -315,7 +306,6 @@ def forward(
             hidden_states=hidden_states,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
             position_ids=position_ids,
             cache_position=cache_position,
@@ -515,7 +505,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         use_cache: Optional[bool] = None,
@@ -623,7 +612,6 @@ def forward(
             layer_outputs = decoder_layer(
                 hidden_states,
                 attention_mask=causal_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
                 use_cache=use_cache,
@@ -686,7 +674,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -709,7 +696,6 @@ def forward(
         outputs = self.biogpt(
             input_ids,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             past_key_values=past_key_values,
             use_cache=use_cache,
@@ -769,7 +755,6 @@ def forward(
         input_ids: Optional[torch.LongTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -792,7 +777,6 @@ def forward(
             input_ids,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             position_ids=position_ids,
@@ -861,7 +845,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -884,7 +867,6 @@ def forward(
             input_ids,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             position_ids=position_ids,
diff --git a/src/transformers/models/biogpt/modular_biogpt.py b/src/transformers/models/biogpt/modular_biogpt.py
index accc1bdc7559..ad04a4ef5b82 100644
--- a/src/transformers/models/biogpt/modular_biogpt.py
+++ b/src/transformers/models/biogpt/modular_biogpt.py
@@ -102,7 +102,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
         use_cache: Optional[bool] = True,
@@ -115,8 +114,6 @@ def forward(
             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
             past_key_values (`Cache`): cached past key and value projection states
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
@@ -137,7 +134,6 @@ def forward(
             hidden_states=hidden_states,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
             position_ids=position_ids,
             cache_position=cache_position,
@@ -337,7 +333,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         use_cache: Optional[bool] = None,
@@ -445,7 +440,6 @@ def forward(
             layer_outputs = decoder_layer(
                 hidden_states,
                 attention_mask=causal_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
                 use_cache=use_cache,
@@ -508,7 +502,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -531,7 +524,6 @@ def forward(
         outputs = self.biogpt(
             input_ids,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             past_key_values=past_key_values,
             use_cache=use_cache,
@@ -591,7 +583,6 @@ def forward(
         input_ids: Optional[torch.LongTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -614,7 +605,6 @@ def forward(
             input_ids,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             position_ids=position_ids,
@@ -683,7 +673,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -706,7 +695,6 @@ def forward(
             input_ids,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             position_ids=position_ids,
diff --git a/src/transformers/models/blenderbot/modeling_blenderbot.py b/src/transformers/models/blenderbot/modeling_blenderbot.py
index 5cd138fe3180..86522323005f 100755
--- a/src/transformers/models/blenderbot/modeling_blenderbot.py
+++ b/src/transformers/models/blenderbot/modeling_blenderbot.py
@@ -120,7 +120,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     if scaling is None:
@@ -132,9 +131,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -192,7 +188,6 @@ def forward(
         key_value_states: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         cache_position: Optional[torch.Tensor] = None,
         # TODO: we need a refactor so that the different attention modules can get their specific kwargs
@@ -261,7 +256,6 @@ def forward(
             dropout=0.0 if not self.training else self.dropout,
             scaling=self.scaling,
             output_attentions=output_attentions,
-            head_mask=layer_head_mask,
             **kwargs,
         )
@@ -295,7 +289,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: torch.Tensor,
-        layer_head_mask: torch.Tensor,
         output_attentions: bool = False,
     ) -> torch.Tensor:
         """
@@ -303,8 +296,6 @@ def forward(
             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                 returned tensors for more detail.
@@ -314,7 +305,6 @@ def forward(
         hidden_states, attn_weights = self.self_attn(
             hidden_states=hidden_states,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
         )
         hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
@@ -375,8 +365,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_layer_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
         use_cache: Optional[bool] = True,
@@ -391,10 +379,6 @@ def forward(
                 cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
             encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
-            cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of
-                size `(decoder_attention_heads,)`.
             past_key_values (`Cache`): cached past key and value projection states
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
@@ -411,7 +395,6 @@ def forward(
             hidden_states=hidden_states,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
             cache_position=cache_position,
         )
@@ -428,7 +411,6 @@ def forward(
             hidden_states=hidden_states,
             key_value_states=encoder_hidden_states,
             attention_mask=encoder_attention_mask,
-            layer_head_mask=cross_attn_layer_head_mask,
             past_key_values=past_key_values,
             output_attentions=output_attentions,
         )
@@ -498,8 +480,6 @@ def _update_full_mask(
         if "flash" in self.config._attn_implementation:
             attention_mask = attention_mask if 0 in attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
         elif self.config._attn_implementation == "flex_attention":
@@ -657,8 +637,6 @@ def _update_cross_attn_mask(
         if "flash" in self.config._attn_implementation:
             encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
                 encoder_attention_mask,
@@ -724,7 +702,6 @@ def forward(
         self,
         input_ids=None,
         attention_mask=None,
-        head_mask=None,
         inputs_embeds=None,
         output_attentions=None,
         output_hidden_states=None,
@@ -747,12 +724,6 @@ def forward(
                 - 0 for tokens that are **masked**.

                 [What are attention masks?](../glossary#attention-mask)
-            head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):
-                Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
-
-                - 1 indicates the head is **not masked**,
-                - 0 indicates the head is **masked**.
-
             inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
                 Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
                 This is useful if you want more control over how to convert `input_ids` indices into associated vectors
@@ -800,13 +771,6 @@ def forward(
         encoder_states = () if output_hidden_states else None
         all_attentions = () if output_attentions else None

-        # check if head_mask has a correct number of layers specified if desired
-        if head_mask is not None:
-            if head_mask.size()[0] != len(self.layers):
-                raise ValueError(
-                    f"The head_mask should be specified for {len(self.layers)} layers, but it is for"
-                    f" {head_mask.size()[0]}."
-                )
         for idx, encoder_layer in enumerate(self.layers):
             if output_hidden_states:
                 encoder_states = encoder_states + (hidden_states,)
@@ -823,7 +787,6 @@ def forward(
             layer_outputs = encoder_layer(
                 hidden_states,
                 attention_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
                 output_attentions=output_attentions,
             )
@@ -888,8 +851,6 @@ def forward(
         attention_mask=None,
         encoder_hidden_states=None,
         encoder_attention_mask=None,
-        head_mask=None,
-        cross_attn_head_mask=None,
        past_key_values=None,
        inputs_embeds=None,
        use_cache=None,
@@ -926,20 +887,6 @@ def forward(
                 - 0 for tokens that are **masked**.

                 [What are attention masks?](../glossary#attention-mask)
-            head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):
-                Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0,
-                1]`:
-
-                - 1 indicates the head is **not masked**,
-                - 0 indicates the head is **masked**.
-
-            cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-                Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing
-                cross-attention on hidden heads. Mask values selected in `[0, 1]`:
-
-                - 1 indicates the head is **not masked**,
-                - 0 indicates the head is **masked**.
-
             past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
                 It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache).
@@ -1054,14 +1001,6 @@ def forward(
         all_self_attns = () if output_attentions else None
         all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None

-        # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
-        for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
-            if attn_mask is not None:
-                if attn_mask.size()[0] != len(self.layers):
-                    raise ValueError(
-                        f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for"
-                        f" {head_mask.size()[0]}."
-                    )
         for idx, decoder_layer in enumerate(self.layers):
             # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description)
             if output_hidden_states:
@@ -1076,8 +1015,6 @@ def forward(
                 causal_mask,
                 encoder_hidden_states,  # as a positional argument for gradient checkpointing
                 encoder_attention_mask=encoder_attention_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
-                cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None),
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
                 use_cache=use_cache,
@@ -1160,9 +1097,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[Union[tuple, BaseModelOutput]] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
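The `_update_full_mask`/`_update_cross_attn_mask` hunks in this file only drop stale comments; the surviving SDPA branch still expands the 2D padding mask to 4D. Written out by hand, purely as an illustration of the `[bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]` comment (the real helper is `_prepare_4d_attention_mask_for_sdpa` from `transformers.modeling_attn_mask_utils`):

```py
import torch

attention_mask = torch.tensor([[1, 1, 1, 0]])  # [bsz, src_seq_len], 0 = padding
dtype = torch.float32

# Expand to [bsz, 1, 1, src_seq_len]; broadcasting supplies tgt_seq_len.
expanded = attention_mask[:, None, None, :].to(dtype)
# Invert: kept positions become 0.0, padded positions a large negative bias.
inverted = (1.0 - expanded) * torch.finfo(dtype).min
print(inverted.shape)  # torch.Size([1, 1, 1, 4])
```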
@@ -1188,12 +1122,6 @@ def forward(
         decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
             Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will
             also be used by default.
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.

         Example:
@@ -1222,7 +1150,6 @@ def forward(
             encoder_outputs = self.encoder(
                 input_ids=input_ids,
                 attention_mask=attention_mask,
-                head_mask=head_mask,
                 inputs_embeds=inputs_embeds,
                 output_attentions=output_attentions,
                 output_hidden_states=output_hidden_states,
@@ -1242,8 +1169,6 @@ def forward(
             attention_mask=decoder_attention_mask,
             encoder_hidden_states=encoder_outputs[0],
             encoder_attention_mask=attention_mask,
-            head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=decoder_inputs_embeds,
             use_cache=use_cache,
@@ -1329,9 +1254,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[Union[tuple, BaseModelOutput]] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
@@ -1358,12 +1280,6 @@ def forward(
         decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
             Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will
             also be used by default.
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
@@ -1418,9 +1334,6 @@ def forward(
             decoder_input_ids=decoder_input_ids,
             encoder_outputs=encoder_outputs,
             decoder_attention_mask=decoder_attention_mask,
-            head_mask=head_mask,
-            decoder_head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             decoder_inputs_embeds=decoder_inputs_embeds,
@@ -1503,8 +1416,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -1515,11 +1426,6 @@ def forward(
         cache_position: Optional[torch.LongTensor] = None,
     ) -> Union[tuple, CausalLMOutputWithCrossAttentions]:
         r"""
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
@@ -1554,8 +1460,6 @@ def forward(
             attention_mask=attention_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
-            head_mask=head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
diff --git a/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py b/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py
index 1c1cf379d032..0536146d8463 100755
--- a/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py
+++ b/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py
@@ -104,7 +104,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     if scaling is None:
@@ -116,9 +115,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -176,7 +172,6 @@ def forward(
         key_value_states: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         cache_position: Optional[torch.Tensor] = None,
         # TODO: we need a refactor so that the different attention modules can get their specific kwargs
@@ -245,7 +240,6 @@ def forward(
             dropout=0.0 if not self.training else self.dropout,
             scaling=self.scaling,
             output_attentions=output_attentions,
-            head_mask=layer_head_mask,
             **kwargs,
         )
@@ -280,7 +274,6 @@ def forward(
         self,
         hidden_states: torch.FloatTensor,
         attention_mask: torch.FloatTensor,
-        layer_head_mask: torch.FloatTensor,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         """
@@ -288,8 +281,6 @@ def forward(
             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                 returned tensors for more detail.
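One context line that survives in every `_update_full_mask` above reads `attention_mask = attention_mask if 0 in attention_mask else None`. On a tensor, `in` performs elementwise membership, so this drops the mask entirely for fully-unpadded batches before calling flash attention. A quick illustration of that check, with made-up shapes:

```py
import torch

attention_mask = torch.ones(2, 6, dtype=torch.long)
print(0 in attention_mask)   # False -> no padding anywhere, pass None to flash
attention_mask[1, -2:] = 0   # pad the last two positions of sample 1
print(0 in attention_mask)   # True  -> keep the mask
```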
@@ -298,7 +289,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -367,8 +357,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -383,10 +371,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -402,7 +386,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -419,7 +402,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -491,8 +473,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -650,8 +630,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -715,7 +693,6 @@ def forward( self, input_ids=None, attention_mask=None, - head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, @@ -738,12 +715,6 @@ def forward( - 0 for tokens that are **masked**. 
[What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -792,13 +763,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -815,7 +779,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -875,8 +838,6 @@ def forward( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, use_cache=None, @@ -913,19 +874,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1042,14 +990,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1064,8 +1004,6 @@ def forward( causal_mask, encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1132,9 +1070,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Union[tuple, BaseModelOutput]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1160,12 +1095,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Example: @@ -1194,7 +1123,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1214,8 +1142,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1288,9 +1214,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Union[tuple, BaseModelOutput]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1317,12 +1240,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored @@ -1377,9 +1294,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1462,8 +1376,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1474,11 +1386,6 @@ def forward( cache_position: Optional[torch.LongTensor] = None, ) -> Union[tuple, CausalLMOutputWithCrossAttentions]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1513,8 +1420,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/blip/modeling_blip.py b/src/transformers/models/blip/modeling_blip.py index f979518e9e11..aa87b37b069d 100644 --- a/src/transformers/models/blip/modeling_blip.py +++ b/src/transformers/models/blip/modeling_blip.py @@ -327,7 +327,6 @@ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor, torch.Tensor]: """Input shape: Batch x Time x Channel""" @@ -353,10 +352,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_states).permute(0, 2, 1, 3) new_context_layer_shape = context_layer.size()[:-2] + (self.embed_dim,) @@ -396,7 +391,6 @@ def __init__(self, config: BlipConfig): def forward( self, hidden_states: torch.Tensor, - attention_mask: torch.Tensor, **kwargs: Unpack[TransformersKwargs], ) -> torch.FloatTensor: residual = hidden_states @@ -404,7 +398,6 @@ def forward( hidden_states = self.layer_norm1(hidden_states) hidden_states, _ = self.self_attn( hidden_states=hidden_states, - head_mask=attention_mask, **kwargs, ) hidden_states = hidden_states + residual @@ -475,14 +468,12 @@ def __init__(self, config: BlipConfig): def forward( self, inputs_embeds, - attention_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple, BaseModelOutput]: hidden_states = inputs_embeds for encoder_layer in self.layers: hidden_states = encoder_layer( hidden_states, - attention_mask=attention_mask, **kwargs, ) diff --git a/src/transformers/models/blip/modeling_blip_text.py b/src/transformers/models/blip/modeling_blip_text.py index 99026a2b4fd0..18d4bdaab721 100644 --- a/src/transformers/models/blip/modeling_blip_text.py +++ b/src/transformers/models/blip/modeling_blip_text.py @@ -137,7 +137,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -227,10 +226,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs_dropped = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - context_layer = torch.matmul(attention_probs_dropped, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -286,7 +281,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, @@ -295,7 +289,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -357,7 +350,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -367,7 +359,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, past_key_values=past_key_values, cache_position=cache_position, @@ -379,7 +370,6 @@ def forward( cross_attention_outputs = self.crossattention( attention_output, attention_mask=encoder_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -410,7 +400,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -453,12 +442,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -689,7 +675,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, @@ -785,13 +770,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - if encoder_embeds is None: embedding_output = self.embeddings( input_ids=input_ids, @@ -805,7 +783,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, 
past_key_values=past_key_values, @@ -860,7 +837,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -905,7 +881,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, diff --git a/src/transformers/models/blip_2/modeling_blip_2.py b/src/transformers/models/blip_2/modeling_blip_2.py index b552df47f2fc..cb4e36b37308 100644 --- a/src/transformers/models/blip_2/modeling_blip_2.py +++ b/src/transformers/models/blip_2/modeling_blip_2.py @@ -303,7 +303,6 @@ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, **kwargs, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" @@ -369,7 +368,6 @@ def __init__(self, config: Blip2Config): def forward( self, hidden_states: torch.Tensor, - attention_mask: torch.Tensor, **kwargs: Unpack[TransformersKwargs], ) -> torch.FloatTensor: residual = hidden_states @@ -377,7 +375,6 @@ def forward( hidden_states = self.layer_norm1(hidden_states) hidden_states, _ = self.self_attn( hidden_states=hidden_states, - head_mask=attention_mask, **kwargs, ) hidden_states = hidden_states + residual @@ -460,14 +457,12 @@ def __init__(self, config: Blip2Config): def forward( self, inputs_embeds, - attention_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple, BaseModelOutput]: hidden_states = inputs_embeds for encoder_layer in self.layers: hidden_states = encoder_layer( hidden_states, - attention_mask=attention_mask, **kwargs, ) @@ -578,7 +573,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], @@ -636,10 +630,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
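Several of these models also drop the `self.get_head_mask(...)` preprocessing call together with its shape comments (BLIP text above, BLIP-2 below, and Bloom, BridgeTower, Bros, CamemBERT, and CANINE later in this diff). For reference, a standalone sketch of the expansion those deleted comments describe; the variable names and sizes here are illustrative, not the library's:

```py
import torch

num_layers, num_heads = 12, 8

# Per the deleted comments: 1.0 keeps a head; the input mask has shape
# [num_heads] or [num_layers, num_heads] and is expanded so each layer gets
# a slice that broadcasts against (batch, num_heads, seq_len, seq_len)
# attention probabilities.
head_mask = torch.ones(num_heads)
head_mask[3] = 0.0  # silence head 3 in every layer

if head_mask.dim() == 1:
    # [num_heads] -> [num_layers, 1, num_heads, 1, 1]
    head_mask = head_mask[None, None, :, None, None].expand(num_layers, 1, num_heads, 1, 1)
elif head_mask.dim() == 2:
    # [num_layers, num_heads] -> [num_layers, 1, num_heads, 1, 1]
    head_mask = head_mask[:, None, :, None, None]

# Inside the encoder loop each layer then received head_mask[i].
assert head_mask[0].shape == (1, num_heads, 1, 1)  # layer 0's broadcastable slice
```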
attention_probs_dropped = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - context_layer = torch.matmul(attention_probs_dropped, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -696,7 +686,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -704,7 +693,6 @@ def forward( attn_output, _ = self.attention( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -770,7 +758,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, query_length=0, @@ -779,7 +766,6 @@ def forward( attention_output = self.attention( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, **kwargs, ) @@ -792,7 +778,6 @@ def forward( query_attention_output = self.crossattention( hidden_states=query_attention_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -847,7 +832,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, query_length=0, @@ -855,12 +839,10 @@ def forward( ): for i in range(self.config.num_hidden_layers): layer_module = self.layer[i] - layer_head_mask = head_mask[i] if head_mask is not None else None hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, query_length=query_length, @@ -1014,7 +996,6 @@ def forward( query_embeds: torch.FloatTensor, query_length: Optional[int] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1070,17 +1051,9 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs: BaseModelOutput = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, query_length=query_length, diff --git a/src/transformers/models/bloom/modeling_bloom.py b/src/transformers/models/bloom/modeling_bloom.py index 3698c55db03a..8ba3d46496e4 100644 --- a/src/transformers/models/bloom/modeling_bloom.py +++ b/src/transformers/models/bloom/modeling_bloom.py @@ -263,7 +263,6 @@ def forward( alibi: torch.Tensor, attention_mask: torch.Tensor, layer_past: Optional[Cache] = None, - head_mask: 
Optional[torch.Tensor] = None, use_cache: bool = False, output_attentions: bool = False, cache_position: Optional[torch.LongTensor] = None, @@ -302,9 +301,6 @@ def forward( # [batch_size, num_heads, q_length, kv_length] attention_probs = self.attention_dropout(attention_probs) - if head_mask is not None: - attention_probs = attention_probs * head_mask - # change view [batch_size x num_heads, q_length, kv_length] attention_probs_reshaped = attention_probs.view(batch_size * self.num_heads, q_length, -1) @@ -382,7 +378,6 @@ def forward( alibi: torch.Tensor, attention_mask: torch.Tensor, layer_past: Optional[Cache] = None, - head_mask: Optional[torch.Tensor] = None, use_cache: bool = False, output_attentions: bool = False, cache_position: Optional[torch.LongTensor] = None, @@ -405,7 +400,6 @@ def forward( layer_past=layer_past, attention_mask=attention_mask, alibi=alibi, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -491,7 +485,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.Tensor, torch.Tensor], ...]]] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -551,11 +544,6 @@ def forward( if cache_position is None: cache_position = torch.arange(past_length, past_length + seq_length, device=inputs_embeds.device) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape batch_size x num_heads x N x N - # head_mask has shape n_layer x batch x num_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) hidden_states = self.word_embeddings_layernorm(inputs_embeds) all_self_attentions = () if output_attentions else None @@ -580,7 +568,6 @@ def forward( hidden_states, layer_past=past_key_values, attention_mask=causal_mask, - head_mask=head_mask[i], use_cache=use_cache, output_attentions=output_attentions, alibi=alibi, @@ -829,7 +816,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.Tensor, torch.Tensor], ...]]] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -874,7 +860,6 @@ def forward( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -941,7 +926,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.Tensor, torch.Tensor], ...]]] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -983,7 +967,6 @@ def forward( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1077,7 +1060,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.Tensor, torch.Tensor], ...]]] = None, attention_mask: Optional[torch.Tensor] = 
None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -1119,7 +1101,6 @@ def forward( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1169,7 +1150,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1196,7 +1176,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/bridgetower/modeling_bridgetower.py b/src/transformers/models/bridgetower/modeling_bridgetower.py index ff88a0a087d1..896ee175c7b1 100644 --- a/src/transformers/models/bridgetower/modeling_bridgetower.py +++ b/src/transformers/models/bridgetower/modeling_bridgetower.py @@ -415,7 +415,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -456,9 +455,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -501,7 +497,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -545,7 +540,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -590,7 +584,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -638,7 +631,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -683,7 +675,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -695,7 +686,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - 
head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -727,7 +717,6 @@ def forward( hidden_states, encoder_hidden_states, attention_mask=None, - head_mask=None, encoder_attention_mask=None, past_key_value=None, **kwargs: Unpack[TransformersKwargs], @@ -735,7 +724,6 @@ def forward( self_attention_output, self_attn_weights = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=None, past_key_value=None, **kwargs, ) @@ -744,7 +732,6 @@ def forward( cross_attention_output, cross_attn_weights = self.crossattention( attention_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_value, @@ -793,7 +780,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -804,7 +790,6 @@ def forward( self_attention_output, self_attn_weights = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -821,7 +806,6 @@ def forward( cross_attention_output, cross_attn_weights = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -857,7 +841,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -872,12 +855,9 @@ def forward( all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -1120,7 +1100,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1191,17 +1170,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -1288,8 +1259,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in 
attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -1314,8 +1283,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -1414,7 +1381,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, image_token_type_idx: Optional[int] = None, @@ -1724,7 +1690,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -1773,7 +1738,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, @@ -1826,7 +1790,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -1872,7 +1835,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, @@ -1940,7 +1902,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -1993,7 +1954,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, diff --git a/src/transformers/models/bros/modeling_bros.py b/src/transformers/models/bros/modeling_bros.py index 5f5dd05ff82d..517ff8b9b87a 100755 --- a/src/transformers/models/bros/modeling_bros.py +++ b/src/transformers/models/bros/modeling_bros.py @@ -209,7 +209,6 @@ def forward( hidden_states: torch.Tensor, bbox_pos_emb: torch.Tensor, 
attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[torch.Tensor] = False, @@ -270,10 +269,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -335,7 +330,6 @@ def forward( hidden_states: torch.Tensor, bbox_pos_emb: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, @@ -344,7 +338,6 @@ def forward( hidden_states=hidden_states, bbox_pos_emb=bbox_pos_emb, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, @@ -404,7 +397,6 @@ def forward( hidden_states: torch.Tensor, bbox_pos_emb: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -413,7 +405,6 @@ def forward( hidden_states, bbox_pos_emb=bbox_pos_emb, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -433,7 +424,6 @@ def forward( cross_attention_outputs = self.crossattention( attention_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, @@ -473,7 +463,6 @@ def forward( hidden_states: torch.Tensor, bbox_pos_emb: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -488,13 +477,10 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states=hidden_states, bbox_pos_emb=bbox_pos_emb, attention_mask=attention_mask, - head_mask=layer_head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, @@ -631,7 +617,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -709,13 +694,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x 
num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -733,7 +711,6 @@ def forward( embedding_output, bbox_pos_emb=bbox_position_embeddings, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, output_attentions=output_attentions, @@ -779,7 +756,6 @@ def forward( bbox_first_token_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -822,7 +798,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -900,7 +875,6 @@ def forward( bbox_first_token_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, initial_token_labels: Optional[torch.Tensor] = None, subsequent_token_labels: Optional[torch.Tensor] = None, @@ -948,7 +922,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1039,7 +1012,6 @@ def forward( bbox_first_token_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1081,7 +1053,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/camembert/modeling_camembert.py b/src/transformers/models/camembert/modeling_camembert.py index 670c7784b2a8..e5e361c9b7bb 100644 --- a/src/transformers/models/camembert/modeling_camembert.py +++ b/src/transformers/models/camembert/modeling_camembert.py @@ -65,7 +65,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -106,9 +105,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -150,7 +146,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, 
cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -194,7 +189,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -238,7 +232,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -286,7 +279,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -344,7 +336,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -356,7 +347,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -419,7 +409,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -429,7 +418,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -446,7 +434,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -645,7 +632,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -654,12 +640,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -744,7 +727,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -801,17 +783,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we 
keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -891,8 +865,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -916,8 +888,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -973,7 +943,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1000,7 +969,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1073,7 +1041,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1098,7 +1065,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1160,7 +1126,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -1212,7 +1177,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -1262,7 +1226,6 @@ 
def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1285,7 +1248,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1331,7 +1293,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1353,7 +1314,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1426,7 +1386,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1475,7 +1434,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, diff --git a/src/transformers/models/camembert/modular_camembert.py b/src/transformers/models/camembert/modular_camembert.py index dca85aae1d7e..2676a28fce83 100644 --- a/src/transformers/models/camembert/modular_camembert.py +++ b/src/transformers/models/camembert/modular_camembert.py @@ -66,7 +66,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -93,7 +92,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -133,7 +131,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -158,7 +155,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -215,7 +211,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: 
Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -267,7 +262,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -308,7 +302,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -331,7 +324,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -372,7 +364,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -394,7 +385,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -448,7 +438,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -497,7 +486,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, diff --git a/src/transformers/models/canine/modeling_canine.py b/src/transformers/models/canine/modeling_canine.py index 545919dc7b77..e4ed912dd6b8 100644 --- a/src/transformers/models/canine/modeling_canine.py +++ b/src/transformers/models/canine/modeling_canine.py @@ -310,7 +310,6 @@ def forward( from_tensor: torch.Tensor, to_tensor: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor]]: batch_size, seq_length, _ = from_tensor.shape @@ -373,10 +372,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
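Note that the `_update_full_mask` / `_update_cross_attn_mask` hunks above (BridgeTower and CamemBERT) only trim the now-stale comments about `head_mask` and SDPA; the `[bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]` expansion they sit next to is unchanged. A rough standalone sketch of that expansion for a simple padding mask (the real helper is `_prepare_4d_attention_mask_for_sdpa`; everything below is an illustrative approximation, not its implementation):

```py
import torch

# A padding mask: 1 = attend, 0 = padded. Shapes are illustrative.
attention_mask = torch.tensor([[1, 1, 1, 0]])  # [bsz=1, src_len=4]
dtype = torch.float32
tgt_len = 2

# [bsz, src_len] -> [bsz, 1, tgt_len, src_len], inverted into additive form:
# kept positions contribute 0.0, padded positions the most negative value.
expanded = attention_mask[:, None, None, :].expand(-1, 1, tgt_len, -1).to(dtype)
additive = (1.0 - expanded) * torch.finfo(dtype).min

print(additive.shape)     # torch.Size([1, 1, 2, 4])
print(additive[0, 0, 0])  # tensor([0., 0., 0., -3.4028e+38])
```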
diff --git a/src/transformers/models/canine/modeling_canine.py b/src/transformers/models/canine/modeling_canine.py
index 545919dc7b77..e4ed912dd6b8 100644
--- a/src/transformers/models/canine/modeling_canine.py
+++ b/src/transformers/models/canine/modeling_canine.py
@@ -310,7 +310,6 @@ def forward(
         from_tensor: torch.Tensor,
         to_tensor: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
         batch_size, seq_length, _ = from_tensor.shape
@@ -373,10 +372,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)

         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -476,11 +471,10 @@ def forward(
         self,
         hidden_states: tuple[torch.FloatTensor],
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         if not self.local:
-            self_outputs = self.self(hidden_states, hidden_states, attention_mask, head_mask, output_attentions)
+            self_outputs = self.self(hidden_states, hidden_states, attention_mask, output_attentions)
             attention_output = self_outputs[0]
         else:
             from_seq_length = to_seq_length = hidden_states.shape[1]
@@ -530,7 +524,7 @@ def forward(
                 to_tensor_chunk = torch.cat([cls_position, to_tensor_chunk], dim=1)

                 attention_outputs_chunk = self.self(
-                    from_tensor_chunk, to_tensor_chunk, attention_mask_chunk, head_mask, output_attentions
+                    from_tensor_chunk, to_tensor_chunk, attention_mask_chunk, output_attentions
                 )
                 attention_output_chunks.append(attention_outputs_chunk[0])
                 if output_attentions:
@@ -608,13 +602,11 @@ def forward(
         self,
         hidden_states: tuple[torch.FloatTensor],
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask,
-            head_mask,
             output_attentions=output_attentions,
         )
         attention_output = self_attention_outputs[0]
@@ -669,7 +661,6 @@ def forward(
         self,
         hidden_states: tuple[torch.FloatTensor],
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         output_hidden_states: Optional[bool] = False,
         return_dict: Optional[bool] = True,
@@ -681,9 +672,7 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)

-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
-            layer_outputs = layer_module(hidden_states, attention_mask, layer_head_mask, output_attentions)
+            layer_outputs = layer_module(hidden_states, attention_mask, output_attentions)

             hidden_states = layer_outputs[0]

             if output_attentions:
@@ -907,7 +896,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -949,13 +937,6 @@ def forward(
             molecule_attention_mask, (batch_size, molecule_attention_mask.shape[-1])
         )

-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         # `input_char_embeddings`: shape (batch_size, char_seq, char_dim)
         input_char_embeddings = self.char_embeddings(
             input_ids=input_ids,
@@ -999,7 +980,6 @@ def forward(
         encoder_outputs = self.encoder(
             init_molecule_encoding,
             attention_mask=extended_molecule_attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             return_dict=return_dict,
@@ -1085,7 +1065,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1105,7 +1084,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1170,7 +1148,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1225,7 +1202,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1275,7 +1251,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1323,7 +1298,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1371,7 +1345,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -1386,7 +1359,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
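# Illustrative sketch (toy shapes, not from the patch) of the per-layer
# pattern removed above: the encoder held one mask row per layer and handed
# `head_mask[i]` to layer i.
import torch

num_layers, num_heads = 3, 4
head_mask = torch.ones(num_layers, num_heads)
head_mask[1, 2] = 0.0  # ablate head 2 of layer 1 only

for i in range(num_layers):
    layer_head_mask = head_mask[i]  # shape [num_heads], or None when unused
    # ...each layer multiplied its attention probabilities by this slice.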
diff --git a/src/transformers/models/chinese_clip/modeling_chinese_clip.py b/src/transformers/models/chinese_clip/modeling_chinese_clip.py
index a689886abc37..9872b397b318 100644
--- a/src/transformers/models/chinese_clip/modeling_chinese_clip.py
+++ b/src/transformers/models/chinese_clip/modeling_chinese_clip.py
@@ -243,7 +243,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: float,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
@@ -254,9 +253,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
     return attn_output, attn_weights
@@ -289,7 +285,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
@@ -312,7 +307,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.attention_dropout,
             scaling=self.scaling,
-            head_mask=head_mask,
             **kwargs,
         )

@@ -366,14 +360,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_outputs = self.self(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -498,14 +490,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -651,7 +641,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         output_hidden_states: Optional[bool] = False,
         return_dict: Optional[bool] = True,
@@ -664,12 +653,9 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)

-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states=hidden_states,
                 attention_mask=attention_mask,
-                head_mask=layer_head_mask,
                 output_attentions=output_attentions,
                 **kwargs,
             )
@@ -868,7 +854,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
@@ -921,7 +906,6 @@ def forward(
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=extended_attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             return_dict=True,
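# Illustrative sketch of the eager-attention path removed in the hunks above:
# the mask was applied to the softmaxed, dropped-out weights, just before the
# value matmul. Shapes and names are assumptions for the example.
import torch
import torch.nn.functional as F

bsz, num_heads, q_len, k_len, head_dim = 2, 4, 5, 5, 8
query = torch.randn(bsz, num_heads, q_len, head_dim)
key = torch.randn(bsz, num_heads, k_len, head_dim)
value = torch.randn(bsz, num_heads, k_len, head_dim)
head_mask = torch.tensor([1.0, 1.0, 0.0, 1.0])

attn_weights = torch.matmul(query, key.transpose(2, 3)) * head_dim**-0.5
attn_weights = F.softmax(attn_weights, dim=-1)
attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)  # the removed step
attn_output = torch.matmul(attn_weights, value)  # head 2 now contributes zeros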
diff --git a/src/transformers/models/clap/modeling_clap.py b/src/transformers/models/clap/modeling_clap.py
index 33ad9463ff24..885286ea3f49 100644
--- a/src/transformers/models/clap/modeling_clap.py
+++ b/src/transformers/models/clap/modeling_clap.py
@@ -386,7 +386,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.Tensor]:
         batch_size, dim, num_channels = hidden_states.shape
@@ -425,10 +424,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
         new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
@@ -483,10 +478,9 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.Tensor]:
-        self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions)
+        self_outputs = self.self(hidden_states, attention_mask, output_attentions)
         attention_output = self.output(self_outputs[0], hidden_states)
         outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
         return outputs
@@ -583,7 +577,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         input_dimensions: tuple[int, int],
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         always_partition: Optional[bool] = False,
     ) -> tuple[torch.Tensor, torch.Tensor]:
@@ -616,9 +609,7 @@ def forward(
                 height_pad, width_pad, dtype=hidden_states.dtype, device=hidden_states_windows.device
             )

-        attention_outputs = self.attention(
-            hidden_states_windows, attn_mask, head_mask, output_attentions=output_attentions
-        )
+        attention_outputs = self.attention(hidden_states_windows, attn_mask, output_attentions=output_attentions)

         attention_output = attention_outputs[0]

@@ -679,17 +670,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         input_dimensions: tuple[int, int],
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         always_partition: Optional[bool] = False,
     ) -> tuple[torch.Tensor]:
         height, width = input_dimensions
         for i, layer_module in enumerate(self.blocks):
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
-            layer_outputs = layer_module(
-                hidden_states, input_dimensions, layer_head_mask, output_attentions, always_partition
-            )
+            layer_outputs = layer_module(hidden_states, input_dimensions, output_attentions, always_partition)

             hidden_states = layer_outputs[0]

@@ -844,7 +830,6 @@ def forward(
         self,
         input_features,
         is_longer: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         output_hidden_states: Optional[bool] = False,
         output_hidden_states_before_downsampling: Optional[bool] = False,
@@ -881,13 +866,9 @@ def forward(
             all_reshaped_hidden_states += (reshaped_hidden_state,)

         for i, layer_module in enumerate(self.layers):
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             input_dimensions = self.input_resolutions[i]

-            layer_outputs = layer_module(
-                hidden_states, input_dimensions, layer_head_mask, output_attentions, always_partition
-            )
+            layer_outputs = layer_module(hidden_states, input_dimensions, output_attentions, always_partition)

             hidden_states = layer_outputs[0]

@@ -1095,7 +1076,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: float,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
@@ -1106,9 +1086,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
     return attn_output, attn_weights
@@ -1141,7 +1118,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
@@ -1164,7 +1140,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.attention_dropout,
             scaling=self.scaling,
-            head_mask=head_mask,
             **kwargs,
         )

@@ -1218,14 +1193,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_outputs = self.self(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -1279,14 +1252,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -1319,7 +1290,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         output_hidden_states: Optional[bool] = False,
         return_dict: Optional[bool] = True,
@@ -1332,12 +1302,9 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)

-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states=hidden_states,
                 attention_mask=attention_mask,
-                head_mask=layer_head_mask,
                 output_attentions=output_attentions,
                 **kwargs,
             )
@@ -1508,7 +1475,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -1548,9 +1514,6 @@ def forward(
         # ourselves in which case we just need to make it broadcastable to all heads.
         extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)

-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         embedding_output = self.embeddings(
             input_ids=input_ids,
             position_ids=position_ids,
@@ -1560,7 +1523,6 @@ def forward(
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=extended_attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             return_dict=True,
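# Rough reimplementation sketch (assumed helper, not the library's API) of
# what `get_head_mask` -- whose call sites the hunks above delete -- produced,
# per the removed comments: [num_heads] or [num_layers, num_heads] in, a
# per-layer broadcastable 5D mask out.
import torch

def expand_head_mask(head_mask: torch.Tensor, num_layers: int) -> torch.Tensor:
    if head_mask.dim() == 1:  # same mask reused for every layer
        head_mask = head_mask[None, None, :, None, None].expand(num_layers, -1, -1, -1, -1)
    elif head_mask.dim() == 2:  # one mask row per layer
        head_mask = head_mask[:, None, :, None, None]
    return head_mask  # broadcasts against [batch, num_heads, seq, seq]

mask = expand_head_mask(torch.ones(12), num_layers=6)
assert mask.shape == (6, 1, 12, 1, 1)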
diff --git a/src/transformers/models/clvp/modeling_clvp.py b/src/transformers/models/clvp/modeling_clvp.py
index 552434b5bb22..3598b29f42e8 100644
--- a/src/transformers/models/clvp/modeling_clvp.py
+++ b/src/transformers/models/clvp/modeling_clvp.py
@@ -307,7 +307,6 @@ def forward(
         position_ids: Optional[torch.LongTensor] = None,
         past_key_values: Optional[Cache] = None,
         use_cache: Optional[bool] = False,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         cache_position: Optional[torch.Tensor] = None,
     ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor], Optional[tuple[torch.FloatTensor]]]:
@@ -366,10 +365,6 @@ def forward(

         attn_weights = nn.functional.softmax(attn_weights, dim=-1)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attn_weights = attn_weights * head_mask
-
         attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
         attn_output = torch.matmul(attn_probs, value_states)

@@ -615,7 +610,6 @@ def forward(
         past_key_values: Optional[Cache] = None,
         attention_mask: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = False,
         output_attentions: Optional[bool] = False,
         cache_position: Optional[torch.Tensor] = None,
@@ -627,7 +621,6 @@ def forward(
             past_key_values=past_key_values,
             attention_mask=attention_mask,
             position_ids=position_ids,
-            head_mask=head_mask,
             use_cache=use_cache,
             output_attentions=output_attentions,
             cache_position=cache_position,
@@ -1027,7 +1020,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = None,
@@ -1094,12 +1086,6 @@ def forward(
             attention_mask, input_shape, inputs_embeds, past_key_values_length
         )

-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x num_attention_heads x N x N
-        # head_mask has shape num_hidden_layers x batch x num_attention_heads x N x N
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         hidden_states = inputs_embeds

         if token_type_ids is not None:
@@ -1124,7 +1110,6 @@ def forward(
                     None,
                     attention_mask,
                     position_ids,
-                    head_mask[i],
                     cache_position,
                 )
             else:
@@ -1133,7 +1118,6 @@ def forward(
                     past_key_values=past_key_values,
                     attention_mask=attention_mask,
                     position_ids=position_ids,
-                    head_mask=head_mask[i],
                     use_cache=use_cache,
                     output_attentions=output_attentions,
                     cache_position=cache_position,
@@ -1193,7 +1177,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = None,
@@ -1215,7 +1198,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
@@ -1364,7 +1346,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         use_cache: Optional[bool] = None,
@@ -1393,7 +1374,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             output_attentions=output_attentions,
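# Sketch of the call-site effect of these removals (toy class, assumed
# names): once the parameter is gone, passing `head_mask` raises a TypeError
# rather than being silently ignored.
import torch

class ToyLayer(torch.nn.Module):
    def forward(self, hidden_states, attention_mask=None):  # no head_mask anymore
        return hidden_states

layer = ToyLayer()
hidden = torch.randn(1, 4, 8)
layer(hidden, attention_mask=None)  # fine
try:
    layer(hidden, attention_mask=None, head_mask=torch.ones(4))
except TypeError as exc:
    print(f"rejected as expected: {exc}")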
diff --git a/src/transformers/models/codegen/modeling_codegen.py b/src/transformers/models/codegen/modeling_codegen.py
index 1727338c98a2..a9a82b5e7738 100644
--- a/src/transformers/models/codegen/modeling_codegen.py
+++ b/src/transformers/models/codegen/modeling_codegen.py
@@ -120,7 +120,6 @@ def _attn(
         key,
         value,
         attention_mask=None,
-        head_mask=None,
     ):
         # Keep the attention weights computation in fp32 to avoid overflow issues
         query = query.to(torch.float32)
@@ -137,10 +136,6 @@ def _attn(
         attn_weights = attn_weights.to(value.dtype)
         attn_weights = self.attn_dropout(attn_weights)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attn_weights = attn_weights * head_mask
-
         attn_output = torch.matmul(attn_weights, value)

         return attn_output, attn_weights
@@ -151,7 +146,6 @@ def forward(
         layer_past: Optional[Cache] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = False,
         output_attentions: Optional[bool] = False,
         cache_position: Optional[torch.LongTensor] = None,
@@ -211,7 +205,7 @@ def forward(
             key, value = layer_past.update(key.to(hidden_states.dtype), value, self.layer_idx, cache_kwargs)

         # compute self-attention: V x Softmax(QK^T)
-        attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
+        attn_output, attn_weights = self._attn(query, key, value, attention_mask)

         attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_dim)
         attn_output = self.out_proj(attn_output)
@@ -255,7 +249,6 @@ def forward(
         layer_past: Optional[Cache] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = False,
         output_attentions: Optional[bool] = False,
         cache_position: Optional[torch.LongTensor] = None,
@@ -267,7 +260,6 @@ def forward(
             layer_past=layer_past,
             attention_mask=attention_mask,
             position_ids=position_ids,
-            head_mask=head_mask,
             use_cache=use_cache,
             output_attentions=output_attentions,
             cache_position=cache_position,
@@ -338,7 +330,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = None,
         output_attentions: Optional[bool] = None,
@@ -388,11 +379,6 @@ def forward(
             attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
         )

-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x num_attention_heads x N x N
-        # head_mask has shape n_layer x batch x num_attention_heads x N x N
-        head_mask = self.get_head_mask(head_mask, self.config.n_layer)
         hidden_states = inputs_embeds

         if token_type_ids is not None:
@@ -414,7 +400,6 @@ def forward(
                 layer_past=past_key_values,
                 attention_mask=causal_mask,
                 position_ids=position_ids,
-                head_mask=head_mask[i],
                 use_cache=use_cache,
                 output_attentions=output_attentions,
                 cache_position=cache_position,
@@ -593,7 +578,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         use_cache: Optional[bool] = None,
@@ -621,7 +605,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             output_attentions=output_attentions,
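# Illustrative sketch of CodeGen's eager `_attn` ordering around the removed
# block: weights are computed in fp32 for numerical stability, cast back,
# dropped out, and only then (formerly) multiplied by head_mask. Toy float32
# tensors below stand in for the half-precision activations a real run uses.
import torch

query = torch.randn(1, 4, 5, 8)
key = torch.randn(1, 4, 5, 8)
value = torch.randn(1, 4, 5, 8)

attn_weights = torch.matmul(query.float(), key.float().transpose(-1, -2))
attn_weights = torch.softmax(attn_weights, dim=-1).to(value.dtype)
# head_mask, when it existed, was applied here: attn_weights * head_mask
attn_output = torch.matmul(attn_weights, value)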
diff --git a/src/transformers/models/convbert/modeling_convbert.py b/src/transformers/models/convbert/modeling_convbert.py
index 5f4dd419b4fc..2dc138f82d14 100755
--- a/src/transformers/models/convbert/modeling_convbert.py
+++ b/src/transformers/models/convbert/modeling_convbert.py
@@ -197,7 +197,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
@@ -262,10 +261,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -325,14 +320,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.Tensor, Optional[torch.FloatTensor]]:
         self_outputs = self.self(
             hidden_states,
             attention_mask,
-            head_mask,
             encoder_hidden_states,
             output_attentions,
         )
@@ -421,7 +414,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = False,
@@ -429,7 +421,6 @@ def forward(
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask,
-            head_mask,
             output_attentions=output_attentions,
         )
         attention_output = self_attention_outputs[0]
@@ -444,7 +435,6 @@ def forward(
             cross_attention_outputs = self.crossattention(
                 attention_output,
                 encoder_attention_mask,
-                head_mask,
                 encoder_hidden_states,
                 output_attentions,
             )
@@ -474,7 +464,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = False,
@@ -488,12 +477,9 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)

-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states,
                 attention_mask,
-                layer_head_mask,
                 encoder_hidden_states,
                 encoder_attention_mask,
                 output_attentions,
@@ -673,7 +659,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -709,7 +694,6 @@ def forward(
             token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)

         extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape)
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

         hidden_states = self.embeddings(
             input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
@@ -721,7 +705,6 @@ def forward(
         hidden_states = self.encoder(
             hidden_states,
             attention_mask=extended_attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             return_dict=return_dict,
@@ -775,7 +758,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -795,7 +777,6 @@ def forward(
             attention_mask,
             token_type_ids,
             position_ids,
-            head_mask,
             inputs_embeds,
             output_attentions,
             output_hidden_states,
@@ -872,7 +853,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -892,7 +872,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -956,7 +935,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1012,7 +990,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1065,7 +1042,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1083,7 +1059,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1131,7 +1106,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -1146,7 +1120,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
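# Sketch of the mask-prep step that survives next to the deleted one above:
# a 2D padding mask is expanded to an additive 4D bias (minimal assumed
# implementation of the broadcast; the real helper lives on the base class).
import torch

def extend_attention_mask(attention_mask: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    # [bsz, seq_len] -> [bsz, 1, 1, seq_len]; 0 where attended, very negative where padded
    extended = attention_mask[:, None, None, :].to(dtype)
    return (1.0 - extended) * torch.finfo(dtype).min

mask = torch.tensor([[1, 1, 1, 0]])
print(extend_attention_mask(mask))  # last position gets a large negative bias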
diff --git a/src/transformers/models/ctrl/modeling_ctrl.py b/src/transformers/models/ctrl/modeling_ctrl.py
index 03da5b51c907..2415cfdf4b75 100644
--- a/src/transformers/models/ctrl/modeling_ctrl.py
+++ b/src/transformers/models/ctrl/modeling_ctrl.py
@@ -57,7 +57,7 @@ def positional_encoding(position, d_model_size, dtype):
     return pos_encoding


-def scaled_dot_product_attention(q, k, v, mask, attention_mask=None, head_mask=None):
+def scaled_dot_product_attention(q, k, v, mask, attention_mask=None):
     # calculate attention
     matmul_qk = torch.matmul(q, k.permute(0, 1, 3, 2))

@@ -74,10 +74,6 @@ def scaled_dot_product_attention(q, k, v, mask, attention_mask=None, head_mask=None):

     attention_weights = torch.softmax(scaled_attention_logits, dim=-1)

-    # Mask heads if we want to
-    if head_mask is not None:
-        attention_weights = attention_weights * head_mask
-
     output = torch.matmul(attention_weights, v)

     return output, attention_weights
@@ -128,7 +124,6 @@ def forward(
         mask,
         layer_past=None,
         attention_mask=None,
-        head_mask=None,
         use_cache=False,
         output_attentions=False,
         cache_position=None,
@@ -146,7 +141,7 @@ def forward(
         if layer_past is not None:
             k, v = layer_past.update(k, v, self.layer_idx, {"cache_position": cache_position})

-        output = scaled_dot_product_attention(q, k, v, mask, attention_mask, head_mask)
+        output = scaled_dot_product_attention(q, k, v, mask, attention_mask)
         scaled_attention = output[0].permute([0, 2, 1, 3])
         attn = output[1]
         original_size_attention = scaled_attention.reshape(batch_size, -1, self.d_model_size)
@@ -177,7 +172,6 @@ def forward(
         mask,
         layer_past=None,
         attention_mask=None,
-        head_mask=None,
         use_cache=False,
         output_attentions=False,
         cache_position=None,
@@ -190,7 +184,6 @@ def forward(
             mask,
             layer_past=layer_past,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             use_cache=use_cache,
             output_attentions=output_attentions,
             cache_position=cache_position,
@@ -273,7 +266,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = None,
         output_attentions: Optional[bool] = None,
@@ -371,9 +363,6 @@ def forward(
             attention_mask = attention_mask.to(dtype=self.dtype)  # fp16 compatibility
             attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min

-        # Prepare head mask if needed
-        head_mask = self.get_head_mask(head_mask, self.config.n_layer)
-
         if token_type_ids is not None:
             token_type_ids = token_type_ids.view(-1, input_shape[-1])
             token_type_embeds = self.w(token_type_ids)
@@ -407,7 +396,6 @@ def forward(
                 mask,
                 layer_past=past_key_values,
                 attention_mask=attention_mask,
-                head_mask=head_mask[i],
                 use_cache=use_cache,
                 output_attentions=output_attentions,
                 cache_position=cache_position,
@@ -458,7 +446,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         use_cache: Optional[bool] = None,
@@ -518,7 +505,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             output_attentions=output_attentions,
@@ -610,7 +596,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         use_cache: Optional[bool] = None,
@@ -714,7 +699,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
             output_attentions=output_attentions,
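# Toy sketch of the signature change made to CTRL's functional attention
# above: the trailing keyword parameter disappears, and the one positional
# call site is updated in the same hunk so callers and callee stay in sync.
# Scaling and mask handling below are simplified assumptions.
import torch

def scaled_dot_product_attention(q, k, v, mask, attention_mask=None):
    scores = torch.matmul(q, k.permute(0, 1, 3, 2)) / (k.shape[-1] ** 0.5)
    if mask is not None:
        scores = scores + mask * -1e4
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, v), weights

q = k = v = torch.randn(1, 2, 3, 4)
out, weights = scaled_dot_product_attention(q, k, v, None)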
diff --git a/src/transformers/models/data2vec/modeling_data2vec_audio.py b/src/transformers/models/data2vec/modeling_data2vec_audio.py
index 60128f882dd3..3107b6884778 100755
--- a/src/transformers/models/data2vec/modeling_data2vec_audio.py
+++ b/src/transformers/models/data2vec/modeling_data2vec_audio.py
@@ -184,7 +184,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     if scaling is None:
@@ -196,9 +195,6 @@ def eager_attention_forward(

     attn_weights = nn.functional.softmax(attn_weights, dim=-1)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -245,7 +241,6 @@ def forward(
         hidden_states: torch.Tensor,
         key_value_states: Optional[torch.Tensor] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = False,
         # TODO: we need a refactor so that the different attention modules can get their specific kwargs
         # ATM, we have mixed things encoder, decoder, and encoder-decoder attn
@@ -284,7 +279,6 @@ def forward(
             dropout=0.0 if not self.training else self.dropout,
             scaling=self.scaling,
             output_attentions=output_attentions,
-            head_mask=layer_head_mask,
             **kwargs,
         )

@@ -433,8 +427,6 @@ def _update_full_mask(
             if "flash" in self.config._attn_implementation:
                 attention_mask = attention_mask if 0 in attention_mask else None
             elif self.config._attn_implementation == "sdpa":
-                # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-                # the manual implementation that requires a 4D causal mask in all cases.
                 # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
                 attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
             elif self.config._attn_implementation == "flex_attention":
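# Sketch of the dispatch consequence documented by the comments deleted
# above: with head_mask gone, only `output_attentions=True` still forces the
# eager path, and SDPA can serve every other call. Toy logic, assumed names.
import torch
import torch.nn.functional as F

def attend(q, k, v, output_attentions=False):
    if output_attentions:  # eager path keeps the weights around
        weights = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return weights @ v, weights
    return F.scaled_dot_product_attention(q, k, v), None  # fused kernel, no weights

q = k = v = torch.randn(1, 4, 5, 8)
out, weights = attend(q, k, v, output_attentions=True)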
diff --git a/src/transformers/models/data2vec/modeling_data2vec_text.py b/src/transformers/models/data2vec/modeling_data2vec_text.py
index 0cba1f894003..6ea41b626fff 100644
--- a/src/transformers/models/data2vec/modeling_data2vec_text.py
+++ b/src/transformers/models/data2vec/modeling_data2vec_text.py
@@ -171,7 +171,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     use_cache: Optional[bool] = None,
     **kwargs: Unpack[TransformersKwargs],
 ):
@@ -212,9 +211,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)

-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask
-
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()

@@ -256,7 +252,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[Cache] = None,
         cache_position: Optional[torch.Tensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -300,7 +295,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.dropout.p,
             scaling=self.scaling,
-            head_mask=head_mask,  # only for relevant for non-absolute positional embeddings
             use_cache=past_key_value is not None,
             **kwargs,
         )
@@ -344,7 +338,6 @@ def forward(
         hidden_states: torch.Tensor,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[EncoderDecoderCache] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
@@ -392,7 +385,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.dropout.p,
             scaling=self.scaling,
-            head_mask=head_mask,  # only for relevant for non-absolute positional embeddings
             use_cache=past_key_value is not None,
             **kwargs,
         )
@@ -450,7 +442,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[Cache] = None,
@@ -462,7 +453,6 @@ def forward(
             hidden_states,
             encoder_hidden_states=encoder_hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             past_key_value=past_key_value,
             cache_position=cache_position,
             **kwargs,
@@ -525,7 +515,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_value: Optional[Cache] = None,
@@ -535,7 +524,6 @@ def forward(
         self_attention_output, _ = self.attention(
             hidden_states,
             attention_mask,
-            head_mask,
             past_key_value=past_key_value,
             cache_position=cache_position,
             **kwargs,
@@ -552,7 +540,6 @@ def forward(
             cross_attention_output, _ = self.crossattention(
                 self_attention_output,
                 None,  # attention_mask
-                head_mask,
                 encoder_hidden_states,
                 encoder_attention_mask,
                 past_key_value=past_key_value,
@@ -616,7 +603,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
@@ -625,12 +611,9 @@ def forward(
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
         for i, layer_module in enumerate(self.layer):
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             hidden_states = layer_module(
                 hidden_states,
                 attention_mask,
-                layer_head_mask,
                 encoder_hidden_states,  # as a positional argument for gradient checkpointing
                 encoder_attention_mask=encoder_attention_mask,
                 past_key_value=past_key_values,
@@ -704,7 +687,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
@@ -761,17 +743,9 @@ def forward(
             past_key_values=past_key_values,
         )

-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
             past_key_values=past_key_values,
@@ -851,8 +825,6 @@ def _update_full_mask(
             if "flash" in self.config._attn_implementation:
                 attention_mask = attention_mask if 0 in attention_mask else None
             elif self.config._attn_implementation == "sdpa":
-                # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-                # the manual implementation that requires a 4D causal mask in all cases.
                 # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
                 attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
             elif self.config._attn_implementation == "flex_attention":
@@ -876,8 +848,6 @@ def _update_cross_attn_mask(
             if "flash" in self.config._attn_implementation:
                 encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None
             elif self.config._attn_implementation == "sdpa":
-                # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on
-                # the manual implementation that requires a 4D causal mask in all cases.
                 # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
                 encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
                     encoder_attention_mask,
@@ -987,7 +957,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -1027,7 +996,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
@@ -1093,7 +1061,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -1111,7 +1078,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
@@ -1162,7 +1128,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -1178,7 +1143,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1240,7 +1204,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple, MultipleChoiceModelOutput]:
@@ -1291,7 +1254,6 @@ def forward(
             position_ids=flat_position_ids,
             token_type_ids=flat_token_type_ids,
             attention_mask=flat_attention_mask,
-            head_mask=head_mask,
             inputs_embeds=flat_inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1341,7 +1303,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -1355,7 +1316,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1401,7 +1361,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -1412,7 +1371,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
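# Sketch of the choice-flattening pattern visible around the removed kwarg in
# the multiple-choice head above: [batch, num_choices, seq] inputs collapse
# to [batch * num_choices, seq] before the encoder call. Toy shapes assumed.
import torch

batch, num_choices, seq_len = 2, 4, 7
input_ids = torch.randint(0, 100, (batch, num_choices, seq_len))
flat_input_ids = input_ids.view(-1, input_ids.size(-1))
assert flat_input_ids.shape == (batch * num_choices, seq_len)
# logits computed on the flat batch are reshaped back to [batch, num_choices]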
diff --git a/src/transformers/models/data2vec/modeling_data2vec_vision.py b/src/transformers/models/data2vec/modeling_data2vec_vision.py
index e59258625210..2152c7e92bae 100644
--- a/src/transformers/models/data2vec/modeling_data2vec_vision.py
+++ b/src/transformers/models/data2vec/modeling_data2vec_vision.py
@@ -258,7 +258,6 @@ def __init__(self, config: Data2VecVisionConfig, window_size: Optional[tuple] = None) -> None:
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         relative_position_bias: Optional[torch.Tensor] = None,
         interpolate_pos_encoding: bool = False,
@@ -305,10 +304,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)

-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)

         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -325,22 +320,20 @@ class Data2VecVisionSdpaSelfAttention(Data2VecVisionSelfAttention):
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         relative_position_bias: Optional[torch.Tensor] = None,
         interpolate_pos_encoding: bool = False,
         resolution: Optional[tuple[int]] = None,
     ) -> Union[tuple[torch.Tensor], tuple[torch.Tensor, torch.Tensor]]:
-        if output_attentions or head_mask is not None:
+        if output_attentions:
             logger.warning_once(
                 "`Data2VecVisionSdpaSelfAttention` is used but `torch.nn.functional.scaled_dot_product_attention` does not "
-                "support `output_attentions=True` or `head_mask`. Falling back to the manual attention implementation, "
+                "support `output_attentions=True`. Falling back to the manual attention implementation, "
                 "but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. "
                 'This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
             )
             return super().forward(
                 hidden_states=hidden_states,
-                head_mask=head_mask,
                 output_attentions=output_attentions,
                 relative_position_bias=relative_position_bias,
                 interpolate_pos_encoding=interpolate_pos_encoding,
@@ -451,14 +444,13 @@ def prune_heads(self, heads):
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         relative_position_bias: Optional["Data2VecVisionRelativePositionBias"] = None,
         interpolate_pos_encoding: bool = False,
         resolution: Optional[tuple[int]] = None,
     ) -> Union[tuple[torch.Tensor], tuple[torch.Tensor, torch.Tensor]]:
         self_outputs = self.attention(
-            hidden_states, head_mask, output_attentions, relative_position_bias, interpolate_pos_encoding, resolution
+            hidden_states, output_attentions, relative_position_bias, interpolate_pos_encoding, resolution
         )

         attention_output = self.output(self_outputs[0], hidden_states)
@@ -525,7 +517,6 @@ def __init__(
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         relative_position_bias: Optional[torch.Tensor] = None,
         interpolate_pos_encoding: bool = False,
@@ -533,7 +524,6 @@ def forward(
     ) -> Union[tuple[torch.Tensor], tuple[torch.Tensor, torch.Tensor]]:
         self_attention_outputs = self.attention(
             self.layernorm_before(hidden_states),  # in Data2VecVision, layernorm is applied before self-attention
-            head_mask,
             output_attentions=output_attentions,
             relative_position_bias=relative_position_bias,
             interpolate_pos_encoding=interpolate_pos_encoding,
@@ -676,7 +666,6 @@ def __init__(self, config: Data2VecVisionConfig, window_size: Optional[tuple] = None) -> None:
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         output_hidden_states: bool = False,
         interpolate_pos_encoding: bool = False,
@@ -699,11 +688,8 @@ def forward(
             else:
                 relative_position_bias = None

-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states,
-                head_mask=layer_head_mask,
                 output_attentions=output_attentions,
                 relative_position_bias=relative_position_bias,
                 interpolate_pos_encoding=interpolate_pos_encoding,
@@ -803,7 +789,6 @@ def forward(
         self,
         pixel_values: torch.Tensor,
         bool_masked_pos: Optional[torch.BoolTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
         interpolate_pos_encoding: bool = False,
@@ -819,19 +804,11 @@ def forward(
         )
         return_dict = return_dict if return_dict is not None else self.config.use_return_dict

-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         embedding_output, _ = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos)
         resolution = pixel_values.shape[2:]

         encoder_outputs = self.encoder(
             embedding_output,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             resolution=resolution,
@@ -898,7 +875,6 @@ def __init__(self, config: Data2VecVisionConfig) -> None:
     def forward(
         self,
         pixel_values: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -914,7 +890,6 @@ def forward(
         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
         outputs = self.data2vec_vision(
             pixel_values,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             interpolate_pos_encoding=interpolate_pos_encoding,
@@ -1245,7 +1220,6 @@ def compute_loss(self, logits, auxiliary_logits, labels):
     def forward(
         self,
         pixel_values: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -1285,7 +1259,6 @@ def forward(

         outputs = self.data2vec_vision(
             pixel_values,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=True,  # we need the intermediate hidden states
             interpolate_pos_encoding=interpolate_pos_encoding,
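# Sketch of the narrowed SDPA fallback in the vision self-attention above:
# after this change the warning fires for `output_attentions=True` alone,
# since there is no head_mask left to trigger it. Toy module, assumed names.
import torch
import torch.nn.functional as F

class SdpaSelfAttention(torch.nn.Module):
    def forward(self, hidden_states, output_attentions=False):
        if output_attentions:  # the only remaining reason to leave SDPA
            weights = torch.softmax(
                hidden_states @ hidden_states.transpose(-1, -2) / hidden_states.shape[-1] ** 0.5, dim=-1
            )
            return weights @ hidden_states, weights
        return F.scaled_dot_product_attention(hidden_states, hidden_states, hidden_states), None

attn = SdpaSelfAttention()
out, weights = attn(torch.randn(1, 5, 8), output_attentions=True)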
-450,7 +443,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -500,7 +492,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -514,7 +505,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -560,7 +550,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -571,7 +560,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/decision_transformer/modeling_decision_transformer.py b/src/transformers/models/decision_transformer/modeling_decision_transformer.py index 672a652ed08b..f60c94b1bdbe 100755 --- a/src/transformers/models/decision_transformer/modeling_decision_transformer.py +++ b/src/transformers/models/decision_transformer/modeling_decision_transformer.py @@ -40,7 +40,7 @@ # Copied from transformers.models.gpt2.modeling_gpt2.eager_attention_forward -def eager_attention_forward(module, query, key, value, attention_mask, head_mask=None, **kwargs): +def eager_attention_forward(module, query, key, value, attention_mask, **kwargs): attn_weights = torch.matmul(query, key.transpose(-1, -2)) if module.scale_attn_weights: @@ -73,10 +73,6 @@ def eager_attention_forward(module, query, key, value, attention_mask, head_mask attn_weights = attn_weights.type(value.dtype) attn_weights = module.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2) @@ -144,7 +140,7 @@ def prune_heads(self, heads): self.num_heads = self.num_heads - len(heads) self.pruned_heads = self.pruned_heads.union(heads) - def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, head_mask=None): + def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None): # Use `torch.baddbmm` (a bit more efficient w/ alpha param for scaling -- from Megatron-LM) bsz, num_heads, q_seq_len, dk = query.size() _, _, k_seq_len, _ = key.size() @@ -188,10 +184,6 @@ def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, hea attn_weights = attn_weights.type(value.dtype) attn_weights = self.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2) @@ -204,7 +196,6 @@ def forward( past_key_values: Optional[Cache] = None, cache_position: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = 
None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -270,7 +261,7 @@ def forward( if using_eager and self.reorder_and_upcast_attn: attn_output, attn_weights = self._upcast_and_reordered_attn( - query_states, key_states, value_states, attention_mask, head_mask + query_states, key_states, value_states, attention_mask ) else: attn_output, attn_weights = attention_interface( @@ -279,7 +270,6 @@ def forward( key_states, value_states, attention_mask, - head_mask=head_mask, dropout=self.attn_dropout.p if self.training else 0.0, is_causal=is_causal, **kwargs, @@ -337,7 +327,6 @@ def forward( past_key_values: Optional[Cache] = None, cache_position: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, @@ -351,7 +340,6 @@ def forward( past_key_values=past_key_values, cache_position=cache_position, attention_mask=attention_mask, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, **kwargs, @@ -372,7 +360,6 @@ def forward( hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, @@ -466,7 +453,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -547,12 +533,6 @@ def forward( else: encoder_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # head_mask has shape n_layer x batch x n_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - if inputs_embeds is None: inputs_embeds = self.wte(input_ids) position_embeds = self.wpe(position_ids) @@ -576,13 +556,12 @@ def forward( all_self_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None all_hidden_states = () if output_hidden_states else None - for i, block in enumerate(self.h): + for block in self.h: outputs = block( hidden_states, past_key_values if not (self.gradient_checkpointing and self.training) else None, cache_position, attention_mask, - head_mask[i], encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, use_cache=use_cache, diff --git a/src/transformers/models/deit/modeling_deit.py b/src/transformers/models/deit/modeling_deit.py index 4015dcbe0bc3..8c9b7e89ecd8 100644 --- a/src/transformers/models/deit/modeling_deit.py +++ b/src/transformers/models/deit/modeling_deit.py @@ -214,9 +214,7 @@ def __init__(self, config: DeiTConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, 
head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -233,7 +231,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -289,8 +287,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -339,9 +337,9 @@ def __init__(self, config: DeiTConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -364,10 +362,9 @@ def __init__(self, config: DeiTConfig): self.layer = nn.ModuleList([DeiTLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -447,7 +444,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: @@ -459,13 +455,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?) 
expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype if pixel_values.dtype != expected_dtype: @@ -475,7 +464,7 @@ def forward( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) pooled_output = self.pooler(sequence_output) if self.pooler is not None else None @@ -538,7 +527,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], ) -> MaskedImageModelingOutput: @@ -573,7 +561,6 @@ def forward( outputs: BaseModelOutputWithPooling = self.deit( pixel_values, bool_masked_pos=bool_masked_pos, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs, ) @@ -634,7 +621,6 @@ def __init__(self, config: DeiTConfig) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], @@ -673,7 +659,6 @@ def forward( outputs: BaseModelOutputWithPooling = self.deit( pixel_values, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs, ) @@ -754,13 +739,11 @@ def __init__(self, config: DeiTConfig) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], ) -> DeiTForImageClassificationWithTeacherOutput: outputs: BaseModelOutputWithPooling = self.deit( pixel_values, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs, ) diff --git a/src/transformers/models/deprecated/ernie_m/modeling_ernie_m.py b/src/transformers/models/deprecated/ernie_m/modeling_ernie_m.py index 4cecdf5728a3..bcccd9b8c7b4 100755 --- a/src/transformers/models/deprecated/ernie_m/modeling_ernie_m.py +++ b/src/transformers/models/deprecated/ernie_m/modeling_ernie_m.py @@ -124,7 +124,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -205,10 +204,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -252,7 +247,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -261,7 +255,6 @@ def forward( self_outputs = self.self_attn( hidden_states, attention_mask, - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -297,7 +290,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = True, ): @@ -306,7 +298,6 @@ def forward( hidden_states, attention_opt_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_values=past_key_values, output_attentions=output_attentions, ) @@ -315,7 +306,6 @@ def forward( hidden_states = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_values=past_key_values, output_attentions=output_attentions, ) @@ -346,7 +336,6 @@ def forward( self, input_embeds: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, @@ -359,12 +348,9 @@ def forward( if output_hidden_states: hidden_states = hidden_states + (output,) for i, layer in enumerate(self.layers): - layer_head_mask = head_mask[i] if head_mask is not None else None - output, opt_attn_weights = layer( hidden_states=output, attention_mask=attention_mask, - head_mask=layer_head_mask, past_key_values=past_key_values[i] if past_key_values is not None else None, ) @@ -458,12 +444,6 @@ def _init_weights(self, module): config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert *input_ids* indices into associated vectors than the @@ -518,7 +498,6 @@ def forward( input_ids: Optional[tensor] = None, position_ids: Optional[tensor] = None, attention_mask: Optional[tensor] = None, - head_mask: Optional[tensor] = None, inputs_embeds: Optional[tensor] = None, past_key_values: Optional[tuple[tuple[tensor]]] = None, use_cache: Optional[bool] = None, @@ -536,8 +515,6 @@ def forward( ) return_dict = return_dict if return_dict is not None else self.config.return_dict - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - past_key_values_length = 0 if past_key_values is not None: past_key_values_length = past_key_values.get_seq_length() @@ -567,7 +544,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, past_key_values=past_key_values, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -625,7 +601,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -646,7 +621,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, output_hidden_states=output_hidden_states, @@ -723,7 +697,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -752,7 +725,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -814,7 +786,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_hidden_states: Optional[bool] = None, @@ -832,7 +803,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, output_attentions=output_attentions, @@ -890,7 +860,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -914,7 +883,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -978,7 +946,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, 
start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -999,7 +966,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/deprecated/gptsan_japanese/modeling_gptsan_japanese.py b/src/transformers/models/deprecated/gptsan_japanese/modeling_gptsan_japanese.py index 919a3feaa067..bb6a0d1f81d8 100644 --- a/src/transformers/models/deprecated/gptsan_japanese/modeling_gptsan_japanese.py +++ b/src/transformers/models/deprecated/gptsan_japanese/modeling_gptsan_japanese.py @@ -386,7 +386,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" @@ -460,15 +459,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. @@ -522,7 +512,6 @@ def forward( hidden_states: Optional[tuple[torch.FloatTensor]], past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, output_attentions: Optional[bool] = False, ) -> tuple[Union[torch.Tensor, tuple[torch.Tensor]], ...]: @@ -545,12 +534,6 @@ def forward( - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. - head_mask (`numpy.ndarray` of shape `({0})`, `optional): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). @@ -568,7 +551,6 @@ def forward( hidden_states=hidden_states, past_key_values=self_attn_past_key_value, attention_mask=(1 - attention_mask) * torch.finfo(hidden_states.dtype).min, - layer_head_mask=head_mask, output_attentions=output_attentions, ) if output_attentions: @@ -604,7 +586,6 @@ def forward( hidden_states: Optional[tuple[torch.FloatTensor]], past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, output_attentions: Optional[bool] = False, output_router_tuple: Optional[bool] = False, @@ -628,12 +609,6 @@ def forward( - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. - head_mask (`numpy.ndarray` of shape `({0})`, `optional): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
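For readers tracking what behavior disappears here: in these Bart-style attention modules, the removed `layer_head_mask` branch broadcast a per-head keep/drop vector over the attention weights before the value matmul. A standalone sketch of that arithmetic (shapes assumed for illustration):

```python
import torch

bsz, num_heads, tgt_len, src_len = 2, 4, 7, 7
attn_weights = torch.softmax(torch.randn(bsz * num_heads, tgt_len, src_len), dim=-1)

layer_head_mask = torch.tensor([1.0, 0.0, 1.0, 1.0])  # 1.0 keeps a head, 0.0 zeroes it out

# Mirrors the deleted branch: broadcast (num_heads,) over (bsz, num_heads, tgt_len, src_len),
# then flatten back to the (bsz * num_heads, tgt_len, src_len) layout the module works in.
attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, num_heads, tgt_len, src_len)
attn_weights = attn_weights.view(bsz * num_heads, tgt_len, src_len)
```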
- use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). @@ -648,7 +623,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, ) @@ -808,8 +782,6 @@ def _shift_right(self, input_ids): If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). @@ -878,7 +850,6 @@ def forward( token_type_ids: Optional[torch.FloatTensor] = None, spout: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1000,12 +971,6 @@ def forward( # Merge prefix_lm_mask and attention_mask extended_attention_mask = prefix_lm_mask * attention_mask.unsqueeze(1).unsqueeze(2) - # Prepare head mask if needed - if head_mask is not None: - head_mask = self.get_head_mask( - head_mask, self.config.num_switch_layers + self.config.num_ext_layers - ) # n_layer x batch x n_heads x N x N - # outputs present_key_value_states = () if self.config.use_cache or use_cache else None all_hidden_states = () if self.config.output_hidden_states or output_hidden_states else None @@ -1030,7 +995,6 @@ def forward( hidden_states=hidden_states, past_key_values=past, attention_mask=extended_attention_mask, - head_mask=head_mask, use_cache=self.config.use_cache or use_cache, output_attentions=self.config.output_attentions or output_attentions, output_router_tuple=output_router_tuple, @@ -1104,7 +1068,6 @@ def forward( token_type_ids: Optional[torch.FloatTensor] = None, spout: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1194,7 +1157,6 @@ def forward( token_type_ids, spout, past_key_values, - head_mask, use_cache, inputs_embeds, decoder_inputs_embeds, diff --git a/src/transformers/models/deprecated/mctct/modeling_mctct.py b/src/transformers/models/deprecated/mctct/modeling_mctct.py index 357b8b2c3681..2f021dd7c69a 100755 --- a/src/transformers/models/deprecated/mctct/modeling_mctct.py +++ b/src/transformers/models/deprecated/mctct/modeling_mctct.py @@ -228,7 +228,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): mixed_query_layer = self.query(hidden_states) @@ -260,10 +259,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).flatten(start_dim=-2) @@ -327,13 +322,11 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): self_outputs = self.self( hidden_states, attention_mask, - head_mask, output_attentions, ) attention_output = self.output(self_outputs[0], hidden_states) @@ -387,12 +380,9 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): - self_attention_outputs = self.attention( - hidden_states, attention_mask, head_mask, output_attentions=output_attentions - ) + self_attention_outputs = self.attention(hidden_states, attention_mask, output_attentions=output_attentions) attention_output = self_attention_outputs[0] outputs = self_attention_outputs[1:] # add self attentions if we output attention weights @@ -504,11 +494,6 @@ def _get_feature_vector_attention_mask(self, feature_vector_length, attention_ma - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -535,7 +520,6 @@ def forward( self, input_features: torch.Tensor, attention_mask: torch.Tensor, - head_mask: torch.Tensor, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -564,14 +548,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, " - f"but it is for {head_mask.size()[0]}." 
- ) - synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: @@ -633,7 +609,6 @@ def forward( self, input_features: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -650,7 +625,6 @@ def forward( encoder_outputs = self.encoder( input_features, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -703,7 +677,6 @@ def forward( self, input_features: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -723,7 +696,6 @@ def forward( outputs = self.mctct( input_features, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git a/src/transformers/models/deprecated/mmbt/modeling_mmbt.py b/src/transformers/models/deprecated/mmbt/modeling_mmbt.py index 45ae577f7fce..ed8e3847b578 100644 --- a/src/transformers/models/deprecated/mmbt/modeling_mmbt.py +++ b/src/transformers/models/deprecated/mmbt/modeling_mmbt.py @@ -142,12 +142,6 @@ def forward(self, input_modal, start_token=None, end_token=None, position_ids=No Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, embedding_dim)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
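Structural head pruning is untouched by this series: the `prune_heads` definitions kept above (e.g., in the Data2VecVision, DecisionTransformer, and DeiT attention modules) still remove heads from the weights themselves, which covers the main use case for a permanent `head_mask`. A minimal sketch, with an illustrative checkpoint:

```python
from transformers import BertModel  # illustrative; any model exposing prune_heads works the same way

model = BertModel.from_pretrained("bert-base-uncased")
# Permanently delete heads 0 and 2 of layer 0, and head 1 of layer 2, from the weights.
model.prune_heads({0: [0, 2], 2: [1]})
```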
This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the @@ -197,7 +191,6 @@ def forward( modal_token_type_ids=None, position_ids=None, modal_position_ids=None, - head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, @@ -269,12 +262,10 @@ def forward( extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape) encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) encoder_outputs = self.transformer.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, output_attentions=output_attentions, @@ -358,7 +349,6 @@ def forward( modal_token_type_ids=None, position_ids=None, modal_position_ids=None, - head_mask=None, inputs_embeds=None, labels=None, return_dict=None, @@ -375,7 +365,6 @@ def forward( modal_token_type_ids=modal_token_type_ids, position_ids=position_ids, modal_position_ids=modal_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=return_dict, ) diff --git a/src/transformers/models/deprecated/nezha/modeling_nezha.py b/src/transformers/models/deprecated/nezha/modeling_nezha.py index eaf1cedfed32..a586c85832ce 100644 --- a/src/transformers/models/deprecated/nezha/modeling_nezha.py +++ b/src/transformers/models/deprecated/nezha/modeling_nezha.py @@ -172,7 +172,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -245,10 +244,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) relations_values = self.relative_positions_encoding(to_seq_length) attention_probs_t = attention_probs.permute(2, 0, 1, 3) @@ -317,7 +312,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -326,7 +320,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask, - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -386,7 +379,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -397,7 +389,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, past_key_values=self_attn_past_key_value, ) @@ -423,7 +414,6 @@ def forward( cross_attention_outputs = self.crossattention( attention_output, attention_mask, - head_mask, encoder_hidden_states, encoder_attention_mask, cross_attn_past_key_value, @@ -464,7 +454,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -489,12 +478,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values[i] if past_key_values is not None else None, @@ -719,12 +705,6 @@ class NezhaForPreTrainingOutput(ModelOutput): - 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
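The `get_head_mask` preparation step deleted from these `forward` methods performed the shape promotion that the removed comments describe. Roughly, as a sketch of what no longer happens:

```python
import torch

num_hidden_layers, num_heads = 12, 12

head_mask = torch.ones(num_heads)  # input shape (num_heads,) or (num_hidden_layers, num_heads)
if head_mask.dim() == 1:
    head_mask = head_mask[None, None, :, None, None].expand(num_hidden_layers, -1, -1, -1, -1)
elif head_mask.dim() == 2:
    head_mask = head_mask[:, None, :, None, None]

# head_mask now broadcasts as [num_hidden_layers x batch x num_heads x seq_length x seq_length];
# each layer i then applied: attention_probs = attention_probs * head_mask[i]
print(head_mask.shape)  # torch.Size([12, 1, 12, 1, 1])
```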
This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the @@ -794,7 +774,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -877,13 +856,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, token_type_ids=token_type_ids, @@ -892,7 +864,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, @@ -950,7 +921,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, next_sentence_label: Optional[torch.Tensor] = None, @@ -996,7 +966,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1063,7 +1032,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1085,7 +1053,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1151,7 +1118,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1202,7 +1168,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1264,7 +1229,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1283,7 +1247,6 @@ def forward( input_ids, attention_mask=attention_mask, 
token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1361,7 +1324,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1389,7 +1351,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1454,7 +1415,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1471,7 +1431,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1529,7 +1488,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1553,7 +1511,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/deprecated/qdqbert/modeling_qdqbert.py b/src/transformers/models/deprecated/qdqbert/modeling_qdqbert.py index f522a1d72154..7365e8382d85 100755 --- a/src/transformers/models/deprecated/qdqbert/modeling_qdqbert.py +++ b/src/transformers/models/deprecated/qdqbert/modeling_qdqbert.py @@ -172,7 +172,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, @@ -248,10 +247,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul( self.matmul_a_input_quantizer(attention_probs), self.matmul_v_input_quantizer(value_layer) ) @@ -321,7 +316,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, @@ -330,7 +324,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask, - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -399,7 +392,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, @@ -410,7 +402,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, past_key_values=self_attn_past_key_value, ) @@ -436,7 +427,6 @@ def forward( cross_attention_outputs = self.crossattention( attention_output, attention_mask, - head_mask, encoder_hidden_states, encoder_attention_mask, cross_attn_past_key_value, @@ -476,7 +466,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, @@ -494,12 +483,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values[i] if past_key_values is not None else None, @@ -699,12 +685,6 @@ def _init_weights(self, module): config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the @@ -776,7 +756,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -860,13 +839,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -877,7 +849,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, @@ -935,7 +906,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -997,7 +967,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1106,7 +1075,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1129,7 +1097,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1198,7 +1165,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1249,7 +1215,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1308,7 +1273,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, 
inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1328,7 +1292,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1404,7 +1367,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1435,7 +1397,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1496,7 +1457,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1514,7 +1474,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1573,7 +1532,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1598,7 +1556,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/deprecated/realm/modeling_realm.py b/src/transformers/models/deprecated/realm/modeling_realm.py index 69bab60f6803..382a4812fbd4 100644 --- a/src/transformers/models/deprecated/realm/modeling_realm.py +++ b/src/transformers/models/deprecated/realm/modeling_realm.py @@ -144,7 +144,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -225,10 +224,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -293,7 +288,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -302,7 +296,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask, - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -362,7 +355,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -373,7 +365,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, past_key_values=self_attn_past_key_value, ) @@ -399,7 +390,6 @@ def forward( cross_attention_outputs = self.crossattention( attention_output, attention_mask, - head_mask, encoder_hidden_states, encoder_attention_mask, cross_attn_past_key_value, @@ -440,7 +430,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -465,12 +454,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values[i] if past_key_values is not None else None, @@ -807,12 +793,6 @@ def mask_to_score(mask, dtype=torch.float32): config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert *input_ids* indices into associated vectors than the @@ -903,7 +883,6 @@ def forward( attention_mask=None, token_type_ids=None, position_ids=None, - head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, @@ -966,13 +945,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -983,7 +955,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, @@ -1036,7 +1007,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1068,7 +1038,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1122,7 +1091,6 @@ def forward( candidate_attention_mask: Optional[torch.FloatTensor] = None, candidate_token_type_ids: Optional[torch.LongTensor] = None, candidate_inputs_embeds: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1196,7 +1164,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1213,7 +1180,6 @@ def forward( attention_mask=flattened_attention_mask, token_type_ids=flattened_token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=candidate_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1274,7 +1240,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, relevance_score: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1336,7 +1301,6 @@ def forward( attention_mask=flattened_attention_mask, token_type_ids=flattened_token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1411,7 +1375,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = 
None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, relevance_score: Optional[torch.FloatTensor] = None, block_mask: Optional[torch.BoolTensor] = None, @@ -1454,7 +1417,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/deprecated/retribert/modeling_retribert.py b/src/transformers/models/deprecated/retribert/modeling_retribert.py index 926d7551e51b..fa7695133fb8 100644 --- a/src/transformers/models/deprecated/retribert/modeling_retribert.py +++ b/src/transformers/models/deprecated/retribert/modeling_retribert.py @@ -109,7 +109,6 @@ def embed_sentences_checkpointed( device = input_ids.device input_shape = input_ids.size() token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - head_mask = [None] * sent_encoder.config.num_hidden_layers extended_attention_mask: torch.Tensor = sent_encoder.get_extended_attention_mask( attention_mask, input_shape ) @@ -119,7 +118,6 @@ def partial_encode(*inputs): encoder_outputs = sent_encoder.encoder( inputs[0], attention_mask=inputs[1], - head_mask=head_mask, ) sequence_output = encoder_outputs[0] pooled_output = sent_encoder.pooler(sequence_output) diff --git a/src/transformers/models/deprecated/speech_to_text_2/modeling_speech_to_text_2.py b/src/transformers/models/deprecated/speech_to_text_2/modeling_speech_to_text_2.py index 854f21c06550..86495448299a 100755 --- a/src/transformers/models/deprecated/speech_to_text_2/modeling_speech_to_text_2.py +++ b/src/transformers/models/deprecated/speech_to_text_2/modeling_speech_to_text_2.py @@ -151,7 +151,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" @@ -225,15 +224,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. @@ -303,8 +293,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -318,10 +306,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. 
- layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size *(decoder_attention_heads,)*. past_key_values (`Tuple(torch.FloatTensor)`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -337,7 +321,6 @@ def forward( hidden_states=hidden_states, past_key_values=self_attn_past_key_value, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -356,7 +339,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=cross_attn_past_key_value, output_attentions=output_attentions, ) @@ -463,8 +445,6 @@ def forward( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, use_cache=None, @@ -500,19 +480,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in encoder to avoid performing cross-attention - on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of @@ -591,14 +558,6 @@ def forward( all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None next_decoder_cache = () if use_cache else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -613,8 +572,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values[idx] if past_key_values is not None else None, output_attentions=output_attentions, use_cache=use_cache, @@ -706,8 +663,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -739,18 +694,6 @@ def forward( encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of @@ -836,8 +779,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl.py b/src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl.py index a7b4825e5fcd..c92fcd6fbb37 100644 --- a/src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl.py +++ b/src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl.py @@ -153,7 +153,7 @@ def _rel_shift(self, x): return x - def forward(self, w, r, attn_mask=None, mems=None, head_mask=None, output_attentions=False): + def forward(self, w, r, attn_mask=None, mems=None, output_attentions=False): qlen, rlen, bsz = w.size(0), r.size(0), w.size(1) if mems is not None: @@ -211,10 +211,6 @@ def forward(self, w, r, attn_mask=None, mems=None, head_mask=None, output_attent attn_prob = nn.functional.softmax(attn_score, dim=1) attn_prob = self.dropatt(attn_prob) - # Mask heads if we want to - if head_mask is not None: - attn_prob = attn_prob * head_mask - # compute attention vector attn_vec = torch.einsum("ijbn,jbnd->ibnd", (attn_prob, w_head_v)) @@ -249,13 +245,12 @@ def __init__(self, n_head, d_model, d_head, d_inner, dropout, layer_norm_epsilon d_model, d_inner, dropout, pre_lnorm=kwargs.get("pre_lnorm"), layer_norm_epsilon=layer_norm_epsilon ) - def forward(self, dec_inp, r, dec_attn_mask=None, mems=None, head_mask=None, output_attentions=False): + def forward(self, dec_inp, r, dec_attn_mask=None, mems=None, output_attentions=False): attn_outputs = self.dec_attn( dec_inp, r, attn_mask=dec_attn_mask, mems=mems, - head_mask=head_mask, output_attentions=output_attentions, ) ff_output = self.pos_ff(attn_outputs[0]) @@ -604,12 +599,6 @@ def logits(self): Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as `input_ids` as they have already been computed. - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the @@ -742,7 +731,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, mems: Optional[list[torch.FloatTensor]] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -770,23 +758,6 @@ def forward( if mems is None: mems = self.init_mems(bsz) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] (a head_mask for each layer) - # and head_mask is converted to shape [num_hidden_layers x qlen x klen x bsz x n_head] - if head_mask is not None: - if head_mask.dim() == 1: - head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(0).unsqueeze(0) - head_mask = head_mask.expand(self.n_layer, -1, -1, -1, -1) - elif head_mask.dim() == 2: - head_mask = head_mask.unsqueeze(1).unsqueeze(1).unsqueeze(1) - head_mask = head_mask.to( - dtype=next(self.parameters()).dtype - ) # switch to float if need + fp16 compatibility - else: - head_mask = [None] * self.n_layer - if inputs_embeds is not None: word_emb = inputs_embeds else: @@ -828,7 +799,6 @@ def forward( pos_emb, dec_attn_mask=dec_attn_mask, mems=mems_i, - head_mask=head_mask[i], output_attentions=output_attentions, ) core_out = layer_outputs[0] @@ -937,7 +907,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, mems: Optional[list[torch.FloatTensor]] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -961,7 +930,6 @@ def forward( transformer_outputs = self.transformer( input_ids, mems=mems, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1078,7 +1046,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, mems: Optional[list[torch.FloatTensor]] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1096,7 +1063,6 @@ def forward( transformer_outputs = self.transformer( input_ids, mems=mems, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/deprecated/tvlt/modeling_tvlt.py b/src/transformers/models/deprecated/tvlt/modeling_tvlt.py index 82aa12ada9e9..7f51661b5270 100644 --- a/src/transformers/models/deprecated/tvlt/modeling_tvlt.py +++ b/src/transformers/models/deprecated/tvlt/modeling_tvlt.py @@ -364,7 +364,7 @@ def transpose_for_scores(self, x): x = x.view(*new_x_shape) return x.permute(0, 2, 1, 3) - def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False): + def forward(self, hidden_states, attention_mask=None, output_attentions=False): mixed_query_layer = self.query(hidden_states) key_layer = self.transpose_for_scores(self.key(hidden_states)) @@ -385,10 +385,6 @@ def forward(self, hidden_states, attention_mask=None, head_mask=None, output_att # seem a bit unusual, but is taken from the original Transformer paper. 
         attention_probs = self.dropout(attention_probs)
 
-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
 
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -443,8 +439,8 @@ def prune_heads(self, heads):
         self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads
         self.pruned_heads = self.pruned_heads.union(heads)
 
-    def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False):
-        self_outputs = self.attention(hidden_states, attention_mask, head_mask, output_attentions)
+    def forward(self, hidden_states, attention_mask=None, output_attentions=False):
+        self_outputs = self.attention(hidden_states, attention_mask, output_attentions)
 
         attention_output = self.output(self_outputs[0], hidden_states)
@@ -496,11 +492,10 @@ def __init__(self, config):
         self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
         self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
 
-    def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False):
+    def forward(self, hidden_states, attention_mask=None, output_attentions=False):
         self_attention_outputs = self.attention(
             self.layernorm_before(hidden_states),  # in ViLT, layernorm is applied before self-attention
             attention_mask,
-            head_mask,
             output_attentions=output_attentions,
         )
         attention_output = self_attention_outputs[0]
@@ -532,7 +527,6 @@ def forward(
         self,
         hidden_states,
         attention_mask=None,
-        head_mask=None,
         output_attentions=False,
         output_hidden_states=False,
         return_dict=True,
@@ -544,9 +538,7 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)
 
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
-            layer_outputs = layer_module(hidden_states, attention_mask, layer_head_mask, output_attentions)
+            layer_outputs = layer_module(hidden_states, attention_mask, output_attentions)
 
             hidden_states = layer_outputs[0]
diff --git a/src/transformers/models/deprecated/vit_hybrid/modeling_vit_hybrid.py b/src/transformers/models/deprecated/vit_hybrid/modeling_vit_hybrid.py
index 86b1594a20c9..7269fbd00020 100644
--- a/src/transformers/models/deprecated/vit_hybrid/modeling_vit_hybrid.py
+++ b/src/transformers/models/deprecated/vit_hybrid/modeling_vit_hybrid.py
@@ -223,7 +223,7 @@ def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor:
         return x.permute(0, 2, 1, 3)
 
     def forward(
-        self, hidden_states, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False
+        self, hidden_states: Optional[torch.Tensor] = None, output_attentions: bool = False
     ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]:
         mixed_query_layer = self.query(hidden_states)
@@ -243,10 +243,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)
 
-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
 
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -264,7 +260,7 @@ def __init__(self, config: ViTHybridConfig) -> None:
         self.attention_probs_dropout_prob = config.attention_probs_dropout_prob
 
     def forward(
-        self, hidden_states, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False
+        self, hidden_states: Optional[torch.Tensor] = None, output_attentions: bool = False
     ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]:
         mixed_query_layer = self.query(hidden_states)
@@ -276,7 +272,6 @@ def forward(
             query_layer,
             key_layer,
             value_layer,
-            head_mask,
             self.attention_probs_dropout_prob if self.training else 0.0,
             is_causal=False,
             scale=None,
@@ -335,10 +330,9 @@ def prune_heads(self, heads: set[int]) -> None:
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
     ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]:
-        self_outputs = self.attention(hidden_states, head_mask, output_attentions)
+        self_outputs = self.attention(hidden_states, output_attentions)
 
         attention_output = self.output(self_outputs[0], hidden_states)
@@ -405,12 +399,10 @@ def __init__(self, config: ViTHybridConfig) -> None:
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
     ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]:
         self_attention_outputs = self.attention(
             self.layernorm_before(hidden_states),  # in ViTHybrid, layernorm is applied before self-attention
-            head_mask,
             output_attentions=output_attentions,
         )
         attention_output = self_attention_outputs[0]
@@ -442,7 +434,6 @@ def __init__(self, config: ViTHybridConfig) -> None:
     def forward(
         self,
         hidden_states: torch.Tensor,
-        head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         output_hidden_states: bool = False,
         return_dict: bool = True,
@@ -454,9 +445,7 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)
 
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
-            layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions)
+            layer_outputs = layer_module(hidden_states, output_attentions)
 
             hidden_states = layer_outputs[0]
@@ -531,13 +520,6 @@ def _init_weights(self, module: Union[nn.Linear, nn.Conv2d, nn.LayerNorm]) -> No
         pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
             Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See
             [`ViTHybridImageProcessor.__call__`] for details.
-
-        head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-
         output_attentions (`bool`, *optional*):
             Whether or not to return the attentions tensors of all attention layers. See `attentions` under
             returned tensors for more detail.
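For readers skimming this patch, here is a minimal standalone sketch of the behavior the deleted eager-attention code implemented. It is not part of the diff, and all tensor sizes are invented for illustration: a `head_mask` entry of 1 keeps an attention head, 0 silences it, applied multiplicatively to the attention probabilities.

```python
import torch

# Illustrative only: mirrors the deleted `attention_probs = attention_probs * head_mask`
# gating, where mask values in [0, 1] select attention heads (1 = keep, 0 = nullify).
batch_size, num_heads, seq_len, head_dim = 2, 12, 16, 64  # invented example sizes
attention_probs = torch.softmax(torch.randn(batch_size, num_heads, seq_len, seq_len), dim=-1)
value_layer = torch.randn(batch_size, num_heads, seq_len, head_dim)

head_mask = torch.ones(num_heads)
head_mask[3] = 0.0  # a 0 entry zeroes out head 3, exactly as the removed code path did

attention_probs = attention_probs * head_mask.view(1, -1, 1, 1)  # broadcast per head
context_layer = torch.matmul(attention_probs, value_layer)
```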
@@ -590,7 +572,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: Optional[bool] = None, @@ -609,13 +590,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?) expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype if pixel_values.dtype != expected_dtype: @@ -627,7 +601,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -693,7 +666,6 @@ def __init__(self, config: ViTHybridConfig) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -710,7 +682,6 @@ def forward( outputs = self.vit( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, diff --git a/src/transformers/models/deprecated/xlm_prophetnet/modeling_xlm_prophetnet.py b/src/transformers/models/deprecated/xlm_prophetnet/modeling_xlm_prophetnet.py index 36f6e6097bc3..95bd63226c3c 100644 --- a/src/transformers/models/deprecated/xlm_prophetnet/modeling_xlm_prophetnet.py +++ b/src/transformers/models/deprecated/xlm_prophetnet/modeling_xlm_prophetnet.py @@ -97,24 +97,6 @@ decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - decoder_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of @@ -156,12 +138,6 @@ - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -657,7 +633,6 @@ def forward( hidden_states, key_value_states: Optional[Tensor] = None, attention_mask: Optional[Tensor] = None, - layer_head_mask: Optional[Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: bool = False, ) -> tuple[Tensor, Optional[Tensor]]: @@ -722,18 +697,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_attn_heads,), ( - f"Head mask for a single layer should be of size {(self.num_attn_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view( - batch_size, self.num_attn_heads, tgt_len, src_len - ) - - # apply head_mask also on attn_weights_reshaped which is used for n-gram attention inside the model - attn_weights_reshaped = layer_head_mask.view(1, -1, 1, 1) * attn_weights_reshaped - attn_probs = nn.functional.dropout( attn_weights, p=self.attention_dropout, @@ -816,7 +779,6 @@ def forward( hidden_states, past_key_values: Optional[Cache] = None, attention_mask=None, - layer_head_mask=None, extended_predict_attention_mask=None, main_relative_position_buckets=None, predict_relative_position_buckets=None, @@ -893,15 +855,6 @@ def forward( onnx_trace=self.onnx_trace, ).type_as(main_attn_weights) - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_attn_heads,), ( - f"Head mask for a single layer should be of size {(self.num_attn_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - main_attn_probs = layer_head_mask.view(1, -1, 1, 1) * main_attn_probs.view( - batch_size, self.num_attn_heads, -1, sequence_length - ) - main_attn_probs = nn.functional.dropout(main_attn_probs, p=self.attention_dropout, training=self.training) # project to attn_output # [batch_size, number_heads, sequence_length, sequence_length] @@ -955,13 +908,6 @@ def forward( onnx_trace=self.onnx_trace, ).type_as(predict_attn_weights) - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_attn_heads,), ( - f"Head mask for a single layer should be of size {(self.num_attn_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - predict_attn_probs = layer_head_mask.view(1, 1, -1, 1, 1) * predict_attn_probs - predict_attn_probs = nn.functional.dropout( predict_attn_probs, p=self.attention_dropout, training=self.training ) @@ -1113,14 +1059,12 @@ def forward( self, hidden_states, attention_mask, - layer_head_mask, output_attentions: bool = False, ): # 1st residual block attention_output, attn_weights, _ = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, 
             output_attentions=output_attentions,
         )
         hidden_states = self.self_attn_layer_norm(attention_output + hidden_states)
@@ -1164,8 +1108,6 @@ def forward(
         attention_mask=None,
         encoder_hidden_states=None,
         encoder_attn_mask=None,
-        layer_head_mask=None,
-        cross_attn_layer_head_mask=None,
         extended_predict_attention_mask=None,
         main_relative_position_buckets=None,
         predict_relative_position_buckets=None,
@@ -1181,7 +1123,6 @@ def forward(
             hidden_states=hidden_states,
             past_key_values=self_attn_past_key_value,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             extended_predict_attention_mask=extended_predict_attention_mask,
             main_relative_position_buckets=main_relative_position_buckets,
             predict_relative_position_buckets=predict_relative_position_buckets,
@@ -1198,7 +1139,6 @@ def forward(
             hidden_states=hidden_states,
             key_value_states=encoder_hidden_states,
             attention_mask=encoder_attn_mask,
-            layer_head_mask=cross_attn_layer_head_mask,
             past_key_values=cross_attn_past_key_value,
             output_attentions=output_attentions,
         )
@@ -1262,7 +1202,6 @@ def forward(
         self,
         input_ids: Optional[torch.Tensor] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -1316,11 +1255,6 @@ def forward(
         encoder_hidden_states = () if output_hidden_states else None
         all_attentions = () if output_attentions else None
 
-        # check if head_mask has a correct number of layers specified if desired
-        if head_mask is not None:
-            assert head_mask.size()[0] == (len(self.layers)), (
-                f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
-            )
         for idx, encoder_layer in enumerate(self.layers):
             if output_hidden_states:
                 encoder_hidden_states = encoder_hidden_states + (hidden_states,)
@@ -1328,7 +1262,6 @@ def forward(
             layer_outputs = encoder_layer(
                 hidden_states,
                 attention_mask=extended_attention_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
                 output_attentions=output_attentions,
             )
@@ -1396,8 +1329,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         use_cache: Optional[bool] = None,
@@ -1412,11 +1343,6 @@ def forward(
         encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
             in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of
             shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
             Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
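As context for the validation being deleted in the next hunks, here is a standalone sketch (illustrative only, with invented layer and head counts) of the contract these decoders enforced: a 2-D `head_mask` had to carry one row per layer, and row `idx` was forwarded to layer `idx` as `layer_head_mask`.

```python
import torch

# Illustrative only: the removed code asserted one mask row per layer,
# then sliced the mask so each layer received a (num_heads,) gating vector.
num_layers, num_heads = 12, 16  # invented example sizes
head_mask = torch.ones(num_layers, num_heads)

if head_mask.size(0) != num_layers:
    raise ValueError(
        f"The head_mask should be specified for {num_layers} layers, but it is for {head_mask.size(0)}."
    )

for idx in range(num_layers):
    layer_head_mask = head_mask[idx]  # one (num_heads,) row dispatched to layer idx
```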
@@ -1534,13 +1460,6 @@ def forward( present_key_values = () if use_cache else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - assert attn_mask.size()[0] == (len(self.layers)), ( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) for idx, decoder_layer in enumerate(self.layers): if output_hidden_states: # grad cannot be kept because tensor is sliced @@ -1553,8 +1472,6 @@ def forward( attention_mask=extended_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attn_mask=extended_encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), extended_predict_attention_mask=extended_predict_attention_mask, main_relative_position_buckets=main_relative_position_buckets, predict_relative_position_buckets=predict_relative_position_buckets, @@ -1741,9 +1658,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1784,7 +1698,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1797,8 +1710,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, output_attentions=output_attentions, @@ -1857,9 +1768,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1906,9 +1814,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, inputs_embeds=inputs_embeds, @@ -1988,9 +1893,6 @@ def prepare_inputs_for_generation( decoder_input_ids, past_key_values=None, attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, use_cache=None, encoder_outputs=None, **kwargs, @@ -2006,9 +1908,6 @@ def prepare_inputs_for_generation( "past_key_values": past_key_values, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": 
cross_attn_head_mask, "use_cache": use_cache, } @@ -2085,8 +1984,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, @@ -2102,11 +1999,6 @@ def forward( encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. @@ -2173,8 +2065,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -2244,7 +2134,6 @@ def prepare_inputs_for_generation( input_ids, past_key_values=None, attention_mask=None, - head_mask=None, use_cache=None, **kwargs, ): @@ -2258,7 +2147,6 @@ def prepare_inputs_for_generation( return { "input_ids": input_ids, # encoder_outputs is defined. 
input_ids not needed "attention_mask": attention_mask, - "head_mask": head_mask, "past_key_values": past_key_values, "use_cache": use_cache, } diff --git a/src/transformers/models/depth_pro/modeling_depth_pro.py b/src/transformers/models/depth_pro/modeling_depth_pro.py index 86cf0206c8c9..153ddfc1f513 100644 --- a/src/transformers/models/depth_pro/modeling_depth_pro.py +++ b/src/transformers/models/depth_pro/modeling_depth_pro.py @@ -239,7 +239,6 @@ def __init__(self, config: DepthProConfig): def forward( self, pixel_values: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, ) -> list[torch.Tensor]: batch_size, num_channels, height, width = pixel_values.shape @@ -279,7 +278,6 @@ def forward( encodings = self.model( # each patch is processed as a separate batch patches, - head_mask=head_mask, # required for intermediate features output_hidden_states=self.n_intermediate_hooks > 0, ) @@ -344,7 +342,6 @@ def __init__(self, config: DepthProConfig): def forward( self, pixel_values: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -361,7 +358,6 @@ def forward( ) encodings = self.model( pixel_values=pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) @@ -410,7 +406,6 @@ def __init__(self, config: DepthProConfig): def forward( self, pixel_values: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -419,11 +414,9 @@ def forward( patch_features = self.patch_encoder( pixel_values, - head_mask=head_mask, ) image_encodings = self.image_encoder( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -646,7 +639,6 @@ def get_input_embeddings(self): def forward( self, pixel_values: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -684,7 +676,6 @@ def forward( encodings = self.encoder( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -852,7 +843,6 @@ def __init__(self, config: DepthProConfig): def forward( self, pixel_values: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: batch_size, num_channels, height, width = pixel_values.shape @@ -866,7 +856,6 @@ def forward( ) encodings = self.model( pixel_values=pixel_values, - head_mask=head_mask, ) hidden_state = encodings[0] hidden_state = self.neck(hidden_state) @@ -945,9 +934,8 @@ def forward( self, pixel_values: torch.Tensor, global_features: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: - fov_features = self.fov_encoder(pixel_values, head_mask) + fov_features = self.fov_encoder(pixel_values) global_features = self.conv(global_features) global_features = self.activation(global_features) @@ -1032,7 +1020,6 @@ def __init__(self, config, use_fov_model=None): def forward( self, pixel_values: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1093,7 +1080,6 @@ def forward( depth_pro_outputs = self.depth_pro( pixel_values=pixel_values, - 
head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=True, @@ -1108,7 +1094,6 @@ def forward( fov = self.fov_model( pixel_values=pixel_values, global_features=features_for_fov, - head_mask=head_mask, ) else: fov = None diff --git a/src/transformers/models/dia/modeling_dia.py b/src/transformers/models/dia/modeling_dia.py index 4626a37750c1..3025c8de4faa 100644 --- a/src/transformers/models/dia/modeling_dia.py +++ b/src/transformers/models/dia/modeling_dia.py @@ -493,8 +493,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -687,8 +685,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, diff --git a/src/transformers/models/dia/modular_dia.py b/src/transformers/models/dia/modular_dia.py index 398514cafe3f..432f0298430c 100644 --- a/src/transformers/models/dia/modular_dia.py +++ b/src/transformers/models/dia/modular_dia.py @@ -308,8 +308,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -502,8 +500,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, diff --git a/src/transformers/models/dinov2/modeling_dinov2.py b/src/transformers/models/dinov2/modeling_dinov2.py index f84d442a3efc..a080fcb5017a 100644 --- a/src/transformers/models/dinov2/modeling_dinov2.py +++ b/src/transformers/models/dinov2/modeling_dinov2.py @@ -202,9 +202,7 @@ def __init__(self, config: Dinov2Config): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -221,7 +219,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -277,8 +275,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -381,10 +379,9 @@ def __init__(self, config: Dinov2Config) -> None: def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: hidden_states_norm = self.norm1(hidden_states) - self_attention_output = self.attention(hidden_states_norm, head_mask) + self_attention_output = self.attention(hidden_states_norm) self_attention_output = self.layer_scale1(self_attention_output) # first residual connection @@ -408,13 +405,10 @@ def __init__(self, config: Dinov2Config): self.layer = nn.ModuleList([Dinov2Layer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None, output_hidden_states: bool = False - ) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor, output_hidden_states: bool = False) -> BaseModelOutput: all_hidden_states = [hidden_states] if output_hidden_states else None for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) if all_hidden_states: all_hidden_states.append(hidden_states) @@ -502,7 +496,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, **kwargs, ) -> BaseModelOutputWithPooling: @@ -517,18 +510,9 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] 
or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos) - encoder_outputs: BaseModelOutput = self.encoder( - embedding_output, head_mask=head_mask, output_hidden_states=output_hidden_states - ) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output, output_hidden_states=output_hidden_states) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) pooled_output = sequence_output[:, 0, :] @@ -566,7 +550,6 @@ def __init__(self, config: Dinov2Config) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> ImageClassifierOutput: @@ -576,7 +559,7 @@ def forward( config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ - outputs: BaseModelOutputWithPooling = self.dinov2(pixel_values, head_mask=head_mask, **kwargs) + outputs: BaseModelOutputWithPooling = self.dinov2(pixel_values, **kwargs) sequence_output = outputs.last_hidden_state # batch_size, sequence_length, hidden_size cls_token = sequence_output[:, 0] diff --git a/src/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py b/src/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py index 042c21babd19..89ce1d51a1be 100644 --- a/src/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py +++ b/src/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py @@ -221,9 +221,7 @@ def __init__(self, config: Dinov2WithRegistersConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -240,7 +238,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -294,8 +292,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -398,10 +396,9 @@ def __init__(self, config: Dinov2WithRegistersConfig) -> None: def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: hidden_states_norm = self.norm1(hidden_states) - self_attention_output = self.attention(hidden_states_norm, head_mask) + 
self_attention_output = self.attention(hidden_states_norm) self_attention_output = self.layer_scale1(self_attention_output) # first residual connection @@ -425,13 +422,10 @@ def __init__(self, config: Dinov2WithRegistersConfig): self.layer = nn.ModuleList([Dinov2WithRegistersLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None, output_hidden_states: bool = False - ) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor, output_hidden_states: bool = False) -> BaseModelOutput: all_hidden_states = [hidden_states] if output_hidden_states else None for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) if all_hidden_states: all_hidden_states.append(hidden_states) @@ -519,7 +513,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, **kwargs, ) -> BaseModelOutputWithPooling: @@ -534,18 +527,9 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos) - encoder_outputs: BaseModelOutput = self.encoder( - embedding_output, head_mask=head_mask, output_hidden_states=output_hidden_states - ) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output, output_hidden_states=output_hidden_states) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) pooled_output = sequence_output[:, 0, :] @@ -583,7 +567,6 @@ def __init__(self, config: Dinov2WithRegistersConfig) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> ImageClassifierOutput: @@ -594,7 +577,7 @@ def forward( `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
""" - outputs: BaseModelOutputWithPooling = self.dinov2_with_registers(pixel_values, head_mask=head_mask, **kwargs) + outputs: BaseModelOutputWithPooling = self.dinov2_with_registers(pixel_values, **kwargs) sequence_output = outputs.last_hidden_state # batch_size, sequence_length, hidden_size cls_token = sequence_output[:, 0] diff --git a/src/transformers/models/dinov2_with_registers/modular_dinov2_with_registers.py b/src/transformers/models/dinov2_with_registers/modular_dinov2_with_registers.py index 686528002b09..02c33e33d260 100644 --- a/src/transformers/models/dinov2_with_registers/modular_dinov2_with_registers.py +++ b/src/transformers/models/dinov2_with_registers/modular_dinov2_with_registers.py @@ -317,7 +317,6 @@ class Dinov2WithRegistersForImageClassification(Dinov2ForImageClassification): def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> ImageClassifierOutput: @@ -328,7 +327,7 @@ def forward( `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ - outputs: BaseModelOutputWithPooling = self.dinov2_with_registers(pixel_values, head_mask=head_mask, **kwargs) + outputs: BaseModelOutputWithPooling = self.dinov2_with_registers(pixel_values, **kwargs) sequence_output = outputs.last_hidden_state # batch_size, sequence_length, hidden_size cls_token = sequence_output[:, 0] diff --git a/src/transformers/models/dinov3_vit/modeling_dinov3_vit.py b/src/transformers/models/dinov3_vit/modeling_dinov3_vit.py index 76e365903082..c7f56ce1fa4f 100644 --- a/src/transformers/models/dinov3_vit/modeling_dinov3_vit.py +++ b/src/transformers/models/dinov3_vit/modeling_dinov3_vit.py @@ -500,7 +500,6 @@ def forward( self, pixel_values: torch.Tensor, bool_masked_pos: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: r""" @@ -514,10 +513,8 @@ def forward( position_embeddings = self.rope_embeddings(pixel_values) for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None hidden_states = layer_module( hidden_states, - attention_mask=layer_head_mask, position_embeddings=position_embeddings, ) diff --git a/src/transformers/models/dinov3_vit/modular_dinov3_vit.py b/src/transformers/models/dinov3_vit/modular_dinov3_vit.py index 0515a1a1e0bf..43c8672b8249 100644 --- a/src/transformers/models/dinov3_vit/modular_dinov3_vit.py +++ b/src/transformers/models/dinov3_vit/modular_dinov3_vit.py @@ -395,7 +395,6 @@ def forward( self, pixel_values: torch.Tensor, bool_masked_pos: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: r""" @@ -409,10 +408,8 @@ def forward( position_embeddings = self.rope_embeddings(pixel_values) for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None hidden_states = layer_module( hidden_states, - attention_mask=layer_head_mask, position_embeddings=position_embeddings, ) diff --git a/src/transformers/models/distilbert/modeling_distilbert.py b/src/transformers/models/distilbert/modeling_distilbert.py index 8f0cdcd76898..4cbbf2c4f0df 100755 --- a/src/transformers/models/distilbert/modeling_distilbert.py +++ b/src/transformers/models/distilbert/modeling_distilbert.py @@ -173,7 +173,6 @@ def forward( key: torch.Tensor, value: torch.Tensor, mask: 
torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, ...]: """ @@ -218,10 +217,6 @@ def unshape(x: torch.Tensor) -> torch.Tensor: weights = nn.functional.softmax(scores, dim=-1) # (bs, n_heads, q_length, k_length) weights = self.dropout(weights) # (bs, n_heads, q_length, k_length) - # Mask heads if we want to - if head_mask is not None: - weights = weights * head_mask - context = torch.matmul(weights, v) # (bs, n_heads, q_length, dim_per_head) context = unshape(context) # (bs, q_length, dim) context = self.out_lin(context) # (bs, q_length, dim) @@ -253,7 +248,6 @@ def forward( key: torch.Tensor, value: torch.Tensor, mask: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, ...]: """ @@ -344,7 +338,6 @@ def forward( key: torch.Tensor, value: torch.Tensor, mask: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, ...]: """ @@ -358,10 +351,10 @@ def forward( weights: torch.tensor(bs, n_heads, seq_length, seq_length) Attention weights context: torch.tensor(bs, seq_length, dim) Contextualized layer. Optional: only if `output_attentions=True` """ - if output_attentions or head_mask is not None: + if output_attentions: logger.warning_once( "DistilBertSdpaAttention is used but `torch.nn.functional.scaled_dot_product_attention` does not support" - " `output_attentions=True` or `head_mask`. Falling back to the manual attention implementation, but specifying" + " `output_attentions=True`. Falling back to the manual attention implementation, but specifying" " the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be" ' removed using the argument `attn_implementation="eager"` when loading the model.' 
) @@ -370,7 +363,6 @@ def forward( key, value, mask, - head_mask, output_attentions, ) @@ -450,7 +442,6 @@ def forward( self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, ...]: """ @@ -468,7 +459,6 @@ def forward( key=x, value=x, mask=attn_mask, - head_mask=head_mask, output_attentions=output_attentions, ) if output_attentions: @@ -501,7 +491,6 @@ def forward( self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: Optional[bool] = None, @@ -531,7 +520,6 @@ def forward( layer_outputs = layer_module( hidden_state, attn_mask, - head_mask[i], output_attentions, ) @@ -664,7 +652,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -701,10 +688,6 @@ def forward( device = input_ids.device if input_ids is not None else inputs_embeds.device - head_mask_is_none = head_mask is None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embeddings = self.embeddings(input_ids, inputs_embeds) # (bs, seq_length, dim) if self.config._attn_implementation == "flash_attention_2": @@ -713,7 +696,7 @@ def forward( if attention_mask is None: attention_mask = torch.ones(input_shape, device=device) # (bs, seq_length) - if self.config._attn_implementation == "sdpa" and head_mask_is_none and not output_attentions: + if self.config._attn_implementation == "sdpa" and not output_attentions: attention_mask = _prepare_4d_attention_mask_for_sdpa( attention_mask, embeddings.dtype, tgt_len=input_shape[1] ) @@ -721,7 +704,6 @@ def forward( return self.transformer( x=embeddings, attn_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -782,7 +764,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -811,7 +792,6 @@ def forward( dlbrt_output = self.distilbert( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -884,7 +864,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -902,7 +881,6 @@ def forward( distilbert_output = self.distilbert( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -990,7 +968,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, 
end_positions: Optional[torch.Tensor] = None, @@ -1016,7 +993,6 @@ def forward( distilbert_output = self.distilbert( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1098,7 +1074,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1114,7 +1089,6 @@ def forward( outputs = self.distilbert( input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1181,7 +1155,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1240,7 +1213,6 @@ def forward( outputs = self.distilbert( input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/doge/modeling_doge.py b/src/transformers/models/doge/modeling_doge.py index 5822cad62017..f92371fdaba6 100644 --- a/src/transformers/models/doge/modeling_doge.py +++ b/src/transformers/models/doge/modeling_doge.py @@ -186,7 +186,6 @@ def flex_attention_forward( attention_mask: Union[torch.Tensor, "BlockMask"], scaling: Optional[float] = None, softcap: Optional[float] = None, - head_mask: Optional[torch.Tensor] = None, **kwargs, ) -> tuple[torch.Tensor, torch.Tensor]: block_mask = None @@ -204,8 +203,6 @@ def score_mod(score, batch_idx, head_idx, q_idx, kv_idx): score = softcap * torch.tanh(score / softcap) if causal_mask is not None: score = score + causal_mask[batch_idx][head_idx][q_idx][kv_idx] - if head_mask is not None: - score = score + head_mask[batch_idx][head_idx][0][0] return score attn_output, attention_weights = compile_friendly_flex_attention( diff --git a/src/transformers/models/doge/modular_doge.py b/src/transformers/models/doge/modular_doge.py index c4c95e627376..0d1a1e06afb4 100644 --- a/src/transformers/models/doge/modular_doge.py +++ b/src/transformers/models/doge/modular_doge.py @@ -282,7 +282,6 @@ def flex_attention_forward( attention_mask: Union[torch.Tensor, "BlockMask"], scaling: Optional[float] = None, softcap: Optional[float] = None, - head_mask: Optional[torch.Tensor] = None, **kwargs, ) -> tuple[torch.Tensor, torch.Tensor]: block_mask = None @@ -300,8 +299,6 @@ def score_mod(score, batch_idx, head_idx, q_idx, kv_idx): score = softcap * torch.tanh(score / softcap) if causal_mask is not None: score = score + causal_mask[batch_idx][head_idx][q_idx][kv_idx] - if head_mask is not None: - score = score + head_mask[batch_idx][head_idx][0][0] return score attn_output, attention_weights = compile_friendly_flex_attention( diff --git a/src/transformers/models/donut/modeling_donut_swin.py b/src/transformers/models/donut/modeling_donut_swin.py index c541b960fd2e..d388e386ae49 100644 --- a/src/transformers/models/donut/modeling_donut_swin.py +++ b/src/transformers/models/donut/modeling_donut_swin.py @@ -403,7 +403,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: 
Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: batch_size, dim, num_channels = hidden_states.shape @@ -442,10 +441,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) @@ -500,10 +495,9 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: - self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions) + self_outputs = self.self(hidden_states, attention_mask, output_attentions) attention_output = self.output(self_outputs[0], hidden_states) outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them return outputs @@ -600,7 +594,6 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, always_partition: Optional[bool] = False, ) -> tuple[torch.Tensor, torch.Tensor]: @@ -633,9 +626,7 @@ def forward( height_pad, width_pad, dtype=hidden_states.dtype, device=hidden_states_windows.device ) - attention_outputs = self.attention( - hidden_states_windows, attn_mask, head_mask, output_attentions=output_attentions - ) + attention_outputs = self.attention(hidden_states_windows, attn_mask, output_attentions=output_attentions) attention_output = attention_outputs[0] @@ -696,17 +687,12 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, always_partition: Optional[bool] = False, ) -> tuple[torch.Tensor]: height, width = input_dimensions for i, layer_module in enumerate(self.blocks): - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module( - hidden_states, input_dimensions, layer_head_mask, output_attentions, always_partition - ) + layer_outputs = layer_module(hidden_states, input_dimensions, output_attentions, always_partition) hidden_states = layer_outputs[0] @@ -753,7 +739,6 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, output_hidden_states_before_downsampling: Optional[bool] = False, @@ -773,11 +758,7 @@ def forward( all_reshaped_hidden_states += (reshaped_hidden_state,) for i, layer_module in enumerate(self.layers): - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module( - hidden_states, input_dimensions, layer_head_mask, output_attentions, always_partition - ) + layer_outputs = layer_module(hidden_states, input_dimensions, output_attentions, always_partition) hidden_states = layer_outputs[0] hidden_states_before_downsampling = layer_outputs[1] @@ -882,7 +863,6 @@ def forward( self, pixel_values: Optional[torch.FloatTensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: 
Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, @@ -901,13 +881,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, len(self.config.depths)) - embedding_output, input_dimensions = self.embeddings( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) @@ -915,7 +888,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, input_dimensions, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -976,7 +948,6 @@ def __init__(self, config): def forward( self, pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -993,7 +964,6 @@ def forward( outputs = self.donut( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, diff --git a/src/transformers/models/dpt/modeling_dpt.py b/src/transformers/models/dpt/modeling_dpt.py index 7be71fd3ceb4..d43279307aeb 100755 --- a/src/transformers/models/dpt/modeling_dpt.py +++ b/src/transformers/models/dpt/modeling_dpt.py @@ -320,9 +320,7 @@ def __init__(self, config: DPTConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -339,7 +337,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -395,8 +393,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -445,9 +443,9 @@ def __init__(self, config: DPTConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: 
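# Descriptive note: a standard pre-norm ViT block (layernorm_before -> self-attention -> residual add,
# with layernorm_after and the MLP following); hidden_states is now its only input.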
hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -470,13 +468,10 @@ def __init__(self, config: DPTConfig): self.layer = nn.ModuleList([DPTViTLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None, output_hidden_states: bool = False - ) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor, output_hidden_states: bool = False) -> BaseModelOutput: all_hidden_states = [hidden_states] if output_hidden_states else None for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) if all_hidden_states: all_hidden_states.append(hidden_states) @@ -811,25 +806,17 @@ class PreTrainedModel def forward( self, pixel_values: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, output_hidden_states: Optional[bool] = None, **kwargs, ) -> BaseModelOutputWithPoolingAndIntermediateActivations: if output_hidden_states is None: output_hidden_states = self.config.output_hidden_states - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output: BaseModelOutputWithIntermediateActivations = self.embeddings(pixel_values) embedding_last_hidden_states = embedding_output.last_hidden_states encoder_outputs: BaseModelOutput = self.encoder( - embedding_last_hidden_states, head_mask=head_mask, output_hidden_states=output_hidden_states + embedding_last_hidden_states, output_hidden_states=output_hidden_states ) sequence_output = encoder_outputs.last_hidden_state @@ -987,7 +974,6 @@ def __init__(self, config): def forward( self, pixel_values: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_hidden_states: Optional[bool] = None, **kwargs, @@ -1040,7 +1026,7 @@ def forward( outputs = self.backbone.forward_with_filtered_kwargs(pixel_values, output_hidden_states=True, **kwargs) hidden_states = outputs.feature_maps else: - outputs = self.dpt(pixel_values, head_mask=head_mask, output_hidden_states=True, **kwargs) + outputs = self.dpt(pixel_values, output_hidden_states=True, **kwargs) hidden_states = outputs.hidden_states # only keep certain features based on config.backbone_out_indices # note that the hidden_states also include the initial embeddings @@ -1137,7 +1123,6 @@ def __init__(self, config: DPTConfig): def forward( self, pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_hidden_states: Optional[bool] = None, **kwargs, @@ -1171,7 +1156,7 @@ def forward( raise ValueError("The number of labels should be greater than one") outputs: BaseModelOutputWithPoolingAndIntermediateActivations = self.dpt( - pixel_values, head_mask=head_mask, output_hidden_states=True, **kwargs + pixel_values, 
output_hidden_states=True, **kwargs ) hidden_states = outputs.hidden_states diff --git a/src/transformers/models/electra/modeling_electra.py b/src/transformers/models/electra/modeling_electra.py index 100e48034abb..921e545afc35 100644 --- a/src/transformers/models/electra/modeling_electra.py +++ b/src/transformers/models/electra/modeling_electra.py @@ -132,7 +132,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -173,9 +172,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -218,7 +214,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -262,7 +257,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -307,7 +301,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -355,7 +348,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -415,7 +407,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -427,7 +418,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -493,7 +483,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -503,7 +492,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -520,7 +508,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -550,7 +537,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: 
Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -559,12 +545,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -702,7 +685,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -761,17 +743,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -849,8 +823,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -875,8 +847,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
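# with cross_attn_head_mask gone, SDPA only needs the padding mask expanded to 4D below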
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -1047,7 +1017,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1063,7 +1032,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1127,7 +1095,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1168,7 +1135,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1231,7 +1197,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1247,7 +1212,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1300,7 +1264,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1314,7 +1277,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1360,7 +1322,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1371,7 +1332,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1430,7 +1390,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1482,7 +1441,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1541,7 +1499,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = 
None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1581,7 +1538,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, diff --git a/src/transformers/models/eomt/modeling_eomt.py b/src/transformers/models/eomt/modeling_eomt.py index e7e1624c1406..29aa667d9b8a 100644 --- a/src/transformers/models/eomt/modeling_eomt.py +++ b/src/transformers/models/eomt/modeling_eomt.py @@ -891,10 +891,10 @@ def __init__(self, config: EomtConfig) -> None: def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, + attention_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: hidden_states_norm = self.norm1(hidden_states) - self_attention_output, _ = self.attention(hidden_states_norm, head_mask) + self_attention_output, _ = self.attention(hidden_states_norm, attention_mask) self_attention_output = self.layer_scale1(self_attention_output) # first residual connection diff --git a/src/transformers/models/eomt/modular_eomt.py b/src/transformers/models/eomt/modular_eomt.py index 17fb96ac60aa..807a130c764a 100644 --- a/src/transformers/models/eomt/modular_eomt.py +++ b/src/transformers/models/eomt/modular_eomt.py @@ -297,10 +297,10 @@ class EomtLayer(Dinov2Layer): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, + attention_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: hidden_states_norm = self.norm1(hidden_states) - self_attention_output, _ = self.attention(hidden_states_norm, head_mask) + self_attention_output, _ = self.attention(hidden_states_norm, attention_mask) self_attention_output = self.layer_scale1(self_attention_output) # first residual connection diff --git a/src/transformers/models/ernie/modeling_ernie.py b/src/transformers/models/ernie/modeling_ernie.py index 3e94cf71d1e6..01a5d9dddd2a 100644 --- a/src/transformers/models/ernie/modeling_ernie.py +++ b/src/transformers/models/ernie/modeling_ernie.py @@ -143,7 +143,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -184,9 +183,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -228,7 +224,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -272,7 +267,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -316,7 +310,6 @@ def forward( hidden_states: torch.Tensor, 
encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -364,7 +357,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -422,7 +414,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -434,7 +425,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -497,7 +487,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -507,7 +496,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -524,7 +512,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -608,7 +595,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -617,12 +603,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -727,7 +710,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -830,17 +812,9 @@ def forward( ) encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( 
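# Descriptive note: head_mask is no longer threaded through this encoder call; callers that
# previously passed one must drop it, as per-head masking is not supported on this path anymore.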
embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -920,8 +894,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -945,8 +917,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -1040,7 +1010,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, next_sentence_label: Optional[torch.Tensor] = None, @@ -1085,7 +1054,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1156,7 +1124,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1186,7 +1153,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1254,7 +1220,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1278,7 +1243,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1360,7 +1324,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1411,7 +1374,6 @@ def forward( 
token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1465,7 +1427,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1487,7 +1448,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1553,7 +1513,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1611,7 +1570,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1661,7 +1619,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1681,7 +1638,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1726,7 +1682,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1745,7 +1700,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/ernie/modular_ernie.py b/src/transformers/models/ernie/modular_ernie.py index 30261966b3d0..eba860e93185 100644 --- a/src/transformers/models/ernie/modular_ernie.py +++ b/src/transformers/models/ernie/modular_ernie.py @@ -212,7 +212,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -315,17 +314,9 @@ def forward( ) encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, 
attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -356,8 +347,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -382,8 +371,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -422,7 +409,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, next_sentence_label: Optional[torch.Tensor] = None, @@ -467,7 +453,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -502,7 +487,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -532,7 +516,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -577,7 +560,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -601,7 +583,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -635,7 +616,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -686,7 +666,6 @@ def forward( token_type_ids=token_type_ids, 
task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -719,7 +698,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -741,7 +719,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -793,7 +770,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -851,7 +827,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -886,7 +861,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -906,7 +880,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -940,7 +913,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, task_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -959,7 +931,6 @@ def forward( token_type_ids=token_type_ids, task_type_ids=task_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/esm/modeling_esm.py b/src/transformers/models/esm/modeling_esm.py index 3524b221a0ec..7b5674f31be5 100755 --- a/src/transformers/models/esm/modeling_esm.py +++ b/src/transformers/models/esm/modeling_esm.py @@ -259,7 +259,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: float, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ): # ESM applies relative position embeddings and we don't copy from Llama @@ -292,9 +291,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -340,7 +336,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: 
Unpack[TransformersKwargs], @@ -382,7 +377,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, - head_mask=head_mask, **kwargs, ) @@ -433,7 +427,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], @@ -442,7 +435,6 @@ def forward( attn_output, _ = self.self( hidden_states_ln, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -495,7 +487,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], @@ -503,7 +494,6 @@ def forward( attention_output = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, **kwargs, ) @@ -517,7 +507,6 @@ def forward( attention_output = self.crossattention( attention_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -546,17 +535,14 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], ): for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None hidden_states = layer_module( hidden_states, attention_mask=attention_mask, - head_mask=layer_head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -682,7 +668,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -735,17 +720,9 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( inputs_embeds, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, **kwargs, @@ -803,7 +780,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -821,7 +797,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -895,7 +870,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = 
None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -911,7 +885,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, **kwargs, ) @@ -972,7 +945,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -986,7 +958,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, **kwargs, ) diff --git a/src/transformers/models/evolla/modeling_evolla.py b/src/transformers/models/evolla/modeling_evolla.py index 8bb5713d1764..75db8a22a022 100644 --- a/src/transformers/models/evolla/modeling_evolla.py +++ b/src/transformers/models/evolla/modeling_evolla.py @@ -227,7 +227,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: float, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ): # EVOLLA_SA_PROT applies relative position embeddings and we don't copy from Llama @@ -260,9 +259,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -308,7 +304,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -350,7 +345,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, - head_mask=head_mask, **kwargs, ) @@ -401,7 +395,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], @@ -410,7 +403,6 @@ def forward( attn_output, _ = self.self( hidden_states_ln, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -470,7 +462,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], @@ -478,7 +469,6 @@ def forward( attention_output = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, **kwargs, ) @@ -492,7 +482,6 @@ def forward( attention_output = self.crossattention( attention_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -521,17 +510,14 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], ): for i, layer_module in 
enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None hidden_states = layer_module( hidden_states, attention_mask=attention_mask, - head_mask=layer_head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, diff --git a/src/transformers/models/falcon/modeling_falcon.py b/src/transformers/models/falcon/modeling_falcon.py index dac4dc658b19..b32981c51353 100644 --- a/src/transformers/models/falcon/modeling_falcon.py +++ b/src/transformers/models/falcon/modeling_falcon.py @@ -290,7 +290,6 @@ def forward( attention_mask: torch.Tensor, position_ids: Optional[torch.LongTensor] = None, layer_past: Optional[Cache] = None, - head_mask: Optional[torch.Tensor] = None, use_cache: bool = False, output_attentions: bool = False, cache_position: Optional[torch.LongTensor] = None, @@ -353,7 +352,7 @@ def forward( attention_scores /= math.sqrt(self.head_dim) attention_scores = F.softmax(attention_scores + attention_mask, dim=-1, dtype=hidden_states.dtype) - # It is unclear why neither dropout nor head_mask is applied here (while it is with alibi). + # It is unclear why dropout is not applied here (while it is with alibi). attn_output = attention_scores @ value_layer attn_output = attn_output.view(batch_size, self.num_heads, query_length, self.head_dim) @@ -365,7 +364,7 @@ def forward( return attn_output, attention_scores else: - if self.config._attn_implementation == "sdpa" and not output_attentions and head_mask is None: + if self.config._attn_implementation == "sdpa" and not output_attentions: # We dispatch to SDPA's Flash Attention or Efficient kernels via this if statement instead of an # inline conditional assignment to support both torch.compile's `dynamic=True` and `fullgraph=True` is_causal = self.is_causal and attention_mask is None and query_length > 1 @@ -400,9 +399,6 @@ def forward( # [batch_size, num_heads, q_length, kv_length] attention_probs = self.attention_dropout(attention_probs) - if head_mask is not None: - attention_probs = attention_probs * head_mask - # change view [batch_size, num_heads, q_length, kv_length] attention_probs_reshaped = attention_probs.view(batch_size, self.num_heads, query_length, kv_length) @@ -439,7 +435,6 @@ def forward( attention_mask: torch.Tensor, position_ids: Optional[torch.LongTensor] = None, layer_past: Optional[Cache] = None, - head_mask: Optional[torch.Tensor] = None, use_cache: bool = False, output_attentions: bool = False, cache_position: Optional[torch.LongTensor] = None, @@ -582,7 +577,6 @@ def forward( attention_mask: torch.Tensor, position_ids: Optional[torch.LongTensor] = None, layer_past: Optional[Union[Cache, tuple[torch.Tensor, torch.Tensor]]] = None, - head_mask: Optional[torch.Tensor] = None, use_cache: bool = False, output_attentions: bool = False, cache_position: Optional[torch.LongTensor] = None, @@ -604,7 +598,6 @@ def forward( attention_mask=attention_mask, position_ids=position_ids, alibi=alibi, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -716,7 +709,6 @@ def forward( past_key_values: Optional[Union[Cache, tuple[tuple[torch.Tensor, torch.Tensor], ...]]] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -783,14 +775,9 @@ def forward( position_ids = 
cache_position.unsqueeze(0) causal_mask = self._update_causal_mask( - attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions, head_mask, alibi + attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions, alibi ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape batch_size x num_heads x N x N - # head_mask has shape n_layer x batch x num_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) hidden_states = inputs_embeds # create position embeddings to be shared across the decoder layers @@ -808,7 +795,6 @@ def forward( layer_past=past_key_values, attention_mask=causal_mask, position_ids=position_ids, - head_mask=head_mask[i], use_cache=use_cache, output_attentions=output_attentions, alibi=alibi, @@ -845,7 +831,6 @@ def _update_causal_mask( cache_position: torch.Tensor, past_key_values: Cache, output_attentions: bool, - head_mask: torch.Tensor, alibi: torch.Tensor, ): # TODO: As of torch==2.2.0, the `attention_mask` passed to the model in `generate` is 2D and of dynamic length even when the static @@ -869,7 +854,6 @@ def _update_causal_mask( self.config._attn_implementation == "sdpa" and not using_static_cache and not output_attentions - and head_mask is None and alibi is None ): if AttentionMaskConverter._ignore_causal_mask_sdpa( @@ -904,7 +888,7 @@ def _update_causal_mask( ) # We take care to integrate alibi bias in the causal_mask here - if head_mask is None and alibi is not None: + if alibi is not None: alibi = alibi.reshape(batch_size, -1, *alibi.shape[1:]) causal_mask = torch.masked_fill( alibi / math.sqrt(self.config.hidden_size // self.num_heads), @@ -1008,7 +992,6 @@ def forward( past_key_values: Optional[Union[Cache, tuple[tuple[torch.Tensor, torch.Tensor], ...]]] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -1044,7 +1027,6 @@ def forward( past_key_values=past_key_values, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1109,7 +1091,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -1141,7 +1122,6 @@ def forward( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1235,7 +1215,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -1267,7 +1246,6 @@ def forward( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1314,7 +1292,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, 
attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1340,7 +1317,6 @@ def forward( outputs = self.transformer( input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/flaubert/modeling_flaubert.py b/src/transformers/models/flaubert/modeling_flaubert.py index 91c6990b77b9..5812aa457cbc 100644 --- a/src/transformers/models/flaubert/modeling_flaubert.py +++ b/src/transformers/models/flaubert/modeling_flaubert.py @@ -116,7 +116,6 @@ def forward( mask, kv=None, cache=None, - head_mask=None, output_attentions=False, cache_position=None, ): @@ -168,10 +167,6 @@ def forward( weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) # (bs, n_heads, qlen, klen) weights = nn.functional.dropout(weights, p=self.dropout, training=self.training) # (bs, n_heads, qlen, klen) - # Mask heads if we want to - if head_mask is not None: - weights = weights * head_mask - context = torch.matmul(weights, v) # (bs, n_heads, qlen, head_dim) context = context.transpose(1, 2).contiguous().view(bs, -1, self.n_heads * self.head_dim) @@ -814,7 +809,6 @@ def forward( position_ids: Optional[torch.LongTensor] = None, lengths: Optional[torch.LongTensor] = None, cache: Optional[dict[str, torch.FloatTensor]] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -900,9 +894,6 @@ def forward( assert langs.size() == (bs, slen) # (slen, bs) # langs = langs.transpose(0, 1) - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.n_layers) - # do not recompute cached elements if cache is not None and input_ids is not None: _slen = slen - cache.get_seq_length() @@ -945,7 +936,6 @@ def forward( tensor, attn_mask, cache=cache, - head_mask=head_mask[i], output_attentions=output_attentions, cache_position=cache_position, ) @@ -957,7 +947,7 @@ def forward( tensor = self.layer_norm1[i](tensor) else: tensor_normalized = self.layer_norm1[i](tensor) - attn_outputs = self.attentions[i](tensor_normalized, attn_mask, cache=cache, head_mask=head_mask[i]) + attn_outputs = self.attentions[i](tensor_normalized, attn_mask, cache=cache) attn = attn_outputs[0] if output_attentions: attentions = attentions + (attn_outputs[1],) @@ -1032,7 +1022,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1072,7 +1061,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1122,7 +1110,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] =
None, @@ -1160,7 +1147,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1229,7 +1215,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1265,7 +1250,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1321,7 +1305,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1356,7 +1339,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1457,7 +1439,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1521,7 +1502,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1578,7 +1558,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1659,7 +1638,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/flava/modeling_flava.py b/src/transformers/models/flava/modeling_flava.py index c48f2ca1279f..5d63b5e132ad 100644 --- a/src/transformers/models/flava/modeling_flava.py +++ b/src/transformers/models/flava/modeling_flava.py @@ -444,7 +444,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: batch_size, seq_length, _ = hidden_states.shape @@ -479,10 +478,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
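# (Editor's note: each entry zeroed by this dropout removes one whole key token
#  from one query's attention distribution in that head -- the "dropping out
#  entire tokens to attend to" described above -- rather than perturbing values
#  inside a token's representation.)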
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -541,11 +536,10 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: self_outputs = self.attention( - hidden_states, attention_mask=attention_mask, head_mask=head_mask, output_attentions=output_attentions + hidden_states, attention_mask=attention_mask, output_attentions=output_attentions ) attention_output = self.output(self_outputs[0], hidden_states) @@ -606,13 +600,11 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: self_attention_outputs = self.attention( self.layernorm_before(hidden_states), # in ViT, layernorm is applied before self-attention attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -644,7 +636,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -656,9 +647,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module(hidden_states, attention_mask, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, attention_mask, output_attentions) hidden_states = layer_outputs[0] @@ -768,7 +757,6 @@ def forward( bool_masked_pos: Optional[torch.BoolTensor] = None, interpolate_pos_encoding: Optional[bool] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -786,13 +774,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) @@ -800,7 +781,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -863,7 +843,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: 
Optional[bool] = None, @@ -894,12 +873,6 @@ def forward( if attention_mask is None: attention_mask = torch.ones(input_shape, device=input_ids.device) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) extended_attention_mask: torch.Tensor = self.get_extended_attention_mask( attention_mask, input_shape, input_ids.device ) @@ -913,7 +886,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -971,7 +943,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -996,12 +967,6 @@ def forward( if attention_mask is None: attention_mask = torch.ones((batch_size, seq_length), device=hidden_states.device) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) extended_attention_mask: torch.Tensor = self.get_extended_attention_mask( attention_mask, (batch_size, seq_length), hidden_states.device ) @@ -1009,7 +974,6 @@ def forward( encoder_outputs = self.encoder( hidden_states, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1136,7 +1100,6 @@ def get_image_features( bool_masked_pos: Optional[torch.BoolTensor] = None, interpolate_pos_encoding: Optional[bool] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, ) -> torch.FloatTensor: r""" bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, image_num_patches)`): @@ -1169,7 +1132,6 @@ def get_image_features( pixel_values=pixel_values, bool_masked_pos=bool_masked_pos, attention_mask=attention_mask, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, ) pooled_output = image_outputs.last_hidden_state diff --git a/src/transformers/models/florence2/modeling_florence2.py b/src/transformers/models/florence2/modeling_florence2.py index 64947dea1285..9d1cb837e8ce 100644 --- a/src/transformers/models/florence2/modeling_florence2.py +++ b/src/transformers/models/florence2/modeling_florence2.py @@ -201,7 +201,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -213,9 +212,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = 
torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -711,11 +707,8 @@ def forward( input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, @@ -749,7 +742,6 @@ def forward( encoder_outputs = self.language_model.encoder( attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -766,8 +758,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -868,9 +858,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -933,9 +920,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, diff --git a/src/transformers/models/florence2/modular_florence2.py b/src/transformers/models/florence2/modular_florence2.py index 102cff29d800..949e7f23f559 100644 --- a/src/transformers/models/florence2/modular_florence2.py +++ b/src/transformers/models/florence2/modular_florence2.py @@ -1545,11 +1545,8 @@ def forward( input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, @@ -1583,7 +1580,6 @@ def forward( encoder_outputs = self.language_model.encoder( attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1600,8 +1596,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, 
use_cache=use_cache, @@ -1652,9 +1646,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1717,9 +1708,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, diff --git a/src/transformers/models/fsmt/modeling_fsmt.py b/src/transformers/models/fsmt/modeling_fsmt.py index 85618847dbf7..7ac29b403905 100644 --- a/src/transformers/models/fsmt/modeling_fsmt.py +++ b/src/transformers/models/fsmt/modeling_fsmt.py @@ -297,7 +297,7 @@ def __init__(self, config: FSMTConfig): self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim) self.final_layer_norm = LayerNorm(self.embed_dim) - def forward(self, x, encoder_padding_mask, layer_head_mask, output_attentions=False): + def forward(self, x, encoder_padding_mask, output_attentions=False): """ Args: x (`torch.Tensor`): input to the layer of shape *(seq_len, batch, embed_dim)* @@ -305,8 +305,6 @@ def forward(self, x, encoder_padding_mask, layer_head_mask, output_attentions=Fa *(batch, src_len)* where padding elements are indicated by `1`. for t_tgt, t_src is excluded (or masked out), =0 means it is included in attention - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - *(config.encoder_attention_heads,)*. Returns: encoded output of shape *(seq_len, batch, embed_dim)* @@ -316,7 +314,6 @@ def forward(self, x, encoder_padding_mask, layer_head_mask, output_attentions=Fa query=x, key=x, key_padding_mask=encoder_padding_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) x = nn.functional.dropout(x, p=self.dropout, training=self.training) @@ -359,7 +356,6 @@ def forward( input_ids: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -371,11 +367,6 @@ def forward( attention_mask (`torch.LongTensor`): indicating which indices are padding tokens inputs_embeds (`torch.FloatTensor`): embedding vectors of shape *(batch, src_len, embed_dim)* - head_mask (`torch.Tensor` of shape `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Returns: BaseModelOutput or Tuple comprised of: @@ -416,11 +407,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - assert head_mask.size()[0] == (len(self.layers)), ( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." 
- ) for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: x = x.transpose(0, 1) # T x B x C -> B x T x C @@ -434,7 +420,6 @@ def forward( x, attn = encoder_layer( x, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -487,8 +472,6 @@ def forward( encoder_attn_mask=None, layer_state=None, causal_mask=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, decoder_padding_mask=None, output_attentions=False, cache_position=None, @@ -502,7 +485,6 @@ def forward( layer_state=layer_state, # adds keys to layer state key_padding_mask=decoder_padding_mask, attn_mask=causal_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -518,7 +500,6 @@ def forward( key=encoder_hidden_states, key_padding_mask=encoder_attn_mask, layer_state=layer_state, # mutates layer state - layer_head_mask=cross_attn_layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -583,9 +564,7 @@ def forward( encoder_padding_mask: torch.Tensor, decoder_padding_mask: torch.Tensor, decoder_causal_mask: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = False, output_attentions: Optional[bool] = False, @@ -604,17 +583,6 @@ def forward( encoder-side attention encoder_padding_mask: for ignoring pad tokens past_key_values (dict or None): dictionary used for storing state during generation - head_mask (`torch.Tensor` of shape `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Returns: BaseModelOutputWithPast or tuple: @@ -671,13 +639,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attns = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - assert attn_mask.size()[0] == (len(self.layers)), ( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -696,8 +657,6 @@ def forward( decoder_padding_mask=decoder_padding_mask, layer_state=past_key_values, causal_mask=decoder_causal_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), output_attentions=output_attentions, cache_position=cache_position, ) @@ -773,7 +732,6 @@ def forward( key_padding_mask: Optional[Tensor] = None, layer_state: Optional[Cache] = None, attn_mask: Optional[Tensor] = None, - layer_head_mask: Optional[Tensor] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[Tensor, Optional[Tensor]]: @@ -847,13 +805,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_heads,), ( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # make sure that attn_weights are included in graph attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) @@ -918,9 +869,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -945,12 +893,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
""" if decoder_input_ids is None: use_cache = False @@ -982,7 +924,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1003,8 +944,6 @@ def forward( decoder_padding_mask, decoder_causal_mask=causal_mask, inputs_embeds=decoder_inputs_embeds, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1064,9 +1003,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1092,12 +1028,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored @@ -1132,9 +1062,6 @@ def forward( decoder_inputs_embeds=decoder_inputs_embeds, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, diff --git a/src/transformers/models/funnel/modeling_funnel.py b/src/transformers/models/funnel/modeling_funnel.py index d782be0856c8..1b477dbb551a 100644 --- a/src/transformers/models/funnel/modeling_funnel.py +++ b/src/transformers/models/funnel/modeling_funnel.py @@ -760,7 +760,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -789,7 +788,6 @@ def forward( if token_type_ids is None: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - # TODO: deal with head_mask inputs_embeds = self.embeddings(input_ids, inputs_embeds=inputs_embeds) encoder_outputs = self.encoder( @@ -856,7 +854,6 @@ def forward( if token_type_ids is None: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - # TODO: deal with head_mask inputs_embeds = self.embeddings(input_ids, inputs_embeds=inputs_embeds) encoder_outputs = self.encoder( diff --git a/src/transformers/models/git/modeling_git.py b/src/transformers/models/git/modeling_git.py index 82a1d5e451ca..c1e823767135 100644 --- a/src/transformers/models/git/modeling_git.py +++ b/src/transformers/models/git/modeling_git.py @@ -154,7 +154,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, pixel_values_present: Optional[bool] = False, @@ -222,10 +221,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -288,7 +283,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, pixel_values_present: Optional[bool] = False, @@ -296,7 +290,6 @@ def forward( attn_output, self_attn_weights = self.self( hidden_states, attention_mask, - head_mask, past_key_values, output_attentions, pixel_values_present, @@ -350,7 +343,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, pixel_values_present: Optional[bool] = False, @@ -359,7 +351,6 @@ def forward( attention_output, self_attention_weights = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, past_key_values=past_key_values, pixel_values_present=pixel_values_present, @@ -387,7 +378,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.FloatTensor]]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = False, @@ -411,12 +401,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, past_key_values, output_attentions, pixel_values_present, @@ -1035,7 +1022,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, past_key_values: Optional[Union[Cache, list[torch.FloatTensor]]] = None, use_cache: Optional[bool] = None, @@ -1093,13 +1079,6 @@ def forward( else past_key_values.get_seq_length() ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - projected_visual_features = None if pixel_values is not None: if pixel_values.ndim == 4: @@ -1174,7 +1153,6 @@ def forward( encoder_outputs = self.encoder( hidden_states, attention_mask=combined_attention_mask, - head_mask=head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1225,7 +1203,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, past_key_values: Optional[Union[Cache, list[torch.Tensor]]] = None, @@ -1376,7 +1353,6 @@ def forward( attention_mask=attention_mask, 
position_ids=position_ids, pixel_values=pixel_values, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, diff --git a/src/transformers/models/gpt2/modeling_gpt2.py b/src/transformers/models/gpt2/modeling_gpt2.py index 861bed81c820..b3e4bb51f408 100644 --- a/src/transformers/models/gpt2/modeling_gpt2.py +++ b/src/transformers/models/gpt2/modeling_gpt2.py @@ -50,7 +50,7 @@ logger = logging.get_logger(__name__) -def eager_attention_forward(module, query, key, value, attention_mask, head_mask=None, **kwargs): +def eager_attention_forward(module, query, key, value, attention_mask, **kwargs): attn_weights = torch.matmul(query, key.transpose(-1, -2)) if module.scale_attn_weights: @@ -83,10 +83,6 @@ def eager_attention_forward(module, query, key, value, attention_mask, head_mask attn_weights = attn_weights.type(value.dtype) attn_weights = module.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2) @@ -153,7 +149,7 @@ def prune_heads(self, heads): self.num_heads = self.num_heads - len(heads) self.pruned_heads = self.pruned_heads.union(heads) - def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, head_mask=None): + def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None): # Use `torch.baddbmm` (a bit more efficient w/ alpha param for scaling -- from Megatron-LM) bsz, num_heads, q_seq_len, dk = query.size() _, _, k_seq_len, _ = key.size() @@ -197,10 +193,6 @@ def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, hea attn_weights = attn_weights.type(value.dtype) attn_weights = self.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2) @@ -213,7 +205,6 @@ def forward( past_key_values: Optional[Cache] = None, cache_position: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -279,7 +270,7 @@ def forward( if using_eager and self.reorder_and_upcast_attn: attn_output, attn_weights = self._upcast_and_reordered_attn( - query_states, key_states, value_states, attention_mask, head_mask + query_states, key_states, value_states, attention_mask ) else: attn_output, attn_weights = attention_interface( @@ -288,7 +279,6 @@ def forward( key_states, value_states, attention_mask, - head_mask=head_mask, dropout=self.attn_dropout.p if self.training else 0.0, is_causal=is_causal, **kwargs, @@ -341,7 +331,6 @@ def forward( past_key_values: Optional[Cache] = None, cache_position: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, @@ -355,7 +344,6 @@ def forward( past_key_values=past_key_values, cache_position=cache_position, attention_mask=attention_mask, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, **kwargs, @@ -376,7 +364,6 @@ def forward( hidden_states, 
past_key_values=past_key_values, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, @@ -617,7 +604,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -718,7 +704,7 @@ def forward( # If a 2D or 3D attention mask is provided for the cross-attention # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - _use_sdpa = self._attn_implementation == "sdpa" and output_attentions is False and head_mask is None + _use_sdpa = self._attn_implementation == "sdpa" and output_attentions is False if self.config.add_cross_attention and encoder_hidden_states is not None: encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) @@ -733,12 +719,6 @@ def forward( else: encoder_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # head_mask has shape n_layer x batch x n_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - if token_type_ids is not None: token_type_embeds = self.wte(token_type_ids) hidden_states = hidden_states + token_type_embeds @@ -759,7 +739,6 @@ def forward( past_key_values if not (self.gradient_checkpointing and self.training) else None, cache_position, causal_mask, - head_mask[i], encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, use_cache=use_cache, @@ -824,7 +803,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -863,7 +841,6 @@ def forward( cache_position=cache_position, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -931,7 +908,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, mc_token_ids: Optional[torch.LongTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1000,7 +976,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1074,7 +1049,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: 
Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -1108,7 +1082,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1203,7 +1176,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -1237,7 +1209,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1285,7 +1256,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1314,7 +1284,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py b/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py index 63b2ec4039f6..543a6dca195c 100644 --- a/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py +++ b/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py @@ -49,7 +49,7 @@ # Fused kernels # Use separate functions for each case because conditionals prevent kernel fusion. -# TODO: Could have better fused kernels depending on scaling, dropout and head mask. +# TODO: Could have better fused kernels depending on scaling and dropout. # Is it doable without writing 32 functions? 
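# (Editor's sketch, not part of the patch: the branchy form these fused kernels
#  avoid. TorchScript's fuser only fuses straight-line elementwise graphs, and a
#  data-dependent conditional such as
#      if scale is not None:   # becomes a prim::If node the fuser cannot cross
#          x = x * scale
#      x = torch.where(mask, x, mask_value)
#      return x.softmax(dim=-1)
#  would split the fusion group, which is why each scaling/upcast/masking
#  combination below gets its own @torch.jit.script specialization.)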
@torch.jit.script def upcast_masked_softmax( @@ -97,7 +97,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: float, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): key_states = repeat_kv(key, module.num_key_value_groups) @@ -111,9 +110,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -172,7 +168,6 @@ def forward( hidden_states: torch.Tensor, layer_past: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = False, @@ -243,7 +238,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.attn_dropout, scaling=self.scaling, - head_mask=head_mask, **kwargs, ) @@ -298,7 +292,6 @@ def forward( hidden_states: Optional[tuple[torch.Tensor]], layer_past: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = False, @@ -314,7 +307,6 @@ def forward( hidden_states, layer_past=layer_past, attention_mask=attention_mask, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -337,7 +329,6 @@ def forward( cross_attn_outputs = self.crossattention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, @@ -434,7 +425,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -533,12 +523,6 @@ def forward( else: encoder_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # head_mask has shape n_layer x batch x n_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - position_embeds = self.wpe(position_ids) hidden_states = inputs_embeds + position_embeds.to(inputs_embeds.device) @@ -561,7 +545,6 @@ def forward( hidden_states, past_key_values, causal_mask, - head_mask[i], encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, use_cache=use_cache, @@ -617,7 +600,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -655,7 +637,6 @@ def forward( attention_mask=attention_mask, 
token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -724,7 +705,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -759,7 +739,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -857,7 +836,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -891,7 +869,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, diff --git a/src/transformers/models/gpt_neo/modeling_gpt_neo.py b/src/transformers/models/gpt_neo/modeling_gpt_neo.py index 2c08544f9ee9..67234abbda36 100755 --- a/src/transformers/models/gpt_neo/modeling_gpt_neo.py +++ b/src/transformers/models/gpt_neo/modeling_gpt_neo.py @@ -116,7 +116,7 @@ def _merge_heads(self, tensor, num_heads, attn_head_size): new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,) return tensor.view(new_shape) - def _attn(self, query, key, value, attention_mask=None, head_mask=None): + def _attn(self, query, key, value, attention_mask=None): # Keep the attention weights computation in fp32 to avoid overflow issues query = query.to(torch.float32) key = key.to(torch.float32) @@ -140,10 +140,6 @@ def _attn(self, query, key, value, attention_mask=None, head_mask=None): attn_weights = attn_weights.to(value.dtype) attn_weights = self.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) return attn_output, attn_weights @@ -153,7 +149,6 @@ def forward( hidden_states, attention_mask=None, layer_past=None, - head_mask=None, use_cache=False, output_attentions=False, cache_position=None, @@ -170,7 +165,7 @@ def forward( cache_kwargs = {"cache_position": cache_position} key, value = layer_past.update(key, value, self.layer_id, cache_kwargs) - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) + attn_output, attn_weights = self._attn(query, key, value, attention_mask) attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim) attn_output = self.out_proj(attn_output) @@ -199,7 +194,6 @@ def forward( hidden_states, attention_mask=None, layer_past=None, - head_mask=None, use_cache=False, output_attentions=False, cache_position=None, @@ -309,7 +303,6 @@ def forward( hidden_states, layer_past=None, attention_mask=None, - head_mask=None, use_cache=False, output_attentions=False, cache_position=None, @@ -318,7 +311,6 @@ def forward( hidden_states, attention_mask=attention_mask, layer_past=layer_past, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, 
cache_position=cache_position, @@ -357,7 +349,6 @@ def forward( hidden_states, layer_past=None, attention_mask=None, - head_mask=None, use_cache=False, output_attentions=False, cache_position=None, @@ -368,7 +359,6 @@ def forward( hidden_states, layer_past=layer_past, attention_mask=attention_mask, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -444,7 +434,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -501,11 +490,6 @@ def forward( attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x num_heads x N x N - # head_mask has shape n_layer x batch x num_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.num_layers) position_embeds = self.wpe(position_ids) hidden_states = inputs_embeds + position_embeds @@ -527,7 +511,6 @@ def forward( hidden_states, layer_past=past_key_values, attention_mask=causal_mask, - head_mask=head_mask[i], use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -707,7 +690,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -743,7 +725,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -817,7 +798,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -851,7 +831,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -940,7 +919,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -974,7 +952,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -1022,7 +999,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, 
@@ -1051,7 +1027,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/gpt_neox/modeling_gpt_neox.py b/src/transformers/models/gpt_neox/modeling_gpt_neox.py index 4a8dd649c99a..07072c077089 100755 --- a/src/transformers/models/gpt_neox/modeling_gpt_neox.py +++ b/src/transformers/models/gpt_neox/modeling_gpt_neox.py @@ -102,7 +102,6 @@ def eager_attention_forward( attention_mask: torch.Tensor, scaling: float, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling @@ -113,10 +112,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) @@ -144,7 +139,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, layer_past: Optional[Cache] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.LongTensor] = None, @@ -183,7 +177,6 @@ def forward( attention_mask, scaling=self.scaling, dropout=0.0 if not self.training else self.attention_dropout, - head_mask=head_mask, **kwargs, ) @@ -210,7 +203,6 @@ def forward( hidden_states: Optional[torch.FloatTensor], attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, layer_past: Optional[Cache] = None, output_attentions: Optional[bool] = False, @@ -223,7 +215,6 @@ def forward( attention_mask=attention_mask, position_ids=position_ids, layer_past=layer_past, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -400,7 +391,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -449,18 +439,6 @@ def forward( position_ids=position_ids, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - converted_head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - # Flex Attention converts it to a separate mask - if head_mask is not None: - converted_head_mask = ~converted_head_mask.bool() * torch.finfo(inputs_embeds.dtype).min - converted_head_mask = converted_head_mask.to(dtype=self.dtype, device=self.device) - head_mask = converted_head_mask - hidden_states = self.emb_dropout(inputs_embeds) # create position embeddings to be shared across the decoder layers @@ -476,7 +454,6 @@ def forward( hidden_states, attention_mask=causal_mask, position_ids=position_ids, - head_mask=head_mask[i], 
layer_past=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -541,7 +518,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.FloatTensor]]]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -578,7 +554,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, @@ -638,7 +613,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.FloatTensor]]]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -656,7 +630,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, @@ -719,7 +692,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -738,7 +710,6 @@ def forward( past_key_values=past_key_values, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -780,7 +751,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -791,7 +761,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/gpt_neox/modular_gpt_neox.py b/src/transformers/models/gpt_neox/modular_gpt_neox.py index 532b7a607ae8..82ab8170d2b4 100644 --- a/src/transformers/models/gpt_neox/modular_gpt_neox.py +++ b/src/transformers/models/gpt_neox/modular_gpt_neox.py @@ -85,7 +85,6 @@ def eager_attention_forward( attention_mask: torch.Tensor, scaling: float, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling @@ -96,10 +95,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) @@ -127,7 +122,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = 
None, layer_past: Optional[Cache] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.LongTensor] = None, @@ -166,7 +160,6 @@ def forward( attention_mask, scaling=self.scaling, dropout=0.0 if not self.training else self.attention_dropout, - head_mask=head_mask, **kwargs, ) @@ -193,7 +186,6 @@ def forward( hidden_states: Optional[torch.FloatTensor], attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, layer_past: Optional[Cache] = None, output_attentions: Optional[bool] = False, @@ -206,7 +198,6 @@ def forward( attention_mask=attention_mask, position_ids=position_ids, layer_past=layer_past, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -277,7 +268,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -326,18 +316,6 @@ def forward( position_ids=position_ids, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - converted_head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - # Flex Attention converts it to a separate mask - if head_mask is not None: - converted_head_mask = ~converted_head_mask.bool() * torch.finfo(inputs_embeds.dtype).min - converted_head_mask = converted_head_mask.to(dtype=self.dtype, device=self.device) - head_mask = converted_head_mask - hidden_states = self.emb_dropout(inputs_embeds) # create position embeddings to be shared across the decoder layers @@ -353,7 +331,6 @@ def forward( hidden_states, attention_mask=causal_mask, position_ids=position_ids, - head_mask=head_mask[i], layer_past=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -412,7 +389,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.FloatTensor]]]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -449,7 +425,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, @@ -509,7 +484,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.FloatTensor]]]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -527,7 +501,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, @@ -590,7 +563,6 @@ def forward( 
attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -609,7 +581,6 @@ def forward( past_key_values=past_key_values, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -651,7 +622,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -662,7 +632,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py b/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py index 50f54a4cb64e..a930070bfba7 100755 --- a/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py +++ b/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py @@ -98,7 +98,6 @@ def forward( hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, position_ids: torch.LongTensor, - head_mask: Optional[torch.FloatTensor] = None, layer_past: Optional[Cache] = None, use_cache: Optional[bool] = False, output_attentions: Optional[bool] = False, @@ -142,7 +141,7 @@ def forward( key, value = layer_past.update(key, value, self.layer_idx, cache_kwargs) # Compute attention - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) + attn_output, attn_weights = self._attn(query, key, value, attention_mask) # Reshape outputs attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_size) @@ -175,7 +174,7 @@ def _merge_heads(cls, tensor, num_attention_heads, attn_head_size): # -> [bs, seq_len, hidden_size] return tensor - def _attn(self, query, key, value, attention_mask=None, head_mask=None): + def _attn(self, query, key, value, attention_mask=None): # q, k, v: [bs, num_attention_heads, seq_len, attn_head_size] # compute causal mask from causal mask buffer batch_size, num_attention_heads, query_length, attn_head_size = query.size() @@ -209,10 +208,6 @@ def _attn(self, query, key, value, attention_mask=None, head_mask=None): attn_weights = self.attention_dropout(attn_weights) attn_weights = attn_weights.to(value.dtype) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) return attn_output, attn_weights @@ -344,7 +339,6 @@ def forward( hidden_states: Optional[torch.FloatTensor], attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, layer_past: Optional[Cache] = None, output_attentions: Optional[bool] = False, @@ -357,7 +351,6 @@ def forward( ln_out, attention_mask=attention_mask, layer_past=layer_past, - head_mask=head_mask, use_cache=use_cache, 
output_attentions=output_attentions, position_ids=position_ids, @@ -411,7 +404,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.FloatTensor]]]] = None, use_cache: Optional[bool] = None, @@ -464,12 +456,6 @@ def forward( attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) hidden_states = inputs_embeds # create position embeddings to be shared across the decoder layers @@ -485,7 +471,6 @@ def forward( hidden_states, attention_mask=causal_mask, position_ids=position_ids, - head_mask=head_mask[i], layer_past=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -670,7 +655,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Union[Cache, tuple[tuple[torch.FloatTensor]]]] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -709,7 +693,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, diff --git a/src/transformers/models/gptj/modeling_gptj.py b/src/transformers/models/gptj/modeling_gptj.py index b2f0d3793d4f..8e09ce9ead44 100644 --- a/src/transformers/models/gptj/modeling_gptj.py +++ b/src/transformers/models/gptj/modeling_gptj.py @@ -150,7 +150,6 @@ def _attn( key, value, attention_mask=None, - head_mask=None, ): # Keep the attention weights computation in fp32 to avoid overflow issues query = query.to(torch.float32) @@ -167,10 +166,6 @@ def _attn( attn_weights = attn_weights.to(value.dtype) attn_weights = self.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) return attn_output, attn_weights @@ -188,7 +183,6 @@ def forward( layer_past: Optional[Cache] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, output_attentions: Optional[bool] = False, cache_position: Optional[torch.LongTensor] = None, @@ -244,7 +238,7 @@ def forward( key, value = layer_past.update(key, value, self.layer_idx, cache_kwargs) # compute self-attention: V x Softmax(QK^T) - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) + attn_output, attn_weights = self._attn(query, key, value, attention_mask) attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_dim) attn_output = self.out_proj(attn_output) @@ -274,7 +268,6 @@ def forward( layer_past: Optional[Cache] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: 
Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, output_attentions: Optional[bool] = False, cache_position: Optional[torch.LongTensor] = None, @@ -436,7 +429,6 @@ def forward( layer_past: Optional[Cache] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = False, output_attentions: Optional[bool] = False, cache_position: Optional[torch.LongTensor] = None, @@ -448,7 +440,6 @@ def forward( layer_past=layer_past, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -519,7 +510,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -570,11 +560,6 @@ def forward( attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x num_attention_heads x N x N - # head_mask has shape n_layer x batch x num_attention_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) hidden_states = inputs_embeds if token_type_ids is not None: @@ -596,7 +581,6 @@ def forward( layer_past=past_key_values, attention_mask=causal_mask, position_ids=position_ids, - head_mask=head_mask[i], use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -773,7 +757,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -801,7 +784,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -875,7 +857,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -901,7 +882,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -988,7 +968,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1009,7 +988,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - 
head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/hiera/modeling_hiera.py b/src/transformers/models/hiera/modeling_hiera.py index 7ae70f6cbe8b..499c0b454600 100644 --- a/src/transformers/models/hiera/modeling_hiera.py +++ b/src/transformers/models/hiera/modeling_hiera.py @@ -364,7 +364,6 @@ def __init__( def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor]]: """Input should be of shape [batch, tokens, channels].""" @@ -388,10 +387,6 @@ def forward( attn_weights = (query * self.scale) @ key.transpose(-1, -2) attn_weights = attn_weights.softmax(dim=-1) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = attn_weights @ value attn_output = attn_output.transpose(1, 3).reshape(batch_size, -1, self.hidden_size_output) attn_output = self.proj(attn_output) @@ -482,7 +477,6 @@ def __init__( def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor]]: batch_size, seq_len, _ = hidden_states.shape @@ -495,9 +489,7 @@ def forward( hidden_states.view(batch_size, self.query_stride, -1, self.hidden_size_output).max(dim=1).values ) - (hidden_states_norm, attn_weights) = self.attn( - hidden_states_norm, head_mask, output_attentions=output_attentions - ) + (hidden_states_norm, attn_weights) = self.attn(hidden_states_norm, output_attentions=output_attentions) hidden_states = hidden_states + self.drop_path(hidden_states_norm) residual = hidden_states @@ -547,13 +539,10 @@ def __init__( ) def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.FloatTensor], output_attentions: bool = False + self, hidden_states: torch.Tensor, output_attentions: bool = False ) -> tuple[torch.Tensor, Optional[torch.Tensor]]: for i, layer_module in enumerate(self.layers): - layer_head_mask = head_mask[i] if head_mask is not None else None - (hidden_states, attn_weights) = layer_module( - hidden_states, layer_head_mask, output_attentions=output_attentions - ) + (hidden_states, attn_weights) = layer_module(hidden_states, output_attentions=output_attentions) return hidden_states, attn_weights @@ -685,7 +674,6 @@ def forward( self, hidden_states: torch.Tensor, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -700,9 +688,7 @@ def forward( all_reshaped_hidden_states = all_reshaped_hidden_states + (reshaped_hidden_states,) for i, stage_module in enumerate(self.stages): - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = stage_module(hidden_states, layer_head_mask, output_attentions) + layer_outputs = stage_module(hidden_states, output_attentions) hidden_states = layer_outputs[0] @@ -863,7 +849,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, noise: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: Optional[bool] = None, @@ -882,13 +867,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed 
- # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, len(self.config.depths)) - embedding_output, bool_masked_pos, ids_restore = self.embeddings( pixel_values, interpolate_pos_encoding=interpolate_pos_encoding, noise=noise ) @@ -912,7 +890,6 @@ def forward( encoder_outputs = self.encoder( hidden_states, bool_masked_pos=bool_masked_pos, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -984,7 +961,6 @@ def forward( self, encoder_hidden_states: torch.Tensor, bool_masked_pos: torch.BoolTensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, torch.BoolTensor]: # Embed tokens @@ -1034,9 +1010,7 @@ def forward( hidden_states = hidden_states + self.decoder_position_embeddings # Apply decoder blocks - hidden_states, attn_weights = self.decoder_block( - hidden_states, head_mask=head_mask, output_attentions=output_attentions - ) + hidden_states, attn_weights = self.decoder_block(hidden_states, output_attentions=output_attentions) hidden_states = self.decoder_norm(hidden_states) # Predictor projection @@ -1160,7 +1134,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, noise: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: Optional[bool] = None, @@ -1200,7 +1173,6 @@ def forward( outputs = self.hiera( pixel_values, noise=noise, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=True, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1219,7 +1191,6 @@ def forward( logits, bool_masked_pos = self.decoder( fused_hidden_states, bool_masked_pos=bool_masked_pos, - head_mask=head_mask, output_attentions=output_attentions, ) @@ -1279,7 +1250,6 @@ def __init__(self, config: HieraConfig) -> None: def forward( self, pixel_values, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1300,7 +1270,6 @@ def forward( outputs = self.hiera( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1398,7 +1367,6 @@ def forward( outputs = self.encoder( embedding_output, - head_mask=None, output_attentions=output_attentions, output_hidden_states=True, return_dict=return_dict, diff --git a/src/transformers/models/hubert/modeling_hubert.py b/src/transformers/models/hubert/modeling_hubert.py index 3588ca78e0d0..c792d0431444 100755 --- a/src/transformers/models/hubert/modeling_hubert.py +++ b/src/transformers/models/hubert/modeling_hubert.py @@ -244,7 +244,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -256,9 +255,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = 
nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -305,7 +301,6 @@ def forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, # TODO: we need a refactor so that the different attention modules can get their specific kwargs # ATM, we have mixed things encoder, decoder, and encoder-decoder attn @@ -344,7 +339,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -493,8 +487,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -661,8 +653,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": diff --git a/src/transformers/models/ibert/modeling_ibert.py b/src/transformers/models/ibert/modeling_ibert.py index e1b3c7fb966c..761fd515acc6 100644 --- a/src/transformers/models/ibert/modeling_ibert.py +++ b/src/transformers/models/ibert/modeling_ibert.py @@ -228,7 +228,6 @@ def forward( hidden_states, hidden_states_scaling_factor, attention_mask=None, - head_mask=None, output_attentions=False, ): # Projection @@ -277,10 +276,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
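With the runtime mask gone, permanently disabling heads is still possible through structural pruning, which `PreTrainedModel` exposes as `prune_heads`. A hedged sketch (`bert-base-uncased` is only a convenient checkpoint for the demo):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
# {layer_index: [head_indices]} -- physically removes the heads' weights
model.prune_heads({0: [0, 2], 3: [5]})
print(model.encoder.layer[0].attention.self.num_attention_heads)  # 10, down from 12
```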
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) if attention_probs_scaling_factor is not None: context_layer_scaling_factor = attention_probs_scaling_factor * value_layer_scaling_factor @@ -384,14 +379,12 @@ def forward( hidden_states, hidden_states_scaling_factor, attention_mask=None, - head_mask=None, output_attentions=False, ): self_outputs, self_outputs_scaling_factor = self.self( hidden_states, hidden_states_scaling_factor, attention_mask, - head_mask, output_attentions, ) attention_output, attention_output_scaling_factor = self.output( @@ -502,14 +495,12 @@ def forward( hidden_states, hidden_states_scaling_factor, attention_mask=None, - head_mask=None, output_attentions=False, ): self_attention_outputs, self_attention_outputs_scaling_factor = self.attention( hidden_states, hidden_states_scaling_factor, attention_mask, - head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -553,7 +544,6 @@ def forward( hidden_states, hidden_states_scaling_factor, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, @@ -566,13 +556,10 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, hidden_states_scaling_factor, attention_mask, - layer_head_mask, output_attentions, ) @@ -692,7 +679,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -726,13 +712,6 @@ def forward( # ourselves in which case we just need to make it broadcastable to all heads. 
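The "broadcastable to all heads" comment above describes the one mask that survives this cleanup: the 2D padding mask is expanded into a 4D additive mask that applies identically to every head. Roughly what `get_extended_attention_mask` produces (a sketch, not the exact implementation):

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 0, 0]])  # [bsz, seq_len], 1 = attend
extended = attention_mask[:, None, None, :].to(torch.float32)
extended = (1.0 - extended) * torch.finfo(torch.float32).min
print(extended.shape)  # torch.Size([1, 1, 1, 5]); masked slots get a large negative bias
```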
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output, embedding_output_scaling_factor = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -743,7 +722,6 @@ def forward( embedding_output, embedding_output_scaling_factor, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -790,7 +768,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -810,7 +787,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -891,7 +867,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -911,7 +886,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -974,7 +948,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1028,7 +1001,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1077,7 +1049,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1095,7 +1066,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1162,7 +1132,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: 
Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1177,7 +1146,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/ijepa/modeling_ijepa.py b/src/transformers/models/ijepa/modeling_ijepa.py index cee6e29f8f9b..2a15c40da4d3 100644 --- a/src/transformers/models/ijepa/modeling_ijepa.py +++ b/src/transformers/models/ijepa/modeling_ijepa.py @@ -192,9 +192,7 @@ def __init__(self, config: IJepaConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -211,7 +209,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -265,8 +263,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -312,9 +310,9 @@ def __init__(self, config: IJepaConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -375,10 +373,9 @@ def __init__(self, config: IJepaConfig): self.layer = nn.ModuleList([IJepaLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -435,7 +432,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: 
Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: @@ -447,13 +443,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?) expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype if pixel_values.dtype != expected_dtype: @@ -463,7 +452,7 @@ def forward( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) @@ -504,7 +493,6 @@ def __init__(self, config: IJepaConfig): def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], @@ -518,7 +506,6 @@ def forward( outputs: BaseModelOutputWithPooling = self.ijepa( pixel_values, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs, ) diff --git a/src/transformers/models/ijepa/modular_ijepa.py b/src/transformers/models/ijepa/modular_ijepa.py index 7b8e6e152f3c..b37bc41d13bf 100644 --- a/src/transformers/models/ijepa/modular_ijepa.py +++ b/src/transformers/models/ijepa/modular_ijepa.py @@ -146,7 +146,6 @@ def __init__(self, config: IJepaConfig): def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], @@ -160,7 +159,6 @@ def forward( outputs: BaseModelOutputWithPooling = self.ijepa( pixel_values, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs, ) diff --git a/src/transformers/models/imagegpt/modeling_imagegpt.py b/src/transformers/models/imagegpt/modeling_imagegpt.py index 35b2c8860e06..7536b2812b28 100755 --- a/src/transformers/models/imagegpt/modeling_imagegpt.py +++ b/src/transformers/models/imagegpt/modeling_imagegpt.py @@ -115,7 +115,7 @@ def prune_heads(self, heads): self.num_heads = self.num_heads - len(heads) self.pruned_heads = self.pruned_heads.union(heads) - def _attn(self, query, key, value, attention_mask=None, head_mask=None): + def _attn(self, query, key, value, attention_mask=None): attn_weights = torch.matmul(query, key.transpose(-1, -2)) if self.scale_attn_weights: @@ -145,15 +145,11 @@ def _attn(self, query, key, value, attention_mask=None, head_mask=None): attn_weights = attn_weights.type(value.dtype) attn_weights = self.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) return attn_output, attn_weights - def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, 
head_mask=None): + def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None): # Use `torch.baddbmm` (a bit more efficient w/ alpha param for scaling -- from Megatron-LM) bsz, num_heads, q_seq_len, dk = query.size() _, _, k_seq_len, _ = key.size() @@ -197,10 +193,6 @@ def _upcast_and_reordered_attn(self, query, key, value, attention_mask=None, hea attn_weights = attn_weights.type(value.dtype) attn_weights = self.attn_dropout(attn_weights) - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) return attn_output, attn_weights @@ -226,7 +218,6 @@ def forward( hidden_states: torch.Tensor, layer_past: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = False, @@ -281,9 +272,9 @@ def forward( query = query.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2) if self.reorder_and_upcast_attn: - attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask) + attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask) else: - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) + attn_output, attn_weights = self._attn(query, key, value, attention_mask) attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim) attn_output = self.c_proj(attn_output) @@ -330,7 +321,6 @@ def forward( hidden_states: torch.Tensor, layer_past: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = False, @@ -343,7 +333,6 @@ def forward( hidden_states, layer_past=layer_past, attention_mask=attention_mask, - head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -366,7 +355,6 @@ def forward( hidden_states, layer_past=layer_past, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, @@ -461,7 +449,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -580,12 +567,6 @@ def forward( else: encoder_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # head_mask has shape n_layer x batch x n_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - if inputs_embeds is None: inputs_embeds = self.wte(input_ids) position_embeds = self.wpe(position_ids) @@ -609,7 +590,6 @@ def forward( hidden_states, past_key_values, attention_mask, - head_mask[i], encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, use_cache=use_cache, @@ -671,7 +651,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: 
Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -742,7 +721,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -803,7 +781,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -853,7 +830,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, diff --git a/src/transformers/models/informer/modeling_informer.py b/src/transformers/models/informer/modeling_informer.py index 544fed9f1e51..1906870e0c69 100644 --- a/src/transformers/models/informer/modeling_informer.py +++ b/src/transformers/models/informer/modeling_informer.py @@ -272,8 +272,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -297,8 +295,6 @@ def _update_causal_mask( # 2d mask is passed through the layers attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( attention_mask, input_shape, @@ -338,9 +334,6 @@ def _update_cross_attn_mask( if self.config._attn_implementation == "flash_attention_2": encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
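The comments being dropped in these `_update_*_mask` helpers only documented the old head-mask fallback; the backend dispatch itself is untouched and is still selected when the model is loaded. A hedged usage sketch (`gpt2` stands in for any SDPA-capable checkpoint):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2", attn_implementation="sdpa")
print(model.config._attn_implementation)  # "sdpa"
```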
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, inputs_embeds.dtype, @@ -370,7 +363,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -382,9 +374,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -441,7 +430,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -510,7 +498,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -566,7 +553,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: @@ -682,15 +668,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, u, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, u, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. @@ -796,7 +773,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, output_attentions: Optional[bool] = False, ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: """ @@ -804,8 +780,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
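After this simplification, the `eager_attention_forward` helpers in this diff all reduce to plain scaled dot-product attention with an additive mask. A self-contained approximation of what remains (a sketch under that reading, not the exact library code):

```python
import torch
from torch import nn

def eager_attention(query, key, value, attention_mask=None, scaling=None, dropout=0.0):
    # query/key/value: [bsz, num_heads, seq_len, head_dim]
    if scaling is None:
        scaling = query.size(-1) ** -0.5
    attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
    if attention_mask is not None:
        attn_weights = attn_weights + attention_mask  # additive; ~-inf masks a slot
    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
    attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=False)
    return torch.matmul(attn_weights, value)

q = k = v = torch.randn(2, 4, 5, 8)
print(eager_attention(q, k, v).shape)  # torch.Size([2, 4, 5, 8])
```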
@@ -814,7 +788,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -891,8 +864,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -907,10 +878,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -926,7 +893,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -943,7 +909,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -1007,7 +972,6 @@ def __init__(self, config: InformerConfig): def forward( self, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1022,12 +986,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -1061,14 +1019,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) - for idx, (encoder_layer, conv_layer) in enumerate(zip(self.layers, self.conv_layers)): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -1085,7 +1035,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) if conv_layer is not None: @@ -1139,8 +1088,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -1169,19 +1116,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1262,15 +1196,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1285,8 +1210,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1472,9 +1395,6 @@ def forward( future_values: Optional[torch.Tensor] = None, future_time_features: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, output_hidden_states: Optional[bool] = None, @@ -1567,11 +1487,6 @@ def forward( must but known at prediction time. The `num_features` here is equal to `config.`num_time_features` + `config.num_dynamic_real_features`. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of @@ -1626,7 +1541,6 @@ def forward( enc_input = transformer_inputs[:, : self.config.context_length, ...] encoder_outputs = self.encoder( inputs_embeds=enc_input, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1653,8 +1567,6 @@ def forward( inputs_embeds=dec_input, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1766,9 +1678,6 @@ def forward( future_time_features: Optional[torch.Tensor] = None, future_observed_mask: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, output_hidden_states: Optional[bool] = None, @@ -1869,11 +1778,6 @@ def forward( - 0 for values that are **missing** (i.e. NaNs that were replaced by zeros). This mask is used to filter out missing values for the final loss calculation. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
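For readers migrating away from the deleted argument: the per-layer masking these hunks remove amounted to an elementwise rescaling of the attention weights, and the deleted validation only checked that the mask's first dimension matched `len(self.layers)` before indexing it per layer. A minimal, standalone sketch of that semantics (illustrative shapes only, not the library API):

```py
import torch

# Hypothetical sizes for illustration: 2 layers, 4 heads, batch 1, sequence 5.
num_layers, num_heads, bsz, seq_len = 2, 4, 1, 5

# `head_mask` had shape (num_layers, num_heads): 1.0 keeps a head, 0.0 zeroes it.
head_mask = torch.ones(num_layers, num_heads)
head_mask[0, 2] = 0.0  # silence head 2 of layer 0

attn_weights = torch.softmax(torch.randn(bsz, num_heads, seq_len, seq_len), dim=-1)

# Inside layer 0, the removed code broadcast the (num_heads,) slice over the weights:
masked_weights = head_mask[0].view(1, -1, 1, 1) * attn_weights
assert masked_weights[:, 2].abs().sum() == 0  # head 2 no longer contributes
```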
encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of @@ -1938,9 +1842,6 @@ def forward( future_values=future_values, future_time_features=future_time_features, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/informer/modular_informer.py b/src/transformers/models/informer/modular_informer.py index 157176c1fd38..955b463cd15e 100644 --- a/src/transformers/models/informer/modular_informer.py +++ b/src/transformers/models/informer/modular_informer.py @@ -113,8 +113,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -138,8 +136,6 @@ def _update_causal_mask( # 2d mask is passed through the layers attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( attention_mask, input_shape, @@ -179,9 +175,6 @@ def _update_cross_attn_mask( if self.config._attn_implementation == "flash_attention_2": encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, inputs_embeds.dtype, @@ -253,7 +246,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: @@ -369,15 +361,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, u, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, u, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. @@ -531,7 +514,6 @@ def __init__(self, config: InformerConfig): def forward( self, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -546,12 +528,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -585,14 +561,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, (encoder_layer, conv_layer) in enumerate(zip(self.layers, self.conv_layers)): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -609,7 +577,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) if conv_layer is not None: @@ -760,11 +727,6 @@ def forward(self, **super_kwargs): must but known at prediction time. The `num_features` here is equal to `config.`num_time_features` + `config.num_dynamic_real_features`. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
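The `[bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]` comment retained here describes the usual expansion of a 2D padding mask into the 4D additive form. In the library this is done by `_prepare_4d_attention_mask_for_sdpa`; the following is only a rough sketch of its effect, using the `(1.0 - mask) * torch.finfo(dtype).min` convention that appears elsewhere in this diff:

```py
from typing import Optional

import torch


def expand_padding_mask(mask_2d: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None) -> torch.Tensor:
    """Expand a [bsz, src_len] padding mask (1 = keep, 0 = pad) into an
    additive [bsz, 1, tgt_len, src_len] mask. Illustrative only."""
    bsz, src_len = mask_2d.shape
    tgt_len = tgt_len if tgt_len is not None else src_len
    mask_4d = mask_2d[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
    # Padded positions become a very large negative number so softmax sends them to ~0.
    return (1.0 - mask_4d) * torch.finfo(dtype).min


mask = torch.tensor([[1, 1, 1, 0]])
print(expand_padding_mask(mask, torch.float32).shape)  # torch.Size([1, 1, 4, 4])
```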
encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of @@ -920,11 +882,6 @@ def forward(self, **super_kwargs): - 0 for values that are **missing** (i.e. NaNs that were replaced by zeros). This mask is used to filter out missing values for the final loss calculation. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of diff --git a/src/transformers/models/instructblip/modeling_instructblip.py b/src/transformers/models/instructblip/modeling_instructblip.py index 20c0def10fd1..5cddbcdfd3bf 100644 --- a/src/transformers/models/instructblip/modeling_instructblip.py +++ b/src/transformers/models/instructblip/modeling_instructblip.py @@ -219,7 +219,6 @@ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, **kwargs, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" @@ -285,7 +284,6 @@ def __init__(self, config: InstructBlipConfig): def forward( self, hidden_states: torch.Tensor, - attention_mask: torch.Tensor, **kwargs: Unpack[TransformersKwargs], ) -> torch.FloatTensor: residual = hidden_states @@ -293,7 +291,6 @@ def forward( hidden_states = self.layer_norm1(hidden_states) hidden_states, _ = self.self_attn( hidden_states=hidden_states, - head_mask=attention_mask, **kwargs, ) hidden_states = hidden_states + residual @@ -366,14 +363,12 @@ def __init__(self, config: InstructBlipConfig): def forward( self, inputs_embeds, - attention_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple, BaseModelOutput]: hidden_states = inputs_embeds for encoder_layer in self.layers: hidden_states = encoder_layer( hidden_states, - attention_mask=attention_mask, **kwargs, ) @@ -483,7 +478,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], @@ -542,10 +536,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs_dropped = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - context_layer = torch.matmul(attention_probs_dropped, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -600,7 +590,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -608,7 +597,6 @@ def forward( attn_output, _ = self.attention( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -673,7 +661,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, query_length=0, @@ -682,7 +669,6 @@ def forward( attention_output = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, **kwargs, ) @@ -695,7 +681,6 @@ def forward( query_attention_output = self.crossattention( query_attention_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -751,7 +736,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, query_length=0, @@ -759,12 +743,10 @@ def forward( ): for i in range(self.config.num_hidden_layers): layer_module = self.layer[i] - layer_head_mask = head_mask[i] if head_mask is not None else None hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, query_length=query_length, @@ -923,7 +905,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, query_embeds: Optional[torch.Tensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -974,17 +955,9 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs: BaseModelOutput = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, query_length=query_length, diff --git a/src/transformers/models/instructblipvideo/modeling_instructblipvideo.py b/src/transformers/models/instructblipvideo/modeling_instructblipvideo.py index 863e22e82b17..abcaa17f70f7 100644 --- a/src/transformers/models/instructblipvideo/modeling_instructblipvideo.py +++ b/src/transformers/models/instructblipvideo/modeling_instructblipvideo.py @@ -229,7 +229,6 @@ def _shape(self, tensor: 
torch.Tensor, seq_len: int, bsz: int): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, **kwargs, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" @@ -293,7 +292,6 @@ def __init__(self, config: InstructBlipVideoConfig): def forward( self, hidden_states: torch.Tensor, - attention_mask: torch.Tensor, **kwargs: Unpack[TransformersKwargs], ) -> torch.FloatTensor: residual = hidden_states @@ -301,7 +299,6 @@ def forward( hidden_states = self.layer_norm1(hidden_states) hidden_states, _ = self.self_attn( hidden_states=hidden_states, - head_mask=attention_mask, **kwargs, ) hidden_states = hidden_states + residual @@ -334,14 +331,12 @@ def __init__(self, config: InstructBlipVideoConfig): def forward( self, inputs_embeds, - attention_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple, BaseModelOutput]: hidden_states = inputs_embeds for encoder_layer in self.layers: hidden_states = encoder_layer( hidden_states, - attention_mask=attention_mask, **kwargs, ) @@ -450,7 +445,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **kwargs: Unpack[TransformersKwargs], @@ -509,10 +503,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs_dropped = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - context_layer = torch.matmul(attention_probs_dropped, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -565,7 +555,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -573,7 +562,6 @@ def forward( attn_output, _ = self.attention( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -636,7 +624,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, query_length=0, @@ -645,7 +632,6 @@ def forward( attention_output = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, **kwargs, ) @@ -658,7 +644,6 @@ def forward( query_attention_output = self.crossattention( query_attention_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, **kwargs, @@ -713,7 +698,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, query_length=0, @@ -721,12 +705,10 @@ def forward( ): for i in range(self.config.num_hidden_layers): layer_module = self.layer[i] - layer_head_mask = head_mask[i] if head_mask is not None else None hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, query_length=query_length, @@ -885,7 +867,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, position_ids: 
Optional[torch.LongTensor] = None, query_embeds: Optional[torch.Tensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -936,17 +917,9 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs: BaseModelOutput = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, query_length=query_length, diff --git a/src/transformers/models/janus/modeling_janus.py b/src/transformers/models/janus/modeling_janus.py index 94e1c6288bd3..00e0dbbc6c81 100644 --- a/src/transformers/models/janus/modeling_janus.py +++ b/src/transformers/models/janus/modeling_janus.py @@ -459,7 +459,6 @@ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, **kwargs, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" @@ -523,7 +522,6 @@ def __init__(self, config: JanusConfig): def forward( self, hidden_states: torch.Tensor, - attention_mask: torch.Tensor, **kwargs: Unpack[TransformersKwargs], ) -> torch.FloatTensor: residual = hidden_states @@ -531,7 +529,6 @@ def forward( hidden_states = self.layer_norm1(hidden_states) hidden_states, _ = self.self_attn( hidden_states=hidden_states, - head_mask=attention_mask, **kwargs, ) hidden_states = hidden_states + residual diff --git a/src/transformers/models/kosmos2/modeling_kosmos2.py b/src/transformers/models/kosmos2/modeling_kosmos2.py index d76107fcfe38..de6bc098f58c 100644 --- a/src/transformers/models/kosmos2/modeling_kosmos2.py +++ b/src/transformers/models/kosmos2/modeling_kosmos2.py @@ -707,7 +707,6 @@ def forward( encoder_hidden_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, **kwargs, @@ -842,8 +841,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -857,7 +854,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, **kwargs, @@ -881,7 +877,6 @@ def forward( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, 
cache_position=cache_position, @@ -1000,8 +995,6 @@ def forward( image_embeds_position_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, @@ -1081,15 +1074,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1104,8 +1088,6 @@ def forward( attention_mask, encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1253,8 +1235,6 @@ def forward( image_embeds_position_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, @@ -1274,11 +1254,6 @@ def forward( - 1 for places where to put the image features, - 0 for places that are not for image features (i.e. for text tokens). - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ return self.model( input_ids=input_ids, @@ -1287,8 +1262,6 @@ def forward( image_embeds_position_mask=image_embeds_position_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, position_ids=position_ids, @@ -1336,8 +1309,6 @@ def forward( image_embeds_position_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, @@ -1358,11 +1329,6 @@ def forward( - 1 for places where to put the image features, - 0 for places that are not for image features (i.e. for text tokens). 
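The `image_embeds_position_mask` documented above marks which sequence positions receive image features. A hedged sketch of constructing one by hand (in practice the Kosmos-2 processor prepares this mask; the placement of the image block after the first token is an assumption for illustration):

```py
import torch

# Convention from the docstring: 1 marks positions that receive image
# features, 0 marks ordinary text tokens.
seq_len, num_image_tokens = 12, 4

image_embeds_position_mask = torch.zeros(1, seq_len, dtype=torch.long)
# Assumed layout: image features start right after the first token.
image_embeds_position_mask[:, 1 : 1 + num_image_tokens] = 1
print(image_embeds_position_mask)
```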
- cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are @@ -1382,8 +1348,6 @@ def forward( image_embeds_position_mask=image_embeds_position_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, position_ids=position_ids, @@ -1556,7 +1520,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, image_embeds_position_mask: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, image_embeds: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1628,7 +1591,6 @@ def forward( attention_mask=attention_mask, image_embeds=image_embeds, image_embeds_position_mask=image_embeds_position_mask, - head_mask=head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, position_ids=position_ids, @@ -1692,7 +1654,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, image_embeds_position_mask: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, image_embeds: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1782,7 +1743,6 @@ def forward( attention_mask=attention_mask, image_embeds=image_embeds, image_embeds_position_mask=image_embeds_position_mask, - head_mask=head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, position_ids=position_ids, diff --git a/src/transformers/models/layoutlm/modeling_layoutlm.py b/src/transformers/models/layoutlm/modeling_layoutlm.py index 11b7fac2b78c..61444e9b3c4c 100644 --- a/src/transformers/models/layoutlm/modeling_layoutlm.py +++ b/src/transformers/models/layoutlm/modeling_layoutlm.py @@ -127,7 +127,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: float, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling @@ -138,9 +137,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() return attn_output, attn_weights @@ -173,7 +169,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, **kwargs, ) -> tuple[torch.Tensor]: @@ -196,7 +191,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.attention_dropout, scaling=self.scaling, - head_mask=head_mask, 
**kwargs, ) @@ -250,14 +244,12 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, **kwargs, ) -> tuple[torch.Tensor]: self_outputs = self.self( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, **kwargs, ) @@ -311,14 +303,12 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, **kwargs, ) -> tuple[torch.Tensor]: self_attention_outputs = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, **kwargs, ) @@ -351,7 +341,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, @@ -364,12 +353,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=layer_head_mask, output_attentions=output_attentions, **kwargs, ) @@ -516,7 +502,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -591,16 +576,6 @@ def forward( extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(self.dtype).min - if head_mask is not None: - if head_mask.dim() == 1: - head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - head_mask = head_mask.expand(self.config.num_hidden_layers, -1, -1, -1, -1) - elif head_mask.dim() == 2: - head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1) - head_mask = head_mask.to(dtype=next(self.parameters()).dtype) - else: - head_mask = [None] * self.config.num_hidden_layers - embedding_output = self.embeddings( input_ids=input_ids, bbox=bbox, @@ -611,7 +586,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=True, @@ -659,7 +633,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -722,7 +695,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -777,7 +749,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: 
Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -840,7 +811,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -913,7 +883,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -974,7 +943,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1027,7 +995,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1092,7 +1059,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py b/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py index f3b856518133..4df6a46cf88c 100755 --- a/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py +++ b/src/transformers/models/layoutlmv2/modeling_layoutlmv2.py @@ -143,7 +143,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, rel_pos=None, rel_2d_pos=None, @@ -171,10 +170,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) @@ -194,7 +189,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, rel_pos=None, rel_2d_pos=None, @@ -202,7 +196,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask, - head_mask, output_attentions, rel_pos=rel_pos, rel_2d_pos=rel_2d_pos, @@ -270,7 +263,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, rel_pos=None, rel_2d_pos=None, @@ -278,7 +270,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, rel_pos=rel_pos, rel_2d_pos=rel_2d_pos, @@ -413,7 +404,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, @@ -430,12 +420,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, output_attentions, rel_pos=rel_pos, rel_2d_pos=rel_2d_pos, @@ -717,7 +704,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -820,22 +806,11 @@ def forward( extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(self.dtype).min - if head_mask is not None: - if head_mask.dim() == 1: - head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - head_mask = head_mask.expand(self.config.num_hidden_layers, -1, -1, -1, -1) - elif head_mask.dim() == 2: - head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1) - head_mask = head_mask.to(dtype=next(self.parameters()).dtype) - else: - head_mask = [None] * self.config.num_hidden_layers - encoder_outputs = self.encoder( final_emb, extended_attention_mask, bbox=final_bbox, position_ids=final_position_ids, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -885,7 +860,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -999,7 +973,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1090,7 +1063,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - 
head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1180,7 +1152,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1242,7 +1213,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1331,7 +1301,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py b/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py index 63631e12eab5..f4c58096735a 100644 --- a/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py +++ b/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py @@ -258,7 +258,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, rel_pos=None, rel_2d_pos=None, @@ -302,10 +301,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -343,7 +338,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, rel_pos=None, rel_2d_pos=None, @@ -351,7 +345,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask, - head_mask, output_attentions, rel_pos=rel_pos, rel_2d_pos=rel_2d_pos, @@ -375,7 +368,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, rel_pos=None, rel_2d_pos=None, @@ -383,7 +375,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, rel_pos=rel_pos, rel_2d_pos=rel_2d_pos, @@ -498,7 +489,6 @@ def forward( hidden_states, bbox=None, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, @@ -516,12 +506,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, output_attentions, rel_pos=rel_pos, rel_2d_pos=rel_2d_pos, @@ -680,7 +667,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -839,19 +825,11 @@ def forward( attention_mask, None, device, dtype=embedding_output.dtype ) - # Prepare head 
mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, bbox=final_bbox, position_ids=final_position_ids, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -928,7 +906,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -975,7 +952,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1028,7 +1004,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1079,7 +1054,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1149,7 +1123,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1195,7 +1168,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/led/modeling_led.py b/src/transformers/models/led/modeling_led.py index 26d1321842e6..898edad049c4 100755 --- a/src/transformers/models/led/modeling_led.py +++ b/src/transformers/models/led/modeling_led.py @@ -130,7 +130,6 @@ def forward( self, hidden_states, attention_mask=None, - layer_head_mask=None, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, @@ -223,12 +222,6 @@ def forward( attn_scores, dim=-1, dtype=torch.float32 ) # use fp32 for numerical stability - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_heads,), ( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - ) - attn_probs = layer_head_mask.view(1, 1, -1, 1) * attn_probs - # softmax sometimes inserts NaN if all positions are masked, replace them with 0 attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :, None, None], 0.0) attn_probs = attn_probs.type_as(attn_scores) @@ -266,7 +259,6 @@ 
def forward( global_attn_output, global_attn_probs = self._compute_global_attn_output_from_hidden( hidden_states=hidden_states, max_num_global_attn_indices=max_num_global_attn_indices, - layer_head_mask=layer_head_mask, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, @@ -620,7 +612,6 @@ def _compute_global_attn_output_from_hidden( self, hidden_states, max_num_global_attn_indices, - layer_head_mask, is_local_index_global_attn_nonzero, is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, @@ -689,18 +680,6 @@ def _compute_global_attn_output_from_hidden( global_attn_scores, dim=-1, dtype=torch.float32 ) # use fp32 for numerical stability - # apply layer head masking - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_heads,), ( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - ) - global_attn_probs_float = layer_head_mask.view(1, -1, 1, 1) * global_attn_probs_float.view( - batch_size, self.num_heads, max_num_global_attn_indices, seq_len - ) - global_attn_probs_float = global_attn_probs_float.view( - batch_size * self.num_heads, max_num_global_attn_indices, seq_len - ) - global_attn_probs = nn.functional.dropout( global_attn_probs_float.type_as(global_attn_scores), p=self.dropout, training=self.training ) @@ -735,7 +714,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, is_index_masked: Optional[torch.Tensor] = None, is_index_global_attn: Optional[torch.Tensor] = None, is_global_attn: Optional[bool] = None, @@ -746,7 +724,6 @@ def forward( self_outputs = self.longformer_self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, is_index_masked=is_index_masked, is_index_global_attn=is_index_global_attn, is_global_attn=is_global_attn, @@ -797,7 +774,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]: @@ -868,14 +844,6 @@ def forward( attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) if output_attentions: # this operation is a bit awkward, but it's required to @@ -925,7 +893,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - layer_head_mask: torch.Tensor, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, @@ -936,14 +903,11 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape *(batch, seq_len, embed_dim)* attention_mask (`torch.FloatTensor`): attention mask of size *(batch, 1, tgt_len, src_len)* where padding elements are indicated by very large 
negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - *(encoder_attention_heads,)*. """ residual = hidden_states attn_outputs = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, is_index_masked=is_index_masked, is_index_global_attn=is_index_global_attn, is_global_attn=is_global_attn, @@ -1006,8 +970,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -1022,10 +984,6 @@ def forward( cross attention input to the layer of shape *(batch, seq_len, embed_dim)* encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size *(batch, 1, tgt_len, src_len)* where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - *(decoder_attention_heads,)*. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for encoder attention heads in a given layer of - size *(decoder_attention_heads,)*. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`): Whether the base model outputs attentions. This requires the attentions tensor to be reshaped in this function. @@ -1037,7 +995,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -1055,7 +1012,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -1436,7 +1392,6 @@ def forward( input_ids=None, attention_mask=None, global_attention_mask=None, - head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, @@ -1469,11 +1424,6 @@ def forward( - 0 for local attention (a sliding window attention), - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them). - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -1545,13 +1495,6 @@ def forward( all_attentions = () if output_attentions else None all_global_attentions = () if (output_attentions and is_global_attn) else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -1564,7 +1507,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask=attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), is_index_masked=is_index_masked, is_index_global_attn=is_index_global_attn, is_global_attn=is_global_attn, @@ -1644,8 +1586,6 @@ def forward( global_attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, use_cache=None, @@ -1692,18 +1632,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1799,14 +1727,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if output_attentions else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1821,8 +1741,6 @@ def forward( combined_attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1887,9 +1805,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, global_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -1919,12 +1834,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_led._prepare_decoder_inputs`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. 
- cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. global_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to decide the attention given on each token, local attention or global attention for the encoder. Tokens with global attention attends to all other tokens, and all other tokens attend to them. This is @@ -1956,7 +1865,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1977,8 +1885,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -2052,9 +1958,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, global_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -2085,12 +1988,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_led._prepare_decoder_inputs`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. global_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to decide the attention given on each token, local attention or global attention for the encoder. Tokens with global attention attends to all other tokens, and all other tokens attend to them. 
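The `global_attention_mask` contract documented above (0 = local sliding-window attention, 1 = global attention) is untouched by this PR. A typical way a caller builds it, sketched with illustrative token ids:

```python
import torch

input_ids = torch.tensor([[0, 713, 16, 41, 1246, 2, 1, 1]])  # illustrative ids, 1 = pad
global_attention_mask = torch.zeros_like(input_ids)  # default: local attention everywhere
global_attention_mask[:, 0] = 1  # e.g. give the first token global attention
```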
This is @@ -2175,9 +2072,6 @@ def forward( decoder_attention_mask=decoder_attention_mask, encoder_outputs=encoder_outputs, global_attention_mask=global_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -2250,9 +2144,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, global_attention_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -2281,12 +2172,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_led._prepare_decoder_inputs`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. global_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to decide the attention given on each token, local attention or global attention for the encoder. Tokens with global attention attends to all other tokens, and all other tokens attend to them. This is @@ -2316,9 +2201,6 @@ def forward( decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, global_attention_mask=global_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -2401,9 +2283,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, global_attention_mask: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, @@ -2433,12 +2312,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_led._prepare_decoder_inputs`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. global_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to decide the attention given on each token, local attention or global attention for the encoder. 
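With `head_mask`, `decoder_head_mask`, and `cross_attn_head_mask` gone from these seq2seq signatures, a post-PR call reduces to the remaining arguments. A sketch, assuming the `allenai/led-base-16384` checkpoint and that `generate` keeps forwarding `global_attention_mask` to the encoder:

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("Summarize: a very long document ...", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the first token

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_new_tokens=32,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```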
Tokens with global attention attends to all other tokens, and all other tokens attend to them. This is @@ -2460,9 +2333,6 @@ def forward( decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, global_attention_mask=global_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, diff --git a/src/transformers/models/lilt/modeling_lilt.py b/src/transformers/models/lilt/modeling_lilt.py index 5c4734d0a9bd..191a58836b06 100644 --- a/src/transformers/models/lilt/modeling_lilt.py +++ b/src/transformers/models/lilt/modeling_lilt.py @@ -230,7 +230,6 @@ def forward( hidden_states, layout_inputs, attention_mask=None, - head_mask=None, output_attentions=False, ): layout_value_layer = self.transpose_for_scores(self.layout_value(layout_inputs), r=self.channel_shrink_ratio) @@ -280,10 +279,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. layout_attention_probs = self.dropout(layout_attention_probs) - # Mask heads if we want to - if head_mask is not None: - layout_attention_probs = layout_attention_probs * head_mask - layout_context_layer = torch.matmul(layout_attention_probs, layout_value_layer) layout_context_layer = layout_context_layer.permute(0, 2, 1, 3).contiguous() @@ -301,10 +296,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -371,14 +362,12 @@ def forward( hidden_states: torch.Tensor, layout_inputs: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: self_outputs = self.self( hidden_states, layout_inputs, attention_mask, - head_mask, output_attentions, ) attention_output = self.output(self_outputs[0][0], hidden_states) @@ -441,14 +430,12 @@ def forward( hidden_states: torch.Tensor, layout_inputs: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: self_attention_outputs = self.attention( hidden_states, layout_inputs, attention_mask, - head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0][0] @@ -489,7 +476,6 @@ def forward( hidden_states: torch.Tensor, layout_inputs: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, @@ -501,13 +487,10 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, layout_inputs, attention_mask, - layer_head_mask, output_attentions, ) @@ -616,7 +599,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: 
Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -685,13 +667,6 @@ def forward( # ourselves in which case we just need to make it broadcastable to all heads. extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output, position_ids = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -705,7 +680,6 @@ def forward( embedding_output, layout_embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -751,7 +725,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -797,7 +770,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -868,7 +840,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -911,7 +882,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -986,7 +956,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1033,7 +1002,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/longformer/modeling_longformer.py b/src/transformers/models/longformer/modeling_longformer.py index fc466a38ecc2..19e3019bbe64 100755 --- a/src/transformers/models/longformer/modeling_longformer.py +++ b/src/transformers/models/longformer/modeling_longformer.py @@ -484,7 +484,6 @@ def forward( self, hidden_states, attention_mask=None, - layer_head_mask=None, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, @@ -577,12 +576,6 @@ def forward( attn_scores, dim=-1, dtype=torch.float32 ) # use fp32 for numerical stability - if 
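The comment deleted above describes the normalization `get_head_mask` applied before the per-layer loop: a `[num_heads]` or `[num_hidden_layers, num_heads]` input became one slice per layer, broadcastable against `(batch, num_heads, seq_len, seq_len)` probabilities. A plain-`torch` sketch of that conversion (hypothetical helper name):

```python
import torch

def expand_head_mask(head_mask: torch.Tensor, num_layers: int) -> torch.Tensor:
    # [num_heads]             -> [num_layers, 1, num_heads, 1, 1]
    # [num_layers, num_heads] -> [num_layers, 1, num_heads, 1, 1]
    # The singleton dims broadcast against (batch, num_heads, seq_len, seq_len).
    if head_mask.dim() == 1:
        head_mask = head_mask[None, None, :, None, None].expand(num_layers, -1, -1, -1, -1)
    elif head_mask.dim() == 2:
        head_mask = head_mask[:, None, :, None, None]
    return head_mask

per_layer = expand_head_mask(torch.ones(12), num_layers=24)
print(per_layer.shape)  # torch.Size([24, 1, 12, 1, 1])
```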
layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_heads,), ( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - ) - attn_probs = layer_head_mask.view(1, 1, -1, 1) * attn_probs - # softmax sometimes inserts NaN if all positions are masked, replace them with 0 attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :, None, None], 0.0) attn_probs = attn_probs.type_as(attn_scores) @@ -620,7 +613,6 @@ def forward( global_attn_output, global_attn_probs = self._compute_global_attn_output_from_hidden( hidden_states=hidden_states, max_num_global_attn_indices=max_num_global_attn_indices, - layer_head_mask=layer_head_mask, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, @@ -974,7 +966,6 @@ def _compute_global_attn_output_from_hidden( self, hidden_states, max_num_global_attn_indices, - layer_head_mask, is_local_index_global_attn_nonzero, is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, @@ -1043,18 +1034,6 @@ def _compute_global_attn_output_from_hidden( global_attn_scores, dim=-1, dtype=torch.float32 ) # use fp32 for numerical stability - # apply layer head masking - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_heads,), ( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - ) - global_attn_probs_float = layer_head_mask.view(1, -1, 1, 1) * global_attn_probs_float.view( - batch_size, self.num_heads, max_num_global_attn_indices, seq_len - ) - global_attn_probs_float = global_attn_probs_float.view( - batch_size * self.num_heads, max_num_global_attn_indices, seq_len - ) - global_attn_probs = nn.functional.dropout( global_attn_probs_float.type_as(global_attn_scores), p=self.dropout, training=self.training ) @@ -1123,7 +1102,6 @@ def forward( self, hidden_states, attention_mask=None, - layer_head_mask=None, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, @@ -1132,7 +1110,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, is_index_masked=is_index_masked, is_index_global_attn=is_index_global_attn, is_global_attn=is_global_attn, @@ -1187,7 +1164,6 @@ def forward( self, hidden_states, attention_mask=None, - layer_head_mask=None, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, @@ -1196,7 +1172,6 @@ def forward( self_attn_outputs = self.attention( hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, is_index_masked=is_index_masked, is_index_global_attn=is_index_global_attn, is_global_attn=is_global_attn, @@ -1228,7 +1203,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, padding_len=0, output_attentions=False, output_hidden_states=False, @@ -1244,11 +1218,6 @@ def forward( all_attentions = () if output_attentions else None # All local attentions. all_global_attentions = () if (output_attentions and is_global_attn) else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - assert head_mask.size()[0] == (len(self.layer)), ( - f"The head_mask should be specified for {len(self.layer)} layers, but it is for {head_mask.size()[0]}." 
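The global-attention branch deleted here exposed the head axis, scaled per head, then flattened batch and heads back together. The reshape pattern, reconstructed in isolation with illustrative sizes:

```python
import torch

batch, num_heads, max_global, seq_len = 2, 4, 3, 16
probs = torch.softmax(torch.randn(batch * num_heads, max_global, seq_len), dim=-1)
layer_head_mask = torch.tensor([1.0, 1.0, 0.0, 1.0])  # one weight per head

# Deleted pattern: unflatten (batch * heads) to scale per head, then flatten back.
probs = layer_head_mask.view(1, -1, 1, 1) * probs.view(batch, num_heads, max_global, seq_len)
probs = probs.view(batch * num_heads, max_global, seq_len)
```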
- ) for idx, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -1256,7 +1225,6 @@ def forward( layer_outputs = layer_module( hidden_states, attention_mask=attention_mask, - layer_head_mask=head_mask[idx] if head_mask is not None else None, is_index_masked=is_index_masked, is_index_global_attn=is_index_global_attn, is_global_attn=is_global_attn, @@ -1490,7 +1458,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, global_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1595,7 +1562,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, padding_len=padding_len, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1641,7 +1607,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, global_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1699,7 +1664,6 @@ def forward( input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, - head_mask=head_mask, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds, @@ -1754,7 +1718,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, global_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1791,7 +1754,6 @@ def forward( input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, - head_mask=head_mask, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds, @@ -1877,7 +1839,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, global_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1942,7 +1903,6 @@ def forward( input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, - head_mask=head_mask, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds, @@ -2008,7 +1968,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, global_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -2037,7 +1996,6 @@ def forward( input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, - head_mask=head_mask, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds, @@ -2090,7 +2048,6 @@ def forward( token_type_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, global_attention_mask: 
Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -2174,7 +2131,6 @@ def forward( token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, global_attention_mask=flat_global_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/longt5/modeling_longt5.py b/src/transformers/models/longt5/modeling_longt5.py index 0a25c70d1f93..3361e7aafe80 100644 --- a/src/transformers/models/longt5/modeling_longt5.py +++ b/src/transformers/models/longt5/modeling_longt5.py @@ -16,7 +16,6 @@ import copy import math -import warnings from typing import Any, Optional, Union import torch @@ -449,7 +448,6 @@ def forward( key_value_states=None, position_bias=None, past_key_values=None, - layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, @@ -537,10 +535,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -674,7 +668,6 @@ def forward( hidden_states, mask=None, position_bias=None, - layer_head_mask=None, output_attentions=False, ): batch_size, seq_length = hidden_states.shape[:2] @@ -729,9 +722,6 @@ def unshape(states): # (batch_size, num_blocks, n_heads, block_len, 3 * block_len) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask attn_weights = attn_weights.type(value_states.dtype) attn_output = unshape(torch.einsum("...hqk,...khd->...qhd", attn_weights, value_states)) attn_output = attn_output[:, :seq_length, :] @@ -893,7 +883,6 @@ def forward( hidden_states, mask=None, position_bias=None, - layer_head_mask=None, output_attentions=False, ): batch_size, seq_length = hidden_states.shape[:2] @@ -993,9 +982,6 @@ def unshape(states): attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask attn_weights = attn_weights.type(value_states.dtype) attn_output = unshape(torch.einsum("...hqk,...khd->...qhd", attn_weights, value_states)) attn_output = attn_output[:, :seq_length, :] @@ -1024,7 +1010,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -1035,7 +1020,6 @@ def forward( normed_hidden_states, mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1060,7 +1044,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, output_attentions=False, **kwargs: Any, # to accept past_key_values and use_cache kwargs ): @@ -1069,7 +1052,6 @@ def forward( normed_hidden_states, mask=attention_mask, 
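LongT5's attention variants delete the same pattern: the per-head multiply sat after dropout, just before the value matmul. A sketch in the standard `(batch, heads, q_len, k_len)` layout, with the per-layer mask slice already broadcastable:

```python
import torch

attn_weights = torch.softmax(torch.randn(2, 8, 16, 16), dim=-1)  # (batch, heads, q, k)
value_states = torch.randn(2, 8, 16, 64)

layer_head_mask = torch.ones(1, 8, 1, 1)  # per-layer slice, broadcastable
layer_head_mask[:, 3] = 0.0               # drop head 3

attn_weights = attn_weights * layer_head_mask           # the removed multiply
attn_output = torch.matmul(attn_weights, value_states)  # (batch, heads, q, head_dim)
```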
position_bias=position_bias, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = hidden_states + self.dropout(attention_output[0]) @@ -1093,7 +1075,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, output_attentions=False, **kwargs: Any, # to accept past_key_values and use_cache kwargs ): @@ -1102,7 +1083,6 @@ def forward( normed_hidden_states, mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = hidden_states + self.dropout(attention_output[0]) @@ -1125,7 +1105,6 @@ def forward( key_value_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, query_length=None, @@ -1138,7 +1117,6 @@ def forward( mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, query_length=query_length, @@ -1183,8 +1161,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -1195,7 +1171,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1216,7 +1191,6 @@ def forward( key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, query_length=cache_position[-1] + 1, use_cache=use_cache, @@ -1404,8 +1378,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, @@ -1498,9 +1470,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions and self.is_decoder) else None @@ -1510,9 +1479,6 @@ def forward( hidden_states = self.dropout(inputs_embeds) for i, layer_module in enumerate(self.block): - layer_head_mask = head_mask[i] - cross_attn_layer_head_mask = cross_attn_head_mask[i] - if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -1523,8 +1489,6 @@ def forward( encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, # as a positional argument for gradient checkpointing - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1702,15 +1666,6 @@ def _prepare_4d_causal_attention_mask_with_cache_position( return causal_mask -# Warning message for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask -__HEAD_MASK_WARNING_MSG = """ -The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. 
Currently, -`decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions. -If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers, -num_heads)`. -""" - - @auto_docstring class LongT5Model(LongT5PreTrainedModel): _keys_to_ignore_on_load_unexpected = [ @@ -1768,9 +1723,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1810,18 +1762,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Example: @@ -1845,19 +1785,12 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1879,8 +1812,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1960,9 +1891,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -2003,18 +1931,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, 
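The warning text above belongs to a back-compat shim, also removed in this file, that copied `head_mask` into `decoder_head_mask` when only the former was given. Its logic, reconstructed here for reference as a free function:

```python
import warnings

def resolve_decoder_head_mask(head_mask, decoder_head_mask, num_layers, num_decoder_layers):
    # Reconstruction of the removed shim: only when encoder and decoder had
    # the same depth did head_mask silently double as decoder_head_mask.
    if head_mask is not None and decoder_head_mask is None:
        if num_layers == num_decoder_layers:
            warnings.warn(
                "head_mask was split into head_mask and decoder_head_mask", FutureWarning
            )
            decoder_head_mask = head_mask
    return decoder_head_mask
```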
target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for @@ -2041,12 +1957,6 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: # Convert encoder inputs in embeddings if needed @@ -2054,7 +1964,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -2080,8 +1989,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -2167,7 +2074,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -2203,7 +2109,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git a/src/transformers/models/luke/modeling_luke.py b/src/transformers/models/luke/modeling_luke.py index 9c50efd04759..3d6b484a7dfc 100644 --- a/src/transformers/models/luke/modeling_luke.py +++ b/src/transformers/models/luke/modeling_luke.py @@ -436,7 +436,6 @@ def forward( word_hidden_states, entity_hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): word_size = word_hidden_states.size(1) @@ -490,10 +489,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
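The `labels` contract quoted above, where entries set to `-100` are ignored, is standard cross-entropy masking. A self-contained illustration of how only non-ignored positions contribute to the loss:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 4, 10)               # (batch, seq_len, vocab_size)
labels = torch.tensor([[3, 7, -100, -100]])  # padded positions set to -100

loss = F.cross_entropy(logits.view(-1, 10), labels.view(-1), ignore_index=-100)
# Only the first two positions enter the loss; the -100 entries are masked out.
```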
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -544,7 +539,6 @@ def forward( word_hidden_states, entity_hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): word_size = word_hidden_states.size(1) @@ -552,7 +546,6 @@ def forward( word_hidden_states, entity_hidden_states, attention_mask, - head_mask, output_attentions, ) if entity_hidden_states is None: @@ -621,7 +614,6 @@ def forward( word_hidden_states, entity_hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): word_size = word_hidden_states.size(1) @@ -630,7 +622,6 @@ def forward( word_hidden_states, entity_hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, ) if entity_hidden_states is None: @@ -671,7 +662,6 @@ def forward( word_hidden_states, entity_hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, @@ -685,12 +675,10 @@ def forward( all_word_hidden_states = all_word_hidden_states + (word_hidden_states,) all_entity_hidden_states = all_entity_hidden_states + (entity_hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None layer_outputs = layer_module( word_hidden_states, entity_hidden_states, attention_mask, - layer_head_mask, output_attentions, ) @@ -849,7 +837,6 @@ def forward( entity_attention_mask: Optional[torch.FloatTensor] = None, entity_token_type_ids: Optional[torch.LongTensor] = None, entity_position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -941,13 +928,6 @@ def forward( if entity_token_type_ids is None: entity_token_type_ids = torch.zeros((batch_size, entity_seq_length), dtype=torch.long, device=device) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - # First, compute word embeddings word_embedding_output = self.embeddings( input_ids=input_ids, @@ -970,7 +950,6 @@ def forward( word_embedding_output, entity_embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1118,7 +1097,6 @@ def forward( entity_position_ids: Optional[torch.LongTensor] = None, labels: Optional[torch.LongTensor] = None, entity_labels: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1165,7 +1143,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1252,7 +1229,6 @@ def forward( 
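Since `head_mask` leaves the LUKE signatures too, a post-PR call is built from the entity inputs alone. A sketch, assuming the `studio-ousia/luke-base` checkpoint:

```python
from transformers import LukeModel, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeModel.from_pretrained("studio-ousia/luke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # character span of "Beyoncé"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)  # head_mask is no longer an accepted argument
word_states = outputs.last_hidden_state
entity_states = outputs.entity_last_hidden_state
```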
entity_attention_mask: Optional[torch.FloatTensor] = None, entity_token_type_ids: Optional[torch.LongTensor] = None, entity_position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -1314,7 +1290,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1382,7 +1357,6 @@ def forward( entity_attention_mask: Optional[torch.FloatTensor] = None, entity_token_type_ids: Optional[torch.LongTensor] = None, entity_position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1447,7 +1421,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1519,7 +1492,6 @@ def forward( entity_position_ids: Optional[torch.LongTensor] = None, entity_start_positions: Optional[torch.LongTensor] = None, entity_end_positions: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1596,7 +1568,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1676,7 +1647,6 @@ def forward( entity_attention_mask: Optional[torch.FloatTensor] = None, entity_token_type_ids: Optional[torch.LongTensor] = None, entity_position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -1719,7 +1689,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1804,7 +1773,6 @@ def forward( entity_attention_mask: Optional[torch.FloatTensor] = None, entity_token_type_ids: Optional[torch.LongTensor] = None, entity_position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -1847,7 +1815,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1906,7 +1873,6 @@ def forward( entity_attention_mask: Optional[torch.FloatTensor] = None, entity_token_type_ids: 
Optional[torch.LongTensor] = None, entity_position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1946,7 +1912,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -2026,7 +1991,6 @@ def forward( entity_attention_mask: Optional[torch.FloatTensor] = None, entity_token_type_ids: Optional[torch.LongTensor] = None, entity_position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, @@ -2121,7 +2085,6 @@ def forward( entity_attention_mask=entity_attention_mask, entity_token_type_ids=entity_token_type_ids, entity_position_ids=entity_position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/m2m_100/modeling_m2m_100.py b/src/transformers/models/m2m_100/modeling_m2m_100.py index e7b64e50a79a..1b47bdff99a3 100755 --- a/src/transformers/models/m2m_100/modeling_m2m_100.py +++ b/src/transformers/models/m2m_100/modeling_m2m_100.py @@ -196,7 +196,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -208,9 +207,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -268,7 +264,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -337,7 +332,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -371,7 +365,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ @@ -379,8 +372,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
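After this hunk, the eager attention helper is a plain scaled dot-product with an additive mask. A pure-`torch` restatement of the surviving logic (a sketch of the kept lines, not the exact function):

```python
import torch
from torch import nn

def eager_attention(query, key, value, attention_mask=None, scaling=None,
                    dropout=0.0, training=False):
    # Scale, add mask, softmax, dropout, weighted sum. Per-head masking is
    # gone from this path. Tensors are (batch, heads, seq, head_dim).
    if scaling is None:
        scaling = query.size(-1) ** -0.5
    attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
    if attention_mask is not None:
        attn_weights = attn_weights + attention_mask  # large negatives mask padding
    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
    attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=training)
    attn_output = torch.matmul(attn_weights, value).transpose(1, 2).contiguous()
    return attn_output, attn_weights
```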
@@ -390,7 +381,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -451,8 +441,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -467,10 +455,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -487,7 +471,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -504,7 +487,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, ) @@ -565,8 +547,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -724,8 +704,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -792,7 +770,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -815,12 +792,6 @@ def forward( - 0 for tokens that are **masked**. 
[What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -869,13 +840,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) for idx, encoder_layer in enumerate(self.layers): @@ -892,7 +856,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -958,8 +921,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -996,19 +957,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1115,14 +1063,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if output_attentions else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
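With `head_mask` dropped from the M2M100 encoder signature above, a translation call after this PR is just the standard recipe. A sketch, assuming the `facebook/m2m100_418M` checkpoint:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="fr")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

encoded = tokenizer("La vie est comme une boîte de chocolat.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```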
- ) synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) for idx, decoder_layer in enumerate(self.layers): @@ -1141,10 +1081,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=( - cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None - ), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1221,9 +1157,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1249,12 +1182,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( @@ -1267,7 +1194,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1287,8 +1213,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1343,9 +1267,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1372,12 +1293,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1413,9 +1328,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, diff --git a/src/transformers/models/marian/modeling_marian.py b/src/transformers/models/marian/modeling_marian.py index 24d056043fee..18d2bf99d609 100755 --- a/src/transformers/models/marian/modeling_marian.py +++ b/src/transformers/models/marian/modeling_marian.py @@ -121,7 +121,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -133,9 +132,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -193,7 +189,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -262,7 +257,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -297,7 +291,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, output_attentions: Optional[bool] = False, ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: """ @@ -305,8 +298,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
@@ -315,7 +306,6 @@ def forward(
         hidden_states, attn_weights = self.self_attn(
             hidden_states=hidden_states,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
         )
         hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
@@ -384,8 +374,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_layer_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
         use_cache: Optional[bool] = True,
@@ -400,10 +388,6 @@ def forward(
                 cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
             encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
-            cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of
-                size `(decoder_attention_heads,)`.
             past_key_values (`Cache`): cached past key and value projection states
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
@@ -419,7 +403,6 @@ def forward(
             hidden_states=hidden_states,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
             cache_position=cache_position,
         )
@@ -436,7 +419,6 @@ def forward(
             hidden_states=hidden_states,
             key_value_states=encoder_hidden_states,
             attention_mask=encoder_attention_mask,
-            layer_head_mask=cross_attn_layer_head_mask,
             past_key_values=past_key_values,
             output_attentions=output_attentions,
             cache_position=cache_position,
         )
@@ -510,8 +492,6 @@ def _update_full_mask(
         if "flash" in self.config._attn_implementation:
             attention_mask = attention_mask if 0 in attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
         elif self.config._attn_implementation == "flex_attention":
@@ -669,8 +649,6 @@ def _update_cross_attn_mask(
         if "flash" in self.config._attn_implementation:
             encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
                 encoder_attention_mask,
@@ -732,7 +710,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -755,12 +732,6 @@ def forward(
 
                 - 0 for tokens that are **masked**.
             [What are attention masks?](../glossary#attention-mask)
-        head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-
         inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
             Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
             This is useful if you want more control over how to convert `input_ids` indices into associated vectors
@@ -808,11 +779,6 @@ def forward(
         encoder_states = () if output_hidden_states else None
         all_attentions = () if output_attentions else None
 
-        # check if head_mask has a correct number of layers specified if desired
-        if head_mask is not None:
-            assert head_mask.size()[0] == (len(self.layers)), (
-                f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
-            )
         for idx, encoder_layer in enumerate(self.layers):
             if output_hidden_states:
                 encoder_states = encoder_states + (hidden_states,)
@@ -829,7 +795,6 @@ def forward(
                 layer_outputs = encoder_layer(
                     hidden_states,
                     attention_mask,
-                    layer_head_mask=(head_mask[idx] if head_mask is not None else None),
                     output_attentions=output_attentions,
                 )
 
@@ -885,8 +850,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = None,
@@ -923,19 +886,6 @@ def forward(
 
                 - 0 for tokens that are **masked**.
 
             [What are attention masks?](../glossary#attention-mask)
-        head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing
-            cross-attention on hidden heads. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-
         past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
             It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache
             guide](https://huggingface.co/docs/transformers/en/kv_cache).
@@ -1050,13 +1000,6 @@ def forward(
         all_self_attns = () if output_attentions else None
         all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None
 
-        # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
-        for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
-            if attn_mask is not None:
-                assert attn_mask.size()[0] == (len(self.layers)), (
-                    f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for"
-                    f" {head_mask.size()[0]}."
-                )
         for idx, decoder_layer in enumerate(self.layers):
             # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description)
             if output_hidden_states:
@@ -1071,8 +1014,6 @@ def forward(
                 causal_mask,
                 encoder_hidden_states,  # as a positional argument for gradient checkpointing
                 encoder_attention_mask=encoder_attention_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
-                cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None),
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
                 use_cache=use_cache,
@@ -1194,9 +1135,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[Union[tuple[torch.Tensor], BaseModelOutput]] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
@@ -1222,12 +1160,6 @@ def forward(
         decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
             Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will
             also be used by default.
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
 
         Example:
 
@@ -1260,7 +1192,6 @@ def forward(
             encoder_outputs = self.encoder(
                 input_ids=input_ids,
                 attention_mask=attention_mask,
-                head_mask=head_mask,
                 inputs_embeds=inputs_embeds,
                 output_attentions=output_attentions,
                 output_hidden_states=output_hidden_states,
@@ -1280,8 +1211,6 @@ def forward(
             attention_mask=decoder_attention_mask,
             encoder_hidden_states=encoder_outputs[0],
             encoder_attention_mask=attention_mask,
-            head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=decoder_inputs_embeds,
             use_cache=use_cache,
@@ -1448,9 +1377,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[Union[tuple[torch.Tensor], BaseModelOutput]] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
@@ -1477,12 +1403,6 @@ def forward(
         decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
             Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will
             also be used by default.
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss.
             Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens
             with indices set to `-100` are ignored
@@ -1525,9 +1445,6 @@ def forward(
             decoder_input_ids=decoder_input_ids,
             encoder_outputs=encoder_outputs,
             decoder_attention_mask=decoder_attention_mask,
-            head_mask=head_mask,
-            decoder_head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             decoder_inputs_embeds=decoder_inputs_embeds,
@@ -1613,8 +1530,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -1625,11 +1540,6 @@ def forward(
         cache_position: Optional[torch.LongTensor] = None,
     ) -> Union[tuple, CausalLMOutputWithCrossAttentions]:
         r"""
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
@@ -1664,8 +1574,6 @@ def forward(
             attention_mask=attention_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
-            head_mask=head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
diff --git a/src/transformers/models/markuplm/modeling_markuplm.py b/src/transformers/models/markuplm/modeling_markuplm.py
index a0c6985b3da9..353c8faafa64 100755
--- a/src/transformers/models/markuplm/modeling_markuplm.py
+++ b/src/transformers/models/markuplm/modeling_markuplm.py
@@ -329,7 +329,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: float,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
@@ -340,9 +339,6 @@ def eager_attention_forward(
     attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
 
-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
     return attn_output, attn_weights
@@ -375,7 +371,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
@@ -398,7 +393,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.attention_dropout,
             scaling=self.scaling,
-            head_mask=head_mask,
             **kwargs,
         )
 
@@ -437,14 +431,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_outputs = self.self(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -467,14 +459,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         **kwargs,
     ) -> tuple[torch.Tensor]:
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             **kwargs,
         )
@@ -507,7 +497,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
         output_hidden_states: Optional[bool] = False,
         return_dict: Optional[bool] = True,
@@ -520,12 +509,9 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)
 
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states=hidden_states,
                 attention_mask=attention_mask,
-                head_mask=layer_head_mask,
                 output_attentions=output_attentions,
                 **kwargs,
             )
@@ -614,7 +600,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -671,16 +656,6 @@ def forward(
         extended_attention_mask = extended_attention_mask.to(dtype=self.dtype)
         extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
 
-        if head_mask is not None:
-            if head_mask.dim() == 1:
-                head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
-                head_mask = head_mask.expand(self.config.num_hidden_layers, -1, -1, -1, -1)
-            elif head_mask.dim() == 2:
-                head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1)
-            head_mask = head_mask.to(dtype=next(self.parameters()).dtype)
-        else:
-            head_mask = [None] * self.config.num_hidden_layers
-
         embedding_output = self.embeddings(
             input_ids=input_ids,
             xpath_tags_seq=xpath_tags_seq,
@@ -692,7 +667,6 @@ def forward(
         encoder_outputs = self.encoder(
             embedding_output,
             extended_attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             return_dict=True,
@@ -731,7 +705,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         start_positions: Optional[torch.Tensor] = None,
         end_positions: Optional[torch.Tensor] = None,
@@ -778,7 +751,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -849,7 +821,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
@@ -894,7 +865,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -953,7 +923,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         output_attentions: Optional[bool] = None,
@@ -997,7 +966,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
diff --git a/src/transformers/models/maskformer/modeling_maskformer_swin.py b/src/transformers/models/maskformer/modeling_maskformer_swin.py
index 2de478440414..9d847e32624f 100644
--- a/src/transformers/models/maskformer/modeling_maskformer_swin.py
+++ b/src/transformers/models/maskformer/modeling_maskformer_swin.py
@@ -353,7 +353,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.Tensor]:
         batch_size, dim, num_channels = hidden_states.shape
@@ -392,10 +391,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)
 
-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
         new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
@@ -450,10 +445,9 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = False,
     ) -> tuple[torch.Tensor]:
-        self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions)
+        self_outputs = self.self(hidden_states, attention_mask, output_attentions)
         attention_output = self.output(self_outputs[0], hidden_states)
         outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
         return outputs
@@ -538,7 +532,7 @@ def maybe_pad(self, hidden_states, height, width):
             hidden_states = nn.functional.pad(hidden_states, pad_values)
         return hidden_states, pad_values
 
-    def forward(self, hidden_states, input_dimensions, head_mask=None, output_attentions=False):
+    def forward(self, hidden_states, input_dimensions, output_attentions=False):
         height, width = input_dimensions
         batch_size, dim, channels = hidden_states.size()
         shortcut = hidden_states
@@ -562,9 +556,7 @@ def forward(
         if attn_mask is not None:
             attn_mask = attn_mask.to(hidden_states_windows.device)
 
-        self_attention_outputs = self.attention(
-            hidden_states_windows, attn_mask, head_mask, output_attentions=output_attentions
-        )
+        self_attention_outputs = self.attention(hidden_states_windows, attn_mask, output_attentions=output_attentions)
 
         attention_output = self_attention_outputs[0]
 
@@ -626,9 +618,7 @@ def __init__(self, config, dim, input_resolution, depth, num_heads, drop_path, d
 
         self.pointing = False
 
-    def forward(
-        self, hidden_states, input_dimensions, head_mask=None, output_attentions=False, output_hidden_states=False
-    ):
+    def forward(self, hidden_states, input_dimensions, output_attentions=False, output_hidden_states=False):
         all_hidden_states = () if output_hidden_states else None
 
         height, width = input_dimensions
@@ -636,9 +626,7 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)
 
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
-            block_hidden_states = block_module(hidden_states, input_dimensions, layer_head_mask, output_attentions)
+            block_hidden_states = block_module(hidden_states, input_dimensions, output_attentions)
 
             hidden_states = block_hidden_states[0]
 
@@ -683,7 +671,6 @@ def forward(
         self,
         hidden_states,
         input_dimensions,
-        head_mask=None,
        output_attentions=False,
         output_hidden_states=False,
         return_dict=True,
@@ -696,12 +683,9 @@ def forward(
             all_hidden_states = all_hidden_states + (hidden_states,)
 
         for i, layer_module in enumerate(self.layers):
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_hidden_states, output_dimensions, layer_all_hidden_states = layer_module(
                 hidden_states,
                 input_dimensions,
-                layer_head_mask,
                 output_attentions,
                 output_hidden_states,
             )
@@ -778,7 +762,6 @@ class PreTrainedModel
     def forward(
         self,
         pixel_values=None,
-        head_mask=None,
         output_attentions=None,
         output_hidden_states=None,
         interpolate_pos_encoding=False,
@@ -793,13 +776,6 @@ def forward(
         if pixel_values is None:
             raise ValueError("You have to specify pixel_values")
 
-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, len(self.config.depths))
-
         embedding_output, input_dimensions = self.embeddings(
             pixel_values, interpolate_pos_encoding=interpolate_pos_encoding
         )
@@ -807,7 +783,6 @@ def forward(
         encoder_outputs = self.encoder(
             embedding_output,
             input_dimensions,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
             return_dict=return_dict,
diff --git a/src/transformers/models/mbart/modeling_mbart.py b/src/transformers/models/mbart/modeling_mbart.py
index 3a0eff585103..9e19183e38fc 100755
--- a/src/transformers/models/mbart/modeling_mbart.py
+++ b/src/transformers/models/mbart/modeling_mbart.py
@@ -132,7 +132,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     if scaling is None:
@@ -144,9 +143,6 @@ def eager_attention_forward(
 
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)
 
-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -204,7 +200,6 @@ def forward(
         key_value_states: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
         output_attentions: bool = False,
         cache_position: Optional[torch.Tensor] = None,
         # TODO: we need a refactor so that the different attention modules can get their specific kwargs
@@ -273,7 +268,6 @@ def forward(
             dropout=0.0 if not self.training else self.dropout,
             scaling=self.scaling,
             output_attentions=output_attentions,
-            head_mask=layer_head_mask,
             **kwargs,
         )
 
@@ -306,7 +300,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: torch.Tensor,
-        layer_head_mask: torch.Tensor,
         output_attentions: bool = False,
     ) -> torch.Tensor:
         """
@@ -314,8 +307,6 @@ def forward(
             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                 returned tensors for more detail.
@@ -325,7 +316,6 @@ def forward(
         hidden_states, attn_weights = self.self_attn(
             hidden_states=hidden_states,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
         )
         hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
@@ -385,8 +375,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.Tensor] = None,
         encoder_attention_mask: Optional[torch.Tensor] = None,
-        layer_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_layer_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
         use_cache: Optional[bool] = True,
@@ -401,10 +389,6 @@ def forward(
                 cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
             encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
-            layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
-                `(encoder_attention_heads,)`.
-            cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of
-                size `(decoder_attention_heads,)`.
             past_key_values (`Cache`): cached past key and value projection states
             output_attentions (`bool`, *optional*):
                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
@@ -421,7 +405,6 @@ def forward(
             hidden_states=hidden_states,
             past_key_values=past_key_values,
             attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
             output_attentions=output_attentions,
             cache_position=cache_position,
         )
@@ -438,7 +421,6 @@ def forward(
             hidden_states=hidden_states,
             key_value_states=encoder_hidden_states,
             attention_mask=encoder_attention_mask,
-            layer_head_mask=cross_attn_layer_head_mask,
             past_key_values=past_key_values,
             output_attentions=output_attentions,
         )
@@ -533,8 +515,6 @@ def _update_full_mask(
         if "flash" in self.config._attn_implementation:
             attention_mask = attention_mask if 0 in attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
         elif self.config._attn_implementation == "flex_attention":
@@ -692,8 +672,6 @@ def _update_cross_attn_mask(
         if "flash" in self.config._attn_implementation:
             encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
                 encoder_attention_mask,
@@ -766,7 +744,6 @@ def forward(
         self,
         input_ids: Optional[torch.LongTensor] = None,
         attention_mask: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         output_attentions: Optional[bool] = None,
         output_hidden_states: Optional[bool] = None,
@@ -789,12 +766,6 @@ def forward(
 
                 - 0 for tokens that are **masked**.
 
             [What are attention masks?](../glossary#attention-mask)
-        head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-
         inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
             Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
             This is useful if you want more control over how to convert `input_ids` indices into associated vectors
@@ -843,13 +814,6 @@ def forward(
         encoder_states = () if output_hidden_states else None
         all_attentions = () if output_attentions else None
 
-        # check if head_mask has a correct number of layers specified if desired
-        if head_mask is not None:
-            if head_mask.size()[0] != len(self.layers):
-                raise ValueError(
-                    f"The head_mask should be specified for {len(self.layers)} layers, but it is for"
-                    f" {head_mask.size()[0]}."
-                )
         for idx, encoder_layer in enumerate(self.layers):
             if output_hidden_states:
                 encoder_states = encoder_states + (hidden_states,)
@@ -866,7 +830,6 @@ def forward(
                 layer_outputs = encoder_layer(
                     hidden_states,
                     attention_mask,
-                    layer_head_mask=(head_mask[idx] if head_mask is not None else None),
                     output_attentions=output_attentions,
                 )
 
@@ -931,8 +894,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         use_cache: Optional[bool] = None,
@@ -969,19 +930,6 @@ def forward(
 
                 - 0 for tokens that are **masked**.
 
             [What are attention masks?](../glossary#attention-mask)
-        head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing
-            cross-attention on hidden heads. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-
         past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
             It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache
             guide](https://huggingface.co/docs/transformers/en/kv_cache).
@@ -1096,14 +1044,6 @@ def forward(
         all_self_attns = () if output_attentions else None
         all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None
 
-        # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
-        for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
-            if attn_mask is not None:
-                if attn_mask.size()[0] != len(self.layers):
-                    raise ValueError(
-                        f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for"
-                        f" {attn_mask.size()[0]}."
-                    )
         for idx, decoder_layer in enumerate(self.layers):
             # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description)
             if output_hidden_states:
@@ -1118,8 +1058,6 @@ def forward(
                 causal_mask,
                 encoder_hidden_states,  # as a positional argument for gradient checkpointing
                 encoder_attention_mask=encoder_attention_mask,
-                layer_head_mask=(head_mask[idx] if head_mask is not None else None),
-                cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None),
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
                 use_cache=use_cache,
@@ -1193,9 +1131,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
@@ -1226,12 +1161,6 @@ def forward(
         decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
             Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will
             also be used by default.
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( @@ -1249,7 +1178,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1269,8 +1197,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1343,9 +1269,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1377,12 +1300,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1442,9 +1359,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1511,9 +1425,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1546,12 +1457,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_bart._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
         labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
             Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
             config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
@@ -1570,9 +1475,6 @@ def forward(
             attention_mask=attention_mask,
             decoder_input_ids=decoder_input_ids,
             decoder_attention_mask=decoder_attention_mask,
-            head_mask=head_mask,
-            decoder_head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             encoder_outputs=encoder_outputs,
             inputs_embeds=inputs_embeds,
             decoder_inputs_embeds=decoder_inputs_embeds,
@@ -1657,9 +1559,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         decoder_input_ids: Optional[torch.LongTensor] = None,
         decoder_attention_mask: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        decoder_head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         encoder_outputs: Optional[list[torch.FloatTensor]] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -1693,12 +1592,6 @@ def forward(
             If you want to change padding behavior, you should read
             [`modeling_bart._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the
             paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy.
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         """
         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
         if start_positions is not None and end_positions is not None:
@@ -1709,9 +1602,6 @@ def forward(
             attention_mask=attention_mask,
             decoder_input_ids=decoder_input_ids,
             decoder_attention_mask=decoder_attention_mask,
-            head_mask=head_mask,
-            decoder_head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             encoder_outputs=encoder_outputs,
             inputs_embeds=inputs_embeds,
             decoder_inputs_embeds=decoder_inputs_embeds,
@@ -1816,8 +1706,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
         past_key_values: Optional[Cache] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
@@ -1828,11 +1716,6 @@ def forward(
         cache_position: Optional[torch.LongTensor] = None,
     ) -> Union[tuple, CausalLMOutputWithCrossAttentions]:
         r"""
-        cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring).
             Tokens with indices set to `-100` are ignored
@@ -1867,8 +1750,6 @@ def forward(
             attention_mask=attention_mask,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
-            head_mask=head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
             past_key_values=past_key_values,
             inputs_embeds=inputs_embeds,
             use_cache=use_cache,
diff --git a/src/transformers/models/megatron_bert/modeling_megatron_bert.py b/src/transformers/models/megatron_bert/modeling_megatron_bert.py
index 121ae19850ff..07c011359023 100755
--- a/src/transformers/models/megatron_bert/modeling_megatron_bert.py
+++ b/src/transformers/models/megatron_bert/modeling_megatron_bert.py
@@ -138,7 +138,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
@@ -225,10 +224,6 @@ def forward(
         # seem a bit unusual, but is taken from the original Transformer paper.
         attention_probs = self.dropout(attention_probs)
 
-        # Mask heads if we want to
-        if head_mask is not None:
-            attention_probs = attention_probs * head_mask
-
         context_layer = torch.matmul(attention_probs, value_layer)
 
         context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
@@ -283,7 +278,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
         output_attentions: Optional[bool] = False,
@@ -293,7 +287,6 @@ def forward(
         self_outputs = self.self(
             ln_outputs,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             encoder_hidden_states=encoder_hidden_states,
             past_key_values=past_key_values,
             output_attentions=output_attentions,
@@ -355,7 +348,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
@@ -366,7 +358,6 @@ def forward(
         self_attention_outputs = self.attention(
             hidden_states,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             output_attentions=output_attentions,
             past_key_values=past_key_values,
             cache_position=cache_position,
@@ -384,7 +375,6 @@ def forward(
             cross_attention_outputs = self.crossattention(
                 attention_output,
                 attention_mask=encoder_attention_mask,
-                head_mask=head_mask,
                 encoder_hidden_states=encoder_hidden_states,
                 past_key_values=past_key_values,
                 output_attentions=output_attentions,
@@ -420,7 +410,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
         past_key_values: Optional[Cache] = None,
@@ -454,12 +443,9 @@ def forward(
             if output_hidden_states:
                 all_hidden_states = all_hidden_states + (hidden_states,)
 
-            layer_head_mask = head_mask[i] if head_mask is not None else None
-
             layer_outputs = layer_module(
                 hidden_states,
                 attention_mask,
-                layer_head_mask,
                 encoder_hidden_states,
                 encoder_attention_mask,
                 past_key_values,
@@ -692,7 +678,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -755,13 +740,6 @@ def forward(
         else:
             encoder_extended_attention_mask = None
 
-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         embedding_output = self.embeddings(
             input_ids=input_ids,
             position_ids=position_ids,
@@ -772,7 +750,6 @@ def forward(
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=extended_attention_mask,
-            head_mask=head_mask,
            encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_extended_attention_mask,
             past_key_values=past_key_values,
@@ -834,7 +811,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         next_sentence_label: Optional[torch.LongTensor] = None,
@@ -876,7 +852,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -940,7 +915,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -982,7 +956,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
@@ -1053,7 +1026,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         encoder_hidden_states: Optional[torch.FloatTensor] = None,
         encoder_attention_mask: Optional[torch.FloatTensor] = None,
@@ -1076,7 +1048,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             encoder_hidden_states=encoder_hidden_states,
             encoder_attention_mask=encoder_attention_mask,
@@ -1142,7 +1113,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1191,7 +1161,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1244,7 +1213,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1264,7 +1232,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1329,7 +1296,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1384,7 +1350,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1434,7 +1399,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         output_attentions: Optional[bool] = None,
@@ -1452,7 +1416,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
@@ -1500,7 +1463,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         start_positions: Optional[torch.LongTensor] = None,
         end_positions: Optional[torch.LongTensor] = None,
@@ -1515,7 +1477,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
diff --git a/src/transformers/models/mobilebert/modeling_mobilebert.py b/src/transformers/models/mobilebert/modeling_mobilebert.py
index c44a24acbae9..7729bab4802e 100644
--- a/src/transformers/models/mobilebert/modeling_mobilebert.py
+++ b/src/transformers/models/mobilebert/modeling_mobilebert.py
@@ -154,7 +154,6 @@ def eager_attention_forward(
     attention_mask: Optional[torch.Tensor],
     scaling: Optional[float] = None,
     dropout: float = 0.0,
-    head_mask: Optional[torch.Tensor] = None,
     **kwargs,
 ):
     if scaling is None:
@@ -166,9 +165,6 @@ def eager_attention_forward(
 
     attn_weights = nn.functional.softmax(attn_weights, dim=-1)
 
-    if head_mask is not None:
-        attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
-
     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
     attn_output = torch.matmul(attn_weights, value)
     attn_output = attn_output.transpose(1, 2).contiguous()
@@ -200,7 +196,6 @@ def forward(
         key_tensor: torch.Tensor,
         value_tensor: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
         input_shape = query_tensor.shape[:-1]
@@ -223,7 +218,6 @@ def forward(
             attention_mask,
             dropout=0.0 if not self.training else self.dropout.p,
             scaling=self.scaling,
-            head_mask=head_mask,
             **kwargs,
         )
         attn_output = attn_output.reshape(*input_shape, -1).contiguous()
@@ -279,7 +273,6 @@ def forward(
         value_tensor: torch.Tensor,
         layer_input: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
         attention_output, attn_weights = self.self(
@@ -287,7 +280,6 @@ def forward(
             key_tensor,
             value_tensor,
             attention_mask,
-            head_mask,
             **kwargs,
         )
         # Run a linear projection of `hidden_size` then add a residual
@@ -439,7 +431,6 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> tuple[torch.Tensor]:
         if self.use_bottleneck:
@@ -453,7 +444,6 @@ def forward(
             value_tensor,
             layer_input,
             attention_mask,
-            head_mask,
             **kwargs,
         )
         attention_output = self_attention_output
@@ -476,14 +466,12 @@ def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.FloatTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple, BaseModelOutput]:
         for i, layer_module in enumerate(self.layer):
             hidden_states = layer_module(
                 hidden_states,
                 attention_mask,
-                head_mask[i],
                 **kwargs,
             )
         return BaseModelOutput(last_hidden_state=hidden_states)
@@ -670,7 +658,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple, BaseModelOutputWithPooling]:
@@ -689,17 +676,9 @@ def forward(
             embedding_output,
         )
 
-        # Prepare head mask if needed
-        # 1.0 in head_mask indicate we keep the head
-        # attention_probs has shape bsz x n_heads x N x N
-        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
-        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
-        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
         encoder_outputs = self.encoder(
             embedding_output,
             attention_mask=attention_mask,
-            head_mask=head_mask,
             **kwargs,
         )
         sequence_output = encoder_outputs[0]
@@ -720,8 +699,6 @@ def _update_full_mask(
         if "flash" in self.config._attn_implementation:
             attention_mask = attention_mask if 0 in attention_mask else None
         elif self.config._attn_implementation == "sdpa":
-            # output_attentions=True & head_mask can not be supported when using SDPA, fall back to
-            # the manual implementation that requires a 4D causal mask in all cases.
             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
             attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype)
         elif self.config._attn_implementation == "flex_attention":
@@ -774,7 +751,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         next_sentence_label: Optional[torch.LongTensor] = None,
@@ -813,7 +789,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -872,7 +847,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -888,7 +862,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -943,7 +916,6 @@ def forward(
         attention_mask: Optional[torch.FloatTensor] = None,
         token_type_ids: Optional[torch.LongTensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
         inputs_embeds: Optional[torch.FloatTensor] = None,
         labels: Optional[torch.LongTensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -987,7 +959,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1040,7 +1011,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -1056,7 +1026,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1119,7 +1088,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         start_positions: Optional[torch.Tensor] = None,
         end_positions: Optional[torch.Tensor] = None,
@@ -1130,7 +1098,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
             return_dict=True,
             **kwargs,
@@ -1193,7 +1160,6 @@ def forward(
         attention_mask: Optional[torch.Tensor] = None,
         token_type_ids: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.Tensor] = None,
-        head_mask: Optional[torch.Tensor] = None,
         inputs_embeds: Optional[torch.Tensor] = None,
         labels: Optional[torch.Tensor] = None,
         **kwargs: Unpack[TransformersKwargs],
@@ -1245,7 +1211,6 @@ def forward(
             attention_mask=attention_mask,
             token_type_ids=token_type_ids,
             position_ids=position_ids,
-            head_mask=head_mask,
             inputs_embeds=inputs_embeds,
return_dict=True, **kwargs, @@ -1295,7 +1260,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1309,7 +1273,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/mpnet/modeling_mpnet.py b/src/transformers/models/mpnet/modeling_mpnet.py index e2ea5cf300ad..3c4c5ca7d74d 100644 --- a/src/transformers/models/mpnet/modeling_mpnet.py +++ b/src/transformers/models/mpnet/modeling_mpnet.py @@ -146,7 +146,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, position_bias=None, output_attentions=False, **kwargs, @@ -184,9 +183,6 @@ def forward( attention_probs = self.dropout(attention_probs) - if head_mask is not None: - attention_probs = attention_probs * head_mask - c = torch.matmul(attention_probs, v) c = c.permute(0, 2, 1, 3).contiguous() @@ -228,7 +224,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, position_bias=None, output_attentions=False, **kwargs, @@ -236,7 +231,6 @@ def forward( self_outputs = self.attn( hidden_states, attention_mask, - head_mask, position_bias, output_attentions=output_attentions, ) @@ -287,7 +281,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, position_bias=None, output_attentions=False, **kwargs, @@ -295,7 +288,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, position_bias=position_bias, output_attentions=output_attentions, ) @@ -320,7 +312,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = False, @@ -336,7 +327,6 @@ def forward( layer_outputs = layer_module( hidden_states, attention_mask, - head_mask[i], position_bias, output_attentions=output_attentions, **kwargs, @@ -450,7 +440,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -479,12 +468,10 @@ def forward( attention_mask = torch.ones(input_shape, device=device) extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, inputs_embeds=inputs_embeds) encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -528,7 +515,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: 
Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -547,7 +533,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -625,7 +610,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -645,7 +629,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -706,7 +689,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -752,7 +734,6 @@ def forward( flat_input_ids, position_ids=flat_position_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -800,7 +781,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -818,7 +798,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -884,7 +863,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -898,7 +876,6 @@ def forward( input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/mra/modeling_mra.py b/src/transformers/models/mra/modeling_mra.py index 6612336b6794..1616bcfdf979 100644 --- a/src/transformers/models/mra/modeling_mra.py +++ b/src/transformers/models/mra/modeling_mra.py @@ -733,7 +733,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_hidden_states=False, return_dict=True, ): @@ -869,7 +868,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -907,13 +905,6 @@ def forward( # ourselves in which case we just need to make 
it broadcastable to all heads. extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -923,7 +914,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_hidden_states=output_hidden_states, return_dict=return_dict, ) @@ -967,7 +957,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, @@ -986,7 +975,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1057,7 +1045,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, @@ -1076,7 +1063,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1138,7 +1124,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, @@ -1192,7 +1177,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1243,7 +1227,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, @@ -1260,7 +1243,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1318,7 +1300,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ 
-1332,7 +1313,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git a/src/transformers/models/mt5/modeling_mt5.py b/src/transformers/models/mt5/modeling_mt5.py index a2439f05d585..2288187abc6b 100644 --- a/src/transformers/models/mt5/modeling_mt5.py +++ b/src/transformers/models/mt5/modeling_mt5.py @@ -16,7 +16,6 @@ import copy import math -import warnings from typing import Optional, Union import torch @@ -286,7 +285,6 @@ def forward( key_value_states=None, position_bias=None, past_key_values=None, - layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, @@ -374,10 +372,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -407,7 +401,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -418,7 +411,6 @@ def forward( normed_hidden_states, mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -444,7 +436,6 @@ def forward( key_value_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, query_length=None, @@ -457,7 +448,6 @@ def forward( mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, query_length=query_length, @@ -492,8 +482,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -504,7 +492,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -529,7 +516,6 @@ def forward( key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, query_length=cache_position[-1] + 1, use_cache=use_cache, @@ -719,8 +705,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, @@ -821,9 +805,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions and self.is_decoder) else None @@ -833,8 +814,6 @@ def forward( hidden_states = self.dropout(inputs_embeds) for i, 
layer_module in enumerate(self.block): - layer_head_mask = head_mask[i] - cross_attn_layer_head_mask = cross_attn_head_mask[i] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -845,8 +824,6 @@ def forward( encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, # as a positional argument for gradient checkpointing - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1021,15 +998,6 @@ def _prepare_4d_causal_attention_mask_with_cache_position( return causal_mask -# Warning message for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask -__HEAD_MASK_WARNING_MSG = """ -The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. Currently, -`decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions. -If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers, -num_heads)`. -""" - - @auto_docstring class MT5Model(MT5PreTrainedModel): r""" @@ -1105,9 +1073,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1145,18 +1110,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
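The docstring bullets removed here (and in the matching hunks below) all documented one convention: a float mask of shape `(num_heads,)` or `(num_layers, num_heads)` where 1 keeps a head and 0 silences it. For the record, a caller-side construction under that old convention looked roughly like this (sizes invented for illustration):

```python
import torch

num_layers, num_heads = 8, 16

# Old convention from the deleted docstrings: 1 = head is kept, 0 = masked.
decoder_head_mask = torch.ones(num_layers, num_heads)
decoder_head_mask[:, 5] = 0.0  # silence head 5 in every decoder layer

# The 1D form applied the same per-head pattern to all layers.
shared_head_mask = torch.ones(num_heads)
shared_head_mask[0] = 0.0
```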
Example: @@ -1182,19 +1135,12 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1216,8 +1162,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1311,9 +1255,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1352,18 +1293,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`. 
All labels set to `-100` are ignored (masked), the loss is only computed for @@ -1395,12 +1324,6 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: # Convert encoder inputs in embeddings if needed @@ -1408,7 +1331,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1434,8 +1356,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1539,7 +1459,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1574,7 +1493,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1610,9 +1528,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1649,18 +1564,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
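With `decoder_head_mask` gone, the shim deleted above (the `warnings` import, `__HEAD_MASK_WARNING_MSG`, and the copy in each `forward`) loses its purpose. For readers auditing the behavior change, this is roughly what the deleted shim did, extracted as a standalone sketch (it no longer exists anywhere after this patch):

```python
import warnings

def resolve_decoder_head_mask(head_mask, decoder_head_mask, num_layers, num_decoder_layers):
    # Deleted behavior: when only `head_mask` was passed and the encoder and
    # decoder had the same depth, it was silently copied to the decoder side,
    # with a FutureWarning pointing users at the split arguments.
    if head_mask is not None and decoder_head_mask is None and num_layers == num_decoder_layers:
        warnings.warn(
            "`head_mask` was split into `head_mask` and `decoder_head_mask`.",
            FutureWarning,
        )
        decoder_head_mask = head_mask
    return decoder_head_mask
```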
@@ -1690,9 +1593,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1773,7 +1673,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1799,7 +1698,6 @@ def forward( outputs = self.transformer( input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1879,9 +1777,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1919,18 +1814,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict use_cache = use_cache if use_cache is not None else self.config.use_cache @@ -1952,19 +1835,12 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1986,8 +1862,6 @@ def forward( past_key_values=None, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/musicgen/modeling_musicgen.py b/src/transformers/models/musicgen/modeling_musicgen.py index fec5fabc5470..7326ede89e71 100644 --- a/src/transformers/models/musicgen/modeling_musicgen.py +++ b/src/transformers/models/musicgen/modeling_musicgen.py @@ -162,7 +162,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -174,9 +173,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -227,7 +223,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -294,7 +289,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -345,8 +339,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -361,10 +353,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. 
- layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -378,7 +366,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -395,7 +382,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -485,8 +471,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -525,12 +509,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( @@ -598,14 +576,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {attn_mask.size()[0]}." 
- ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -619,8 +589,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -664,8 +632,6 @@ def _update_causal_mask( # 2d mask is passed through the layers attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( attention_mask, input_shape, @@ -704,9 +670,6 @@ def _update_cross_attn_mask( if self.config._attn_implementation == "flash_attention_2": encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, inputs_embeds.dtype, @@ -749,8 +712,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -789,12 +750,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( @@ -809,8 +764,6 @@ def forward( attention_mask=attention_mask, encoder_attention_mask=encoder_attention_mask, encoder_hidden_states=encoder_hidden_states, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -876,8 +829,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -918,12 +869,6 @@ def forward( - 0 for tokens that are **masked**. 
[What are attention masks?](../glossary#attention-mask) - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length, num_codebooks)`, *optional*): Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` @@ -940,8 +885,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -999,8 +942,6 @@ def prepare_inputs_for_generation( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=True, delay_pattern_mask=None, @@ -1033,8 +974,6 @@ def prepare_inputs_for_generation( "attention_mask": attention_mask, "encoder_hidden_states": encoder_hidden_states, "encoder_attention_mask": encoder_attention_mask, - "head_mask": head_mask, - "cross_attn_head_mask": cross_attn_head_mask, "past_key_values": past_key_values, "use_cache": use_cache, } @@ -1871,10 +1810,7 @@ def prepare_inputs_for_generation( decoder_input_ids, past_key_values=None, attention_mask=None, - head_mask=None, decoder_attention_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, use_cache=None, encoder_outputs=None, decoder_delay_pattern_mask=None, @@ -1918,9 +1854,6 @@ def prepare_inputs_for_generation( "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": decoder_attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, "use_cache": use_cache, } diff --git a/src/transformers/models/musicgen_melody/modeling_musicgen_melody.py b/src/transformers/models/musicgen_melody/modeling_musicgen_melody.py index e7237157e156..cea583599ee2 100644 --- a/src/transformers/models/musicgen_melody/modeling_musicgen_melody.py +++ b/src/transformers/models/musicgen_melody/modeling_musicgen_melody.py @@ -168,7 +168,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -180,9 +179,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -234,7 +230,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their 
specific kwargs @@ -301,7 +296,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -341,7 +335,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -352,7 +345,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -366,7 +358,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -453,7 +444,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -566,14 +556,6 @@ def forward( all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `head_mask` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -585,7 +567,6 @@ def forward( layer_outputs = decoder_layer( hidden_states, attention_mask=attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -623,8 +604,6 @@ def _update_causal_mask( # 2d mask is passed through the layers attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( attention_mask, input_shape, @@ -687,7 +666,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -740,7 +718,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -807,7 +784,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -863,7 +839,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -921,7 +896,6 @@ def prepare_inputs_for_generation( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, past_key_values=None, use_cache=True, delay_pattern_mask=None, @@ -967,7 +941,6 @@ def prepare_inputs_for_generation( "attention_mask": attention_mask, "encoder_hidden_states": encoder_hidden_states, "encoder_attention_mask": encoder_attention_mask, - "head_mask": head_mask, "past_key_values": past_key_values, "use_cache": use_cache, } @@ -1755,7 +1728,6 @@ def prepare_inputs_for_generation( past_key_values=None, attention_mask=None, decoder_attention_mask=None, - decoder_head_mask=None, use_cache=None, decoder_delay_pattern_mask=None, guidance_scale=None, @@ -1802,7 +1774,6 @@ def prepare_inputs_for_generation( "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": decoder_attention_mask, - "decoder_head_mask": decoder_head_mask, "use_cache": use_cache, } diff --git a/src/transformers/models/mvp/modeling_mvp.py b/src/transformers/models/mvp/modeling_mvp.py index 6838f209cb4e..554e6bf54edc 100644 --- a/src/transformers/models/mvp/modeling_mvp.py +++ b/src/transformers/models/mvp/modeling_mvp.py @@ -131,7 +131,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, attn_prompt: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, @@ -212,15 +211,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. 
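Unlike the attention modules refactored earlier in this patch, `MvpAttention` keeps its weights in the fused `bsz * num_heads` layout, so the block deleted above had to round-trip through a 4D view to apply the per-layer mask. The deleted arithmetic, replayed here with made-up sizes so the shapes are visible:

```python
import torch

bsz, num_heads, tgt_len, src_len = 2, 4, 5, 5
attn_weights = torch.rand(bsz * num_heads, tgt_len, src_len)
layer_head_mask = torch.tensor([1.0, 0.0, 1.0, 1.0])  # zero out head 1

# Deleted logic: broadcast the (num_heads,) mask over the batch and both
# sequence dims, then flatten back to the fused batch*heads layout.
masked = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, num_heads, tgt_len, src_len)
attn_weights = masked.view(bsz * num_heads, tgt_len, src_len)
```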
@@ -274,7 +264,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, self_attn_prompt: torch.FloatTensor, output_attentions: Optional[bool] = False, ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: @@ -283,8 +272,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. self_attn_prompt (`torch.FloatTensor`): prompt of self attention of shape `(2, encoder_attention_heads, pro_len, head_dim)`. output_attentions (`bool`, *optional*): @@ -295,7 +282,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, attn_prompt=self_attn_prompt, output_attentions=output_attentions, ) @@ -356,8 +342,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, self_attn_prompt: Optional[torch.Tensor] = None, cross_attn_prompt: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, @@ -374,10 +358,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. self_attn_prompt (`torch.FloatTensor`): prompt of self attention of shape `(2, decoder_attention_heads, pro_len, head_dim)`. cross_attn_prompt (`torch.FloatTensor`): prompt of cross attention of shape @@ -394,7 +374,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, attn_prompt=self_attn_prompt, output_attentions=output_attentions, cache_position=cache_position, @@ -412,7 +391,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, attn_prompt=cross_attn_prompt, past_key_values=past_key_values, output_attentions=output_attentions, @@ -569,7 +547,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -592,12 +569,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -652,14 +623,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -676,7 +639,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), self_attn_prompt=(self_attn_prompt[idx] if self.use_prompt else None), output_attentions=output_attentions, ) @@ -752,8 +714,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -790,19 +750,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -900,15 +847,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -923,8 +861,6 @@ def forward( attention_mask, encoder_hidden_states, # as positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), self_attn_prompt=(self_attn_prompt[idx] if self.use_prompt else None), cross_attn_prompt=(cross_attn_prompt[idx] if self.use_prompt else None), past_key_values=past_key_values, @@ -1002,9 +938,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1037,12 +970,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_mvp._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ # different to other models, Mvp automatically creates decoder_input_ids from # input_ids if no decoder_input_ids are provided @@ -1069,7 +996,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1089,8 +1015,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1165,9 +1089,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1201,12 +1122,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_mvp._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. 
Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1257,9 +1172,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1329,9 +1241,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1363,12 +1272,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_mvp._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). @@ -1413,9 +1316,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1501,9 +1401,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1536,12 +1433,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_mvp._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. 
Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Example: @@ -1586,9 +1477,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1695,8 +1583,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1707,11 +1593,6 @@ def forward( cache_position: Optional[torch.Tensor] = None, ) -> Union[tuple, CausalLMOutputWithCrossAttentions]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1745,8 +1626,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/nllb_moe/modeling_nllb_moe.py b/src/transformers/models/nllb_moe/modeling_nllb_moe.py index 3fcbd936af9b..584385fac00b 100644 --- a/src/transformers/models/nllb_moe/modeling_nllb_moe.py +++ b/src/transformers/models/nllb_moe/modeling_nllb_moe.py @@ -486,7 +486,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -498,9 +497,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -552,7 +548,6 @@ def forward( encoder_hidden_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -619,7 +614,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -653,7 +647,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - 
layer_head_mask: torch.Tensor, output_attentions: bool = False, output_router_logits: bool = False, ) -> torch.Tensor: @@ -664,8 +657,6 @@ def forward( attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -675,7 +666,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = self.attn_dropout(hidden_states) @@ -752,8 +742,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, output_router_logits: Optional[bool] = False, @@ -772,10 +760,6 @@ def forward( encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): - mask for attention heads in a given layer of size `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): - mask for cross-attention heads in a given layer of size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): @@ -790,7 +774,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -808,7 +791,6 @@ def forward( encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -924,7 +906,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -948,12 +929,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -1006,14 +981,6 @@ def forward( all_router_probs = () if output_router_logits else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -1025,7 +992,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, output_router_logits=output_router_logits, ) @@ -1065,8 +1031,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -1129,8 +1093,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -1168,19 +1130,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1273,14 +1222,6 @@ def forward( all_router_probs = () if output_router_logits else None all_cross_attentions = () if output_attentions else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) for idx, decoder_layer in enumerate(self.layers): @@ -1292,17 +1233,12 @@ def forward( skip_the_layer = self.training and dropout_probability < self.layerdrop if not skip_the_layer or synced_gpus: - layer_head_mask = head_mask[idx] if head_mask is not None else None - cross_attn_layer_head_mask = cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None - # under fsdp or deepspeed zero3 all gpus must run in sync layer_outputs = decoder_layer( hidden_states, attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1362,8 +1298,6 @@ def _update_causal_mask( # 2d mask is passed through the layers attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( attention_mask, input_shape, @@ -1403,9 +1337,6 @@ def _update_cross_attn_mask( if self.config._attn_implementation == "flash_attention_2": encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, inputs_embeds.dtype, @@ -1467,9 +1398,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1496,12 +1424,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
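For orientation amid these hunks: the decoder loop shown earlier gates every layer with LayerDrop (https://huggingface.co/papers/1909.11556), and that gating is untouched by this patch. A minimal standalone sketch of the mechanism, with illustrative names and a stand-in layer list:

```py
import torch

layerdrop = 0.1   # illustrative: probability of skipping a layer during training
training = True
layers = [lambda h: h + 1 for _ in range(12)]  # stand-in for the nn.ModuleList of decoder layers

hidden_states = torch.zeros(())
for layer in layers:
    # Mirrors `self.training and dropout_probability < self.layerdrop` above:
    # a skipped layer leaves hidden_states unchanged for this step.
    if training and torch.rand([]) < layerdrop:
        continue
    hidden_states = layer(hidden_states)
```

Under FSDP or DeepSpeed ZeRO-3 the real loop still runs skipped layers (`synced_gpus`) so that all ranks stay in sync.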
Example: @@ -1528,7 +1450,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1550,8 +1471,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1611,9 +1530,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1641,12 +1557,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1685,9 +1595,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, diff --git a/src/transformers/models/nystromformer/modeling_nystromformer.py b/src/transformers/models/nystromformer/modeling_nystromformer.py index 03c134ccadae..ffd46ed0c278 100755 --- a/src/transformers/models/nystromformer/modeling_nystromformer.py +++ b/src/transformers/models/nystromformer/modeling_nystromformer.py @@ -355,7 +355,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -492,7 +491,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -532,13 +530,6 @@ def forward( # ourselves in which case we just need to make it broadcastable to all heads. 
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -548,7 +539,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -593,7 +583,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -613,7 +602,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -684,7 +672,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -704,7 +691,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -767,7 +753,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -822,7 +807,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -874,7 +858,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -892,7 +875,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -942,7 +924,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = 
None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -957,7 +938,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/openai/modeling_openai.py b/src/transformers/models/openai/modeling_openai.py index a1b6bf2ed579..34b4ca5adf0d 100644 --- a/src/transformers/models/openai/modeling_openai.py +++ b/src/transformers/models/openai/modeling_openai.py @@ -78,7 +78,7 @@ def prune_heads(self, heads): self.n_head = self.n_head - len(heads) self.pruned_heads = self.pruned_heads.union(heads) - def _attn(self, q, k, v, attention_mask=None, head_mask=None, output_attentions=False): + def _attn(self, q, k, v, attention_mask=None, output_attentions=False): w = torch.matmul(q, k) if self.scale: w = w / math.sqrt(v.size(-1)) @@ -93,10 +93,6 @@ def _attn(self, q, k, v, attention_mask=None, head_mask=None, output_attentions= w = nn.functional.softmax(w, dim=-1) w = self.attn_dropout(w) - # Mask heads if we want to - if head_mask is not None: - w = w * head_mask - outputs = [torch.matmul(w, v)] if output_attentions: outputs.append(w) @@ -115,14 +111,14 @@ def split_heads(self, x, k=False): else: return x.permute(0, 2, 1, 3) - def forward(self, x, attention_mask=None, head_mask=None, output_attentions=False): + def forward(self, x, attention_mask=None, output_attentions=False): x = self.c_attn(x) query, key, value = x.split(self.split_size, dim=2) query = self.split_heads(query) key = self.split_heads(key, k=True) value = self.split_heads(value) - attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions) + attn_outputs = self._attn(query, key, value, attention_mask, output_attentions) a = attn_outputs[0] a = self.merge_heads(a) @@ -157,11 +153,10 @@ def __init__(self, n_positions, config, scale=False): self.mlp = MLP(4 * nx, config) self.ln_2 = nn.LayerNorm(nx, eps=config.layer_norm_epsilon) - def forward(self, x, attention_mask=None, head_mask=None, output_attentions=False): + def forward(self, x, attention_mask=None, output_attentions=False): attn_outputs = self.attn( x, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, ) a = attn_outputs[0] @@ -354,7 +349,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -398,9 +392,6 @@ def forward( attention_mask = attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - if inputs_embeds is None: inputs_embeds = self.tokens_embed(input_ids) position_embeds = self.positions_embed(position_ids) @@ -420,7 +411,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - outputs = block(hidden_states, attention_mask, head_mask[i], output_attentions=output_attentions) + outputs = block(hidden_states, attention_mask, 
output_attentions=output_attentions) hidden_states = outputs[0] if output_attentions: all_attentions = all_attentions + (outputs[1],) @@ -464,7 +455,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -485,7 +475,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -556,7 +545,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, mc_token_ids: Optional[torch.LongTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -605,7 +593,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -670,7 +657,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -690,7 +676,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/opt/modeling_opt.py b/src/transformers/models/opt/modeling_opt.py index fdd0550270bc..5ba51c46030c 100644 --- a/src/transformers/models/opt/modeling_opt.py +++ b/src/transformers/models/opt/modeling_opt.py @@ -144,7 +144,6 @@ def forward( hidden_states: torch.Tensor, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, **kwargs, @@ -219,7 +218,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = False, @@ -232,8 +230,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`, *optional*): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`, *optional*): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
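Every attention path touched by this patch deletes the same soft head-masking step: `w = w * head_mask` in the GPT-style `_attn` above, `attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)` in the shared `eager_attention_forward` helpers. A minimal standalone sketch of the behavior being removed (plain PyTorch, shapes invented for illustration):

```py
import torch

batch, n_heads, q_len, k_len = 2, 12, 5, 5
attn_weights = torch.softmax(torch.randn(batch, n_heads, q_len, k_len), dim=-1)

# One keep/drop value per head: 1.0 keeps a head, 0.0 zeroes its attention weights.
head_mask = torch.ones(n_heads)
head_mask[3] = 0.0

# Broadcast [n_heads] -> [1, n_heads, 1, 1] and multiply -- the exact line the
# patch drops from each eager attention implementation.
attn_weights = attn_weights * head_mask.view(1, -1, 1, 1)
```

After the patch, these models no longer accept a `head_mask` argument in any attention backend.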
@@ -257,7 +253,6 @@ def forward( past_key_values=past_key_values, position_ids=position_ids, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, **kwargs, @@ -501,7 +496,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -529,12 +523,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(num_hidden_layers, num_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -626,15 +614,6 @@ def forward( all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask], ["head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -649,7 +628,6 @@ def forward( hidden_states, attention_mask=causal_mask, position_ids=position_ids, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -700,7 +678,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -723,7 +700,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -773,7 +749,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -819,7 +794,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -881,7 +855,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -904,7 +877,6 @@ def forward( past_key_values=past_key_values, 
attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, @@ -993,7 +965,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, @@ -1043,7 +1014,6 @@ def forward( past_key_values=past_key_values, attention_mask=attention_mask, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, diff --git a/src/transformers/models/patchtsmixer/modeling_patchtsmixer.py b/src/transformers/models/patchtsmixer/modeling_patchtsmixer.py index b830a516804e..dd59ee37b203 100644 --- a/src/transformers/models/patchtsmixer/modeling_patchtsmixer.py +++ b/src/transformers/models/patchtsmixer/modeling_patchtsmixer.py @@ -246,7 +246,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -258,9 +257,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -308,7 +304,6 @@ def forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, # TODO: we need a refactor so that the different attention modules can get their specific kwargs # ATM, we have mixed things encoder, decoder, and encoder-decoder attn @@ -347,7 +342,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) diff --git a/src/transformers/models/patchtst/modeling_patchtst.py b/src/transformers/models/patchtst/modeling_patchtst.py index 1912e318f8fd..055ea0cc2203 100755 --- a/src/transformers/models/patchtst/modeling_patchtst.py +++ b/src/transformers/models/patchtst/modeling_patchtst.py @@ -43,7 +43,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -55,9 +54,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -105,7 +101,6 @@ def forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, # TODO: we need a refactor so that the different attention modules can get their specific kwargs # ATM, we have mixed things encoder, decoder, and 
encoder-decoder attn @@ -144,7 +139,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) diff --git a/src/transformers/models/pegasus/modeling_pegasus.py b/src/transformers/models/pegasus/modeling_pegasus.py index 09ea75c3b1fe..2dcbfab69444 100755 --- a/src/transformers/models/pegasus/modeling_pegasus.py +++ b/src/transformers/models/pegasus/modeling_pegasus.py @@ -120,7 +120,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -132,9 +131,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -192,7 +188,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -261,7 +256,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -295,7 +289,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ @@ -303,8 +296,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -314,7 +305,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -375,8 +365,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -391,10 +379,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. 
- layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -411,7 +395,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -428,7 +411,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, ) @@ -489,8 +471,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -648,8 +628,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -744,7 +722,6 @@ def forward( self, input_ids=None, attention_mask=None, - head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, @@ -767,12 +744,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -821,13 +792,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -844,7 +808,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -935,8 +898,6 @@ def forward( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, use_cache=None, @@ -973,19 +934,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1100,14 +1048,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1122,8 +1062,6 @@ def forward( causal_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1215,9 +1153,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1243,12 +1178,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. 
- cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Example: @@ -1278,7 +1207,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1298,8 +1226,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1395,9 +1321,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1424,12 +1347,6 @@ def forward( decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1473,9 +1390,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1584,8 +1498,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1596,11 +1508,6 @@ def forward( cache_position: Optional[torch.LongTensor] = None, ) -> Union[tuple, CausalLMOutputWithCrossAttentions]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
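With the soft `head_mask`/`cross_attn_head_mask` arguments removed, structural pruning remains the built-in way to silence specific heads; the `prune_heads` machinery is visible as unchanged context in the openai hunks above. A hedged sketch (the checkpoint is illustrative, and only models that implement `_prune_heads` support this):

```py
from transformers import OpenAIGPTModel

model = OpenAIGPTModel.from_pretrained("openai-community/openai-gpt")

# Unlike the removed head_mask (a runtime multiply on attention weights),
# pruning deletes the heads' parameters for good: {layer_index: [head_indices]}.
model.prune_heads({0: [2, 3]})
```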
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1635,8 +1542,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/pegasus_x/modeling_pegasus_x.py b/src/transformers/models/pegasus_x/modeling_pegasus_x.py index b9ba1aca6d28..778ca656d39b 100755 --- a/src/transformers/models/pegasus_x/modeling_pegasus_x.py +++ b/src/transformers/models/pegasus_x/modeling_pegasus_x.py @@ -138,7 +138,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -150,9 +149,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -210,7 +206,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -279,7 +274,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -782,8 +776,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -941,8 +933,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
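The `[bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]` expansion that the surviving comment below refers to is unchanged by this patch; only the stale remark about `head_mask` and SDPA is dropped. A minimal sketch of the equivalent tensor manipulation under assumed shapes (not the library helper itself):

```py
import torch

bsz, tgt_len, src_len = 2, 7, 5
dtype = torch.float32

# [bsz, src_len] padding mask over encoder tokens: 1 = attend, 0 = padding.
encoder_attention_mask = torch.ones(bsz, src_len)
encoder_attention_mask[0, -1] = 0

# Expand to [bsz, 1, tgt_len, src_len], then flip to additive form so padded
# keys contribute a large negative value to the attention logits.
expanded = encoder_attention_mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
additive = (1.0 - expanded) * torch.finfo(dtype).min
```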
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, diff --git a/src/transformers/models/perceiver/modeling_perceiver.py b/src/transformers/models/perceiver/modeling_perceiver.py index 21c55d51af8e..499d01774d06 100755 --- a/src/transformers/models/perceiver/modeling_perceiver.py +++ b/src/transformers/models/perceiver/modeling_perceiver.py @@ -185,7 +185,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs: Optional[torch.FloatTensor] = None, inputs_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -232,10 +231,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, values) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -330,7 +325,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs: Optional[torch.FloatTensor] = None, inputs_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -338,7 +332,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask, - head_mask, inputs, inputs_mask, output_attentions, @@ -409,7 +402,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs: Optional[torch.FloatTensor] = None, inputs_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -417,7 +409,6 @@ def forward( attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, inputs, inputs_mask, output_attentions, @@ -496,7 +487,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs: Optional[torch.FloatTensor] = None, inputs_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -511,7 +501,6 @@ def forward( layer_outputs = self.cross_attention( hidden_states, attention_mask=attention_mask, - head_mask=None, inputs=inputs, inputs_mask=inputs_mask, output_attentions=output_attentions, @@ -527,12 +516,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask=attention_mask, - head_mask=layer_head_mask, output_attentions=output_attentions, ) @@ -648,7 +634,6 @@ def forward( inputs: torch.FloatTensor, attention_mask: Optional[torch.FloatTensor] = None, subsampled_output_points: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, @@ -780,19 +765,11 @@ def forward( # Make the attention mask broadcastable to [batch_size, num_heads, seq_length, seq_length] extended_attention_mask = self.invert_attention_mask(attention_mask) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # 
input head_mask has shape [num_heads] or [num_blocks x num_heads] - # and head_mask is converted to shape [num_blocks x batch x num_heads x N x N] - head_mask = self.get_head_mask(head_mask, self.config.num_blocks * self.config.num_self_attends_per_block) - embedding_output = self.embeddings(batch_size=batch_size) encoder_outputs = self.encoder( embedding_output, attention_mask=None, - head_mask=head_mask, inputs=inputs, inputs_mask=extended_attention_mask, output_attentions=output_attentions, @@ -891,7 +868,6 @@ def forward( self, inputs: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, @@ -959,7 +935,6 @@ def forward( outputs = self.perceiver( inputs=inputs, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1018,7 +993,6 @@ def forward( self, inputs: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, @@ -1058,7 +1032,6 @@ def forward( outputs = self.perceiver( inputs=inputs, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1151,7 +1124,6 @@ def forward( self, inputs: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, @@ -1201,7 +1173,6 @@ def forward( outputs = self.perceiver( inputs=inputs, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1276,7 +1247,6 @@ def forward( self, inputs: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, @@ -1324,7 +1294,6 @@ def forward( outputs = self.perceiver( inputs=inputs, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1399,7 +1368,6 @@ def forward( self, inputs: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, @@ -1447,7 +1415,6 @@ def forward( outputs = self.perceiver( inputs=inputs, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1539,7 +1506,6 @@ def forward( self, inputs: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, @@ -1578,7 +1544,6 @@ def forward( outputs = self.perceiver( inputs=inputs, 
attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1749,7 +1714,6 @@ def forward( inputs: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, subsampled_output_points: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, @@ -1814,7 +1778,6 @@ def forward( inputs=inputs, attention_mask=attention_mask, subsampled_output_points=subsampled_output_points, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -2088,7 +2051,6 @@ def forward( layer_outputs = self.decoding_cross_attention( query, attention_mask=query_mask, - head_mask=None, inputs=z, inputs_mask=None, output_attentions=output_attentions, diff --git a/src/transformers/models/pix2struct/modeling_pix2struct.py b/src/transformers/models/pix2struct/modeling_pix2struct.py index b00603e09c65..dfb95b5ccd5a 100644 --- a/src/transformers/models/pix2struct/modeling_pix2struct.py +++ b/src/transformers/models/pix2struct/modeling_pix2struct.py @@ -154,7 +154,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, output_attentions=False, ): """ @@ -211,10 +210,6 @@ def to_projection_shape(states): # (batch_size, n_heads, seq_length, key_length) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) # (batch_size, seq_length, dim) @@ -273,7 +268,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: residual = hidden_states @@ -284,7 +278,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask=attention_mask, - layer_head_mask=head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -313,7 +306,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -325,9 +317,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module(hidden_states, attention_mask, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, attention_mask, output_attentions) hidden_states = layer_outputs[0] @@ -504,7 +494,6 @@ def forward( self, flattened_patches: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -550,19 +539,11 @@ def forward( # check where `flattened_patches` is not 0 attention_mask = (flattened_patches.sum(dim=-1) != 0).float() - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x 
N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(flattened_patches) encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -737,7 +718,6 @@ def forward( key_value_states=None, position_bias=None, past_key_values=None, - layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, @@ -824,10 +804,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -857,7 +833,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -868,7 +843,6 @@ def forward( normed_hidden_states, mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -894,7 +868,6 @@ def forward( key_value_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, query_length=None, @@ -907,7 +880,6 @@ def forward( mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, query_length=query_length, @@ -945,8 +917,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -957,7 +927,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -978,7 +947,6 @@ def forward( key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, query_length=cache_position[-1] + 1, use_cache=use_cache, @@ -1048,8 +1016,6 @@ def forward( encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -1071,12 +1037,6 @@ def forward( To know more on how to prepare `input_ids` for pretraining take a look a [Pix2StructText Training](./t5#training). - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. 
Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Example: @@ -1172,9 +1132,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions) else None @@ -1184,8 +1141,6 @@ def forward( hidden_states = self.dropout(inputs_embeds) for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] - cross_attn_layer_head_mask = cross_attn_head_mask[i] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -1196,8 +1151,6 @@ def forward( encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, # as a positional argument for gradient checkpointing - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1426,9 +1379,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, labels: Optional[torch.LongTensor] = None, @@ -1462,18 +1412,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss for the decoder. 
@@ -1541,7 +1479,6 @@ def forward( encoder_outputs = self.encoder( flattened_patches=flattened_patches, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1574,8 +1511,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/plbart/modeling_plbart.py b/src/transformers/models/plbart/modeling_plbart.py index 5c056be5ae89..5da8cb213488 100644 --- a/src/transformers/models/plbart/modeling_plbart.py +++ b/src/transformers/models/plbart/modeling_plbart.py @@ -91,8 +91,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -250,8 +248,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -309,7 +305,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -321,9 +316,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -380,7 +372,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -449,7 +440,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -483,7 +473,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, output_attentions: Optional[bool] = False, ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: """ @@ -491,8 +480,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -501,7 +488,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -573,7 +559,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -596,12 +581,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -650,14 +629,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -674,7 +645,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -732,8 +702,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -748,10 +716,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -767,7 +731,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -784,7 +747,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -852,8 +814,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -890,19 +850,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -1018,15 +965,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1041,8 +979,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1133,9 +1069,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.LongTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1167,13 +1100,6 @@ def forward( obj:*torch.LongTensor* of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (: - obj:*torch.Tensor* of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify - selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( @@ -1191,7 +1117,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1211,8 +1136,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1284,9 +1207,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.LongTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1319,13 +1239,6 @@ def forward( obj:*torch.LongTensor* of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (: - obj:*torch.Tensor* of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify - selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1364,9 +1277,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1457,9 +1367,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1491,13 +1398,6 @@ def forward( obj:*torch.LongTensor* of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (: - obj:*torch.Tensor* of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify - selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). @@ -1516,9 +1416,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1631,8 +1528,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1643,11 +1538,6 @@ def forward( cache_position: Optional[torch.LongTensor] = None, ) -> Union[tuple, CausalLMOutputWithCrossAttentions]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1682,8 +1572,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/plbart/modular_plbart.py b/src/transformers/models/plbart/modular_plbart.py index 9ca406775eae..e7d9642b06f8 100644 --- a/src/transformers/models/plbart/modular_plbart.py +++ b/src/transformers/models/plbart/modular_plbart.py @@ -75,8 +75,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -234,8 +232,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -305,9 +301,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.LongTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -339,13 +332,6 @@ def forward( obj:*torch.LongTensor* of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (: - obj:*torch.Tensor* of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify - selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( @@ -363,7 +349,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -383,8 +368,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -456,9 +439,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.LongTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -491,13 +471,6 @@ def forward( obj:*torch.LongTensor* of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (: - obj:*torch.Tensor* of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify - selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored @@ -536,9 +509,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -603,13 +573,6 @@ def forward(**super_kwargs): obj:*torch.LongTensor* of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (: - obj:*torch.Tensor* of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify - selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). @@ -621,11 +584,6 @@ class PLBartForCausalLM(BartForCausalLM): @auto_docstring def forward(**super_kwargs): r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored diff --git a/src/transformers/models/pop2piano/modeling_pop2piano.py b/src/transformers/models/pop2piano/modeling_pop2piano.py index 9831dfab3e0c..94c2a7515a44 100644 --- a/src/transformers/models/pop2piano/modeling_pop2piano.py +++ b/src/transformers/models/pop2piano/modeling_pop2piano.py @@ -291,7 +291,6 @@ def forward( key_value_states=None, position_bias=None, past_key_values=None, - layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, @@ -379,10 +378,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -412,7 +407,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -423,7 +417,6 @@ def forward( normed_hidden_states, mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -449,7 +442,6 @@ def forward( key_value_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, query_length=None, @@ -462,7 +454,6 @@ def forward( mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, query_length=query_length, @@ -499,8 +490,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -511,7 +500,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -536,7 +524,6 @@ def forward( key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, query_length=cache_position[-1] + 1, use_cache=use_cache, @@ -683,8 +670,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, @@ -781,9 +766,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions and self.is_decoder) else None @@ -793,8 +775,6 @@ def forward( hidden_states = self.dropout(inputs_embeds) for i, layer_module in enumerate(self.block): - layer_head_mask = head_mask[i] - cross_attn_layer_head_mask = cross_attn_head_mask[i] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -804,10 +784,7 @@ def forward( position_bias, 
encoder_hidden_states, encoder_attention_mask, - encoder_decoder_position_bias, # as a positional argument for gradient checkpointing - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, - past_key_values=past_key_values, + encoder_decoder_position_bias, # as a positional argument for gradient checkpointing + past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, cache_position=cache_position, @@ -1096,9 +1073,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1127,16 +1101,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`.
All labels set to `-100` are ignored (masked), the loss is only computed for @@ -1157,7 +1121,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1183,8 +1146,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/prophetnet/modeling_prophetnet.py b/src/transformers/models/prophetnet/modeling_prophetnet.py index 901437964896..fa28f07f49cf 100644 --- a/src/transformers/models/prophetnet/modeling_prophetnet.py +++ b/src/transformers/models/prophetnet/modeling_prophetnet.py @@ -439,7 +439,6 @@ def forward( hidden_states, key_value_states: Optional[Tensor] = None, attention_mask: Optional[Tensor] = None, - layer_head_mask: Optional[Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, @@ -515,18 +514,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_attn_heads,), ( - f"Head mask for a single layer should be of size {(self.num_attn_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view( - batch_size, self.num_attn_heads, tgt_len, src_len - ) - - # apply head_mask also on attn_weights_reshaped which is used for n-gram attention inside the model - attn_weights_reshaped = layer_head_mask.view(1, -1, 1, 1) * attn_weights_reshaped - attn_probs = nn.functional.dropout( attn_weights, p=self.attention_dropout, @@ -610,7 +597,6 @@ def forward( hidden_states, past_key_values: Optional[Cache] = None, attention_mask=None, - layer_head_mask=None, extended_predict_attention_mask=None, main_relative_position_buckets=None, predict_relative_position_buckets=None, @@ -689,15 +675,6 @@ def forward( onnx_trace=self.onnx_trace, ).type_as(main_attn_weights) - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_attn_heads,), ( - f"Head mask for a single layer should be of size {(self.num_attn_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - main_attn_probs = layer_head_mask.view(1, -1, 1, 1) * main_attn_probs.view( - batch_size, self.num_attn_heads, -1, sequence_length - ) - main_attn_probs = nn.functional.dropout(main_attn_probs, p=self.attention_dropout, training=self.training) # project to attn_output # [batch_size, number_heads, sequence_length, sequence_length] @@ -751,13 +728,6 @@ def forward( onnx_trace=self.onnx_trace, ).type_as(predict_attn_weights) - if layer_head_mask is not None: - assert layer_head_mask.size() == (self.num_attn_heads,), ( - f"Head mask for a single layer should be of size {(self.num_attn_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - predict_attn_probs = layer_head_mask.view(1, 1, -1, 1, 1) * predict_attn_probs - predict_attn_probs = nn.functional.dropout( predict_attn_probs, p=self.attention_dropout, training=self.training ) @@ -909,14 +879,12 @@ def forward( self, hidden_states, attention_mask, - layer_head_mask, output_attentions: bool = False, ): # 1st residual block attention_output, attn_weights = self.self_attn( 
hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = self.self_attn_layer_norm(attention_output + hidden_states) @@ -960,8 +928,6 @@ def forward( attention_mask=None, encoder_hidden_states=None, encoder_attn_mask=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, extended_predict_attention_mask=None, main_relative_position_buckets=None, predict_relative_position_buckets=None, @@ -976,7 +942,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, extended_predict_attention_mask=extended_predict_attention_mask, main_relative_position_buckets=main_relative_position_buckets, predict_relative_position_buckets=predict_relative_position_buckets, @@ -991,7 +956,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attn_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, ) @@ -1048,7 +1012,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1100,11 +1063,6 @@ def forward( encoder_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - assert head_mask.size()[0] == (len(self.layers)), ( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." - ) for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_hidden_states = encoder_hidden_states + (hidden_states,) @@ -1112,7 +1070,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask=extended_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -1181,8 +1138,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -1192,12 +1147,6 @@ def forward( cache_position: Optional[torch.Tensor] = None, ) -> Union[tuple, ProphetNetDecoderModelOutput]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- Example: ```python @@ -1313,13 +1262,6 @@ def forward( all_ngram_stream_attns = () if output_attentions else None all_cross_attns = () if output_attentions and self.config.add_cross_attention else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - assert attn_mask.size()[0] == (len(self.layers)), ( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) for idx, decoder_layer in enumerate(self.layers): if output_hidden_states: # grad cannot be kept because tensor is sliced @@ -1332,8 +1274,6 @@ def forward( extended_attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attn_mask=extended_encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), extended_predict_attention_mask=extended_predict_attention_mask, main_relative_position_buckets=main_relative_position_buckets, predict_relative_position_buckets=predict_relative_position_buckets, @@ -1513,9 +1453,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1541,11 +1478,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
Example: @@ -1575,7 +1507,6 @@ def forward( encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1588,8 +1519,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, output_attentions=output_attentions, @@ -1649,9 +1578,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1679,11 +1605,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for @@ -1717,9 +1638,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, inputs_embeds=inputs_embeds, @@ -1856,8 +1774,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, @@ -1868,11 +1784,6 @@ def forward( **kwargs, ) -> Union[tuple, ProphetNetDecoderLMOutput]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the left-to-right language modeling loss (next word prediction). 
Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are @@ -1923,8 +1834,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -1994,7 +1903,6 @@ def prepare_inputs_for_generation( input_ids, past_key_values=None, attention_mask=None, - head_mask=None, use_cache=None, **kwargs, ): @@ -2010,7 +1918,6 @@ def prepare_inputs_for_generation( model_inputs = { "input_ids": input_ids, # encoder_outputs is defined. input_ids not needed "attention_mask": attention_mask, - "head_mask": head_mask, "past_key_values": past_key_values, "use_cache": use_cache, } diff --git a/src/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py b/src/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py index 6b8910d270bb..f2ce95e726af 100644 --- a/src/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py +++ b/src/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py @@ -649,8 +649,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. diff --git a/src/transformers/models/qwen2_audio/modeling_qwen2_audio.py b/src/transformers/models/qwen2_audio/modeling_qwen2_audio.py index b4d1f41f3ec2..226025c259bb 100644 --- a/src/transformers/models/qwen2_audio/modeling_qwen2_audio.py +++ b/src/transformers/models/qwen2_audio/modeling_qwen2_audio.py @@ -77,7 +77,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -89,9 +88,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -150,7 +146,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, **kwargs, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: @@ -180,7 +175,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=1.0, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -214,7 +208,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ @@ -222,8 +215,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where 
padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -233,7 +224,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -347,7 +337,6 @@ def forward( self, input_features, attention_mask=None, - head_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, @@ -357,11 +346,6 @@ def forward( attention_mask (`torch.Tensor`)`, *optional*): Qwen2Audio does not support masking of the `input_features`, this argument is preserved for compatibility, but it is not used. By default the silence in the input log mel spectrogram are ignored. - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -399,12 +383,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - assert head_mask.size()[0] == (len(self.layers)), ( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -422,7 +400,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) diff --git a/src/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py b/src/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py index 1172ebf90919..21b8afaac49d 100644 --- a/src/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py +++ b/src/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py @@ -578,8 +578,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
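The pattern in every hunk above is identical: the eager attention helpers simply lose their `head_mask`/`layer_head_mask` branch, and the non-eager backends (SDPA, flash attention) never applied it in the first place. As a reference point, here is a minimal standalone sketch of the eager path these files converge on after the change (names follow the helpers in `modeling_plbart.py` and `modeling_qwen2_audio.py`; `module` only needs to expose a `training` flag):

```python
import torch
from torch import nn


def eager_attention_forward(module, query, key, value, attention_mask=None, scaling=None, dropout=0.0):
    # query/key/value: [bsz, n_heads, seq_len, head_dim]
    if scaling is None:
        scaling = query.size(-1) ** -0.5

    attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
    if attention_mask is not None:
        # Additive mask; padding positions hold large negative values.
        attn_weights = attn_weights + attention_mask

    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
    # The per-head `head_mask` multiplication that used to sit here is gone entirely.
    attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)

    attn_output = torch.matmul(attn_weights, value)
    # [bsz, n_heads, seq_len, head_dim] -> [bsz, seq_len, n_heads, head_dim]
    attn_output = attn_output.transpose(1, 2).contiguous()
    return attn_output, attn_weights
```

Because the mask was an elementwise multiply applied after the softmax, deleting the branch changes nothing for callers that passed `head_mask=None`, which is the only behavior SDPA and flash attention ever provided.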
@@ -2590,9 +2588,8 @@ def forward( **kwargs, ) -> CausalLMOutputWithPast: r""" - Args: - generation_steps (`int`): - generation step of code predictor, 0..num_code_groups-1 + generation_steps (`int`): + generation step of code predictor, 0..num_code_groups-1 """ # Prefill stage @@ -3017,27 +3014,26 @@ def forward( **kwargs, ) -> MoeCausalLMOutputWithPast: r""" - Args: - use_audio_in_video (`bool`, *optional*): - If set to `True`, use the audio in video. - audio_feature_lengths (`torch.LongTensor` of shape `(num_audios)`, *optional*): - The length of feature shape of each audio in LLM. - video_second_per_grid (`torch.LongTensor` of shape `(num_videos)`, *optional*): - Number of seconds per grid for each video, used for temporal feature mapping. - image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*): - The temporal, height and width of feature shape of each image in LLM. - video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*): - The temporal, height and width of feature shape of each video in LLM. - residual_codes (`torch.Tensor`): - The predicted residual codes of previous step. - trailing_text_hidden (`torch.Tensor`): - Text hidden states from thinker after the first token. - tts_pad_embed (`torch.Tensor`): - Embedding tensor of `tts_pad_token_id`. - generation_step (`int`): - Generation step since prefill, used to sync with `trailing_text_hidden`. - talker_input_ids (`torch.Tensor`): - Input ids from thinker, used to compute 3d RoPE. + use_audio_in_video (`bool`, *optional*): + If set to `True`, use the audio in video. + audio_feature_lengths (`torch.LongTensor` of shape `(num_audios)`, *optional*): + The length of feature shape of each audio in LLM. + video_second_per_grid (`torch.LongTensor` of shape `(num_videos)`, *optional*): + Number of seconds per grid for each video, used for temporal feature mapping. + image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*): + The temporal, height and width of feature shape of each image in LLM. + video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*): + The temporal, height and width of feature shape of each video in LLM. + residual_codes (`torch.Tensor`): + The predicted residual codes of previous step. + trailing_text_hidden (`torch.Tensor`): + Text hidden states from thinker after the first token. + tts_pad_embed (`torch.Tensor`): + Embedding tensor of `tts_pad_token_id`. + generation_step (`int`): + Generation step since prefill, used to sync with `trailing_text_hidden`. + talker_input_ids (`torch.Tensor`): + Input ids from thinker, used to compute 3d RoPE. """ # Prefill if inputs_embeds is not None and inputs_embeds.shape[1] > 1: diff --git a/src/transformers/models/qwen3_omni_moe/modular_qwen3_omni_moe.py b/src/transformers/models/qwen3_omni_moe/modular_qwen3_omni_moe.py index 28347f03a6aa..c9d3696900c2 100644 --- a/src/transformers/models/qwen3_omni_moe/modular_qwen3_omni_moe.py +++ b/src/transformers/models/qwen3_omni_moe/modular_qwen3_omni_moe.py @@ -1603,9 +1603,8 @@ def forward( **kwargs, ): r""" - Args: - generation_steps (`int`): - generation step of code predictor, 0..num_code_groups-1 + generation_steps (`int`): + generation step of code predictor, 0..num_code_groups-1 """ # Prefill stage @@ -1792,27 +1791,26 @@ def forward( **kwargs, ): r""" - Args: - use_audio_in_video (`bool`, *optional*): - If set to `True`, use the audio in video. 
- audio_feature_lengths (`torch.LongTensor` of shape `(num_audios)`, *optional*): - The length of feature shape of each audio in LLM. - video_second_per_grid (`torch.LongTensor` of shape `(num_videos)`, *optional*): - Number of seconds per grid for each video, used for temporal feature mapping. - image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*): - The temporal, height and width of feature shape of each image in LLM. - video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*): - The temporal, height and width of feature shape of each video in LLM. - residual_codes (`torch.Tensor`): - The predicted residual codes of previous step. - trailing_text_hidden (`torch.Tensor`): - Text hidden states from thinker after the first token. - tts_pad_embed (`torch.Tensor`): - Embedding tensor of `tts_pad_token_id`. - generation_step (`int`): - Generation step since prefill, used to sync with `trailing_text_hidden`. - talker_input_ids (`torch.Tensor`): - Input ids from thinker, used to compute 3d RoPE. + use_audio_in_video (`bool`, *optional*): + If set to `True`, use the audio in video. + audio_feature_lengths (`torch.LongTensor` of shape `(num_audios)`, *optional*): + The length of feature shape of each audio in LLM. + video_second_per_grid (`torch.LongTensor` of shape `(num_videos)`, *optional*): + Number of seconds per grid for each video, used for temporal feature mapping. + image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*): + The temporal, height and width of feature shape of each image in LLM. + video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*): + The temporal, height and width of feature shape of each video in LLM. + residual_codes (`torch.Tensor`): + The predicted residual codes of previous step. + trailing_text_hidden (`torch.Tensor`): + Text hidden states from thinker after the first token. + tts_pad_embed (`torch.Tensor`): + Embedding tensor of `tts_pad_token_id`. + generation_step (`int`): + Generation step since prefill, used to sync with `trailing_text_hidden`. + talker_input_ids (`torch.Tensor`): + Input ids from thinker, used to compute 3d RoPE. 
""" # Prefill if inputs_embeds is not None and inputs_embeds.shape[1] > 1: diff --git a/src/transformers/models/reformer/modeling_reformer.py b/src/transformers/models/reformer/modeling_reformer.py index b2ec54f0d867..e2cb1c4657a8 100755 --- a/src/transformers/models/reformer/modeling_reformer.py +++ b/src/transformers/models/reformer/modeling_reformer.py @@ -464,7 +464,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, num_hashes=None, buckets=None, past_buckets_states=None, @@ -639,7 +638,6 @@ def forward( value_vectors=value_vectors, sorted_bucket_idx_per_hash=sorted_bucket_idx_per_hash, attention_mask=attention_mask, - head_mask=head_mask, do_standard_self_attention=do_standard_self_attention, use_cache=exists_cache, ) @@ -834,7 +832,6 @@ def _attend( value_vectors, sorted_bucket_idx_per_hash, attention_mask, - head_mask, do_standard_self_attention, use_cache, ): @@ -922,10 +919,6 @@ def _attend( # dropout attention_probs = nn.functional.dropout(attention_probs, p=self.dropout, training=self.training) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - # attend values out_vectors = torch.matmul(attention_probs, value_vectors) @@ -1161,7 +1154,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, past_buckets_states=None, use_cache=False, output_attentions=False, @@ -1299,10 +1291,6 @@ def forward( # dropout attention_probs = nn.functional.dropout(attention_probs, p=self.dropout, training=self.training) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - # attend values out_vectors = torch.matmul(attention_probs, value_vectors) @@ -1401,7 +1389,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, num_hashes=None, past_buckets_states=None, use_cache=False, @@ -1415,7 +1402,6 @@ def forward( # use cached buckets for backprob if buckets not None for LSHSelfAttention self_attention_outputs = self.self_attention( hidden_states=hidden_states, - head_mask=head_mask, attention_mask=attention_mask, num_hashes=num_hashes, past_buckets_states=past_buckets_states, @@ -1569,7 +1555,6 @@ def forward( prev_attn_output, hidden_states, attention_mask=None, - head_mask=None, num_hashes=None, past_buckets_states=None, use_cache=False, @@ -1585,7 +1570,6 @@ def forward( attn_outputs = self.attention( hidden_states=hidden_states, - head_mask=head_mask, attention_mask=attention_mask, num_hashes=num_hashes, past_buckets_states=past_buckets_states, @@ -1624,7 +1608,6 @@ def backward_pass( grad_attn_output, grad_hidden_states, attention_mask=None, - head_mask=None, buckets=None, ): # Implements the backward pass for reversible ResNets. 
@@ -1663,7 +1646,6 @@ def backward_pass( # use cached buckets for backprop if buckets not None for LSHSelfAttention output = self.attention( hidden_states=hidden_states, - head_mask=head_mask, attention_mask=attention_mask, buckets=buckets, ).hidden_states @@ -1699,7 +1681,6 @@ def forward( hidden_states, layers, attention_mask, - head_mask, num_hashes, all_hidden_states, all_attentions, @@ -1714,7 +1695,7 @@ def forward( # split duplicated tensor hidden_states, attn_output = torch.chunk(hidden_states, 2, dim=-1) - for layer_id, (layer, layer_head_mask) in enumerate(zip(layers, head_mask)): + for layer in layers: if output_hidden_states is True: all_hidden_states.append(hidden_states) @@ -1722,7 +1703,6 @@ def forward( prev_attn_output=attn_output, hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=layer_head_mask, num_hashes=num_hashes, past_buckets_states=past_buckets_states, use_cache=use_cache, @@ -1745,7 +1725,6 @@ def forward( ctx.save_for_backward(attn_output.detach(), hidden_states.detach()) ctx.layers = layers ctx.all_buckets = all_buckets - ctx.head_mask = head_mask ctx.attention_mask = attention_mask # Concatenate 2 RevNet outputs @@ -1771,7 +1750,6 @@ def backward(ctx, grad_hidden_states): layers = ctx.layers all_buckets = ctx.all_buckets - head_mask = ctx.head_mask attention_mask = ctx.attention_mask for idx, layer in enumerate(layers[::-1]): @@ -1785,7 +1763,6 @@ def backward(ctx, grad_hidden_states): hidden_states=output.hidden_states, grad_attn_output=output.grad_attn_output, grad_hidden_states=output.grad_hidden_states, - head_mask=head_mask[len(layers) - idx - 1], attention_mask=attention_mask, buckets=buckets, ) @@ -1812,7 +1789,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, num_hashes=None, past_buckets_states=None, use_cache=False, @@ -1841,7 +1817,6 @@ def forward( hidden_states, self.layers, attention_mask, - head_mask, num_hashes, all_hidden_states, all_attentions, @@ -2022,7 +1997,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, num_hashes: Optional[int] = None, past_buckets_states: Optional[list[tuple[torch.Tensor]]] = None, @@ -2080,9 +2054,6 @@ def forward( if past_buckets_states is not None: assert not self.training, "`past_buckets_states` can only be used for inference, not for training."
- # prepare head mask - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers, is_attention_chunked=True) - # original sequence length for padding orig_sequence_length = input_shape[-1] @@ -2133,7 +2104,6 @@ def forward( encoder_outputs = self.encoder( hidden_states=embedding_output, - head_mask=head_mask, attention_mask=attention_mask, num_hashes=num_hashes, past_buckets_states=past_buckets_states, @@ -2256,7 +2226,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, num_hashes: Optional[int] = None, past_buckets_states: Optional[list[tuple[torch.Tensor]]] = None, @@ -2300,7 +2269,6 @@ def forward( input_ids, position_ids=position_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, num_hashes=num_hashes, past_buckets_states=past_buckets_states, @@ -2407,7 +2375,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, num_hashes: Optional[int] = None, labels: Optional[torch.Tensor] = None, @@ -2485,7 +2452,6 @@ def forward( input_ids, position_ids=position_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, num_hashes=num_hashes, use_cache=False, # no causal mask @@ -2540,7 +2506,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, num_hashes: Optional[int] = None, labels: Optional[torch.Tensor] = None, @@ -2603,7 +2568,6 @@ def forward( input_ids, position_ids=position_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, num_hashes=num_hashes, output_hidden_states=output_hidden_states, @@ -2690,7 +2654,6 @@ def forward( input_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, num_hashes: Optional[int] = None, start_positions: Optional[torch.Tensor] = None, @@ -2721,7 +2684,6 @@ def forward( input_ids, position_ids=position_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, num_hashes=num_hashes, use_cache=False, # no causal mask diff --git a/src/transformers/models/rembert/modeling_rembert.py b/src/transformers/models/rembert/modeling_rembert.py index 8e187421f152..08aa6ca3d53b 100755 --- a/src/transformers/models/rembert/modeling_rembert.py +++ b/src/transformers/models/rembert/modeling_rembert.py @@ -140,7 +140,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: bool = False, @@ -208,10 +207,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -267,7 +262,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, @@ -276,7 +270,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -338,7 +331,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -348,7 +340,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, past_key_values=past_key_values, cache_position=cache_position, @@ -366,7 +357,6 @@ def forward( cross_attention_outputs = self.crossattention( attention_output, attention_mask=encoder_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -402,7 +392,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -438,12 +427,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -596,7 +582,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -659,13 +644,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -676,7 +654,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, 
past_key_values=past_key_values, @@ -734,7 +711,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -756,7 +732,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -840,7 +815,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -881,7 +855,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -942,7 +915,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -962,7 +934,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1027,7 +998,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1082,7 +1052,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1132,7 +1101,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1150,7 +1118,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1199,7 +1166,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, 
start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1214,7 +1180,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/roberta/modeling_roberta.py b/src/transformers/models/roberta/modeling_roberta.py index 5bf4396fdfc4..7462a68fa97f 100644 --- a/src/transformers/models/roberta/modeling_roberta.py +++ b/src/transformers/models/roberta/modeling_roberta.py @@ -172,7 +172,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -213,9 +212,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -257,7 +253,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -301,7 +296,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -345,7 +339,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -393,7 +386,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -451,7 +443,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -463,7 +454,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -526,7 +516,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -536,7 +525,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -553,7 +541,6 @@ def forward( cross_attention_output, _ = 
self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -615,7 +602,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -624,12 +610,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -714,7 +697,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -771,17 +753,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -861,8 +835,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -886,8 +858,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -944,7 +914,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -993,7 +962,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1061,7 +1029,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1088,7 +1055,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1170,7 +1136,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1195,7 +1160,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1257,7 +1221,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -1309,7 +1272,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -1359,7 +1321,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1382,7 +1343,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1450,7 +1410,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: 
Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1472,7 +1431,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/roberta/modular_roberta.py b/src/transformers/models/roberta/modular_roberta.py index 5c2b2fd6e54d..b7b65f004499 100644 --- a/src/transformers/models/roberta/modular_roberta.py +++ b/src/transformers/models/roberta/modular_roberta.py @@ -221,7 +221,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -270,7 +269,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -338,7 +336,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -365,7 +362,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -447,7 +443,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -472,7 +467,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -534,7 +528,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -586,7 +579,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -636,7 +628,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -659,7 +650,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - 
head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -727,7 +717,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -749,7 +738,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/roberta_prelayernorm/modeling_roberta_prelayernorm.py b/src/transformers/models/roberta_prelayernorm/modeling_roberta_prelayernorm.py index f3383194165d..0a5652f117b7 100644 --- a/src/transformers/models/roberta_prelayernorm/modeling_roberta_prelayernorm.py +++ b/src/transformers/models/roberta_prelayernorm/modeling_roberta_prelayernorm.py @@ -169,7 +169,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -210,9 +209,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -255,7 +251,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -299,7 +294,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -344,7 +338,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -392,7 +385,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -451,7 +443,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[tuple[tuple[torch.FloatTensor]]] = None, @@ -464,7 +455,6 @@ def forward( hidden_states_pre_layer_norm, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -529,7 +519,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: 
Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -539,7 +528,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -556,7 +544,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -588,7 +575,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -597,12 +583,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -726,7 +709,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -794,17 +776,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -887,8 +861,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -913,8 +885,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -974,7 +944,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1023,7 +992,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1097,7 +1065,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1124,7 +1091,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1208,7 +1174,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1233,7 +1198,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1296,7 +1260,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -1348,7 +1311,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -1399,7 +1361,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1422,7 +1383,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1492,7 +1452,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: 
Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1514,7 +1473,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/roc_bert/modeling_roc_bert.py b/src/transformers/models/roc_bert/modeling_roc_bert.py index d7614e59c2d6..b97787f557fd 100644 --- a/src/transformers/models/roc_bert/modeling_roc_bert.py +++ b/src/transformers/models/roc_bert/modeling_roc_bert.py @@ -190,7 +190,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -231,9 +230,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -276,7 +272,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -320,7 +315,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -365,7 +359,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -413,7 +406,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -473,7 +465,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -485,7 +476,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -551,7 +541,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -561,7 +550,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -578,7 +566,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - 
head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -608,7 +595,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -617,12 +603,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -807,7 +790,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -882,17 +864,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -974,8 +948,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -1000,8 +972,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -1065,7 +1035,6 @@ def forward( attack_attention_mask: Optional[torch.Tensor] = None, attack_token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels_input_ids: Optional[torch.Tensor] = None, labels_input_shape_ids: Optional[torch.Tensor] = None, @@ -1160,7 +1129,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1262,7 +1230,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -1317,7 +1284,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1419,7 +1385,6 @@ def forward( inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[list[torch.Tensor]] = None, labels: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -1470,7 +1435,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1563,7 +1527,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1595,7 +1558,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1663,7 +1625,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1737,7 +1698,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1789,7 +1749,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1819,7 +1778,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - 
head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1866,7 +1824,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1895,7 +1852,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/roformer/modeling_roformer.py b/src/transformers/models/roformer/modeling_roformer.py index 03a2195da287..5a9e41129b4a 100644 --- a/src/transformers/models/roformer/modeling_roformer.py +++ b/src/transformers/models/roformer/modeling_roformer.py @@ -141,7 +141,6 @@ def forward( hidden_states, attention_mask=None, sinusoidal_pos=None, - head_mask=None, encoder_hidden_states=None, past_key_values=None, output_attentions=False, @@ -223,10 +222,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -310,7 +305,6 @@ def forward( hidden_states, attention_mask=None, sinusoidal_pos=None, - head_mask=None, encoder_hidden_states=None, past_key_values=None, output_attentions=False, @@ -320,7 +314,6 @@ def forward( hidden_states, attention_mask=attention_mask, sinusoidal_pos=sinusoidal_pos, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -383,7 +376,6 @@ def forward( hidden_states, attention_mask=None, sinusoidal_pos=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, @@ -394,7 +386,6 @@ def forward( hidden_states, attention_mask=attention_mask, sinusoidal_pos=sinusoidal_pos, - head_mask=head_mask, output_attentions=output_attentions, past_key_values=past_key_values, cache_position=cache_position, @@ -413,7 +404,6 @@ def forward( attention_output, attention_mask=encoder_attention_mask, sinusoidal_pos=sinusoidal_pos, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -447,7 +437,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, @@ -487,13 +476,10 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, sinusoidal_pos, - layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -754,7 +740,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -817,13 +802,6 @@ def forward( 
else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds ) @@ -833,7 +811,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, @@ -889,7 +866,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -910,7 +886,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -989,8 +964,6 @@ def forward( inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, @@ -1028,7 +1001,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1109,7 +1081,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1128,7 +1099,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1190,7 +1160,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1239,7 +1208,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1288,7 +1256,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: 
Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1305,7 +1272,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1354,7 +1320,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1368,7 +1333,6 @@ def forward( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/sew/modeling_sew.py b/src/transformers/models/sew/modeling_sew.py index a001cdd61d58..4a4cd9587979 100644 --- a/src/transformers/models/sew/modeling_sew.py +++ b/src/transformers/models/sew/modeling_sew.py @@ -237,7 +237,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -249,9 +248,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -298,7 +294,6 @@ def forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, # TODO: we need a refactor so that the different attention modules can get their specific kwargs # ATM, we have mixed things encoder, decoder, and encoder-decoder attn @@ -337,7 +332,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) diff --git a/src/transformers/models/speech_to_text/modeling_speech_to_text.py b/src/transformers/models/speech_to_text/modeling_speech_to_text.py index bb2a8649ce9b..43b6511b2314 100755 --- a/src/transformers/models/speech_to_text/modeling_speech_to_text.py +++ b/src/transformers/models/speech_to_text/modeling_speech_to_text.py @@ -185,7 +185,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -197,9 +196,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -251,7 +247,6 @@ def forward( key_value_states: 
Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -318,7 +313,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -352,7 +346,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ @@ -360,8 +353,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -371,7 +362,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -434,8 +424,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -450,10 +438,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -467,7 +451,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -484,7 +467,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -599,7 +581,6 @@ def forward( self, input_features, attention_mask=None, - head_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, @@ -622,12 +603,6 @@ def forward( - 0 for tokens that are **masked**. 
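To make the recurring deletion concrete: across RoFormer, SEW, and Speech2Text above (and Splinter further down), the eager path multiplied post-softmax attention probabilities by a broadcast head mask that `get_head_mask` had expanded per layer. A self-contained sketch of that legacy pipeline, with made-up sizes:

```python
import torch

# Sketch of the removed head-masking pipeline, following the deleted comments:
# a [num_heads] or [num_layers, num_heads] input becomes one broadcastable
# mask per layer, and each attention module multiplies its post-softmax
# probabilities by it. Illustrative only, not the library's exact helper.
def expand_head_mask(head_mask: torch.Tensor, num_layers: int) -> torch.Tensor:
    if head_mask.dim() == 1:                      # same mask for every layer
        head_mask = head_mask.unsqueeze(0).expand(num_layers, -1)
    return head_mask[:, None, :, None, None]      # [layers, 1, heads, 1, 1]

num_layers, batch, heads, seq = 2, 1, 4, 6
head_mask = expand_head_mask(torch.tensor([1.0, 0.0, 1.0, 1.0]), num_layers)

attention_probs = torch.softmax(torch.randn(batch, heads, seq, seq), dim=-1)
value = torch.randn(batch, heads, seq, 8)

# inside layer i, the eager path did: attention_probs * head_mask[i]
masked = attention_probs * head_mask[0]
context = torch.matmul(masked, value)
assert torch.all(context[:, 1] == 0)              # head 1 contributes nothing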
[What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -665,12 +640,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - assert head_mask.size()[0] == (len(self.layers)), ( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -687,7 +656,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -716,8 +684,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -771,8 +737,6 @@ def forward( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, use_cache=None, @@ -809,19 +773,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in encoder to avoid performing cross-attention - on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). 
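`_update_full_mask` above still expands the user's 2-D padding mask into the 4-D additive form the attention kernels consume. An illustrative re-implementation of that expansion (simplified; the real helper also broadcasts over a separate target length):

```python
import torch

# [bsz, seq_len] with 1 = keep, 0 = padding, expanded to an additive
# [bsz, 1, 1, src_len] mask with dtype-min on padded positions.
def to_4d_padding_mask(mask_2d: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    inverted = 1.0 - mask_2d[:, None, None, :].to(dtype)
    return inverted.masked_fill(inverted.bool(), torch.finfo(dtype).min)

mask = torch.tensor([[1, 1, 1, 0]])   # last token is padding
print(to_4d_padding_mask(mask, torch.float32)[0, 0, 0])
# tensor([ 0.0000e+00,  0.0000e+00,  0.0000e+00, -3.4028e+38])
```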
@@ -907,13 +858,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - assert attn_mask.size()[0] == (len(self.layers)), ( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -928,8 +872,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -974,8 +916,6 @@ def _update_causal_mask( # 2d mask is passed through the layers attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( attention_mask, input_shape, @@ -1015,9 +955,6 @@ def _update_cross_attn_mask( if self.config._attn_implementation == "flash_attention_2": encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, inputs_embeds.dtype, @@ -1066,9 +1003,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1097,11 +1031,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_speech_to_text._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
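`_update_causal_mask` does the analogous thing for the decoder, materializing a full 4-D causal mask in the eager fallback. A simplified stand-in, assuming the additive dtype-min convention used throughout these files:

```python
import torch

# Strictly-upper-triangular dtype-min entries forbid attending to the future;
# a simplified stand-in for `_prepare_4d_causal_attention_mask_for_sdpa`.
def causal_4d_mask(bsz: int, tgt_len: int, dtype: torch.dtype) -> torch.Tensor:
    mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, dtype=dtype)
    mask = torch.triu(mask, diagonal=1)
    return mask[None, None].expand(bsz, 1, tgt_len, tgt_len)

print(causal_4d_mask(1, 3, torch.float32)[0, 0])
# tensor([[ 0.0000e+00, -3.4028e+38, -3.4028e+38],
#         [ 0.0000e+00,  0.0000e+00, -3.4028e+38],
#         [ 0.0000e+00,  0.0000e+00,  0.0000e+00]])
```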
Example: @@ -1134,7 +1063,6 @@ def forward( encoder_outputs = self.encoder( input_features, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1161,8 +1089,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=encoder_attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, @@ -1217,9 +1143,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1249,11 +1172,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_speech_to_text._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
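The model-level forwards above keep the usual seq2seq wiring: encode once, then let every decoder layer cross-attend over `encoder_outputs[0]`. A toy version of that control flow (all names and sizes invented):

```python
import torch
from torch import nn

class TinySeq2Seq(nn.Module):
    def __init__(self, d_model=16, nhead=4):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), 1
        )
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), 1
        )

    def forward(self, src, tgt):
        memory = self.encoder(src)        # plays the role of encoder_outputs[0]
        return self.decoder(tgt, memory)  # cross-attends over encoder states

model = TinySeq2Seq()
out = model(torch.randn(1, 10, 16), torch.randn(1, 4, 16))
print(out.shape)  # torch.Size([1, 4, 16])
```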
Tokens with indices set to `-100` are ignored (masked), the loss is @@ -1297,9 +1215,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, decoder_inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/speecht5/modeling_speecht5.py b/src/transformers/models/speecht5/modeling_speecht5.py index b3e79a46680c..b6f74b527ffb 100644 --- a/src/transformers/models/speecht5/modeling_speecht5.py +++ b/src/transformers/models/speecht5/modeling_speecht5.py @@ -883,7 +883,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, position_bias: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, @@ -966,15 +965,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. @@ -1049,7 +1039,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, position_bias: Optional[torch.Tensor] = None, output_attentions: bool = False, ): @@ -1060,8 +1049,6 @@ def forward( attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(config.encoder_attention_heads,)`. position_bias (`torch.FloatTensor`): relative position embeddings of size `(seq_len, seq_len, hidden_size // encoder_attention_heads)` output_attentions (`bool`, *optional*): @@ -1072,7 +1059,6 @@ def forward( hidden_states, attn_weights = self.attention( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, position_bias=position_bias, output_attentions=output_attentions, ) @@ -1124,8 +1110,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -1140,10 +1124,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, hidden_size)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. 
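The SpeechT5 attention hunk above also drops a defensive shape check on the per-layer mask. The removed pattern, restated as a runnable sketch:

```python
import torch

num_heads = 12                     # illustrative
layer_head_mask = torch.ones(8)    # deliberately the wrong size

try:
    if layer_head_mask.size() != (num_heads,):
        raise ValueError(
            f"Head mask for a single layer should be of size {(num_heads,)}, but is"
            f" {layer_head_mask.size()}"
        )
except ValueError as err:
    print(err)
# Head mask for a single layer should be of size (12,), but is torch.Size([8])
```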
- cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -1156,7 +1136,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -1173,7 +1152,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -1264,7 +1242,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -1284,12 +1261,6 @@ def forward( output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. @@ -1317,14 +1288,6 @@ def forward( all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != len(self.layers): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -1341,7 +1304,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) hidden_states = layer_outputs[0] @@ -1383,7 +1345,6 @@ def forward( self, input_values: torch.FloatTensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -1393,7 +1354,6 @@ def forward( outputs = self.wrapped_encoder( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1425,7 +1385,6 @@ def forward( self, input_values: torch.FloatTensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -1435,7 +1394,6 @@ def forward( outputs = self.wrapped_encoder( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1461,7 +1419,6 @@ def forward( self, input_values: torch.FloatTensor, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -1469,7 +1426,6 @@ def forward( return self.wrapped_encoder( hidden_states=input_values, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1498,8 +1454,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -1529,19 +1483,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). 
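Note how the identical deletion repeats across the thin prenet wrappers above: each one just delegates to `self.wrapped_encoder`, so a parameter removed from the shared encoder has to vanish from every wrapper signature too. Schematically (invented class names):

```python
import torch
from torch import nn

class SharedEncoder(nn.Module):
    def forward(self, hidden_states, attention_mask=None):
        return hidden_states  # stand-in for the real transformer stack

class EncoderWithPrenet(nn.Module):
    def __init__(self, prenet: nn.Module, encoder: nn.Module):
        super().__init__()
        self.prenet = prenet
        self.wrapped_encoder = encoder

    def forward(self, input_values, attention_mask=None):
        hidden_states = self.prenet(input_values)
        # a parameter dropped from SharedEncoder.forward must be dropped here
        # as well, which is why the same hunk repeats for each wrapper above
        return self.wrapped_encoder(hidden_states, attention_mask=attention_mask)

out = EncoderWithPrenet(nn.Identity(), SharedEncoder())(torch.randn(1, 5, 8))
```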
@@ -1610,15 +1551,6 @@ def forward( all_self_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, decoder_layer in enumerate(self.layers): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -1636,8 +1568,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1690,8 +1620,6 @@ def forward( encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, speaker_embeddings: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -1706,8 +1634,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1744,8 +1670,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -1760,8 +1684,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1792,8 +1714,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -1806,8 +1726,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1995,9 +1913,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_values: Optional[torch.Tensor] = None, 
decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -2022,11 +1937,6 @@ def forward( If you want to change padding behavior, you should read [`SpeechT5Decoder._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. speaker_embeddings (`torch.FloatTensor` of shape `(batch_size, config.speaker_embedding_dim)`, *optional*): Tensor containing the speaker embeddings. """ @@ -2042,7 +1952,6 @@ def forward( encoder_outputs = self.encoder( input_values=input_values, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -2073,8 +1982,6 @@ def forward( attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=encoder_attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -2153,9 +2060,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -2190,11 +2094,6 @@ def forward( If you want to change padding behavior, you should read [`SpeechT5Decoder._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
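The decoder loops in these files sample LayerDrop during training, skipping whole layers at random (see the paper linked in the speech_to_text hunk). The gist, reduced to a sketch:

```python
import torch

layerdrop = 0.1    # illustrative drop probability
training = True
kept = []
for idx in range(6):
    if training and torch.rand(()).item() < layerdrop:
        continue   # skip this decoder layer entirely for this batch
    kept.append(idx)  # the real loop would run decoder layer `idx` here
print(kept)
```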
Tokens with indices set to `-100` are ignored (masked), the loss is @@ -2250,9 +2149,6 @@ def forward( attention_mask=attention_mask, decoder_input_values=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, use_cache=use_cache, @@ -2477,9 +2373,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_values: Optional[torch.FloatTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -2512,11 +2405,6 @@ def forward( If you want to change padding behavior, you should read [`SpeechT5Decoder._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. speaker_embeddings (`torch.FloatTensor` of shape `(batch_size, config.speaker_embedding_dim)`, *optional*): Tensor containing the speaker embeddings. labels (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_mel_bins)`, *optional*): @@ -2562,9 +2450,6 @@ def forward( attention_mask=attention_mask, decoder_input_values=decoder_input_values, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, use_cache=use_cache, @@ -2832,9 +2717,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_values: Optional[torch.FloatTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, use_cache: Optional[bool] = None, @@ -2866,11 +2748,6 @@ def forward( If you want to change padding behavior, you should read [`SpeechT5Decoder._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. speaker_embeddings (`torch.FloatTensor` of shape `(batch_size, config.speaker_embedding_dim)`, *optional*): Tensor containing the speaker embeddings. 
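Further down, Splinter carries the same `eager_attention_forward` cleanup as SEW and Speech2Text. A rough functional sketch of the path that remains once the head-mask branch is gone (an approximation of the diff, not the modules' exact code):

```python
import torch
from torch import nn

def eager_attention(query, key, value, attention_mask=None, scaling=None,
                    dropout=0.0, training=False):
    if scaling is None:
        scaling = query.size(-1) ** -0.5
    attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling
    if attention_mask is not None:        # additive mask, dtype-min on padding
        attn_weights = attn_weights + attention_mask
    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
    attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=training)
    attn_output = torch.matmul(attn_weights, value)
    return attn_output.transpose(1, 2).contiguous(), attn_weights

q = k = v = torch.randn(1, 4, 8, 16)      # [bsz, heads, seq, head_dim]
out, weights = eager_attention(q, k, v)
print(out.shape, weights.shape)
# torch.Size([1, 8, 4, 16]) torch.Size([1, 4, 8, 8])
```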
labels (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_mel_bins)`, *optional*): @@ -2922,9 +2799,6 @@ def forward( attention_mask=attention_mask, decoder_input_values=decoder_input_values, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, use_cache=use_cache, diff --git a/src/transformers/models/splinter/modeling_splinter.py b/src/transformers/models/splinter/modeling_splinter.py index 1d9c02877841..490ae8ae4791 100755 --- a/src/transformers/models/splinter/modeling_splinter.py +++ b/src/transformers/models/splinter/modeling_splinter.py @@ -97,7 +97,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: float, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling @@ -108,9 +107,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() return attn_output, attn_weights @@ -143,7 +139,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, **kwargs, ) -> tuple[torch.Tensor]: @@ -166,7 +161,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.attention_dropout, scaling=self.scaling, - head_mask=head_mask, **kwargs, ) @@ -220,14 +214,12 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, **kwargs, ) -> tuple[torch.Tensor]: self_outputs = self.self( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, **kwargs, ) @@ -281,14 +273,12 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, **kwargs, ) -> tuple[torch.Tensor]: self_attention_outputs = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, **kwargs, ) @@ -321,7 +311,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, @@ -334,12 +323,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states=hidden_states, attention_mask=attention_mask, - head_mask=layer_head_mask, output_attentions=output_attentions, **kwargs, ) @@ -419,7 +405,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: 
Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -468,13 +453,6 @@ def forward( # ourselves in which case we just need to make it broadcastable to all heads. extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -484,7 +462,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=True, @@ -573,7 +550,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -622,7 +598,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -717,7 +692,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -781,7 +755,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/squeezebert/modeling_squeezebert.py b/src/transformers/models/squeezebert/modeling_squeezebert.py index 9e26d1953f1c..3e22cf3fc5c7 100644 --- a/src/transformers/models/squeezebert/modeling_squeezebert.py +++ b/src/transformers/models/squeezebert/modeling_squeezebert.py @@ -303,19 +303,10 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): - if head_mask is None: - head_mask_is_all_none = True - elif head_mask.count(None) == len(head_mask): - head_mask_is_all_none = True - else: - head_mask_is_all_none = False - assert head_mask_is_all_none is True, "head_mask is not yet supported in the SqueezeBert implementation." 
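The SqueezeBERT hunk above is the one spot where the argument was already a no-op: the encoder only ever accepted an absent mask, then asserted on it. The deleted guard as a standalone sketch:

```python
def head_mask_is_all_none(head_mask) -> bool:
    # the removed guard accepted None or a per-layer list of Nones, nothing else
    return head_mask is None or head_mask.count(None) == len(head_mask)

assert head_mask_is_all_none(None)
assert head_mask_is_all_none([None, None, None])
assert not head_mask_is_all_none([None, "mask"])
```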
- # [batch_size, sequence_length, hidden_size] --> [batch_size, hidden_size, sequence_length] hidden_states = hidden_states.permute(0, 2, 1) @@ -468,7 +459,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -498,12 +488,6 @@ def forward( token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds @@ -511,7 +495,6 @@ def forward( encoder_outputs = self.encoder( hidden_states=embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -557,7 +540,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -577,7 +559,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -630,7 +611,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -650,7 +630,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -716,7 +695,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -771,7 +749,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -821,7 +798,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: 
Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -839,7 +815,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -887,7 +862,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -902,7 +876,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/superglue/modeling_superglue.py b/src/transformers/models/superglue/modeling_superglue.py index 4fc524314e89..0f2f86799b7f 100644 --- a/src/transformers/models/superglue/modeling_superglue.py +++ b/src/transformers/models/superglue/modeling_superglue.py @@ -264,7 +264,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, @@ -325,10 +324,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -389,7 +384,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, @@ -397,7 +391,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_attentions=output_attentions, diff --git a/src/transformers/models/swin/modeling_swin.py b/src/transformers/models/swin/modeling_swin.py index 7f9e04337ba4..c9fdc0d7d044 100644 --- a/src/transformers/models/swin/modeling_swin.py +++ b/src/transformers/models/swin/modeling_swin.py @@ -433,7 +433,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: batch_size, dim, num_channels = hidden_states.shape @@ -472,10 +471,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
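The Swin-family hunks here and below all operate on windowed sequences: the feature map is cut into fixed, non-overlapping windows before attention, which is where `hidden_states_windows` and the per-window `attn_mask` come from. A sketch of that partitioning (an illustrative helper, not the module's real one):

```python
import torch

# [batch, height, width, channels] -> [num_windows * batch, tokens, channels],
# so attention runs independently inside each window.
def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    batch, height, width, channels = x.shape
    x = x.view(batch, height // window_size, window_size,
               width // window_size, window_size, channels)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous()
    return windows.view(-1, window_size * window_size, channels)

feature_map = torch.randn(1, 8, 8, 96)
print(window_partition(feature_map, 4).shape)  # torch.Size([4, 16, 96])
```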
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) @@ -528,10 +523,9 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: - self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions) + self_outputs = self.self(hidden_states, attention_mask, output_attentions) attention_output = self.output(self_outputs[0], hidden_states) outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them return outputs @@ -625,7 +619,6 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, always_partition: Optional[bool] = False, ) -> tuple[torch.Tensor, torch.Tensor]: @@ -658,9 +651,7 @@ def forward( height_pad, width_pad, dtype=hidden_states.dtype, device=hidden_states_windows.device ) - attention_outputs = self.attention( - hidden_states_windows, attn_mask, head_mask, output_attentions=output_attentions - ) + attention_outputs = self.attention(hidden_states_windows, attn_mask, output_attentions=output_attentions) attention_output = attention_outputs[0] @@ -720,17 +711,12 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, always_partition: Optional[bool] = False, ) -> tuple[torch.Tensor]: height, width = input_dimensions for i, layer_module in enumerate(self.blocks): - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module( - hidden_states, input_dimensions, layer_head_mask, output_attentions, always_partition - ) + layer_outputs = layer_module(hidden_states, input_dimensions, output_attentions, always_partition) hidden_states = layer_outputs[0] @@ -776,7 +762,6 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, output_hidden_states_before_downsampling: Optional[bool] = False, @@ -796,11 +781,7 @@ def forward( all_reshaped_hidden_states += (reshaped_hidden_state,) for i, layer_module in enumerate(self.layers): - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module( - hidden_states, input_dimensions, layer_head_mask, output_attentions, always_partition - ) + layer_outputs = layer_module(hidden_states, input_dimensions, output_attentions, always_partition) hidden_states = layer_outputs[0] hidden_states_before_downsampling = layer_outputs[1] @@ -905,7 +886,6 @@ def forward( self, pixel_values: Optional[torch.FloatTensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, @@ -924,13 +904,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 
1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, len(self.config.depths)) - embedding_output, input_dimensions = self.embeddings( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) @@ -938,7 +911,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, input_dimensions, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1000,7 +972,6 @@ def forward( self, pixel_values: Optional[torch.FloatTensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, @@ -1038,7 +1009,6 @@ def forward( outputs = self.swin( pixel_values, bool_masked_pos=bool_masked_pos, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1114,7 +1084,6 @@ def __init__(self, config): def forward( self, pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1131,7 +1100,6 @@ def forward( outputs = self.swin( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1228,7 +1196,6 @@ def forward( outputs = self.encoder( embedding_output, input_dimensions, - head_mask=None, output_attentions=output_attentions, output_hidden_states=True, output_hidden_states_before_downsampling=True, diff --git a/src/transformers/models/swin2sr/modeling_swin2sr.py b/src/transformers/models/swin2sr/modeling_swin2sr.py index 83dfe13baded..19bd7b208a19 100644 --- a/src/transformers/models/swin2sr/modeling_swin2sr.py +++ b/src/transformers/models/swin2sr/modeling_swin2sr.py @@ -294,7 +294,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: batch_size, dim, num_channels = hidden_states.shape @@ -349,8 +348,6 @@ def forward( attention_probs = self.dropout(attention_probs) # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -414,10 +411,9 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: - self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions) + self_outputs = self.self(hidden_states, attention_mask, output_attentions) attention_output = self.output(self_outputs[0], hidden_states) outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them return outputs @@ -523,7 +519,6 @@ def forward( self, hidden_states: 
torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor, torch.Tensor]: height, width = input_dimensions @@ -547,9 +542,7 @@ def forward( if attn_mask is not None: attn_mask = attn_mask.to(hidden_states_windows.device) - attention_outputs = self.attention( - hidden_states_windows, attn_mask, head_mask, output_attentions=output_attentions - ) + attention_outputs = self.attention(hidden_states_windows, attn_mask, output_attentions=output_attentions) attention_output = attention_outputs[0] @@ -621,16 +614,13 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: residual = hidden_states height, width = input_dimensions for i, layer_module in enumerate(self.layers): - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module(hidden_states, input_dimensions, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, input_dimensions, output_attentions) hidden_states = layer_outputs[0] @@ -676,7 +666,6 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, @@ -689,9 +678,7 @@ def forward( all_hidden_states += (hidden_states,) for i, stage_module in enumerate(self.stages): - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = stage_module(hidden_states, input_dimensions, layer_head_mask, output_attentions) + layer_outputs = stage_module(hidden_states, input_dimensions, output_attentions) hidden_states = layer_outputs[0] output_dimensions = layer_outputs[1] @@ -788,7 +775,6 @@ def pad_and_normalize(self, pixel_values): def forward( self, pixel_values: torch.FloatTensor, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -799,13 +785,6 @@ def forward( ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, len(self.config.depths)) - _, _, height, width = pixel_values.shape # some preprocessing: padding + normalization @@ -817,7 +796,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, input_dimensions, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1014,7 +992,6 @@ def __init__(self, config): def forward( self, pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1065,7 +1042,6 @@ def forward( outputs = self.swin2sr( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git 
a/src/transformers/models/swinv2/modeling_swinv2.py b/src/transformers/models/swinv2/modeling_swinv2.py index ddc4dab73768..33be714f96b3 100644 --- a/src/transformers/models/swinv2/modeling_swinv2.py +++ b/src/transformers/models/swinv2/modeling_swinv2.py @@ -465,7 +465,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: batch_size, dim, num_channels = hidden_states.shape @@ -520,8 +519,6 @@ def forward( attention_probs = self.dropout(attention_probs) # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -584,10 +581,9 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: - self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions) + self_outputs = self.self(hidden_states, attention_mask, output_attentions) attention_output = self.output(self_outputs[0], hidden_states) outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them return outputs @@ -692,7 +688,6 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor, torch.Tensor]: height, width = input_dimensions @@ -716,9 +711,7 @@ def forward( if attn_mask is not None: attn_mask = attn_mask.to(hidden_states_windows.device) - attention_outputs = self.attention( - hidden_states_windows, attn_mask, head_mask, output_attentions=output_attentions - ) + attention_outputs = self.attention(hidden_states_windows, attn_mask, output_attentions=output_attentions) attention_output = attention_outputs[0] @@ -780,17 +773,13 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, ) -> tuple[torch.Tensor]: height, width = input_dimensions for i, layer_module in enumerate(self.blocks): - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, input_dimensions, - layer_head_mask, output_attentions, ) @@ -841,7 +830,6 @@ def forward( self, hidden_states: torch.Tensor, input_dimensions: tuple[int, int], - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, output_hidden_states_before_downsampling: Optional[bool] = False, @@ -860,12 +848,9 @@ def forward( all_reshaped_hidden_states += (reshaped_hidden_state,) for i, layer_module in enumerate(self.layers): - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, input_dimensions, - layer_head_mask, output_attentions, ) @@ -977,7 +962,6 @@ def forward( self, pixel_values: Optional[torch.FloatTensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, @@ -996,13 +980,6 @@ def forward( if pixel_values is None: raise 
ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, len(self.config.depths)) - embedding_output, input_dimensions = self.embeddings( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) @@ -1010,7 +987,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, input_dimensions, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1074,7 +1050,6 @@ def forward( self, pixel_values: Optional[torch.FloatTensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, @@ -1112,7 +1087,6 @@ def forward( outputs = self.swinv2( pixel_values, bool_masked_pos=bool_masked_pos, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1189,7 +1163,6 @@ def __init__(self, config): def forward( self, pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1206,7 +1179,6 @@ def forward( outputs = self.swinv2( pixel_values, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, @@ -1297,7 +1269,6 @@ def forward( outputs = self.encoder( embedding_output, input_dimensions, - head_mask=None, output_attentions=output_attentions, output_hidden_states=True, output_hidden_states_before_downsampling=True, diff --git a/src/transformers/models/switch_transformers/modeling_switch_transformers.py b/src/transformers/models/switch_transformers/modeling_switch_transformers.py index 761f1c1ccc8f..653a25c0cf3d 100644 --- a/src/transformers/models/switch_transformers/modeling_switch_transformers.py +++ b/src/transformers/models/switch_transformers/modeling_switch_transformers.py @@ -16,7 +16,6 @@ import copy import math -import warnings from typing import Optional, Union import torch @@ -484,7 +483,6 @@ def forward( key_value_states=None, position_bias=None, past_key_values=None, - layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, @@ -572,10 +570,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -605,7 +599,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -616,7 +609,6 @@ def forward( normed_hidden_states, mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, 
past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -644,7 +636,6 @@ def forward( key_value_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, query_length=None, @@ -657,7 +648,6 @@ def forward( mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, query_length=query_length, @@ -693,8 +683,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -706,7 +694,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -727,7 +714,6 @@ def forward( key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, query_length=cache_position[-1] + 1, use_cache=use_cache, @@ -892,8 +878,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, @@ -991,9 +975,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_router_probs = () if output_router_logits else None @@ -1004,9 +985,6 @@ def forward( hidden_states = self.dropout(inputs_embeds) for i, layer_module in enumerate(self.block): - layer_head_mask = head_mask[i] - cross_attn_layer_head_mask = cross_attn_head_mask[i] - if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -1017,8 +995,6 @@ def forward( encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1202,15 +1178,6 @@ def _prepare_4d_causal_attention_mask_with_cache_position( return causal_mask -# Warning message for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask -__HEAD_MASK_WARNING_MSG = """ -The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. Currently, -`decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions. -If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers, -num_heads)`. 
-""" - - @auto_docstring class SwitchTransformersModel(SwitchTransformersPreTrainedModel): _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"] @@ -1267,9 +1234,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1310,18 +1274,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Example: @@ -1347,12 +1299,6 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - if ( output_router_logits and self.config.num_sparse_encoder_layers == 0 @@ -1368,7 +1314,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, @@ -1392,8 +1337,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1479,9 +1422,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1523,18 +1463,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. 
- decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for @@ -1566,12 +1494,6 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: # Convert encoder inputs in embeddings if needed @@ -1579,7 +1501,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, @@ -1607,8 +1528,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1751,7 +1670,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1788,7 +1706,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, output_router_logits=output_router_logits, diff --git a/src/transformers/models/t5/modeling_t5.py b/src/transformers/models/t5/modeling_t5.py index acb3f8ccf826..704e1ca428a4 100644 --- a/src/transformers/models/t5/modeling_t5.py +++ b/src/transformers/models/t5/modeling_t5.py @@ -16,7 +16,6 @@ import copy import math -import warnings from typing import Optional, Union import torch @@ -296,7 +295,6 @@ def forward( key_value_states=None, position_bias=None, past_key_values=None, - layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, @@ -384,10 +382,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = 
torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -416,7 +410,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -427,7 +420,6 @@ def forward( normed_hidden_states, mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -452,7 +444,6 @@ def forward( key_value_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, query_length=None, @@ -465,7 +456,6 @@ def forward( mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, query_length=query_length, @@ -499,8 +489,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -511,7 +499,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -536,7 +523,6 @@ def forward( key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, query_length=cache_position[-1] + 1, use_cache=use_cache, @@ -723,8 +709,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, @@ -825,9 +809,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions and self.is_decoder) else None @@ -837,8 +818,6 @@ def forward( hidden_states = self.dropout(inputs_embeds) for i, layer_module in enumerate(self.block): - layer_head_mask = head_mask[i] - cross_attn_layer_head_mask = cross_attn_head_mask[i] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -849,8 +828,6 @@ def forward( encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, # as a positional argument for gradient checkpointing - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1025,15 +1002,6 @@ def _prepare_4d_causal_attention_mask_with_cache_position( return causal_mask -# Warning message for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask -__HEAD_MASK_WARNING_MSG = """ -The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. Currently, -`decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions. 
-If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers, -num_heads)`. -""" - - @auto_docstring class T5Model(T5PreTrainedModel): _keys_to_ignore_on_load_unexpected = [ @@ -1091,9 +1059,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1131,18 +1096,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. Example: @@ -1168,19 +1121,12 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1202,8 +1148,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1283,9 +1227,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1324,18 +1265,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. 
Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for @@ -1367,12 +1296,6 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: # Convert encoder inputs in embeddings if needed @@ -1380,7 +1303,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1406,8 +1328,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1493,7 +1413,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1528,7 +1447,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1562,9 +1480,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1601,18 +1516,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. 
- decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). @@ -1642,9 +1545,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1723,7 +1623,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1749,7 +1648,6 @@ def forward( outputs = self.transformer( input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1829,9 +1727,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1869,18 +1764,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict use_cache = use_cache if use_cache is not None else self.config.use_cache @@ -1902,19 +1785,12 @@ def forward( use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict - # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask - if head_mask is not None and decoder_head_mask is None: - if self.config.num_layers == self.config.num_decoder_layers: - warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) - decoder_head_mask = head_mask - # Encode if needed (training, first prediction pass) if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1936,8 +1812,6 @@ def forward( past_key_values=None, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/tapas/modeling_tapas.py b/src/transformers/models/tapas/modeling_tapas.py index cedd1fabbb3f..a9f3c833eab6 100644 --- a/src/transformers/models/tapas/modeling_tapas.py +++ b/src/transformers/models/tapas/modeling_tapas.py @@ -168,7 +168,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, past_key_values=None, output_attentions=False, @@ -235,10 +234,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -297,7 +292,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, @@ -306,7 +300,6 @@ def forward( self_outputs = self.self( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -368,7 +361,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -378,7 +370,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, past_key_values=past_key_values, cache_position=cache_position, @@ -396,7 +387,6 @@ def forward( cross_attention_outputs = self.crossattention( attention_output, attention_mask=encoder_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, output_attentions=output_attentions, @@ -430,7 +420,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, @@ -456,12 +445,9 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -625,7 +611,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -708,20 +693,12 @@ class for more info. 
else: encoder_extended_attention_mask = None - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds ) encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, output_attentions=output_attentions, @@ -771,7 +748,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -831,7 +807,6 @@ class for more info. attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -909,7 +884,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, table_mask: Optional[torch.LongTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -987,7 +961,6 @@ class for more info. attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1237,7 +1210,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1297,7 +1269,6 @@ class for more info. 
attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py b/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py index 462656986711..5a7c6fac3e10 100644 --- a/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py +++ b/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py @@ -287,7 +287,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -299,9 +298,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -359,7 +355,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -428,7 +423,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -463,7 +457,6 @@ def forward( self, hidden_states: torch.FloatTensor, attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, output_attentions: Optional[bool] = False, ) -> tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: """ @@ -471,8 +464,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
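Every file touched by this patch deletes the same two-step eager-attention pattern: multiply the softmaxed attention probabilities by a binary per-head mask, then matmul with the values. For reference, here is a minimal standalone sketch of the removed behavior under toy shapes (`eager_attention_with_head_mask` is a hypothetical name, not library code):

```python
import torch
import torch.nn.functional as F

def eager_attention_with_head_mask(query, key, value, head_mask=None):
    # query/key/value: (batch, num_heads, seq_len, head_dim)
    scores = torch.matmul(query, key.transpose(-1, -2)) / query.size(-1) ** 0.5
    attn_probs = F.softmax(scores, dim=-1)
    if head_mask is not None:
        # head_mask: (num_heads,), where 1.0 keeps a head and 0.0 silences it.
        # Viewed as (1, num_heads, 1, 1) it broadcasts over batch and sequence,
        # matching the deleted `attn_weights * head_mask.view(1, -1, 1, 1)`.
        attn_probs = attn_probs * head_mask.view(1, -1, 1, 1)
    return torch.matmul(attn_probs, value)

q = torch.randn(2, 4, 8, 16)  # (batch, heads, seq, head_dim)
k, v = torch.randn_like(q), torch.randn_like(q)
out = eager_attention_with_head_mask(q, k, v, head_mask=torch.tensor([1.0, 0.0, 1.0, 1.0]))
assert out[:, 1].abs().sum() == 0  # head 1 contributes nothing
```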
@@ -481,7 +472,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -550,8 +540,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -566,10 +554,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -585,7 +569,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -602,7 +585,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -663,8 +645,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -688,8 +668,6 @@ def _update_causal_mask( # 2d mask is passed through the layers attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. attention_mask = _prepare_4d_causal_attention_mask_for_sdpa( attention_mask, input_shape, @@ -729,9 +707,6 @@ def _update_cross_attn_mask( if self.config._attn_implementation == "flash_attention_2": encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, inputs_embeds.dtype, @@ -784,7 +759,6 @@ def __init__(self, config: TimeSeriesTransformerConfig): def forward( self, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -799,12 +773,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors @@ -838,14 +806,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -862,7 +822,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -915,8 +874,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, @@ -945,19 +902,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). 
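The model-level hunks drop the matching plumbing: `get_head_mask` expanded a `(num_heads,)` or `(num_layers, num_heads)` input into per-layer broadcastable masks (per the deleted comments), each layer then indexed it with `head_mask[i]`, and the encoder validated the layer count. A hedged reimplementation sketch under those documented shapes, with `expand_head_mask` as a hypothetical helper:

```python
import torch

def expand_head_mask(head_mask, num_layers):
    if head_mask is None:
        return [None] * num_layers
    if head_mask.dim() == 1:               # (num_heads,): reuse the mask for every layer
        head_mask = head_mask[None, :].expand(num_layers, -1)
    if head_mask.size(0) != num_layers:    # the layer-count check the diff deletes
        raise ValueError(
            f"head_mask should be specified for {num_layers} layers, "
            f"but it is for {head_mask.size(0)}."
        )
    # (num_layers, 1, num_heads, 1, 1): broadcastable against each layer's
    # (batch, num_heads, seq_len, seq_len) attention probabilities
    return head_mask[:, None, :, None, None]

masks = expand_head_mask(torch.ones(12, 8), num_layers=12)
layer_mask = masks[3]                      # the `head_mask[i]` per-layer indexing
assert layer_mask.shape == (1, 8, 1, 1)
```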
@@ -1038,15 +982,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -1061,8 +996,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -1248,9 +1181,6 @@ def forward( future_values: Optional[torch.Tensor] = None, future_time_features: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, output_hidden_states: Optional[bool] = None, @@ -1343,11 +1273,6 @@ def forward( must but known at prediction time. The `num_features` here is equal to `config.`num_time_features` + `config.num_dynamic_real_features`. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of @@ -1402,7 +1327,6 @@ def forward( enc_input = transformer_inputs[:, : self.config.context_length, ...] 
encoder_outputs = self.encoder( inputs_embeds=enc_input, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1429,8 +1353,6 @@ def forward( inputs_embeds=dec_input, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1510,9 +1432,6 @@ def forward( future_time_features: Optional[torch.Tensor] = None, future_observed_mask: Optional[torch.Tensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, past_key_values: Optional[Cache] = None, output_hidden_states: Optional[bool] = None, @@ -1613,11 +1532,6 @@ def forward( - 0 for values that are **missing** (i.e. NaNs that were replaced by zeros). This mask is used to filter out missing values for the final loss calculation. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): Tuple consists of `last_hidden_state`, `hidden_states` (*optional*) and `attentions` (*optional*) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` (*optional*) is a sequence of @@ -1682,9 +1596,6 @@ def forward( future_values=future_values, future_time_features=future_time_features, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, past_key_values=past_key_values, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/trocr/modeling_trocr.py b/src/transformers/models/trocr/modeling_trocr.py index 70cded0a5147..0aba44015fa6 100644 --- a/src/transformers/models/trocr/modeling_trocr.py +++ b/src/transformers/models/trocr/modeling_trocr.py @@ -188,7 +188,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: @@ -260,15 +259,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. 
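The TrOCR hunk above removes the flattened-layout variant of the same masking: validate that `layer_head_mask` has shape `(num_heads,)`, broadcast it over attention weights stored as `(batch * num_heads, tgt_len, src_len)`, and flatten back. Sketched standalone with toy inputs (`apply_layer_head_mask` is a hypothetical name):

```python
import torch

def apply_layer_head_mask(attn_weights, layer_head_mask, bsz, num_heads):
    # attn_weights: (bsz * num_heads, tgt_len, src_len), as in the old TrOCR code
    if layer_head_mask.size() != (num_heads,):
        raise ValueError(
            f"Head mask for a single layer should be of size {(num_heads,)}, "
            f"but is {layer_head_mask.size()}"
        )
    tgt_len, src_len = attn_weights.shape[1:]
    attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(
        bsz, num_heads, tgt_len, src_len
    )
    return attn_weights.view(bsz * num_heads, tgt_len, src_len)

w = torch.softmax(torch.randn(2 * 4, 5, 5), dim=-1)
masked = apply_layer_head_mask(w, torch.tensor([1.0, 0.0, 1.0, 1.0]), bsz=2, num_heads=4)
```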
@@ -342,8 +332,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -358,10 +346,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size *(decoder_attention_heads,)*. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -374,7 +358,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -392,7 +375,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -484,8 +466,6 @@ def forward( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, use_cache=None, @@ -522,19 +502,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in encoder to avoid performing cross-attention - on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -632,14 +599,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -654,8 +613,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -748,8 +705,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -760,11 +715,6 @@ def forward( cache_position: Optional[torch.Tensor] = None, ) -> Union[tuple, CausalLMOutputWithCrossAttentions]: r""" - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -830,8 +780,6 @@ def forward( attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/tvp/modeling_tvp.py b/src/transformers/models/tvp/modeling_tvp.py index dcbd220331f9..eb6e3da17b38 100644 --- a/src/transformers/models/tvp/modeling_tvp.py +++ b/src/transformers/models/tvp/modeling_tvp.py @@ -379,7 +379,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions: Optional[bool] = None, ): batch_size, sequence_length = hidden_states.shape[:2] @@ -405,10 +404,6 @@ def forward( # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.attn_dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - attn_output = torch.matmul(attention_probs, value_layer) attn_output = attn_output.transpose(1, 2).contiguous() attn_output = attn_output.reshape(batch_size, sequence_length, self.all_head_size) @@ -462,13 +457,11 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions: Optional[bool] = None, ): self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -490,7 +483,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -507,7 +499,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i], output_attentions) + layer_outputs = layer_module(hidden_states, attention_mask, output_attentions) hidden_states = layer_outputs[0] if output_attentions: @@ -754,7 +746,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -803,7 +794,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=self.get_head_mask(head_mask, self.config.num_hidden_layers), output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -857,7 +847,6 @@ def forward( pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, labels: Optional[tuple[torch.Tensor]] = None, - head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -885,7 +874,6 @@ def forward( input_ids, pixel_values, attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git a/src/transformers/models/udop/modeling_udop.py b/src/transformers/models/udop/modeling_udop.py index 22f45731030e..58de9c10b117 100644 --- a/src/transformers/models/udop/modeling_udop.py +++ b/src/transformers/models/udop/modeling_udop.py @@ -562,7 +562,6 @@ def forward( key_value_states=None, position_bias=None, past_key_values=None, - layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, @@ -650,10 +649,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -683,7 +678,6 @@ def forward( hidden_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -694,7 +688,6 @@ def forward( normed_hidden_states, 
mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -720,7 +713,6 @@ def forward( key_value_states, attention_mask=None, position_bias=None, - layer_head_mask=None, past_key_values=None, use_cache=False, query_length=None, @@ -733,7 +725,6 @@ def forward( mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, query_length=query_length, @@ -770,8 +761,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -782,7 +771,6 @@ def forward( hidden_states, attention_mask=attention_mask, position_bias=position_bias, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -807,7 +795,6 @@ def forward( key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, query_length=cache_position[-1] + 1, use_cache=use_cache, @@ -1147,8 +1134,6 @@ def forward( visual_bbox=None, image_embeddings=None, position_bias=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, @@ -1263,8 +1248,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions and self.is_decoder) else None @@ -1291,7 +1274,6 @@ def forward( encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, # as a positional argument for gradient checkpointing - layer_head_mask=head_mask[i], past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1526,10 +1508,7 @@ def forward( inputs_embeds: Optional[Tensor] = None, encoder_outputs: Optional[Tensor] = None, past_key_values: Optional[Cache] = None, - head_mask: Optional[Tensor] = None, decoder_inputs_embeds: Optional[Tensor] = None, - decoder_head_mask: Optional[Tensor] = None, - cross_attn_head_mask: Optional[Tensor] = None, use_cache=True, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1557,16 +1536,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
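The UDOP docstrings deleted above spell out the old convention for these arguments: float tensors of shape `(num_heads,)` (applied to every layer) or `(num_layers, num_heads)`, where 1 keeps a head and 0 silences it. A hypothetical pre-removal call might have looked like the sketch below; after this diff the keyword simply no longer exists, so such code has to be dropped from callers:

```python
import torch

num_layers, num_heads = 12, 12

# Silence the first two heads of every decoder layer (old convention).
decoder_head_mask = torch.ones(num_layers, num_heads)
decoder_head_mask[:, :2] = 0.0

# Previously forwarded as `model(..., decoder_head_mask=decoder_head_mask)`;
# with this change the argument must be removed from calling code.
```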
Example:

@@ -1610,7 +1579,6 @@ def forward(
            pixel_values=pixel_values,
            visual_bbox=visual_bbox,
            inputs_embeds=inputs_embeds,
-            head_mask=head_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
@@ -1627,8 +1595,6 @@ def forward(
            past_key_values=past_key_values,
            encoder_hidden_states=hidden_states,
            encoder_attention_mask=encoder_attention_mask,
-            head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
@@ -1722,10 +1688,7 @@ def forward(
        inputs_embeds: Optional[Tensor] = None,
        encoder_outputs: Optional[Tensor] = None,
        past_key_values: Optional[Cache] = None,
-        head_mask: Optional[Tensor] = None,
        decoder_inputs_embeds: Optional[Tensor] = None,
-        decoder_head_mask: Optional[Tensor] = None,
-        cross_attn_head_mask: Optional[Tensor] = None,
        use_cache=True,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
@@ -1754,16 +1717,6 @@ def forward(
        decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
            Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
            be used by default.
-        decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-        cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
-            `[0, 1]`:
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
        labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`.
All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., @@ -1815,7 +1768,6 @@ def forward( pixel_values=pixel_values, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1832,8 +1784,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1921,7 +1871,6 @@ def forward( attention_mask: Optional[Tensor] = None, pixel_values: Optional[Tensor] = None, visual_bbox: Optional[dict[str, Any]] = None, - head_mask: Optional[Tensor] = None, inputs_embeds: Optional[Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1985,7 +1934,6 @@ def forward( pixel_values=pixel_values, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git a/src/transformers/models/umt5/modeling_umt5.py b/src/transformers/models/umt5/modeling_umt5.py index 55dc3cf4ca84..8d9d74fc24e6 100644 --- a/src/transformers/models/umt5/modeling_umt5.py +++ b/src/transformers/models/umt5/modeling_umt5.py @@ -264,7 +264,6 @@ def forward( encoder_hidden_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, cache_position: Optional[torch.Tensor] = None, ): batch_size, seq_length = hidden_states.shape[:2] @@ -341,10 +340,6 @@ def forward( attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - # Mask heads if we want to - if layer_head_mask is not None: - attn_weights = attn_weights * layer_head_mask - attn_output = torch.matmul(attn_weights, value_states) attn_output = attn_output.transpose(1, 2).contiguous() @@ -366,7 +361,6 @@ def forward( self, hidden_states, attention_mask=None, - layer_head_mask=None, past_key_values=None, cache_position=None, ): @@ -374,7 +368,6 @@ def forward( attention_output = self.SelfAttention( normed_hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, cache_position=cache_position, ) @@ -396,7 +389,6 @@ def forward( hidden_states, encoder_hidden_states=None, attention_mask=None, - layer_head_mask=None, past_key_values=None, cache_position=None, ): @@ -405,7 +397,6 @@ def forward( normed_hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, cache_position=cache_position, ) @@ -432,8 +423,6 @@ def forward( attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, - layer_head_mask=None, - cross_attn_layer_head_mask=None, past_key_values=None, use_cache=False, output_attentions=False, @@ -442,7 +431,6 @@ def forward( hidden_states, self_attn_weights = self.layer[0]( hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, past_key_values=past_key_values, cache_position=cache_position, ) @@ -461,7 +449,6 @@ def forward( hidden_states, 
encoder_hidden_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, cache_position=cache_position, ) @@ -645,8 +632,6 @@ def forward( encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, @@ -746,9 +731,6 @@ def forward( else: encoder_extended_attention_mask = None - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.num_layers) - cross_attn_head_mask = self.get_head_mask(cross_attn_head_mask, self.config.num_layers) all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions and self.is_decoder else None @@ -756,9 +738,6 @@ def forward( hidden_states = self.dropout(inputs_embeds) for i, layer_module in enumerate(self.block): - layer_head_mask = head_mask[i] - cross_attn_layer_head_mask = cross_attn_head_mask[i] - if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) @@ -767,8 +746,6 @@ def forward( causal_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_extended_attention_mask, - layer_head_mask=layer_head_mask, - cross_attn_layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, @@ -1013,9 +990,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, @@ -1053,18 +1027,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
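Earlier hunks in this file also drop the `get_head_mask` call that normalized user input before the layer loop. Per the shape comments deleted elsewhere in this diff, it broadcast `(num_heads,)` or `(num_layers, num_heads)` inputs into one broadcastable mask per layer. The following is a loose approximation of that behavior under those stated shapes, not the helper's actual implementation:

```python
import torch
from typing import Optional

def expand_head_mask(head_mask: Optional[torch.Tensor], num_layers: int):
    """Sketch: one (1, num_heads, 1, 1) mask per layer, or None for all."""
    if head_mask is None:
        return [None] * num_layers
    if head_mask.dim() == 1:  # (num_heads,) -> reuse the same row for every layer
        head_mask = head_mask.unsqueeze(0).expand(num_layers, -1)
    # (num_layers, num_heads) -> broadcastable against (bsz, heads, tgt, src)
    return [m.view(1, -1, 1, 1) for m in head_mask]

per_layer = expand_head_mask(torch.ones(8), num_layers=6)
print(per_layer[0].shape)  # torch.Size([1, 8, 1, 1])
```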
Example:

@@ -1096,7 +1058,6 @@ def forward(
            input_ids=input_ids,
            attention_mask=attention_mask,
            inputs_embeds=inputs_embeds,
-            head_mask=head_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
@@ -1118,8 +1079,6 @@ def forward(
            past_key_values=past_key_values,
            encoder_hidden_states=hidden_states,
            encoder_attention_mask=attention_mask,
-            head_mask=decoder_head_mask,
-            cross_attn_head_mask=cross_attn_head_mask,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
@@ -1217,9 +1176,6 @@ def forward(
        attention_mask: Optional[torch.FloatTensor] = None,
        decoder_input_ids: Optional[torch.LongTensor] = None,
        decoder_attention_mask: Optional[torch.BoolTensor] = None,
-        head_mask: Optional[torch.FloatTensor] = None,
-        decoder_head_mask: Optional[torch.FloatTensor] = None,
-        cross_attn_head_mask: Optional[torch.Tensor] = None,
        encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None,
        past_key_values: Optional[Cache] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
@@ -1258,18 +1214,6 @@ def forward(
        decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
            Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
            be used by default.
-        decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0,
-            1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
-        cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
-            Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
-            `[0, 1]`:
-
-            - 1 indicates the head is **not masked**,
-            - 0 indicates the head is **masked**.
        labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size - 1]`.
All labels set to `-100` are ignored (masked), the loss is only computed for @@ -1305,7 +1249,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1331,8 +1274,6 @@ def forward( past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1442,7 +1383,6 @@ def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -1477,7 +1417,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1512,9 +1451,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[list[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, @@ -1551,18 +1487,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
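A pattern worth flagging before the UniSpeech and UniSpeechSat hunks further down: with `head_mask` gone, the comment justifying an eager fallback disappears, but `_update_full_mask` still expands padding masks from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]` for SDPA. Below is a hedged approximation of that expansion with plain tensor ops; the real `_prepare_4d_attention_mask_for_sdpa` helper handles additional dtype and edge cases:

```python
import torch
from typing import Optional

def expand_mask_4d(
    attention_mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None
) -> torch.Tensor:
    """Approximate [bsz, src_len] -> [bsz, 1, tgt_len, src_len] additive mask."""
    bsz, src_len = attention_mask.shape
    tgt_len = tgt_len if tgt_len is not None else src_len
    expanded = attention_mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
    # Kept positions -> 0.0; padded positions -> large negative additive bias.
    return (1.0 - expanded) * torch.finfo(dtype).min

mask = expand_mask_4d(torch.tensor([[1, 1, 0]]), torch.float32)
print(mask.shape)  # torch.Size([1, 1, 3, 3])
```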
@@ -1592,9 +1516,6 @@ def forward( attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, @@ -1676,7 +1597,6 @@ def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1702,7 +1622,6 @@ def forward( outputs = self.transformer( input_ids, attention_mask=attention_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1785,9 +1704,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - decoder_head_mask: Optional[torch.FloatTensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.Tensor]]] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1825,18 +1741,6 @@ def forward( decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - decoder_head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - cross_attn_head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in - `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict use_cache = use_cache if use_cache is not None else self.config.use_cache @@ -1864,7 +1768,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1886,8 +1789,6 @@ def forward( past_key_values=None, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/models/unispeech/modeling_unispeech.py b/src/transformers/models/unispeech/modeling_unispeech.py index ab0d77b5623e..2359ad1c9512 100755 --- a/src/transformers/models/unispeech/modeling_unispeech.py +++ b/src/transformers/models/unispeech/modeling_unispeech.py @@ -276,7 +276,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -288,9 +287,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -337,7 +333,6 @@ def forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, # TODO: we need a refactor so that the different attention modules can get their specific kwargs # ATM, we have mixed things encoder, decoder, and encoder-decoder attn @@ -376,7 +371,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -525,8 +519,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -693,8 +685,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": diff --git a/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py b/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py index 23fcd7c3227e..f880c960556b 100755 --- a/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py +++ b/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py @@ -281,7 +281,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -293,9 +292,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -342,7 +338,6 @@ def forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, # TODO: we need a refactor so that the different attention modules can get their specific kwargs # ATM, we have mixed things encoder, decoder, and encoder-decoder attn @@ -381,7 +376,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -530,8 +524,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -698,8 +690,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. 
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": diff --git a/src/transformers/models/videomae/modeling_videomae.py b/src/transformers/models/videomae/modeling_videomae.py index 951bdd774142..693bc82a0473 100755 --- a/src/transformers/models/videomae/modeling_videomae.py +++ b/src/transformers/models/videomae/modeling_videomae.py @@ -236,7 +236,7 @@ def __init__(self, config: VideoMAEConfig) -> None: self.q_bias = None self.v_bias = None - def forward(self, hidden_states, head_mask: Optional[torch.Tensor] = None) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: Optional[torch.Tensor] = None) -> tuple[torch.Tensor, torch.Tensor]: batch_size, seq_length, _ = hidden_states.shape k_bias = torch.zeros_like(self.v_bias, requires_grad=False) if self.q_bias is not None else None @@ -257,7 +257,7 @@ def forward(self, hidden_states, head_mask: Optional[torch.Tensor] = None) -> tu query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -313,8 +313,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -363,9 +363,9 @@ def __init__(self, config: VideoMAEConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -388,10 +388,9 @@ def __init__(self, config: VideoMAEConfig): self.layer = nn.ModuleList([VideoMAELayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -457,7 +456,6 @@ def forward( self, pixel_values: torch.FloatTensor, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutput: r""" @@ -540,16 +538,9 @@ def forward( [1, 1568, 768] ```""" - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input 
head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(pixel_values, bool_masked_pos) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state if self.layernorm is not None: sequence_output = self.layernorm(sequence_output) @@ -583,7 +574,7 @@ def __init__(self, config: VideoMAEConfig): def forward(self, hidden_states: torch.Tensor, return_token_num: int): # Apply transformer layers for layer_module in self.decoder_layers: - hidden_states = layer_module(hidden_states, head_mask=None) + hidden_states = layer_module(hidden_states) if return_token_num > 0: hidden_states = hidden_states[:, -return_token_num:] @@ -624,7 +615,6 @@ def forward( self, pixel_values: torch.FloatTensor, bool_masked_pos: torch.BoolTensor, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> VideoMAEForPreTrainingOutput: r""" @@ -654,9 +644,7 @@ def forward( >>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) >>> loss = outputs.loss ```""" - outputs: BaseModelOutput = self.videomae( - pixel_values, bool_masked_pos=bool_masked_pos, head_mask=head_mask, **kwargs - ) + outputs: BaseModelOutput = self.videomae(pixel_values, bool_masked_pos=bool_masked_pos, **kwargs) sequence_output = outputs.last_hidden_state sequence_output = self.encoder_to_decoder(sequence_output) @@ -791,7 +779,6 @@ def __init__(self, config): def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> ImageClassifierOutput: @@ -878,7 +865,7 @@ def forward( eating spaghetti ```""" - outputs: BaseModelOutput = self.videomae(pixel_values, head_mask=head_mask, **kwargs) + outputs: BaseModelOutput = self.videomae(pixel_values, **kwargs) sequence_output = outputs.last_hidden_state if self.fc_norm is not None: diff --git a/src/transformers/models/vilt/modeling_vilt.py b/src/transformers/models/vilt/modeling_vilt.py index 8535b3c747e2..386883969916 100755 --- a/src/transformers/models/vilt/modeling_vilt.py +++ b/src/transformers/models/vilt/modeling_vilt.py @@ -325,7 +325,7 @@ def __init__(self, config): self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False): + def forward(self, hidden_states, attention_mask=None, output_attentions=False): batch_size, seq_length, _ = hidden_states.shape query_layer = ( self.query(hidden_states) @@ -357,10 +357,6 @@ def forward(self, hidden_states, attention_mask=None, head_mask=None, output_att # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -415,8 +411,8 @@ def prune_heads(self, heads): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False): - self_outputs = self.attention(hidden_states, attention_mask, head_mask, output_attentions) + def forward(self, hidden_states, attention_mask=None, output_attentions=False): + self_outputs = self.attention(hidden_states, attention_mask, output_attentions) attention_output = self.output(self_outputs[0], hidden_states) @@ -467,11 +463,10 @@ def __init__(self, config): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False): + def forward(self, hidden_states, attention_mask=None, output_attentions=False): self_attention_outputs = self.attention( self.layernorm_before(hidden_states), # in ViLT, layernorm is applied before self-attention attention_mask, - head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -503,7 +498,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, @@ -515,9 +509,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module(hidden_states, attention_mask, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, attention_mask, output_attentions) hidden_states = layer_outputs[0] @@ -599,7 +591,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, image_token_type_idx: Optional[int] = None, @@ -666,13 +657,6 @@ def forward( if pixel_mask is None: pixel_mask = torch.ones((image_batch_size, self.config.image_size, self.config.image_size), device=device) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output, attention_mask = self.embeddings( input_ids, attention_mask, @@ -691,7 +675,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -758,7 +741,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - 
head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -831,7 +813,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, @@ -936,7 +917,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -985,7 +965,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, @@ -1042,7 +1021,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1091,7 +1069,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, @@ -1147,7 +1124,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1218,7 +1194,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values[:, i, :, :, :] if pixel_values is not None else None, pixel_mask=pixel_mask[:, i, :, :] if pixel_mask is not None else None, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds[:, i, :, :] if image_embeds is not None else None, image_token_type_idx=i + 1, @@ -1277,7 +1252,6 @@ def forward( token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, image_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1301,7 +1275,6 @@ def forward( token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, diff --git a/src/transformers/models/visual_bert/modeling_visual_bert.py b/src/transformers/models/visual_bert/modeling_visual_bert.py index f0277a7bd820..4ee8d2701738 100755 --- a/src/transformers/models/visual_bert/modeling_visual_bert.py +++ b/src/transformers/models/visual_bert/modeling_visual_bert.py @@ -193,7 +193,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): batch_size, seq_length, _ = hidden_states.shape @@ -228,10 +227,6 @@ def forward( # seem a bit unusual, but is taken from the original 
Transformer paper. attention_probs = self.dropout(attention_probs) - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - context_layer = torch.matmul(attention_probs, value_layer) context_layer = context_layer.permute(0, 2, 1, 3).contiguous() @@ -287,13 +282,11 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): self_outputs = self.self( hidden_states, attention_mask, - head_mask, output_attentions, ) attention_output = self.output(self_outputs[0], hidden_states) @@ -345,13 +338,11 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, ): self_attention_outputs = self.attention( hidden_states, attention_mask, - head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -382,7 +373,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, @@ -394,9 +384,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module(hidden_states, attention_mask, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, attention_mask, output_attentions) hidden_states = layer_outputs[0] if output_attentions: @@ -585,7 +573,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, visual_embeds: Optional[torch.FloatTensor] = None, visual_attention_mask: Optional[torch.LongTensor] = None, @@ -682,13 +669,6 @@ def forward( attention_mask, (batch_size, input_shape) ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -722,7 +702,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -774,7 +753,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, visual_embeds: Optional[torch.FloatTensor] = None, visual_attention_mask: Optional[torch.LongTensor] = None, @@ -862,7 +840,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, visual_embeds=visual_embeds, visual_attention_mask=visual_attention_mask, @@ -919,7 +896,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: 
Optional[torch.FloatTensor] = None, visual_embeds: Optional[torch.FloatTensor] = None, visual_attention_mask: Optional[torch.LongTensor] = None, @@ -1048,7 +1024,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, visual_embeds=visual_embeds, visual_attention_mask=visual_attention_mask, @@ -1107,7 +1082,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, visual_embeds: Optional[torch.FloatTensor] = None, visual_attention_mask: Optional[torch.LongTensor] = None, @@ -1179,7 +1153,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, visual_embeds=visual_embeds, visual_attention_mask=visual_attention_mask, @@ -1245,7 +1218,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, visual_embeds: Optional[torch.FloatTensor] = None, visual_attention_mask: Optional[torch.LongTensor] = None, @@ -1314,7 +1286,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, visual_embeds=visual_embeds, visual_attention_mask=visual_attention_mask, @@ -1416,7 +1387,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, visual_embeds: Optional[torch.FloatTensor] = None, visual_attention_mask: Optional[torch.LongTensor] = None, @@ -1495,7 +1465,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, visual_embeds=visual_embeds, visual_attention_mask=visual_attention_mask, diff --git a/src/transformers/models/vit/modeling_vit.py b/src/transformers/models/vit/modeling_vit.py index d9c01927ffc4..849085bc08b1 100644 --- a/src/transformers/models/vit/modeling_vit.py +++ b/src/transformers/models/vit/modeling_vit.py @@ -218,9 +218,7 @@ def __init__(self, config: ViTConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -237,7 +235,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -291,8 +289,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def 
forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -338,9 +336,9 @@ def __init__(self, config: ViTConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -362,10 +360,9 @@ def __init__(self, config: ViTConfig): self.layer = nn.ModuleList([ViTLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -454,7 +451,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: @@ -466,13 +462,6 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - # TODO: maybe have a cleaner way to cast the input (from `ImageProcessor` side?) 
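Note that the ViT hunks here (like the ViLT and VideoMAE ones above) leave the `prune_heads` machinery intact, so permanent structured pruning remains the supported way to disable attention heads once the runtime `head_mask` is gone. A usage sketch; the checkpoint name is illustrative and unrelated to this diff:

```python
from transformers import ViTModel

model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

# Permanently remove heads 0 and 1 in layer 2 and head 5 in layer 7.
# Unlike the removed head_mask, this rewrites the projection weights
# instead of zeroing attention probabilities at run time.
model.prune_heads({2: [0, 1], 7: [5]})
```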
expected_dtype = self.embeddings.patch_embeddings.projection.weight.dtype if pixel_values.dtype != expected_dtype: @@ -482,7 +471,7 @@ def forward( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) @@ -542,7 +531,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ) -> MaskedImageModelingOutput: @@ -584,7 +572,6 @@ def forward( outputs: BaseModelOutputWithPooling = self.vit( pixel_values, bool_masked_pos=bool_masked_pos, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs, ) @@ -653,7 +640,6 @@ def __init__(self, config: ViTConfig): def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], @@ -667,7 +653,6 @@ def forward( outputs: BaseModelOutputWithPooling = self.vit( pixel_values, - head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs, ) diff --git a/src/transformers/models/vit_mae/modeling_vit_mae.py b/src/transformers/models/vit_mae/modeling_vit_mae.py index 72c90af31f81..2db4df13bc95 100755 --- a/src/transformers/models/vit_mae/modeling_vit_mae.py +++ b/src/transformers/models/vit_mae/modeling_vit_mae.py @@ -379,9 +379,7 @@ def __init__(self, config: ViTMAEConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -398,7 +396,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -454,8 +452,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -504,9 +502,9 @@ def __init__(self, config: ViTMAEConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = 
self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -529,10 +527,9 @@ def __init__(self, config: ViTMAEConfig): self.layer = nn.ModuleList([ViTMAELayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -599,7 +596,6 @@ def forward( self, pixel_values: Optional[torch.FloatTensor] = None, noise: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], ) -> ViTMAEModelOutput: @@ -631,18 +627,11 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output, mask, ids_restore = self.embeddings( pixel_values, noise=noise, interpolate_pos_encoding=interpolate_pos_encoding ) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) @@ -746,7 +735,7 @@ def forward(self, hidden_states: torch.Tensor, ids_restore: torch.Tensor, interp # Apply Transformer layers (blocks) for layer_module in self.decoder_layers: - hidden_states = layer_module(hidden_states, head_mask=None) + hidden_states = layer_module(hidden_states) hidden_states = self.decoder_norm(hidden_states) @@ -907,7 +896,6 @@ def forward( self, pixel_values: Optional[torch.FloatTensor] = None, noise: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], ) -> ViTMAEForPreTrainingOutput: @@ -939,7 +927,7 @@ def forward( ```""" outputs: ViTMAEModelOutput = self.vit( - pixel_values, noise=noise, head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs + pixel_values, noise=noise, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs ) latent = outputs.last_hidden_state diff --git a/src/transformers/models/vit_msn/modeling_vit_msn.py b/src/transformers/models/vit_msn/modeling_vit_msn.py index d66d94fcf56e..1fd3f0ed473f 100644 --- a/src/transformers/models/vit_msn/modeling_vit_msn.py +++ b/src/transformers/models/vit_msn/modeling_vit_msn.py @@ -216,9 +216,7 @@ def __init__(self, config: ViTMSNConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: 
torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -235,7 +233,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -291,8 +289,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -341,9 +339,9 @@ def __init__(self, config: ViTMSNConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -366,10 +364,9 @@ def __init__(self, config: ViTMSNConfig): self.layer = nn.ModuleList([ViTMSNLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -443,7 +440,6 @@ def forward( self, pixel_values: Optional[torch.Tensor] = None, bool_masked_pos: Optional[torch.BoolTensor] = None, - head_mask: Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutput: @@ -473,17 +469,10 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding ) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state sequence_output = 
self.layernorm(sequence_output) @@ -511,7 +500,6 @@ def __init__(self, config: ViTMSNConfig) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, interpolate_pos_encoding: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], @@ -543,9 +531,7 @@ def forward( ``` """ - outputs: BaseModelOutput = self.vit( - pixel_values, head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs - ) + outputs: BaseModelOutput = self.vit(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs) sequence_output = outputs.last_hidden_state logits = self.classifier(sequence_output[:, 0, :]) diff --git a/src/transformers/models/vitdet/modeling_vitdet.py b/src/transformers/models/vitdet/modeling_vitdet.py index f6702bc1a124..d5b38e0c48ae 100644 --- a/src/transformers/models/vitdet/modeling_vitdet.py +++ b/src/transformers/models/vitdet/modeling_vitdet.py @@ -475,7 +475,6 @@ def __init__( def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: hidden_states = hidden_states.permute(0, 2, 3, 1) @@ -541,7 +540,6 @@ def __init__(self, config: VitDetConfig) -> None: def forward( self, hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, @@ -553,9 +551,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, output_attentions) hidden_states = layer_outputs[0] @@ -667,7 +663,6 @@ class PreTrainedModel def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, @@ -700,18 +695,10 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(pixel_values) encoder_outputs = self.encoder( embedding_output, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git a/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py b/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py index 1c61763d5e56..712293fcd0d1 100644 --- a/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py +++ b/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py @@ -148,9 +148,7 @@ def __init__(self, config: VitPoseBackboneConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: 
Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -167,7 +165,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -223,8 +221,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -297,7 +295,6 @@ def forward( self, hidden_states: torch.Tensor, dataset_index: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: # Validate dataset_index when using multiple experts if self.num_experts > 1 and dataset_index is None: @@ -308,7 +305,7 @@ def forward( ) hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -338,13 +335,11 @@ def forward( self, hidden_states: torch.Tensor, dataset_index: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, ) -> BaseModelOutput: all_hidden_states = [hidden_states] if output_hidden_states else None for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, dataset_index, layer_head_mask) + hidden_states = layer_module(hidden_states, dataset_index) if all_hidden_states is not None: all_hidden_states.append(hidden_states) @@ -413,7 +408,6 @@ def forward( self, pixel_values: torch.Tensor, dataset_index: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_hidden_states: Optional[bool] = None, **kwargs, ): @@ -440,16 +434,9 @@ def forward( if output_hidden_states is None: output_hidden_states = self.config.output_hidden_states - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(pixel_values) outputs: BaseModelOutput = self.encoder( - embedding_output, dataset_index=dataset_index, head_mask=head_mask, output_hidden_states=True + embedding_output, dataset_index=dataset_index, output_hidden_states=True ) hidden_states = outputs.hidden_states diff --git a/src/transformers/models/vits/modeling_vits.py b/src/transformers/models/vits/modeling_vits.py index 7300ea7f798e..bae8d44e0d13 100644 --- a/src/transformers/models/vits/modeling_vits.py +++ b/src/transformers/models/vits/modeling_vits.py @@ -875,7 +875,6 @@ def 
forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor]]: """Input shape: Batch x Time x Channel""" @@ -922,15 +921,6 @@ def forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. diff --git a/src/transformers/models/vivit/modeling_vivit.py b/src/transformers/models/vivit/modeling_vivit.py index a18bcc49bf5c..7170d3ff7de3 100755 --- a/src/transformers/models/vivit/modeling_vivit.py +++ b/src/transformers/models/vivit/modeling_vivit.py @@ -209,9 +209,7 @@ def __init__(self, config: VivitConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -228,7 +226,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -284,8 +282,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -334,9 +332,9 @@ def __init__(self, config: VivitConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -358,10 +356,9 @@ def __init__(self, config: VivitConfig): self.layer = nn.ModuleList([VivitLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> BaseModelOutput: + def forward(self, hidden_states: torch.Tensor) -> 
BaseModelOutput: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) return BaseModelOutput(last_hidden_state=hidden_states) @@ -453,7 +450,6 @@ def _prune_heads(self, heads_to_prune): def forward( self, pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: @@ -535,10 +531,8 @@ def forward( if pixel_values is None: raise ValueError("You have to specify pixel_values") - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding) - encoder_outputs: BaseModelOutput = self.encoder(embedding_output, head_mask=head_mask) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) pooled_output = self.pooler(sequence_output) if self.pooler is not None else None @@ -578,7 +572,6 @@ def __init__(self, config: VivitConfig): def forward( self, pixel_values: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, interpolate_pos_encoding: bool = False, **kwargs: Unpack[TransformersKwargs], @@ -667,7 +660,7 @@ def forward( ```""" outputs: BaseModelOutput = self.vivit( - pixel_values, head_mask=head_mask, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs + pixel_values, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs ) sequence_output = outputs.last_hidden_state logits = self.classifier(sequence_output[:, 0, :]) diff --git a/src/transformers/models/vjepa2/modeling_vjepa2.py b/src/transformers/models/vjepa2/modeling_vjepa2.py index 714c3d92d827..0b309610aec7 100644 --- a/src/transformers/models/vjepa2/modeling_vjepa2.py +++ b/src/transformers/models/vjepa2/modeling_vjepa2.py @@ -299,7 +299,6 @@ def forward( hidden_states, position_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, - head_mask: Optional[torch.Tensor] = None, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: batch_size, seq_length, _ = hidden_states.shape query_layer = ( @@ -331,7 +330,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -419,7 +418,6 @@ def forward( self, hidden_states: torch.Tensor, position_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, ...]: # Self-Attention @@ -428,7 +426,6 @@ def forward( self_attention_outputs = self.attention( hidden_states, position_mask=position_mask, # position mask for context/target selection - head_mask=head_mask, # head mask is applied at F.scaled_dot_product_attention output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] @@ -476,7 +473,6 @@ def __init__(self, config: VJEPA2Config): def forward( self, pixel_values_videos: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, **kwargs, @@ -490,8 +486,7 @@ def forward( if output_hidden_states: all_hidden_states = all_hidden_states + 
(hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module(hidden_states, None, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, None, output_attentions) hidden_states = layer_outputs[0] if output_attentions: @@ -625,7 +620,7 @@ def __init__(self, config: VJEPA2Config): self.layernorm = nn.LayerNorm(config.pred_hidden_size, eps=config.layer_norm_eps) self.proj = nn.Linear(config.pred_hidden_size, config.hidden_size, bias=True) - def sort_tokens(self, hidden_states, position_masks, argsort, head_mask=None): + def sort_tokens(self, hidden_states, position_masks, argsort): # gather position masks argsort = argsort.to(position_masks.device) position_masks = torch.gather(position_masks, dim=1, index=argsort) @@ -635,28 +630,7 @@ def sort_tokens(self, hidden_states, position_masks, argsort, head_mask=None): hidden_states_argsort = argsort.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1)) hidden_states = torch.gather(hidden_states, dim=1, index=hidden_states_argsort) - # gather head mask - if head_mask is not None and head_mask[0] is not None: - argsort = argsort.to(head_mask.device) - head_mask = head_mask.permute(1, 0, 2, 3, 4) - argsort_4d = ( - argsort.unsqueeze(1) - .unsqueeze(1) - .expand(-1, head_mask.size(1), head_mask.size(2), -1) - .unsqueeze(-1) - .expand(-1, -1, -1, -1, head_mask.size(-1)) - ) - head_mask = torch.gather(head_mask, dim=3, index=argsort_4d) - argsort_5d = ( - argsort.unsqueeze(1) - .unsqueeze(1) - .unsqueeze(1) - .expand(-1, head_mask.size(1), head_mask.size(2), head_mask.size(3), -1) - ) - head_mask = torch.gather(head_mask, dim=4, index=argsort_5d) - head_mask = head_mask.permute(1, 0, 2, 3, 4) - - return hidden_states, position_masks, head_mask + return hidden_states, position_masks def unsort_tokens(self, hidden_states, argsort): argsort = argsort.to(hidden_states.device) @@ -671,7 +645,6 @@ def forward( encoder_hidden_states: torch.Tensor, context_mask: list[torch.Tensor], target_mask: list[torch.Tensor], - head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, **kwargs, @@ -687,14 +660,13 @@ def forward( # Put tokens in sorted order argsort = torch.argsort(position_masks, dim=1) # [B, N] - hidden_states, position_masks, head_mask = self.sort_tokens(hidden_states, position_masks, argsort, head_mask) + hidden_states, position_masks = self.sort_tokens(hidden_states, position_masks, argsort) for i, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - layer_outputs = layer_module(hidden_states, position_masks, layer_head_mask, output_attentions) + layer_outputs = layer_module(hidden_states, position_masks, output_attentions) hidden_states = layer_outputs[0] if output_attentions: @@ -1005,21 +977,6 @@ def trunc_normal_f32_(weight, std): module.weight.data.fill_(1.0) -def _convert_head_mask_to_5d(head_mask, num_hidden_layers): - """ - Inputs: - - head_mask: bsz x seq_length x seq_length | None - Returns - - [num_hidden_layers x batch x num_heads x seq_length x seq_length] | [num_hidden_layers] - """ - if head_mask is not None: - head_mask = head_mask.unsqueeze(1).unsqueeze(0) - head_mask = head_mask.expand(num_hidden_layers, -1, -1, -1, -1) - else: - head_mask = [None] * num_hidden_layers - return head_mask - - @auto_docstring class VJEPA2Model(VJEPA2PreTrainedModel): def __init__(self, 
config: VJEPA2Config): @@ -1040,9 +997,7 @@ def get_input_embeddings(self) -> VJEPA2PatchEmbeddings3D: def forward( self, pixel_values_videos: torch.Tensor, - context_head_mask: Optional[torch.Tensor] = None, context_mask: Optional[list[torch.Tensor]] = None, - target_head_mask: Optional[torch.Tensor] = None, target_mask: Optional[list[torch.Tensor]] = None, skip_predictor: bool = False, output_attentions: Optional[bool] = None, @@ -1050,14 +1005,10 @@ def forward( **kwargs, ) -> VJEPA2WithMaskedInputModelOutput: r""" - context_head_mask (`torch.Tensor` with shape `[num_heads]` or `[num_hidden_layers x num_heads]`, *optional*): - The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard) for the context. context_mask (`torch.Tensor` with shape `[batch_size, patch_size, 1]`, *optional*): The mask position ids indicating which encoder output patches are going to be exposed to the predictor. By default, this mask is created as torch.arange(N).unsqueeze(0).repeat(B,1), indicating full context available to the predictor. - target_head_mask (`torch.Tensor` with shape `[num_heads]` or `[num_hidden_layers x num_heads]`, *optional*): - The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard) for the target. target_mask (`torch.Tensor` with shape `[batch_size, patch_size, 1]`, *optional*): The mask position ids indicating which encoder output patches are going to be used as a prediction target for the predictor. By default, this mask is created as torch.arange(N).unsqueeze(0).repeat(B,1), indicating @@ -1073,13 +1024,8 @@ def forward( if pixel_values_videos is None: raise ValueError("You have to specify pixel_values_videos") - # Prepare head mask if needed - context_head_mask = _convert_head_mask_to_5d(context_head_mask, self.config.num_hidden_layers) - target_head_mask = _convert_head_mask_to_5d(target_head_mask, self.config.pred_num_hidden_layers) - encoder_outputs: BaseModelOutput = self.encoder( pixel_values_videos=pixel_values_videos, - head_mask=context_head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) @@ -1096,7 +1042,6 @@ def forward( encoder_hidden_states=sequence_output, context_mask=context_mask, target_mask=target_mask, - head_mask=target_head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) diff --git a/src/transformers/models/voxtral/modeling_voxtral.py b/src/transformers/models/voxtral/modeling_voxtral.py index 15ef9e541c0b..da2e79ad62f6 100644 --- a/src/transformers/models/voxtral/modeling_voxtral.py +++ b/src/transformers/models/voxtral/modeling_voxtral.py @@ -50,7 +50,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -62,9 +61,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -122,7 +118,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, **kwargs, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: @@ -152,7 
+147,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=1.0, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -185,7 +179,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ @@ -193,8 +186,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -204,7 +195,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -359,7 +349,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask=attention_mask, - layer_head_mask=None, ) hidden_states = layer_outputs[0] diff --git a/src/transformers/models/voxtral/modular_voxtral.py b/src/transformers/models/voxtral/modular_voxtral.py index c02e8ec58864..7e7da130daac 100644 --- a/src/transformers/models/voxtral/modular_voxtral.py +++ b/src/transformers/models/voxtral/modular_voxtral.py @@ -103,7 +103,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, attention_mask=attention_mask, - layer_head_mask=None, ) hidden_states = layer_outputs[0] diff --git a/src/transformers/models/wav2vec2/modeling_wav2vec2.py b/src/transformers/models/wav2vec2/modeling_wav2vec2.py index 00f31596e688..c517c26288c1 100755 --- a/src/transformers/models/wav2vec2/modeling_wav2vec2.py +++ b/src/transformers/models/wav2vec2/modeling_wav2vec2.py @@ -467,7 +467,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -479,9 +478,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -528,7 +524,6 @@ def forward( hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, # TODO: we need a refactor so that the different attention modules can get their specific kwargs # ATM, we have mixed things encoder, decoder, and encoder-decoder attn @@ -567,7 +562,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=self.scaling, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -763,8 +757,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # 
output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -861,8 +853,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py index 9ae3b33ebc6f..acbe3fa77b17 100644 --- a/src/transformers/models/whisper/modeling_whisper.py +++ b/src/transformers/models/whisper/modeling_whisper.py @@ -219,7 +219,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, **kwargs, ): if scaling is None: @@ -231,9 +230,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if head_mask is not None: - attn_weights = attn_weights * head_mask.view(1, -1, 1, 1) - attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -291,7 +287,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, # TODO: we need a refactor so that the different attention modules can get their specific kwargs @@ -358,7 +353,6 @@ def forward( dropout=0.0 if not self.training else self.dropout, scaling=1.0, output_attentions=output_attentions, - head_mask=layer_head_mask, **kwargs, ) @@ -392,7 +386,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, - layer_head_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ @@ -400,8 +393,6 @@ def forward( hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
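> [!NOTE]
> With `head_mask` gone from these forward signatures, per-head ablation is no longer available as a call-time argument. The ViT-family attention wrappers earlier in this patch still keep `prune_heads`, so heads can instead be removed structurally through the public [`PreTrainedModel.prune_heads`] API. A minimal sketch, assuming the public `google/vit-base-patch16-224-in21k` checkpoint:

```py
from transformers import ViTModel

model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

# prune_heads maps layer index -> head indices to drop; here we remove
# heads 0 and 2 of layer 0 and head 5 of layer 3. Pruning slices the
# q/k/v projection weights, so it is permanent for this model instance.
model.prune_heads({0: [0, 2], 3: [5]})
```

Unlike the removed `head_mask`, pruning shrinks the projection matrices rather than zeroing attention probabilities at runtime, so it also reduces compute. Note this only applies to models that implement `_prune_heads` (the ViT family above does; the speech seq2seq models in this patch do not).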
@@ -411,7 +402,6 @@ def forward( hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, ) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) @@ -471,8 +461,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[EncoderDecoderCache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -487,10 +475,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -504,7 +488,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -520,7 +503,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, ) @@ -633,7 +615,6 @@ def forward( self, input_features, attention_mask=None, - head_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, @@ -650,11 +631,6 @@ def forward( attention_mask (`torch.Tensor`)`, *optional*): Whisper does not support masking of the `input_features`, this argument is preserved for compatibility, but it is not used. By default the silence in the input log mel spectrogram are ignored. - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. @@ -688,12 +664,6 @@ def forward( encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - assert head_mask.size()[0] == (len(self.layers)), ( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." 
- ) - for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) @@ -710,7 +680,6 @@ def forward( layer_outputs = encoder_layer( hidden_states, None, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), output_attentions=output_attentions, ) @@ -767,8 +736,6 @@ def forward( input_ids=None, attention_mask=None, encoder_hidden_states=None, - head_mask=None, - cross_attn_head_mask=None, past_key_values=None, inputs_embeds=None, position_ids=None, @@ -798,19 +765,6 @@ def forward( encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in encoder to avoid performing cross-attention - on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - past_key_values (`EncoderDecoderCache` or `tuple(tuple(torch.FloatTensor))`, *optional*): It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). @@ -910,13 +864,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - assert attn_mask.size()[0] == (len(self.layers)), ( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -930,8 +877,6 @@ def forward( hidden_states, attention_mask=causal_mask, encoder_hidden_states=encoder_hidden_states, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values if use_cache else None, output_attentions=output_attentions, use_cache=use_cache, @@ -1042,9 +987,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, decoder_inputs_embeds: Optional[tuple[torch.FloatTensor]] = None, @@ -1074,11 +1016,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_whisper._prepare_decoder_attention_mask`] and modify to your needs. 
See diagram 1 in [the BART paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. decoder_position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. @@ -1113,7 +1050,6 @@ def forward( encoder_outputs = self.encoder( input_features, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -1131,8 +1067,6 @@ def forward( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, position_ids=decoder_position_ids, @@ -1205,9 +1139,6 @@ def forward( attention_mask: Optional[torch.LongTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[Cache] = None, decoder_inputs_embeds: Optional[tuple[torch.FloatTensor]] = None, @@ -1238,11 +1169,6 @@ def forward( If you want to change padding behavior, you should read [`modeling_whisper._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the BART paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. decoder_position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. @@ -1292,9 +1218,6 @@ def forward( decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_outputs, decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, decoder_inputs_embeds=decoder_inputs_embeds, decoder_position_ids=decoder_position_ids, @@ -1394,8 +1317,6 @@ def forward( input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[torch.FloatTensor]] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, @@ -1409,10 +1330,6 @@ def forward( encoder_outputs (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention if the model is configured as a decoder. - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -1458,8 +1375,6 @@ def forward( input_ids=input_ids, attention_mask=attention_mask, encoder_hidden_states=encoder_outputs, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, @@ -1528,7 +1443,6 @@ def set_input_embeddings(self, value: nn.Module): def forward( self, input_features: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, @@ -1582,7 +1496,6 @@ def forward( if encoder_outputs is None: encoder_outputs = self.encoder( input_features, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, diff --git a/src/transformers/models/xglm/modeling_xglm.py b/src/transformers/models/xglm/modeling_xglm.py index 0f863f3f274f..d9102a0276a4 100755 --- a/src/transformers/models/xglm/modeling_xglm.py +++ b/src/transformers/models/xglm/modeling_xglm.py @@ -140,7 +140,6 @@ def forward( key_value_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: @@ -221,15 +220,6 @@ def forward( else: attn_weights = nn.functional.softmax(attn_weights, dim=-1) - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - if output_attentions: # this operation is a bit awkward, but it's required to # make sure that attn_weights keeps its gradient. @@ -301,8 +291,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, @@ -317,10 +305,6 @@ def forward( cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. 
- layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. past_key_values (`Cache`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under @@ -334,7 +318,6 @@ def forward( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, - layer_head_mask=layer_head_mask, output_attentions=output_attentions, cache_position=cache_position, ) @@ -351,7 +334,6 @@ def forward( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, past_key_values=past_key_values, output_attentions=output_attentions, cache_position=cache_position, @@ -436,8 +418,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, @@ -458,11 +438,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - cross_attn_head_mask (`torch.Tensor` of shape `(num_layers, attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( @@ -539,14 +514,6 @@ def forward( all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != len(self.layers): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: @@ -561,8 +528,6 @@ def forward( attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=(cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None), past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, @@ -623,8 +588,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, @@ -647,11 +610,6 @@ def forward( - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - cross_attn_head_mask (`torch.Tensor` of shape `(num_layers, attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored @@ -671,8 +629,6 @@ def forward( position_ids=position_ids, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, diff --git a/src/transformers/models/xlm/modeling_xlm.py b/src/transformers/models/xlm/modeling_xlm.py index fafdd770ce12..6fd21d0490de 100755 --- a/src/transformers/models/xlm/modeling_xlm.py +++ b/src/transformers/models/xlm/modeling_xlm.py @@ -530,7 +530,6 @@ def forward( mask, kv=None, cache=None, - head_mask=None, output_attentions=False, cache_position=None, ): @@ -582,10 +581,6 @@ def forward( weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) # (bs, n_heads, qlen, klen) weights = nn.functional.dropout(weights, p=self.dropout, training=self.training) # (bs, n_heads, qlen, klen) - # Mask heads if we want to - if head_mask is not None: - weights = weights * head_mask - context = torch.matmul(weights, v) # (bs, n_heads, qlen, head_dim) context = context.transpose(1, 2).contiguous().view(bs, -1, self.n_heads * self.head_dim) @@ -785,7 +780,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -852,9 +846,6 @@ def forward( if langs is not None: assert langs.size() == (bs, slen) # (slen, bs) - # Prepare head mask if needed - head_mask = self.get_head_mask(head_mask, self.config.n_layers) - # do not recompute cached elements if cache is not None and input_ids is not None: _slen = slen - cache.get_seq_length() @@ -890,7 +881,6 @@ def 
forward( tensor, attn_mask, cache=cache, - head_mask=head_mask[i], output_attentions=output_attentions, cache_position=cache_position, ) @@ -1016,7 +1006,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1056,7 +1045,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1107,7 +1095,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1145,7 +1132,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1216,7 +1202,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1251,7 +1236,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1316,7 +1300,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1380,7 +1363,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1437,7 +1419,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1473,7 +1454,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1524,7 +1504,6 @@ def forward( position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[dict[str, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1605,7 +1584,6 @@ def forward( position_ids=position_ids, lengths=lengths, cache=cache, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff 
--git a/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py b/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py index ff9fad71d20f..00bbab96668d 100644 --- a/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py +++ b/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py @@ -65,7 +65,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -106,9 +105,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -150,7 +146,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -194,7 +189,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -238,7 +232,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -286,7 +279,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -344,7 +336,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -356,7 +347,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -419,7 +409,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -429,7 +418,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -446,7 +434,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -645,7 +632,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, 
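The eager attention paths deleted above (XLM and XLM-RoBERTa alike) applied the mask as a plain elementwise product on the softmaxed attention probabilities, so a zeroed head contributed nothing to the output. A self-contained sketch of that product, with illustrative shapes:

```python
import torch
import torch.nn.functional as F

bs, n_heads, qlen, klen = 2, 4, 5, 5
scores = torch.randn(bs, n_heads, qlen, klen)
weights = F.softmax(scores, dim=-1)  # (bs, n_heads, qlen, klen)

head_mask = torch.ones(n_heads)
head_mask[1] = 0.0  # silence head 1
# in the real models the mask arrived pre-broadcast via get_head_mask;
# here we reshape it explicitly so it broadcasts over batch and sequence dims
weights = weights * head_mask.view(1, n_heads, 1, 1)
```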
encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -654,12 +640,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -733,7 +716,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -790,17 +772,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -880,8 +854,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -905,8 +877,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
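The "Prepare head mask if needed" block deleted just above is where `get_head_mask` did its broadcasting, per the removed comment: an input of shape `[num_heads]` or `[num_hidden_layers, num_heads]` was expanded so it could multiply per-layer attention probabilities. A sketch of that expansion (assumed equivalent in spirit to the helper; not its exact implementation):

```python
import torch

num_hidden_layers, num_heads = 12, 8

head_mask = torch.ones(num_heads)  # 1-D case: same mask reused for every layer
if head_mask.dim() == 1:
    head_mask = head_mask[None, None, :, None, None].expand(num_hidden_layers, -1, -1, -1, -1)
elif head_mask.dim() == 2:  # per-layer masks
    head_mask = head_mask[:, None, :, None, None]

# one (1, num_heads, 1, 1) slice per layer, broadcastable against
# attention probabilities of shape (bsz, num_heads, seq_len, seq_len)
assert head_mask.shape == (num_hidden_layers, 1, num_heads, 1, 1)
```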
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -963,7 +933,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1011,7 +980,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1078,7 +1046,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1105,7 +1072,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1178,7 +1144,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1203,7 +1168,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1265,7 +1229,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -1317,7 +1280,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -1367,7 +1329,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1390,7 +1351,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1436,7 +1396,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: 
Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1458,7 +1417,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/xlm_roberta/modular_xlm_roberta.py b/src/transformers/models/xlm_roberta/modular_xlm_roberta.py index a00f1a8cd6e8..0df2c5e6a5ac 100644 --- a/src/transformers/models/xlm_roberta/modular_xlm_roberta.py +++ b/src/transformers/models/xlm_roberta/modular_xlm_roberta.py @@ -73,7 +73,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -121,7 +120,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -171,7 +169,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -198,7 +195,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -244,7 +240,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -269,7 +264,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -327,7 +321,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -379,7 +372,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -421,7 +413,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -444,7 +435,6 @@ def forward( attention_mask=attention_mask, 
token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -486,7 +476,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -508,7 +497,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py b/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py index 522d63aad884..7cb1af8a2e68 100644 --- a/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py +++ b/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py @@ -175,7 +175,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -216,9 +215,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -260,7 +256,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -304,7 +299,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -348,7 +342,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -396,7 +389,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -455,7 +447,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[tuple[tuple[torch.FloatTensor]]] = None, @@ -468,7 +459,6 @@ def forward( intermediate, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -529,7 +519,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: 
Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, @@ -539,7 +528,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -556,7 +544,6 @@ def forward( cross_attention_output, _ = self.crossattention( self_attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -587,7 +574,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -596,12 +582,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -721,7 +704,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -778,17 +760,9 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -868,8 +842,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -893,8 +865,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. 
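For context on the surviving SDPA branch here: only the stale head-mask comment goes away; the `[bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]` expansion stays. A sketch of what that expansion does, assumed equivalent in spirit to `_prepare_4d_attention_mask_for_sdpa` (the real helper also handles dtype edge cases and fast paths):

```python
import torch

def expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: int) -> torch.Tensor:
    bsz, src_len = mask.shape
    expanded = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
    # kept positions become 0.0 (additive no-op), masked ones the most negative value
    return (1.0 - expanded) * torch.finfo(dtype).min

mask = torch.tensor([[1, 1, 0], [1, 0, 0]])
print(expand_mask(mask, torch.float32, tgt_len=3).shape)  # torch.Size([2, 1, 3, 3])
```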
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -1004,7 +974,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1043,7 +1012,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1109,7 +1077,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1127,7 +1094,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1175,7 +1141,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1191,7 +1156,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1250,7 +1214,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple, MultipleChoiceModelOutput]: @@ -1295,7 +1258,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -1342,7 +1304,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1356,7 +1317,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1408,7 +1368,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, 
end_positions: Optional[torch.LongTensor] = None, @@ -1419,7 +1378,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/xlm_roberta_xl/modular_xlm_roberta_xl.py b/src/transformers/models/xlm_roberta_xl/modular_xlm_roberta_xl.py index d4937d424d31..a8fdf8433e29 100644 --- a/src/transformers/models/xlm_roberta_xl/modular_xlm_roberta_xl.py +++ b/src/transformers/models/xlm_roberta_xl/modular_xlm_roberta_xl.py @@ -146,7 +146,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[tuple[tuple[torch.FloatTensor]]] = None, @@ -159,7 +158,6 @@ def forward( intermediate, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -202,7 +200,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Cache] = None, @@ -211,12 +208,9 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, attention_mask, - layer_head_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_value=past_key_values, @@ -311,7 +305,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -350,7 +343,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -416,7 +408,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -434,7 +425,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -482,7 +472,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, 
inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -498,7 +487,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -557,7 +545,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple, MultipleChoiceModelOutput]: @@ -602,7 +589,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -649,7 +635,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -663,7 +648,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -715,7 +699,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -726,7 +709,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/xlnet/modeling_xlnet.py b/src/transformers/models/xlnet/modeling_xlnet.py index c5ede2870711..48fb1b41a61f 100755 --- a/src/transformers/models/xlnet/modeling_xlnet.py +++ b/src/transformers/models/xlnet/modeling_xlnet.py @@ -104,7 +104,6 @@ def rel_attn_core( k_head_r, seg_mat=None, attn_mask=None, - head_mask=None, output_attentions=False, ): """Core relative positional attention operations.""" @@ -136,10 +135,6 @@ def rel_attn_core( attn_prob = nn.functional.softmax(attn_score, dim=3) attn_prob = self.dropout(attn_prob) - # Mask heads if we want to - if head_mask is not None: - attn_prob = attn_prob * torch.einsum("ijbn->bnij", head_mask) - # attention output attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) @@ -170,7 +165,6 @@ def forward( seg_mat, mems=None, target_mapping=None, - head_mask=None, output_attentions=False, ): if g is not None: @@ -202,7 +196,6 @@ def forward( k_head_r, seg_mat=seg_mat, attn_mask=attn_mask_h, - head_mask=head_mask, output_attentions=output_attentions, ) @@ -226,7 +219,6 @@ def forward( k_head_r, seg_mat=seg_mat, attn_mask=attn_mask_g, - head_mask=head_mask, output_attentions=output_attentions, ) @@ -242,7 +234,6 @@ def forward( k_head_r, seg_mat=seg_mat, attn_mask=attn_mask_g, - head_mask=head_mask, output_attentions=output_attentions, ) @@ -279,7 +270,6 @@ def forward( k_head_r, seg_mat=seg_mat, attn_mask=attn_mask_h, - head_mask=head_mask, 
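XLNet's variant, deleted in `rel_attn_core` above, was the one place the mask did not arrive in `(bsz, n_head, ...)` layout: it was kept in a `(qlen, klen, bsz, n_head)`-broadcastable layout and permuted with `einsum("ijbn->bnij", ...)` to match the attention probabilities before the product. An illustrative sketch:

```python
import torch

bsz, n_head, qlen, klen = 2, 4, 3, 3
attn_prob = torch.softmax(torch.randn(bsz, n_head, qlen, klen), dim=3)

# broadcastable stand-in for the (qlen, klen, bsz, n_head) per-layer mask
layer_mask = torch.ones(1, 1, 1, n_head)
layer_mask[..., 2] = 0.0  # drop head 2
attn_prob = attn_prob * torch.einsum("ijbn->bnij", layer_mask)  # -> (1, n_head, 1, 1)
```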
output_attentions=output_attentions, ) @@ -338,7 +328,6 @@ def forward( seg_mat, mems=None, target_mapping=None, - head_mask=None, output_attentions=False, ): outputs = self.rel_attn( @@ -350,7 +339,6 @@ def forward( seg_mat, mems=mems, target_mapping=target_mapping, - head_mask=head_mask, output_attentions=output_attentions, ) output_h, output_g = outputs[:2] @@ -1015,7 +1003,6 @@ def forward( target_mapping: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, input_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, use_mems: Optional[bool] = None, output_attentions: Optional[bool] = None, @@ -1182,23 +1169,6 @@ def forward( pos_emb = pos_emb.to(output_h.device) pos_emb = self.dropout(pos_emb) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] (a head_mask for each layer) - # and head_mask is converted to shape [num_hidden_layers x qlen x klen x bsz x n_head] - if head_mask is not None: - if head_mask.dim() == 1: - head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(0).unsqueeze(0) - head_mask = head_mask.expand(self.n_layer, -1, -1, -1, -1) - elif head_mask.dim() == 2: - head_mask = head_mask.unsqueeze(1).unsqueeze(1).unsqueeze(1) - head_mask = head_mask.to( - dtype=next(self.parameters()).dtype - ) # switch to float if need + fp16 compatibility - else: - head_mask = [None] * self.n_layer - new_mems = () if mems is None: mems = [None] * len(self.layer) @@ -1221,7 +1191,6 @@ def forward( seg_mat=seg_mat, mems=mems[i], target_mapping=target_mapping, - head_mask=head_mask[i], output_attentions=output_attentions, ) output_h, output_g = outputs[:2] @@ -1351,7 +1320,6 @@ def forward( target_mapping: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, input_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_mems: Optional[bool] = None, @@ -1468,7 +1436,6 @@ def forward( target_mapping=target_mapping, token_type_ids=token_type_ids, input_mask=input_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_mems=use_mems, output_attentions=output_attentions, @@ -1536,7 +1503,6 @@ def forward( target_mapping: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, input_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_mems: Optional[bool] = None, @@ -1593,7 +1559,6 @@ def forward( target_mapping=target_mapping, token_type_ids=token_type_ids, input_mask=input_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_mems=use_mems, output_attentions=output_attentions, @@ -1664,7 +1629,6 @@ def forward( target_mapping: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, input_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_mems: Optional[bool] = None, @@ -1722,7 +1686,6 @@ def forward( target_mapping=target_mapping, token_type_ids=token_type_ids, input_mask=input_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_mems=use_mems, output_attentions=output_attentions, @@ -1774,7 +1737,6 @@ def 
forward( mems: Optional[torch.Tensor] = None, perm_mask: Optional[torch.Tensor] = None, target_mapping: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, use_mems: Optional[bool] = None, @@ -1860,7 +1822,6 @@ def forward( mems=mems, perm_mask=perm_mask, target_mapping=target_mapping, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, use_mems=use_mems, output_attentions=output_attentions, @@ -1920,7 +1881,6 @@ def forward( target_mapping: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, input_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1974,7 +1934,6 @@ def forward( target_mapping=target_mapping, token_type_ids=token_type_ids, input_mask=input_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_mems=use_mems, output_attentions=output_attentions, @@ -2046,7 +2005,6 @@ def forward( target_mapping: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, input_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -2129,7 +2087,6 @@ def forward( target_mapping=target_mapping, token_type_ids=token_type_ids, input_mask=input_mask, - head_mask=head_mask, inputs_embeds=inputs_embeds, use_mems=use_mems, output_attentions=output_attentions, diff --git a/src/transformers/models/xmod/modeling_xmod.py b/src/transformers/models/xmod/modeling_xmod.py index eaf1362d3664..5536eea30452 100644 --- a/src/transformers/models/xmod/modeling_xmod.py +++ b/src/transformers/models/xmod/modeling_xmod.py @@ -168,7 +168,6 @@ def eager_attention_forward( attention_mask: Optional[torch.Tensor], scaling: Optional[float] = None, dropout: float = 0.0, - head_mask: Optional[torch.Tensor] = None, use_cache: Optional[bool] = None, **kwargs: Unpack[TransformersKwargs], ): @@ -209,9 +208,6 @@ def eager_attention_forward( attn_weights = nn.functional.softmax(attn_weights, dim=-1) attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) - if head_mask is not None: - attn_weights = attn_weights * head_mask - attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() @@ -254,7 +250,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[Cache] = None, cache_position: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -298,7 +293,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -343,7 +337,6 @@ def forward( hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[EncoderDecoderCache] = None, **kwargs: Unpack[TransformersKwargs], ) -> tuple[torch.Tensor]: @@ -391,7 +384,6 @@ def forward( attention_mask, dropout=0.0 if not self.training else self.dropout.p, 
scaling=self.scaling, - head_mask=head_mask, # only for relevant for non-absolute positional embeddings use_cache=past_key_value is not None, **kwargs, @@ -452,7 +444,6 @@ def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[tuple[tuple[torch.FloatTensor]]] = None, @@ -468,7 +459,6 @@ def forward( hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, - head_mask=head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -592,7 +582,6 @@ def forward( hidden_states: torch.Tensor, lang_ids: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_value: Optional[tuple[tuple[torch.FloatTensor]]] = None, @@ -602,7 +591,6 @@ def forward( self_attention_output, _ = self.attention( hidden_states, attention_mask, - head_mask, past_key_value=past_key_value, cache_position=cache_position, **kwargs, @@ -619,7 +607,6 @@ def forward( cross_attention_output, _ = self.crossattention( attention_output, None, # attention_mask - head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value=past_key_value, @@ -660,7 +647,6 @@ def forward( hidden_states: torch.Tensor, lang_ids: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, @@ -669,13 +655,10 @@ def forward( **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module( hidden_states, lang_ids, attention_mask, - layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, @@ -829,7 +812,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, @@ -900,18 +882,10 @@ def forward( past_key_values=past_key_values, ) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - encoder_outputs = self.encoder( embedding_output, lang_ids=lang_ids, attention_mask=attention_mask, - head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, @@ -993,8 +967,6 @@ def _update_full_mask( if "flash" in self.config._attn_implementation: attention_mask = attention_mask if 0 in attention_mask else None elif 
self.config._attn_implementation == "sdpa": - # output_attentions=True & head_mask can not be supported when using SDPA, fall back to - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask_for_sdpa(attention_mask, inputs_embeds.dtype) elif self.config._attn_implementation == "flex_attention": @@ -1019,8 +991,6 @@ def _update_cross_attn_mask( if "flash" in self.config._attn_implementation: encoder_attention_mask = encoder_attention_mask if 0 in encoder_attention_mask else None elif self.config._attn_implementation == "sdpa": - # output_attentions=True & cross_attn_head_mask can not be supported when using SDPA, and we fall back on - # the manual implementation that requires a 4D causal mask in all cases. # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa( encoder_attention_mask, @@ -1081,7 +1051,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1126,7 +1095,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1196,7 +1164,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, @@ -1218,7 +1185,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, @@ -1301,7 +1267,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1321,7 +1286,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1383,7 +1347,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]: @@ -1439,7 +1402,6 @@ def forward( position_ids=flat_position_ids, token_type_ids=flat_token_type_ids, attention_mask=flat_attention_mask, - head_mask=head_mask, inputs_embeds=flat_inputs_embeds, return_dict=True, **kwargs, @@ -1489,7 +1451,6 @@ def forward( attention_mask: 
Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs], @@ -1507,7 +1468,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, @@ -1576,7 +1536,6 @@ def forward( attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, @@ -1593,7 +1552,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, return_dict=True, **kwargs, diff --git a/src/transformers/models/yolos/modeling_yolos.py b/src/transformers/models/yolos/modeling_yolos.py index 7677dcae64a7..4512a3f698ef 100755 --- a/src/transformers/models/yolos/modeling_yolos.py +++ b/src/transformers/models/yolos/modeling_yolos.py @@ -264,9 +264,7 @@ def __init__(self, config: YolosConfig): self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) - def forward( - self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None - ) -> tuple[torch.Tensor, torch.Tensor]: + def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: batch_size = hidden_states.shape[0] new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size @@ -283,7 +281,7 @@ def forward( query_layer, key_layer, value_layer, - head_mask, + None, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, @@ -339,8 +337,8 @@ def prune_heads(self, heads: set[int]): self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: - self_attn_output, _ = self.attention(hidden_states, head_mask) + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: + self_attn_output, _ = self.attention(hidden_states) output = self.output(self_attn_output, hidden_states) return output @@ -389,9 +387,9 @@ def __init__(self, config: YolosConfig): self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - def forward(self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None) -> torch.Tensor: + def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states_norm = self.layernorm_before(hidden_states) - attention_output = self.attention(hidden_states_norm, head_mask) + attention_output = self.attention(hidden_states_norm) # first residual connection hidden_states = attention_output + hidden_states @@ -436,14 +434,12 @@ def forward( hidden_states: torch.Tensor, height: int, width: int, - head_mask: Optional[torch.Tensor] = None, ) -> BaseModelOutput: if 
self.config.use_mid_position_embeddings: interpolated_mid_position_embeddings = self.interpolation(self.mid_position_embeddings, (height, width)) for i, layer_module in enumerate(self.layer): - layer_head_mask = head_mask[i] if head_mask is not None else None - hidden_states = layer_module(hidden_states, layer_head_mask) + hidden_states = layer_module(hidden_states) if self.config.use_mid_position_embeddings: if i < (self.config.num_hidden_layers - 1): @@ -518,25 +514,15 @@ def _prune_heads(self, heads_to_prune: dict[int, list[int]]) -> None: def forward( self, pixel_values: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, **kwargs: Unpack[TransformersKwargs], ) -> BaseModelOutputWithPooling: if pixel_values is None: raise ValueError("You have to specify pixel_values") - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings(pixel_values) height, width = pixel_values.shape[-2:] - encoder_outputs: BaseModelOutput = self.encoder( - embedding_output, height=height, width=width, head_mask=head_mask - ) + encoder_outputs: BaseModelOutput = self.encoder(embedding_output, height=height, width=width) sequence_output = encoder_outputs.last_hidden_state sequence_output = self.layernorm(sequence_output) pooled_output = self.pooler(sequence_output) if self.pooler is not None else None diff --git a/src/transformers/models/yoso/modeling_yoso.py b/src/transformers/models/yoso/modeling_yoso.py index b1d8e5e752a1..f830936cc7b7 100644 --- a/src/transformers/models/yoso/modeling_yoso.py +++ b/src/transformers/models/yoso/modeling_yoso.py @@ -551,7 +551,6 @@ def forward( self, hidden_states, attention_mask=None, - head_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, @@ -691,7 +690,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, @@ -727,13 +725,6 @@ def forward( else: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, @@ -743,7 +734,6 @@ def forward( encoder_outputs = self.encoder( embedding_output, attention_mask=attention_mask, - head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, @@ -788,7 +778,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: 
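With runtime head masking gone, permanent structural pruning, which this diff keeps (e.g. `prune_heads` on the YOLOS attention and `_prune_heads` on the model), remains the way to disable specific heads. A hedged sketch; the checkpoint id and head indices are illustrative:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("hustvl/yolos-tiny")
# {layer_index: [head indices to remove]} -- pruning physically shrinks the
# q/k/v projections, unlike the removed head_mask which only zeroed weights
model.prune_heads({0: [0], 2: [1]})
```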
Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -808,7 +797,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -879,7 +867,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -899,7 +886,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -962,7 +948,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1017,7 +1002,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1069,7 +1053,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, @@ -1087,7 +1070,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, @@ -1146,7 +1128,6 @@ def forward( attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, start_positions: Optional[torch.Tensor] = None, end_positions: Optional[torch.Tensor] = None, @@ -1161,7 +1142,6 @@ def forward( attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, - head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, diff --git a/src/transformers/pipelines/image_to_image.py b/src/transformers/pipelines/image_to_image.py index 957284d2ab17..fe344df53279 100644 --- a/src/transformers/pipelines/image_to_image.py +++ b/src/transformers/pipelines/image_to_image.py @@ -85,8 +85,6 @@ def _sanitize_parameters(self, **kwargs): if "timeout" in kwargs: preprocess_params["timeout"] = kwargs["timeout"] - if "head_mask" in kwargs: - forward_params["head_mask"] = kwargs["head_mask"] return preprocess_params, forward_params, postprocess_params diff --git a/src/transformers/utils/auto_docstring.py b/src/transformers/utils/auto_docstring.py index e051057f33e2..9bf44c8bb426 100644 --- a/src/transformers/utils/auto_docstring.py +++ b/src/transformers/utils/auto_docstring.py @@ -287,26 +287,6 @@ class ModelArgs: "shape": "of shape 
`(batch_size, sequence_length)`", } - head_mask = { - "description": """ - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - """, - "shape": "of shape `(num_heads,)` or `(num_layers, num_heads)`", - } - - cross_attn_head_mask = { - "description": """ - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - """, - "shape": "of shape `(num_layers, num_heads)`", - } - decoder_attention_mask = { "description": """ Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to @@ -315,16 +295,6 @@ class ModelArgs: "shape": "of shape `(batch_size, target_sequence_length)`", } - decoder_head_mask = { - "description": """ - Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - """, - "shape": "of shape `(decoder_layers, decoder_attention_heads)`", - } - encoder_hidden_states = { "description": """ Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention diff --git a/tests/causal_lm_tester.py b/tests/causal_lm_tester.py index dc57c708829c..a6f98c2aca3f 100644 --- a/tests/causal_lm_tester.py +++ b/tests/causal_lm_tester.py @@ -287,7 +287,6 @@ def prepare_config_and_inputs_for_common(self): @require_torch class CausalLMModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin): - test_headmasking = False test_pruning = False model_tester_class = None all_model_classes = None diff --git a/tests/generation/test_utils.py b/tests/generation/test_utils.py index ed58403a53d0..dc940740ca36 100644 --- a/tests/generation/test_utils.py +++ b/tests/generation/test_utils.py @@ -123,10 +123,6 @@ def prepare_config_and_inputs_for_generate(self, batch_size=2): # We don't want a few model inputs in our model input dictionary for generation tests input_keys_to_ignore = [ - # we don't want to mask attention heads - "head_mask", - "decoder_head_mask", - "cross_attn_head_mask", # we don't want encoder-decoder models to start from filled decoder ids "decoder_input_ids", "decoder_attention_mask", @@ -2019,7 +2015,7 @@ def test_flash_attention_2_continue_generate_with_position_ids(self): .eval() ) - # Drop all keys except for `input_ids`. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for `input_ids`. Hard to manipulate with multimodals etc dummy_input_ids = inputs_dict["input_ids"] dummy_position_ids = torch.arange(dummy_input_ids.shape[1], device=torch_device) dummy_position_ids = dummy_position_ids.unsqueeze(0).repeat(dummy_input_ids.shape[0], 1) @@ -2113,7 +2109,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids( with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) - # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for the minimal set. 
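For reviewers who want the deleted semantics in one place: per the `auto_docstring` entries removed above, `head_mask` was a binary mask (1 = head kept, 0 = head nullified) of shape `(num_heads,)` or `(num_layers, num_heads)`, which the removed `get_head_mask` calls broadcast to `[num_layers x batch x num_heads x seq_length x seq_length]`. A minimal sketch of that broadcast with made-up sizes (illustrative only, not the transformers API):

```py
import torch

num_layers, num_heads, batch, seq = 2, 4, 1, 8  # hypothetical sizes

# 1 keeps a head, 0 nullifies it
head_mask = torch.ones(num_layers, num_heads)
head_mask[0, 0] = 0.0  # drop the first head of the first layer

# roughly what get_head_mask did: expand so each layer can multiply its
# attention probabilities (bsz x n_heads x N x N) by its own slice
expanded = head_mask[:, None, :, None, None].expand(num_layers, batch, num_heads, seq, seq)
print(expanded.shape)  # torch.Size([2, 1, 4, 8, 8])
```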
diff --git a/tests/causal_lm_tester.py b/tests/causal_lm_tester.py
index dc57c708829c..a6f98c2aca3f 100644
--- a/tests/causal_lm_tester.py
+++ b/tests/causal_lm_tester.py
@@ -287,7 +287,6 @@ def prepare_config_and_inputs_for_common(self):
 
 @require_torch
 class CausalLMModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin):
-    test_headmasking = False
     test_pruning = False
     model_tester_class = None
     all_model_classes = None
diff --git a/tests/generation/test_utils.py b/tests/generation/test_utils.py
index ed58403a53d0..dc940740ca36 100644
--- a/tests/generation/test_utils.py
+++ b/tests/generation/test_utils.py
@@ -123,10 +123,6 @@ def prepare_config_and_inputs_for_generate(self, batch_size=2):
         # We don't want a few model inputs in our model input dictionary for generation tests
         input_keys_to_ignore = [
-            # we don't want to mask attention heads
-            "head_mask",
-            "decoder_head_mask",
-            "cross_attn_head_mask",
             # we don't want encoder-decoder models to start from filled decoder ids
             "decoder_input_ids",
             "decoder_attention_mask",
@@ -2019,7 +2015,7 @@ def test_flash_attention_2_continue_generate_with_position_ids(self):
             .eval()
         )
 
-        # Drop all keys except for `input_ids`. Hard to manipulate with multimodals/head_mask/etc
+        # Drop all keys except for `input_ids`. Hard to manipulate with multimodals, etc.
         dummy_input_ids = inputs_dict["input_ids"]
         dummy_position_ids = torch.arange(dummy_input_ids.shape[1], device=torch_device)
         dummy_position_ids = dummy_position_ids.unsqueeze(0).repeat(dummy_input_ids.shape[0], 1)
@@ -2113,7 +2109,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids(
         with tempfile.TemporaryDirectory() as tmpdirname:
             model.save_pretrained(tmpdirname)
 
-            # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc
+            # Drop all keys except for the minimal set. Hard to manipulate with multimodals, etc.
             inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]}
 
             # Ensure left padding, to adapt for some models
diff --git a/tests/models/aimv2/test_modeling_aimv2.py b/tests/models/aimv2/test_modeling_aimv2.py
index 524cdc5e3016..4e1bbf75c507 100644
--- a/tests/models/aimv2/test_modeling_aimv2.py
+++ b/tests/models/aimv2/test_modeling_aimv2.py
@@ -183,7 +183,6 @@ class Aimv2VisionModelTest(Aimv2ModelTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
     test_torchscript = False
 
     def setUp(self):
@@ -313,7 +312,6 @@ class Aimv2TextModelTest(Aimv2ModelTesterMixin, unittest.TestCase):
     all_model_classes = (Aimv2TextModel,) if is_torch_available() else ()
     fx_compatible = False
     test_pruning = False
-    test_head_masking = False
     test_resize_embeddings = False
     test_torchscript = False
@@ -392,7 +390,6 @@ class Aimv2ModelTest(Aimv2ModelTesterMixin, PipelineTesterMixin, unittest.TestCa
         else {}
     )
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_torchscript = False
     test_resize_embeddings = False
diff --git a/tests/models/align/test_modeling_align.py b/tests/models/align/test_modeling_align.py
index 167cb1ff7c2e..1e41e5fc9c1d 100644
--- a/tests/models/align/test_modeling_align.py
+++ b/tests/models/align/test_modeling_align.py
@@ -135,7 +135,6 @@ class AlignVisionModelTest(ModelTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
     has_attentions = False
 
     def setUp(self):
@@ -338,7 +337,6 @@ class AlignTextModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (AlignTextModel,) if is_torch_available() else ()
     fx_compatible = False
     test_pruning = False
-    test_head_masking = False
 
     def setUp(self):
         self.model_tester = AlignTextModelTester(self)
@@ -443,7 +441,6 @@ class AlignModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
     all_model_classes = (AlignModel,) if is_torch_available() else ()
     pipeline_model_mapping = {"feature-extraction": AlignModel} if is_torch_available() else {}
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = False
     test_attention_outputs = False
diff --git a/tests/models/altclip/test_modeling_altclip.py b/tests/models/altclip/test_modeling_altclip.py
index 2a36470051f8..9ed9a8331617 100755
--- a/tests/models/altclip/test_modeling_altclip.py
+++ b/tests/models/altclip/test_modeling_altclip.py
@@ -137,7 +137,6 @@ class AltCLIPVisionModelTest(ModelTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
 
     def setUp(self):
         self.model_tester = AltCLIPVisionModelTester(self)
@@ -299,7 +298,6 @@ class AltCLIPTextModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (AltCLIPTextModel,) if is_torch_available() else ()
     fx_compatible = False  # Cannot support if `can_return_tuple`
     test_pruning = False
-    test_head_masking = False
 
     # TODO (@SunMarc): Fix me
     @unittest.skip(reason="It's broken.")
@@ -414,7 +412,6 @@ class AltCLIPModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase)
     all_model_classes = (AltCLIPModel,) if is_torch_available() else ()
     pipeline_model_mapping = {"feature-extraction": AltCLIPModel} if is_torch_available() else {}
     fx_compatible = False  # Cannot support if `can_return_tuple`
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = False
     test_attention_outputs = False
diff --git a/tests/models/aria/test_modeling_aria.py b/tests/models/aria/test_modeling_aria.py
index 17259a5effa8..6ac9b216c24c 100644
--- a/tests/models/aria/test_modeling_aria.py
+++ b/tests/models/aria/test_modeling_aria.py
@@ -191,7 +191,6 @@ class AriaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTesterMi
     all_model_classes = (AriaModel, AriaForConditionalGeneration) if is_torch_available() else ()
 
     test_pruning = False
-    test_head_masking = False
     test_torchscript = False
     _is_composite = True
diff --git a/tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py b/tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py
index 0e42be15c648..2345cd54c4ba 100644
--- a/tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py
+++ b/tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py
@@ -163,7 +163,6 @@ class ASTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
 
     # TODO: Fix the failed tests when this model gets more usage
     def is_pipeline_test_to_skip(
diff --git a/tests/models/autoformer/test_modeling_autoformer.py b/tests/models/autoformer/test_modeling_autoformer.py
index 954f9f16622b..6f25b1865351 100644
--- a/tests/models/autoformer/test_modeling_autoformer.py
+++ b/tests/models/autoformer/test_modeling_autoformer.py
@@ -207,7 +207,6 @@ class AutoformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCa
     all_model_classes = (AutoformerModel, AutoformerForPrediction) if is_torch_available() else ()
     pipeline_model_mapping = {"feature-extraction": AutoformerModel} if is_torch_available() else {}
     test_pruning = False
-    test_head_masking = False
     test_missing_keys = False
     test_torchscript = False
     test_inputs_embeds = False
@@ -294,9 +293,6 @@ def test_forward_signature(self):
             expected_arg_names.extend(
                 [
                     "decoder_attention_mask",
-                    "head_mask",
-                    "decoder_head_mask",
-                    "cross_attn_head_mask",
                     "encoder_outputs",
                     "past_key_values",
                     "output_hidden_states",
diff --git a/tests/models/aya_vision/test_modeling_aya_vision.py b/tests/models/aya_vision/test_modeling_aya_vision.py
index b4a2f345b895..499c4a268e56 100644
--- a/tests/models/aya_vision/test_modeling_aya_vision.py
+++ b/tests/models/aya_vision/test_modeling_aya_vision.py
@@ -172,7 +172,6 @@ class AyaVisionModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTester
     fx_compatible = False
     test_pruning = False
     test_torchscript = False
-    test_head_masking = False
     _is_composite = True
 
     def setUp(self):
diff --git a/tests/models/bamba/test_modeling_bamba.py b/tests/models/bamba/test_modeling_bamba.py
index a5906ea14109..a1c832948209 100644
--- a/tests/models/bamba/test_modeling_bamba.py
+++ b/tests/models/bamba/test_modeling_bamba.py
@@ -289,7 +289,6 @@ class BambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixi
         if is_torch_available()
         else {}
     )
-    test_headmasking = False
     test_pruning = False
     fx_compatible = False
diff --git a/tests/models/bark/test_modeling_bark.py b/tests/models/bark/test_modeling_bark.py
index ceefc65e2096..2e3f1948cd37 100644
--- a/tests/models/bark/test_modeling_bark.py
+++ b/tests/models/bark/test_modeling_bark.py
@@ -113,11 +113,8 @@ def prepare_config_and_inputs(self):
 
         config = self.get_config()
 
-        head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)
-
         inputs_dict = {
             "input_ids": input_ids,
-            "head_mask": head_mask,
             "attention_mask": input_mask,
         }
@@ -249,11 +246,8 @@ def prepare_config_and_inputs(self):
 
         config = self.get_config()
 
-        head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)
-
         inputs_dict = {
             "input_ids": input_ids,
-            "head_mask": head_mask,
             "attention_mask": input_mask,
         }
@@ -385,15 +379,12 @@ def prepare_config_and_inputs(self):
 
         config = self.get_config()
 
-        head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)
-
         # randint between self.n_codes_given - 1 and self.n_codes_total - 1
         codebook_idx = ids_tensor((1,), self.n_codes_total - self.n_codes_given).item() + self.n_codes_given
 
         inputs_dict = {
             "codebook_idx": codebook_idx,
             "input_ids": input_ids,
-            "head_mask": head_mask,
             "attention_mask": input_mask,
         }
diff --git a/tests/models/beit/test_modeling_beit.py b/tests/models/beit/test_modeling_beit.py
index 48bfda0a4d85..f78ed65c3d93 100644
--- a/tests/models/beit/test_modeling_beit.py
+++ b/tests/models/beit/test_modeling_beit.py
@@ -262,7 +262,6 @@ class BeitModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
 
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
     test_torch_exportable = True
 
     def setUp(self):
diff --git a/tests/models/bert/test_modeling_bert.py b/tests/models/bert/test_modeling_bert.py
index 19094754f8bb..65892b48fbaa 100644
--- a/tests/models/bert/test_modeling_bert.py
+++ b/tests/models/bert/test_modeling_bert.py
@@ -676,7 +676,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids(
         with tempfile.TemporaryDirectory() as tmpdirname:
             model.save_pretrained(tmpdirname)
 
-            # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc
+            # Drop all keys except for the minimal set. Hard to manipulate with multimodals, etc.
             inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]}
 
             # Ensure left padding, to adapt for some models
diff --git a/tests/models/bert_generation/test_modeling_bert_generation.py b/tests/models/bert_generation/test_modeling_bert_generation.py
index eecb9205df3e..7b48c614e892 100644
--- a/tests/models/bert_generation/test_modeling_bert_generation.py
+++ b/tests/models/bert_generation/test_modeling_bert_generation.py
@@ -362,7 +362,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids(
         with tempfile.TemporaryDirectory() as tmpdirname:
             model.save_pretrained(tmpdirname)
 
-            # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc
+            # Drop all keys except for the minimal set. Hard to manipulate with multimodals, etc.
             inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]}
 
             # Ensure left padding, to adapt for some models
diff --git a/tests/models/big_bird/test_modeling_big_bird.py b/tests/models/big_bird/test_modeling_big_bird.py
index 8ec874d0f7a8..d6a54407015b 100644
--- a/tests/models/big_bird/test_modeling_big_bird.py
+++ b/tests/models/big_bird/test_modeling_big_bird.py
@@ -412,8 +412,7 @@ def create_and_check_for_change_to_full_attn(
 
 @require_torch
 class BigBirdModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
-    # head masking & pruning is currently not supported for big bird
-    test_head_masking = False
+    # pruning is currently not supported for big bird
     test_pruning = False
 
     # torchscript should be possible, but takes prohibitively long to test.
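The tester hunks above and below all strip the same per-layer plumbing: the encoder indexed `head_mask[i]` for layer `i`, and the eager attention path multiplied its post-softmax probabilities by that slice (the removed modeling comments note `attention_probs has shape bsz x n_heads x N x N`). A toy sketch of that application step, with illustrative shapes rather than the library's own code:

```py
import torch

batch, n_heads, seq, head_dim = 1, 4, 8, 16  # hypothetical sizes
layer_head_mask = torch.ones(n_heads)
layer_head_mask[-1] = 0.0  # nullify this layer's last head

scores = torch.randn(batch, n_heads, seq, seq)
attention_probs = scores.softmax(dim=-1)
# zero out the masked head's probabilities before the value projection
attention_probs = attention_probs * layer_head_mask[None, :, None, None]

value = torch.randn(batch, n_heads, seq, head_dim)
context = attention_probs @ value  # the masked head now contributes zeros
```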
diff --git a/tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py b/tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py
index c14cc8b1d4b7..e2d3af51c356 100644
--- a/tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py
+++ b/tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py
@@ -266,7 +266,6 @@ class BigBirdPegasusModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineT
     is_encoder_decoder = True
     test_missing_keys = False
     test_pruning = False
-    test_head_masking = False
 
     # torchscript tests are not passing for now.
     # Also torchscript is not an important feature to have in the beginning.
diff --git a/tests/models/bit/test_modeling_bit.py b/tests/models/bit/test_modeling_bit.py
index aa616c85aa6c..06477f320477 100644
--- a/tests/models/bit/test_modeling_bit.py
+++ b/tests/models/bit/test_modeling_bit.py
@@ -168,7 +168,6 @@ class BitModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
     has_attentions = False
     test_torch_exportable = True
diff --git a/tests/models/bitnet/test_modeling_bitnet.py b/tests/models/bitnet/test_modeling_bitnet.py
index 19bc0c45eb2e..58e3723e8317 100644
--- a/tests/models/bitnet/test_modeling_bitnet.py
+++ b/tests/models/bitnet/test_modeling_bitnet.py
@@ -141,7 +141,6 @@ class BitNetModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMix
         if is_torch_available()
         else {}
     )
-    test_headmasking = False
     test_pruning = False
     fx_compatible = False  # Broken by attention refactor cc @Cyrilvallez
diff --git a/tests/models/blenderbot/test_modeling_blenderbot.py b/tests/models/blenderbot/test_modeling_blenderbot.py
index a6dcaaecc412..13b12f58515c 100644
--- a/tests/models/blenderbot/test_modeling_blenderbot.py
+++ b/tests/models/blenderbot/test_modeling_blenderbot.py
@@ -51,28 +51,16 @@ def prepare_blenderbot_inputs_dict(
     decoder_input_ids,
     attention_mask=None,
     decoder_attention_mask=None,
-    head_mask=None,
-    decoder_head_mask=None,
-    cross_attn_head_mask=None,
 ):
     if attention_mask is None:
         attention_mask = input_ids.ne(config.pad_token_id)
     if decoder_attention_mask is None:
         decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id)
-    if head_mask is None:
-        head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device)
-    if decoder_head_mask is None:
-        decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
-    if cross_attn_head_mask is None:
-        cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
     return {
         "input_ids": input_ids,
         "decoder_input_ids": decoder_input_ids,
         "attention_mask": attention_mask,
         "decoder_attention_mask": attention_mask,
-        "head_mask": head_mask,
-        "decoder_head_mask": decoder_head_mask,
-        "cross_attn_head_mask": cross_attn_head_mask,
     }
@@ -159,10 +147,9 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict):
         model = BlenderbotModel(config=config).get_decoder().to(torch_device).eval()
         input_ids = inputs_dict["input_ids"]
         attention_mask = inputs_dict["attention_mask"]
-        head_mask = inputs_dict["head_mask"]
 
         # first forward pass
-        outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True)
+        outputs = model(input_ids, attention_mask=attention_mask, use_cache=True)
 
         output, past_key_values = outputs.to_tuple()
diff --git a/tests/models/blenderbot_small/test_modeling_blenderbot_small.py b/tests/models/blenderbot_small/test_modeling_blenderbot_small.py
index f4f29c6c0a75..d4a7121b5280 100644
--- a/tests/models/blenderbot_small/test_modeling_blenderbot_small.py
+++ b/tests/models/blenderbot_small/test_modeling_blenderbot_small.py
@@ -48,28 +48,17 @@ def prepare_blenderbot_small_inputs_dict(
     decoder_input_ids,
     attention_mask=None,
     decoder_attention_mask=None,
-    head_mask=None,
-    decoder_head_mask=None,
-    cross_attn_head_mask=None,
 ):
     if attention_mask is None:
         attention_mask = input_ids.ne(config.pad_token_id)
     if decoder_attention_mask is None:
         decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id)
-    if head_mask is None:
-        head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device)
-    if decoder_head_mask is None:
-        decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
-    if cross_attn_head_mask is None:
-        cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
+
     return {
         "input_ids": input_ids,
         "decoder_input_ids": decoder_input_ids,
         "attention_mask": attention_mask,
         "decoder_attention_mask": attention_mask,
-        "head_mask": head_mask,
-        "decoder_head_mask": decoder_head_mask,
-        "cross_attn_head_mask": cross_attn_head_mask,
     }
@@ -150,10 +139,9 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict):
         model = BlenderbotSmallModel(config=config).get_decoder().to(torch_device).eval()
         input_ids = inputs_dict["input_ids"]
         attention_mask = inputs_dict["attention_mask"]
-        head_mask = inputs_dict["head_mask"]
 
         # first forward pass
-        outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True)
+        outputs = model(input_ids, attention_mask=attention_mask, use_cache=True)
 
         output, past_key_values = outputs.to_tuple()
diff --git a/tests/models/blip/test_modeling_blip.py b/tests/models/blip/test_modeling_blip.py
index a59cf4fefffa..27b1b1202c11 100644
--- a/tests/models/blip/test_modeling_blip.py
+++ b/tests/models/blip/test_modeling_blip.py
@@ -154,7 +154,6 @@ class BlipVisionModelTest(ModelTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
 
     def setUp(self):
         self.model_tester = BlipVisionModelTester(self)
@@ -315,7 +314,6 @@ class BlipTextModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (BlipTextModel,) if is_torch_available() else ()
     fx_compatible = False
     test_pruning = False
-    test_head_masking = False
 
     def setUp(self):
         self.model_tester = BlipTextModelTester(self)
@@ -424,7 +422,6 @@ class BlipModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
         else {}
     )
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = True
     test_attention_outputs = False
@@ -804,7 +801,6 @@ class BlipVQAModelTest(ModelTesterMixin, unittest.TestCase):
     # Doesn't run generation tests due to custom generation logic -- won't fix
     all_generative_model_classes = ()
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = True
     test_attention_outputs = False
@@ -883,7 +879,6 @@ def test_model_get_set_embeddings(self):
 class BlipTextRetrievalModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (BlipForImageTextRetrieval,) if is_torch_available() else ()
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = True
     test_attention_outputs = False
@@ -928,11 +923,7 @@ def test_forward_signature(self):
                     "decoder_input_ids",
                     "decoder_attention_mask",
                 ]
-                expected_arg_names.extend(
-                    ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"]
-                    if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names
-                    else ["encoder_outputs"]
-                )
+                expected_arg_names.extend(["encoder_outputs"])
                 self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names)
             else:
                 expected_arg_names = ["input_ids"] if model_class != BlipForConditionalGeneration else ["pixel_values"]
@@ -1113,7 +1104,6 @@ class BlipTextImageModelTest(ModelTesterMixin, unittest.TestCase):
     # Doesn't run generation tests due to custom generation logic -- wont fix
     all_generative_model_classes = ()
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = True
     test_attention_outputs = False
@@ -1158,11 +1148,7 @@ def test_forward_signature(self):
                     "decoder_input_ids",
                     "decoder_attention_mask",
                 ]
-                expected_arg_names.extend(
-                    ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"]
-                    if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names
-                    else ["encoder_outputs"]
-                )
+                expected_arg_names.extend(["encoder_outputs"])
                 self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names)
             else:
                 expected_arg_names = ["input_ids"] if model_class != BlipForConditionalGeneration else ["pixel_values"]
diff --git a/tests/models/blip/test_modeling_blip_text.py b/tests/models/blip/test_modeling_blip_text.py
index f28219d25fdd..1a41c82119c1 100644
--- a/tests/models/blip/test_modeling_blip_text.py
+++ b/tests/models/blip/test_modeling_blip_text.py
@@ -127,7 +127,6 @@ class BlipTextModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (BlipTextModel,) if is_torch_available() else ()
     fx_compatible = False
     test_pruning = False
-    test_head_masking = False
 
     def setUp(self):
         self.model_tester = BlipTextModelTester(self)
diff --git a/tests/models/blip_2/test_modeling_blip_2.py b/tests/models/blip_2/test_modeling_blip_2.py
index 5667b1a3fe19..6523fddb47cc 100644
--- a/tests/models/blip_2/test_modeling_blip_2.py
+++ b/tests/models/blip_2/test_modeling_blip_2.py
@@ -161,7 +161,6 @@ class Blip2VisionModelTest(ModelTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
 
     def setUp(self):
         self.model_tester = Blip2VisionModelTester(self)
@@ -465,7 +464,6 @@ class Blip2ForConditionalGenerationDecoderOnlyTest(ModelTesterMixin, GenerationT
     all_model_classes = (Blip2ForConditionalGeneration,) if is_torch_available() else ()
     additional_model_inputs = ["input_ids"]
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = False
     test_attention_outputs = False
@@ -797,7 +795,6 @@ class Blip2ModelTest(ModelTesterMixin, PipelineTesterMixin, GenerationTesterMixi
         else {}
     )
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = True
     test_attention_outputs = False
@@ -1096,7 +1093,6 @@ class Blip2TextModelWithProjectionTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (Blip2TextModelWithProjection,) if is_torch_available() else ()
     fx_compatible = False
     test_pruning = False
-    test_head_masking = False
     test_resize_embeddings = True
     test_attention_outputs = False
@@ -1256,7 +1252,6 @@ class Blip2VisionModelWithProjectionTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (Blip2VisionModelWithProjection,) if is_torch_available() else ()
     fx_compatible = False
     test_pruning = False
-    test_head_masking = False
     test_resize_embeddings = False
     test_torchscript = False
@@ -1406,7 +1401,6 @@ class Blip2TextRetrievalModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (Blip2ForImageTextRetrieval,) if is_torch_available() else ()
     additional_model_inputs = ["input_ids"]
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = True
     test_attention_outputs = False
diff --git a/tests/models/blt/test_modeling_blt.py b/tests/models/blt/test_modeling_blt.py
index 34aab8f179c9..0817b80bc7b6 100644
--- a/tests/models/blt/test_modeling_blt.py
+++ b/tests/models/blt/test_modeling_blt.py
@@ -186,7 +186,6 @@ class BltModelTest(CausalLMModelTest, unittest.TestCase):
         if is_torch_available()
         else {}
     )
-    test_headmasking = False
    test_pruning = False
     fx_compatible = False
     model_tester_class = BltModelTester
diff --git a/tests/models/bridgetower/test_modeling_bridgetower.py b/tests/models/bridgetower/test_modeling_bridgetower.py
index 59147a9d26a8..f1a3d94c20e4 100644
--- a/tests/models/bridgetower/test_modeling_bridgetower.py
+++ b/tests/models/bridgetower/test_modeling_bridgetower.py
@@ -308,7 +308,6 @@ class BridgeTowerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC
     pipeline_model_mapping = {"feature-extraction": BridgeTowerModel} if is_torch_available() else {}
 
     is_training = False
-    test_headmasking = False
     test_pruning = False
     test_torchscript = False
     test_resize_embeddings = False
diff --git a/tests/models/canine/test_modeling_canine.py b/tests/models/canine/test_modeling_canine.py
index d93342bf5d54..3e05e02e9e10 100644
--- a/tests/models/canine/test_modeling_canine.py
+++ b/tests/models/canine/test_modeling_canine.py
@@ -19,7 +19,7 @@
 from transformers.testing_utils import require_torch, slow, torch_device
 
 from ...test_configuration_common import ConfigTester
-from ...test_modeling_common import ModelTesterMixin, _config_zero_init, global_rng, ids_tensor, random_attention_mask
+from ...test_modeling_common import ModelTesterMixin, ids_tensor, random_attention_mask
 from ...test_pipeline_mixin import PipelineTesterMixin
@@ -446,63 +446,6 @@ def recursive_check(tuple_object, dict_object):
             model, tuple_inputs, dict_inputs, {"output_hidden_states": True, "output_attentions": True}
         )
 
-    def test_headmasking(self):
-        if not self.test_head_masking:
-            self.skipTest(reason="test_head_masking is set to False")
-
-        global_rng.seed(42)
-        config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
-        global_rng.seed()
-
-        inputs_dict["output_attentions"] = True
-        config.output_hidden_states = True
-        configs_no_init = _config_zero_init(config)  # To be sure we have no Nan
-        for model_class in self.all_model_classes:
-            model = model_class(config=configs_no_init)
-            model.to(torch_device)
-            model.eval()
-
-            # Prepare head_mask
-            # Set require_grad after having prepared the tensor to avoid error (leaf variable has been moved into the graph interior)
-            head_mask = torch.ones(
-                self.model_tester.num_hidden_layers,
-                self.model_tester.num_attention_heads,
-                device=torch_device,
-            )
-            head_mask[0, 0] = 0
-            head_mask[-1, :-1] = 0
-            head_mask.requires_grad_(requires_grad=True)
-            inputs = self._prepare_for_class(inputs_dict, model_class).copy()
-            inputs["head_mask"] = head_mask
-
-            outputs = model(**inputs, return_dict=True)
-
-            # Test that we can get a gradient back for importance score computation
-            output = sum(t.sum() for t in outputs[0])
-            output = output.sum()
-            output.backward()
-            multihead_outputs = head_mask.grad
-
-            self.assertIsNotNone(multihead_outputs)
-            self.assertEqual(len(multihead_outputs), self.model_tester.num_hidden_layers)
-
-            def check_attentions_validity(attentions):
-                # Remove Nan
-                for t in attentions:
-                    self.assertLess(
-                        torch.sum(torch.isnan(t)), t.numel() / 4
-                    )  # Check we don't have more than 25% nans (arbitrary)
-                attentions = [
-                    t.masked_fill(torch.isnan(t), 0.0) for t in attentions
-                ]  # remove them (the test is less complete)
-
-                self.assertAlmostEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0)
-                self.assertNotEqual(attentions[1][..., -1, :, :].flatten().sum().item(), 0.0)
-                self.assertAlmostEqual(attentions[-2][..., -2, :, :].flatten().sum().item(), 0.0)
-                self.assertNotEqual(attentions[-2][..., -1, :, :].flatten().sum().item(), 0.0)
-
-            check_attentions_validity(outputs.attentions)
-
     @unittest.skip(reason="CANINE does not have a get_input_embeddings() method.")
     def test_inputs_embeds(self):
         # ViT does not use inputs_embeds
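The CANINE-specific `test_headmasking` deleted above was the most complete exercise of the feature: build a mask, require gradients on it, backprop a scalar output, and read `head_mask.grad` as a per-head importance score while asserting masked heads produce (near-)zero attention. A condensed, self-contained sketch of that gradient trick on a toy attention stack (illustrative only, not the deleted test itself):

```py
import torch

n_layers, n_heads, seq, dim = 3, 4, 8, 16  # hypothetical sizes
head_mask = torch.ones(n_layers, n_heads, requires_grad=True)

out = torch.randn(1, seq, dim)
for i in range(n_layers):
    probs = torch.randn(1, n_heads, seq, seq).softmax(dim=-1)
    probs = probs * head_mask[i][None, :, None, None]  # apply this layer's mask
    per_head = probs @ out.unsqueeze(1).expand(1, n_heads, seq, dim)
    out = out + per_head.mean(dim=1)

# the gradient w.r.t. the mask doubles as an importance score per head
out.sum().backward()
assert head_mask.grad is not None
assert head_mask.grad.shape == (n_layers, n_heads)
```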
unittest.TestCase): all_model_classes = (ClapModel,) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": ClapModel} if is_torch_available() else {} fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/clip/test_modeling_clip.py b/tests/models/clip/test_modeling_clip.py index 0217c5914300..60104222c096 100644 --- a/tests/models/clip/test_modeling_clip.py +++ b/tests/models/clip/test_modeling_clip.py @@ -213,7 +213,6 @@ class CLIPVisionModelTest(CLIPModelTesterMixin, unittest.TestCase): fx_compatible = True test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = CLIPVisionModelTester(self) @@ -401,7 +400,6 @@ class CLIPTextModelTest(CLIPModelTesterMixin, unittest.TestCase): all_model_classes = (CLIPTextModel, CLIPTextModelWithProjection) if is_torch_available() else () fx_compatible = True test_pruning = False - test_head_masking = False model_split_percents = [0.5, 0.8, 0.9] def setUp(self): @@ -529,7 +527,6 @@ class CLIPModelTest(CLIPModelTesterMixin, PipelineTesterMixin, unittest.TestCase ) additional_model_inputs = ["pixel_values"] fx_compatible = True - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False @@ -725,7 +722,6 @@ class CLIPForImageClassificationModelTest(CLIPModelTesterMixin, PipelineTesterMi all_model_classes = (CLIPForImageClassification,) if is_torch_available() else () pipeline_model_mapping = {"image-classification": CLIPForImageClassification} if is_torch_available() else {} fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/clipseg/test_modeling_clipseg.py b/tests/models/clipseg/test_modeling_clipseg.py index 788a60021a88..753815c5989a 100644 --- a/tests/models/clipseg/test_modeling_clipseg.py +++ b/tests/models/clipseg/test_modeling_clipseg.py @@ -141,7 +141,6 @@ class CLIPSegVisionModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = CLIPSegVisionModelTester(self) @@ -298,7 +297,6 @@ class CLIPSegTextModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (CLIPSegTextModel,) if is_torch_available() else () fx_compatible = False test_pruning = False - test_head_masking = False model_split_percents = [0.5, 0.8, 0.9] def setUp(self): @@ -425,7 +423,6 @@ class CLIPSegModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase) all_model_classes = (CLIPSegModel, CLIPSegForImageSegmentation) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": CLIPSegModel} if is_torch_available() else {} fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/clvp/test_modeling_clvp.py b/tests/models/clvp/test_modeling_clvp.py index a33d787dc7cc..b8d559943c4e 100644 --- a/tests/models/clvp/test_modeling_clvp.py +++ b/tests/models/clvp/test_modeling_clvp.py @@ -163,7 +163,6 @@ def create_and_check_model(self, speech_config, input_ids, input_mask): class ClvpEncoderTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (ClvpEncoder,) if is_torch_available() else () test_pruning = False - test_head_masking = False test_torchscript = False def 
setUp(self): @@ -412,7 +411,6 @@ class ClvpModelForConditionalGenerationTest(ModelTesterMixin, unittest.TestCase) # Doesn't run generation tests. There are interface mismatches when using `generate` -- TODO @gante all_generative_model_classes = () - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/codegen/test_modeling_codegen.py b/tests/models/codegen/test_modeling_codegen.py index 2c6ddafc5805..02926da154cb 100644 --- a/tests/models/codegen/test_modeling_codegen.py +++ b/tests/models/codegen/test_modeling_codegen.py @@ -114,13 +114,10 @@ def prepare_config_and_inputs(self): config = self.get_config() - head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2) - return ( config, input_ids, input_mask, - head_mask, token_type_ids, mc_token_ids, sequence_labels, @@ -148,19 +145,19 @@ def get_config(self): rotary_dim=self.rotary_dim, ) - def create_and_check_codegen_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_codegen_model(self, config, input_ids, input_mask, token_type_ids, *args): model = CodeGenModel(config=config) model.to(torch_device) model.eval() - result = model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask) + result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) self.parent.assertEqual(len(result.past_key_values), config.n_layer) - def create_and_check_codegen_model_past(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_codegen_model_past(self, config, input_ids, input_mask, token_type_ids, *args): model = CodeGenModel(config=config) model.to(torch_device) model.eval() @@ -196,9 +193,7 @@ def create_and_check_codegen_model_past(self, config, input_ids, input_mask, hea # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) - def create_and_check_codegen_model_attention_mask_past( - self, config, input_ids, input_mask, head_mask, token_type_ids, *args - ): + def create_and_check_codegen_model_attention_mask_past(self, config, input_ids, input_mask, token_type_ids, *args): model = CodeGenModel(config=config) model.to(torch_device) model.eval() @@ -238,9 +233,7 @@ def create_and_check_codegen_model_attention_mask_past( # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) - def create_and_check_codegen_model_past_large_inputs( - self, config, input_ids, input_mask, head_mask, token_type_ids, *args - ): + def create_and_check_codegen_model_past_large_inputs(self, config, input_ids, input_mask, token_type_ids, *args): model = CodeGenModel(config=config) model.to(torch_device) model.eval() @@ -276,7 +269,7 @@ def create_and_check_codegen_model_past_large_inputs( # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) - def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_lm_head_model(self, config, input_ids, input_mask, token_type_ids, *args): model = CodeGenForCausalLM(config) model.to(torch_device) model.eval() @@ -286,7 +279,7 @@ def 
create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mas self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_forward_and_backwards( - self, config, input_ids, input_mask, head_mask, token_type_ids, *args, gradient_checkpointing=False + self, config, input_ids, input_mask, token_type_ids, *args, gradient_checkpointing=False ): model = CodeGenForCausalLM(config) if gradient_checkpointing: @@ -305,7 +298,6 @@ def prepare_config_and_inputs_for_common(self): config, input_ids, input_mask, - head_mask, token_type_ids, mc_token_ids, sequence_labels, @@ -313,7 +305,7 @@ def prepare_config_and_inputs_for_common(self): choice_labels, ) = config_and_inputs - inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "head_mask": head_mask} + inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids} return config, inputs_dict @@ -327,7 +319,6 @@ class CodeGenModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi fx_compatible = False test_pruning = False test_missing_keys = False - test_head_masking = False # special case for DoubleHeads model def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): diff --git a/tests/models/cohere/test_modeling_cohere.py b/tests/models/cohere/test_modeling_cohere.py index 436d1f9d4226..25d7107a6652 100644 --- a/tests/models/cohere/test_modeling_cohere.py +++ b/tests/models/cohere/test_modeling_cohere.py @@ -170,7 +170,6 @@ class CohereModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMix if is_torch_available() else {} ) - test_headmasking = False test_pruning = False fx_compatible = False diff --git a/tests/models/cohere2_vision/test_modeling_cohere2_vision.py b/tests/models/cohere2_vision/test_modeling_cohere2_vision.py index 776b2b254f17..93169a34ca5e 100644 --- a/tests/models/cohere2_vision/test_modeling_cohere2_vision.py +++ b/tests/models/cohere2_vision/test_modeling_cohere2_vision.py @@ -160,7 +160,6 @@ class Cohere2ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi fx_compatible = False test_pruning = False test_torchscript = False - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/colpali/test_modeling_colpali.py b/tests/models/colpali/test_modeling_colpali.py index f00566ccfc1e..c8571b1a5d74 100644 --- a/tests/models/colpali/test_modeling_colpali.py +++ b/tests/models/colpali/test_modeling_colpali.py @@ -189,7 +189,6 @@ class ColPaliForRetrievalModelTest(ModelTesterMixin, unittest.TestCase): test_torchscript = False test_pruning = False test_resize_embeddings = True - test_head_masking = False additional_model_inputs = ["token_type_ids"] def setUp(self): diff --git a/tests/models/colqwen2/test_modeling_colqwen2.py b/tests/models/colqwen2/test_modeling_colqwen2.py index 833b2ea408c2..bb18ee7681ef 100644 --- a/tests/models/colqwen2/test_modeling_colqwen2.py +++ b/tests/models/colqwen2/test_modeling_colqwen2.py @@ -204,7 +204,6 @@ class ColQwen2ForRetrievalModelTest(ModelTesterMixin, unittest.TestCase): test_torchscript = False test_pruning = False test_resize_embeddings = True - test_head_masking = False def setUp(self): self.model_tester = ColQwen2ForRetrievalModelTester(self) diff --git a/tests/models/conditional_detr/test_modeling_conditional_detr.py b/tests/models/conditional_detr/test_modeling_conditional_detr.py index a2d962a85a0f..c4d83e1b7bda 100644 --- a/tests/models/conditional_detr/test_modeling_conditional_detr.py +++ 
b/tests/models/conditional_detr/test_modeling_conditional_detr.py @@ -189,7 +189,6 @@ class ConditionalDetrModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.T is_encoder_decoder = True test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False zero_init_hidden_state = True test_torch_exportable = True @@ -431,12 +430,7 @@ def test_forward_signature(self): arg_names = [*signature.parameters.keys()] if model.config.is_encoder_decoder: - expected_arg_names = ["pixel_values", "pixel_mask"] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" in arg_names - else [] - ) + expected_arg_names = ["pixel_values", "pixel_mask", "decoder_attention_mask"] self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) else: expected_arg_names = ["pixel_values", "pixel_mask"] diff --git a/tests/models/convbert/test_modeling_convbert.py b/tests/models/convbert/test_modeling_convbert.py index c3a8fca28ce9..1b5fa754a402 100644 --- a/tests/models/convbert/test_modeling_convbert.py +++ b/tests/models/convbert/test_modeling_convbert.py @@ -269,7 +269,6 @@ class ConvBertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase else {} ) test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = ConvBertModelTester(self) diff --git a/tests/models/convnext/test_modeling_convnext.py b/tests/models/convnext/test_modeling_convnext.py index 51a078abae02..4a83421e8d2b 100644 --- a/tests/models/convnext/test_modeling_convnext.py +++ b/tests/models/convnext/test_modeling_convnext.py @@ -178,7 +178,6 @@ class ConvNextModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/convnextv2/test_modeling_convnextv2.py b/tests/models/convnextv2/test_modeling_convnextv2.py index 7ea8e684988c..e9091a0ff506 100644 --- a/tests/models/convnextv2/test_modeling_convnextv2.py +++ b/tests/models/convnextv2/test_modeling_convnextv2.py @@ -157,7 +157,6 @@ class ConvNextV2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCa fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/cpmant/test_modeling_cpmant.py b/tests/models/cpmant/test_modeling_cpmant.py index ebe72705bbd7..c53a8862dd6a 100644 --- a/tests/models/cpmant/test_modeling_cpmant.py +++ b/tests/models/cpmant/test_modeling_cpmant.py @@ -144,7 +144,6 @@ class CpmAntModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_missing_keys = False test_mismatched_shapes = False - test_head_masking = False test_resize_embeddings = False def setUp(self): diff --git a/tests/models/csm/test_modeling_csm.py b/tests/models/csm/test_modeling_csm.py index 204ef79831f3..341407795567 100644 --- a/tests/models/csm/test_modeling_csm.py +++ b/tests/models/csm/test_modeling_csm.py @@ -144,7 +144,6 @@ def prepare_config_and_inputs_for_common(self): class CsmForConditionalGenerationTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase): all_model_classes = (CsmForConditionalGeneration,) if is_torch_available() else () test_pruning = False - test_headmasking = False test_resize_embeddings = False test_resize_embeddings_untied = False diff --git 
a/tests/models/ctrl/test_modeling_ctrl.py b/tests/models/ctrl/test_modeling_ctrl.py index 860693b5ccdf..b679ea32ea0e 100644 --- a/tests/models/ctrl/test_modeling_ctrl.py +++ b/tests/models/ctrl/test_modeling_ctrl.py @@ -110,13 +110,10 @@ def prepare_config_and_inputs(self): config = self.get_config() - head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2) - return ( config, input_ids, input_mask, - head_mask, token_type_ids, mc_token_ids, sequence_labels, @@ -140,18 +137,18 @@ def get_config(self): pad_token_id=self.pad_token_id, ) - def create_and_check_ctrl_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_ctrl_model(self, config, input_ids, input_mask, token_type_ids, *args): model = CTRLModel(config=config) model.to(torch_device) model.eval() - model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask) + model(input_ids, token_type_ids=token_type_ids) model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) self.parent.assertEqual(len(result.past_key_values), config.n_layer) - def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_lm_head_model(self, config, input_ids, input_mask, token_type_ids, *args): model = CTRLLMHeadModel(config) model.to(torch_device) model.eval() @@ -167,7 +164,6 @@ def prepare_config_and_inputs_for_common(self): config, input_ids, input_mask, - head_mask, token_type_ids, mc_token_ids, sequence_labels, @@ -175,7 +171,7 @@ def prepare_config_and_inputs_for_common(self): choice_labels, ) = config_and_inputs - inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "head_mask": head_mask} + inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids} return config, inputs_dict @@ -195,7 +191,6 @@ class CTRLModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin ) test_pruning = True test_resize_embeddings = False - test_head_masking = False # TODO: Fix the failed tests def is_pipeline_test_to_skip( diff --git a/tests/models/cvt/test_modeling_cvt.py b/tests/models/cvt/test_modeling_cvt.py index eb2940a75ac2..1300ece178ec 100644 --- a/tests/models/cvt/test_modeling_cvt.py +++ b/tests/models/cvt/test_modeling_cvt.py @@ -157,7 +157,6 @@ class CvtModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_torchscript = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/d_fine/test_modeling_d_fine.py b/tests/models/d_fine/test_modeling_d_fine.py index 6ff4fc061b1b..1348226857e8 100644 --- a/tests/models/d_fine/test_modeling_d_fine.py +++ b/tests/models/d_fine/test_modeling_d_fine.py @@ -293,7 +293,6 @@ class DFineModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): is_encoder_decoder = True test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False test_torch_exportable = True diff --git a/tests/models/dab_detr/test_modeling_dab_detr.py b/tests/models/dab_detr/test_modeling_dab_detr.py index 6f437ce7692d..93ee29b0d126 100644 --- a/tests/models/dab_detr/test_modeling_dab_detr.py +++ b/tests/models/dab_detr/test_modeling_dab_detr.py @@ -184,7 +184,6 @@ class DabDetrModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase) is_encoder_decoder = True 
test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False zero_init_hidden_state = True test_torch_exportable = True @@ -664,12 +663,7 @@ def test_forward_signature(self): arg_names = [*signature.parameters.keys()] if model.config.is_encoder_decoder: - expected_arg_names = ["pixel_values", "pixel_mask"] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" in arg_names - else [] - ) + expected_arg_names = ["pixel_values", "pixel_mask", "decoder_attention_mask"] self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) else: expected_arg_names = ["pixel_values", "pixel_mask"] diff --git a/tests/models/dac/test_modeling_dac.py b/tests/models/dac/test_modeling_dac.py index cb7d6b388c19..eb5ad7225a8a 100644 --- a/tests/models/dac/test_modeling_dac.py +++ b/tests/models/dac/test_modeling_dac.py @@ -119,7 +119,6 @@ class DacModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (DacModel,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False - test_headmasking = False test_resize_embeddings = False pipeline_model_mapping = {"feature-extraction": DacModel} if is_torch_available() else {} diff --git a/tests/models/data2vec/test_modeling_data2vec_audio.py b/tests/models/data2vec/test_modeling_data2vec_audio.py index 630f6238e76e..be5f9d6a0c57 100644 --- a/tests/models/data2vec/test_modeling_data2vec_audio.py +++ b/tests/models/data2vec/test_modeling_data2vec_audio.py @@ -354,7 +354,6 @@ class Data2VecAudioModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Tes else {} ) test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = Data2VecAudioModelTester(self) diff --git a/tests/models/data2vec/test_modeling_data2vec_text.py b/tests/models/data2vec/test_modeling_data2vec_text.py index 59f86c88cd6c..f8810685f0fc 100644 --- a/tests/models/data2vec/test_modeling_data2vec_text.py +++ b/tests/models/data2vec/test_modeling_data2vec_text.py @@ -569,7 +569,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids( with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) - # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for the minimal set. 
Hard to manipulate with multimodals etc inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]} # Ensure left padding, to adapt for some models diff --git a/tests/models/data2vec/test_modeling_data2vec_vision.py b/tests/models/data2vec/test_modeling_data2vec_vision.py index aebbe183cacf..c39baea90bc4 100644 --- a/tests/models/data2vec/test_modeling_data2vec_vision.py +++ b/tests/models/data2vec/test_modeling_data2vec_vision.py @@ -202,7 +202,6 @@ class Data2VecVisionModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Te test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = Data2VecVisionModelTester(self) diff --git a/tests/models/deberta/test_modeling_deberta.py b/tests/models/deberta/test_modeling_deberta.py index 8ec4eefc4f3f..b9f6acc1af3e 100644 --- a/tests/models/deberta/test_modeling_deberta.py +++ b/tests/models/deberta/test_modeling_deberta.py @@ -240,7 +240,6 @@ class DebertaModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase) fx_compatible = True test_torchscript = False test_pruning = False - test_head_masking = False is_encoder_decoder = False def setUp(self): diff --git a/tests/models/deberta_v2/test_modeling_deberta_v2.py b/tests/models/deberta_v2/test_modeling_deberta_v2.py index 6de08d3f4bd7..9d67b3ccb16b 100644 --- a/tests/models/deberta_v2/test_modeling_deberta_v2.py +++ b/tests/models/deberta_v2/test_modeling_deberta_v2.py @@ -254,7 +254,6 @@ class DebertaV2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCas fx_compatible = True test_torchscript = False test_pruning = False - test_head_masking = False is_encoder_decoder = False def setUp(self): diff --git a/tests/models/decision_transformer/test_modeling_decision_transformer.py b/tests/models/decision_transformer/test_modeling_decision_transformer.py index b6bdeaa0837d..3e8548aa361c 100644 --- a/tests/models/decision_transformer/test_modeling_decision_transformer.py +++ b/tests/models/decision_transformer/test_modeling_decision_transformer.py @@ -133,7 +133,6 @@ class DecisionTransformerModelTest(ModelTesterMixin, PipelineTesterMixin, unitte # Ignoring of a failing tests from ModelTesterMixin, as the model does not implement these features test_pruning = False test_resize_embeddings = False - test_head_masking = False test_attention_outputs = False test_hidden_states_output = False test_inputs_embeds = False diff --git a/tests/models/deepseek_v3/test_modeling_deepseek_v3.py b/tests/models/deepseek_v3/test_modeling_deepseek_v3.py index 46a5cdd7bdd0..6277b07093db 100644 --- a/tests/models/deepseek_v3/test_modeling_deepseek_v3.py +++ b/tests/models/deepseek_v3/test_modeling_deepseek_v3.py @@ -234,7 +234,6 @@ class DeepseekV3ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTeste if is_torch_available() else {} ) - test_headmasking = False test_pruning = False fx_compatible = False diff --git a/tests/models/deepseek_vl/test_modeling_deepseek_vl.py b/tests/models/deepseek_vl/test_modeling_deepseek_vl.py index 55ced08a09d3..7941695c6583 100644 --- a/tests/models/deepseek_vl/test_modeling_deepseek_vl.py +++ b/tests/models/deepseek_vl/test_modeling_deepseek_vl.py @@ -139,7 +139,6 @@ class DeepseekVLModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.Test ) _is_composite = True test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = DeepseekVLModelTester(self) diff --git a/tests/models/deepseek_vl_hybrid/test_modeling_deepseek_vl_hybrid.py 
b/tests/models/deepseek_vl_hybrid/test_modeling_deepseek_vl_hybrid.py index 02a934275012..5ce1679f5997 100644 --- a/tests/models/deepseek_vl_hybrid/test_modeling_deepseek_vl_hybrid.py +++ b/tests/models/deepseek_vl_hybrid/test_modeling_deepseek_vl_hybrid.py @@ -168,7 +168,6 @@ class DeepseekVLHybridModelTest(ModelTesterMixin, GenerationTesterMixin, unittes ) _is_composite = True test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = DeepseekVLHybridModelTester(self) diff --git a/tests/models/deformable_detr/test_modeling_deformable_detr.py b/tests/models/deformable_detr/test_modeling_deformable_detr.py index 14fa0994ebee..dcd95a009ead 100644 --- a/tests/models/deformable_detr/test_modeling_deformable_detr.py +++ b/tests/models/deformable_detr/test_modeling_deformable_detr.py @@ -195,7 +195,6 @@ class DeformableDetrModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Te is_encoder_decoder = True test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False test_torch_exportable = True @@ -507,12 +506,7 @@ def test_forward_signature(self): arg_names = [*signature.parameters.keys()] if model.config.is_encoder_decoder: - expected_arg_names = ["pixel_values", "pixel_mask"] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" in arg_names - else [] - ) + expected_arg_names = ["pixel_values", "pixel_mask", "decoder_attention_mask"] self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) else: expected_arg_names = ["pixel_values", "pixel_mask"] diff --git a/tests/models/deit/test_modeling_deit.py b/tests/models/deit/test_modeling_deit.py index 1b3a59559bf4..5796b5ad105d 100644 --- a/tests/models/deit/test_modeling_deit.py +++ b/tests/models/deit/test_modeling_deit.py @@ -221,7 +221,6 @@ class DeiTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/depth_anything/test_modeling_depth_anything.py b/tests/models/depth_anything/test_modeling_depth_anything.py index f0c638a76f22..4164e506a58d 100644 --- a/tests/models/depth_anything/test_modeling_depth_anything.py +++ b/tests/models/depth_anything/test_modeling_depth_anything.py @@ -147,7 +147,6 @@ class DepthAnythingModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Tes test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True test_torch_exportable_strictly = get_torch_major_and_minor_version() != "2.7" diff --git a/tests/models/depth_pro/test_modeling_depth_pro.py b/tests/models/depth_pro/test_modeling_depth_pro.py index 0e644c7c1892..6574923592a9 100644 --- a/tests/models/depth_pro/test_modeling_depth_pro.py +++ b/tests/models/depth_pro/test_modeling_depth_pro.py @@ -212,7 +212,6 @@ class DepthProModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/detr/test_modeling_detr.py b/tests/models/detr/test_modeling_detr.py index dcad75307691..b17d94bb031a 100644 --- a/tests/models/detr/test_modeling_detr.py +++ b/tests/models/detr/test_modeling_detr.py @@ -189,7 +189,6 @@ class DetrModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): is_encoder_decoder = True test_torchscript = False 
    test_pruning = False
-    test_head_masking = False
    test_missing_keys = False
    zero_init_hidden_state = True
    test_torch_exportable = True
@@ -431,12 +430,7 @@ def test_forward_signature(self):
        arg_names = [*signature.parameters.keys()]

        if model.config.is_encoder_decoder:
-            expected_arg_names = ["pixel_values", "pixel_mask"]
-            expected_arg_names.extend(
-                ["head_mask", "decoder_head_mask", "encoder_outputs"]
-                if "head_mask" and "decoder_head_mask" in arg_names
-                else []
-            )
+            expected_arg_names = ["pixel_values", "pixel_mask", "decoder_attention_mask", "encoder_outputs"]
            self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names)
        else:
            expected_arg_names = ["pixel_values", "pixel_mask"]
diff --git a/tests/models/dia/test_modeling_dia.py b/tests/models/dia/test_modeling_dia.py
index 5ac321c5a753..83eaf0e336f5 100644
--- a/tests/models/dia/test_modeling_dia.py
+++ b/tests/models/dia/test_modeling_dia.py
@@ -221,7 +221,6 @@ class DiaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin,
    pipeline_model_mapping = {}  # pipeline_model_mapping = {"text-to-audio": DiaForConditionalGeneration} if is_torch_available() else {}
    test_pruning = False
-    test_head_masking = False
    test_resize_embeddings = False
    is_encoder_decoder = True  # Indicates VLMs usually but there are many audio models which are also composite
@@ -242,7 +241,6 @@ def skip_non_greedy_generate(self):
            "test_prompt_lookup",
            "test_model_parallel_beam_search",
            "test_generate_without_input_ids",
-            "test_generate_with_head_masking",
        ]

        for test in skippable_tests:
diff --git a/tests/models/diffllama/test_modeling_diffllama.py b/tests/models/diffllama/test_modeling_diffllama.py
index 9938135281fe..b28f7c167b69 100644
--- a/tests/models/diffllama/test_modeling_diffllama.py
+++ b/tests/models/diffllama/test_modeling_diffllama.py
@@ -197,7 +197,6 @@ class DiffLlamaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTester
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
diff --git a/tests/models/dinat/test_modeling_dinat.py b/tests/models/dinat/test_modeling_dinat.py
index 4ffe5f6cd692..721af9e8cab7 100644
--- a/tests/models/dinat/test_modeling_dinat.py
+++ b/tests/models/dinat/test_modeling_dinat.py
@@ -215,7 +215,6 @@ class DinatModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_torchscript = False
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True

    def setUp(self):
diff --git a/tests/models/dinov2/test_modeling_dinov2.py b/tests/models/dinov2/test_modeling_dinov2.py
index 2377bc1d2ee2..87a799df3709 100644
--- a/tests/models/dinov2/test_modeling_dinov2.py
+++ b/tests/models/dinov2/test_modeling_dinov2.py
@@ -232,7 +232,6 @@ class Dinov2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = Dinov2ModelTester(self)
diff --git a/tests/models/dinov2_with_registers/test_modeling_dinov2_with_registers.py b/tests/models/dinov2_with_registers/test_modeling_dinov2_with_registers.py
index b9f0f5fecfe0..c9696dedb2ae 100644
--- a/tests/models/dinov2_with_registers/test_modeling_dinov2_with_registers.py
+++ b/tests/models/dinov2_with_registers/test_modeling_dinov2_with_registers.py
@@ -236,7 +236,6 @@ class Dinov2WithRegistersModelTest(ModelTesterMixin, PipelineTesterMixin, unitte
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True

    def setUp(self):
diff --git a/tests/models/dinov3_convnext/test_modeling_dinov3_convnext.py b/tests/models/dinov3_convnext/test_modeling_dinov3_convnext.py
index 0ecbc3a6c4f5..bbb635c30971 100644
--- a/tests/models/dinov3_convnext/test_modeling_dinov3_convnext.py
+++ b/tests/models/dinov3_convnext/test_modeling_dinov3_convnext.py
@@ -124,7 +124,6 @@ class DINOv3ConvNextModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Te
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    has_attentions = False
    test_torch_exportable = True
diff --git a/tests/models/dinov3_vit/test_modeling_dinov3_vit.py b/tests/models/dinov3_vit/test_modeling_dinov3_vit.py
index f0b8c92d22a0..93af786e4c3b 100644
--- a/tests/models/dinov3_vit/test_modeling_dinov3_vit.py
+++ b/tests/models/dinov3_vit/test_modeling_dinov3_vit.py
@@ -154,7 +154,6 @@ class Dinov3ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True

    def setUp(self):
diff --git a/tests/models/doge/test_modeling_doge.py b/tests/models/doge/test_modeling_doge.py
index e87c24ffe63e..bf39f12f7afe 100644
--- a/tests/models/doge/test_modeling_doge.py
+++ b/tests/models/doge/test_modeling_doge.py
@@ -274,7 +274,6 @@ class DogeModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin
        else {}
    )
    has_attentions = False
-    test_headmasking = False
    test_pruning = False
    test_torchscript = False
    fx_compatible = False
diff --git a/tests/models/donut/test_modeling_donut_swin.py b/tests/models/donut/test_modeling_donut_swin.py
index 456da8500799..1f26e3514843 100644
--- a/tests/models/donut/test_modeling_donut_swin.py
+++ b/tests/models/donut/test_modeling_donut_swin.py
@@ -170,7 +170,6 @@ class DonutSwinModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCas
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = DonutSwinModelTester(self)
diff --git a/tests/models/dots1/test_modeling_dots1.py b/tests/models/dots1/test_modeling_dots1.py
index 65cb64ee24ff..c500c8e5dc94 100644
--- a/tests/models/dots1/test_modeling_dots1.py
+++ b/tests/models/dots1/test_modeling_dots1.py
@@ -77,7 +77,6 @@ class Dots1ModelTest(CausalLMModelTest, unittest.TestCase):
        else {}
    )

-    test_headmasking = False
    test_pruning = False
    model_tester_class = Dots1ModelTester
diff --git a/tests/models/dpr/test_modeling_dpr.py b/tests/models/dpr/test_modeling_dpr.py
index f7c4f6eb183e..ae8d83343e93 100644
--- a/tests/models/dpr/test_modeling_dpr.py
+++ b/tests/models/dpr/test_modeling_dpr.py
@@ -188,7 +188,6 @@ class DPRModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_resize_embeddings = False
    test_missing_keys = False  # why?
    test_pruning = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = DPRModelTester(self)
diff --git a/tests/models/dpt/test_modeling_dpt.py b/tests/models/dpt/test_modeling_dpt.py
index 1d693e7f408c..10453e59c906 100644
--- a/tests/models/dpt/test_modeling_dpt.py
+++ b/tests/models/dpt/test_modeling_dpt.py
@@ -172,7 +172,6 @@ class DPTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True

    def setUp(self):
diff --git a/tests/models/dpt/test_modeling_dpt_auto_backbone.py b/tests/models/dpt/test_modeling_dpt_auto_backbone.py
index 165da4be6be5..c8fd288bfbf5 100644
--- a/tests/models/dpt/test_modeling_dpt_auto_backbone.py
+++ b/tests/models/dpt/test_modeling_dpt_auto_backbone.py
@@ -139,7 +139,6 @@ class DPTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True
    test_torch_exportable_strictly = get_torch_major_and_minor_version() != "2.7"
diff --git a/tests/models/dpt/test_modeling_dpt_hybrid.py b/tests/models/dpt/test_modeling_dpt_hybrid.py
index e7a184c400a7..3e10d0014ada 100644
--- a/tests/models/dpt/test_modeling_dpt_hybrid.py
+++ b/tests/models/dpt/test_modeling_dpt_hybrid.py
@@ -184,7 +184,6 @@ class DPTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True

    def setUp(self):
diff --git a/tests/models/edgetam/test_modeling_edgetam.py b/tests/models/edgetam/test_modeling_edgetam.py
index 701642a43d41..687a895cc939 100644
--- a/tests/models/edgetam/test_modeling_edgetam.py
+++ b/tests/models/edgetam/test_modeling_edgetam.py
@@ -237,7 +237,6 @@ class EdgeTamModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase)
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torchscript = False
    _is_composite = True
diff --git a/tests/models/efficientloftr/test_modeling_efficientloftr.py b/tests/models/efficientloftr/test_modeling_efficientloftr.py
index 4ea8a4d823c5..24716e924b7a 100644
--- a/tests/models/efficientloftr/test_modeling_efficientloftr.py
+++ b/tests/models/efficientloftr/test_modeling_efficientloftr.py
@@ -136,7 +136,6 @@ class EfficientLoFTRModelTest(ModelTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    has_attentions = True

    def setUp(self):
diff --git a/tests/models/efficientnet/test_modeling_efficientnet.py b/tests/models/efficientnet/test_modeling_efficientnet.py
index 5d7cc2ce5268..ac0e82d68719 100644
--- a/tests/models/efficientnet/test_modeling_efficientnet.py
+++ b/tests/models/efficientnet/test_modeling_efficientnet.py
@@ -137,7 +137,6 @@ class EfficientNetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Test
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    has_attentions = False
    test_torch_exportable = True
diff --git a/tests/models/electra/test_modeling_electra.py b/tests/models/electra/test_modeling_electra.py
index 3a1823cc8c01..8019b01a767d 100644
--- a/tests/models/electra/test_modeling_electra.py
+++ b/tests/models/electra/test_modeling_electra.py
@@ -532,7 +532,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids(
        with tempfile.TemporaryDirectory() as tmpdirname:
            model.save_pretrained(tmpdirname)

-            # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc
+            # Drop all keys except for the minimal set. Hard to manipulate with multimodal inputs, etc.
            inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]}

            # Ensure left padding, to adapt for some models
diff --git a/tests/models/emu3/test_modeling_emu3.py b/tests/models/emu3/test_modeling_emu3.py
index 1bef4585414d..946226eaf7f6 100644
--- a/tests/models/emu3/test_modeling_emu3.py
+++ b/tests/models/emu3/test_modeling_emu3.py
@@ -132,7 +132,6 @@ class Emu3Text2TextModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTe
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
@@ -321,7 +320,6 @@ class Emu3Vision2TextModelTest(ModelTesterMixin, GenerationTesterMixin, Pipeline
        else ()
    )
    pipeline_model_mapping = {}
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
diff --git a/tests/models/encodec/test_modeling_encodec.py b/tests/models/encodec/test_modeling_encodec.py
index 05e13f9482d9..21091c164c9d 100644
--- a/tests/models/encodec/test_modeling_encodec.py
+++ b/tests/models/encodec/test_modeling_encodec.py
@@ -49,9 +49,6 @@ def prepare_inputs_dict(
    decoder_input_ids=None,
    attention_mask=None,
    decoder_attention_mask=None,
-    head_mask=None,
-    decoder_head_mask=None,
-    cross_attn_head_mask=None,
):
    if input_ids is not None:
        encoder_dict = {"input_ids": input_ids}
@@ -149,7 +146,6 @@ class EncodecModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase)
    all_model_classes = (EncodecModel,) if is_torch_available() else ()
    is_encoder_decoder = True
    test_pruning = False
-    test_headmasking = False
    test_resize_embeddings = False
    pipeline_model_mapping = {"feature-extraction": EncodecModel} if is_torch_available() else {}
diff --git a/tests/models/encoder_decoder/test_modeling_encoder_decoder.py b/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
index a6b12a9f65ae..042bf5d79d43 100644
--- a/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
+++ b/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
@@ -1072,7 +1072,6 @@ def prepare_config_and_inputs(self):
            decoder_config,
            decoder_input_ids,
            decoder_input_mask,
-            decoder_head_mask,
            decoder_token_type_ids,
            decoder_sequence_labels,
            decoder_token_labels,
diff --git a/tests/models/eomt/test_modeling_eomt.py b/tests/models/eomt/test_modeling_eomt.py
index f0d4a7c1fa9e..367b29651f0d 100644
--- a/tests/models/eomt/test_modeling_eomt.py
+++ b/tests/models/eomt/test_modeling_eomt.py
@@ -106,7 +106,6 @@ class EomtForUniversalSegmentationTest(ModelTesterMixin, PipelineTesterMixin, un
    pipeline_model_mapping = {"image-segmentation": EomtForUniversalSegmentation} if is_torch_available() else {}
    is_encoder_decoder = False
    test_pruning = False
-    test_head_masking = False
    test_missing_keys = False
    test_torch_exportable = False
diff --git a/tests/models/ernie/test_modeling_ernie.py b/tests/models/ernie/test_modeling_ernie.py
index a500a32e3236..b38d7f8633eb 100644
--- a/tests/models/ernie/test_modeling_ernie.py
+++ b/tests/models/ernie/test_modeling_ernie.py
@@ -630,7 +630,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids(
        with tempfile.TemporaryDirectory() as tmpdirname:
            model.save_pretrained(tmpdirname)

-            # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc
+            # Drop all keys except for the minimal set. Hard to manipulate with multimodal inputs, etc.
            inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]}

            # Ensure left padding, to adapt for some models
diff --git a/tests/models/esm/test_modeling_esmfold.py b/tests/models/esm/test_modeling_esmfold.py
index b13e7fe58b1d..27f27105cc37 100644
--- a/tests/models/esm/test_modeling_esmfold.py
+++ b/tests/models/esm/test_modeling_esmfold.py
@@ -224,10 +224,6 @@ def test_head_pruning_save_load_from_config_init(self):
    def test_head_pruning_save_load_from_pretrained(self):
        pass

-    @unittest.skip(reason="ESMFold does not support head pruning.")
-    def test_headmasking(self):
-        pass
-
    @unittest.skip(reason="ESMFold does not output hidden states in the normal way.")
    def test_hidden_states_output(self):
        pass
diff --git a/tests/models/evolla/test_modeling_evolla.py b/tests/models/evolla/test_modeling_evolla.py
index b518c0db956d..78716cf36a33 100644
--- a/tests/models/evolla/test_modeling_evolla.py
+++ b/tests/models/evolla/test_modeling_evolla.py
@@ -201,7 +201,6 @@ class EvollaModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    all_model_classes = (EvollaModel, EvollaForProteinText2Text) if is_torch_available() else ()
    pipeline_model_mapping = {"feature-extraction": EvollaModel} if is_torch_available() else {}
    test_pruning = False
-    test_headmasking = False
    test_torchscript = False
    test_resize_embeddings = False
    maxDiff = None
diff --git a/tests/models/falcon_h1/test_modeling_falcon_h1.py b/tests/models/falcon_h1/test_modeling_falcon_h1.py
index 27eb8e32713b..84afa75ccc01 100644
--- a/tests/models/falcon_h1/test_modeling_falcon_h1.py
+++ b/tests/models/falcon_h1/test_modeling_falcon_h1.py
@@ -259,7 +259,6 @@ def create_and_check_decoder_model_past_large_inputs(

@require_torch
class FalconH1ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase):
    all_model_classes = (FalconH1Model, FalconH1ForCausalLM) if is_torch_available() else ()
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
diff --git a/tests/models/falcon_mamba/test_modeling_falcon_mamba.py b/tests/models/falcon_mamba/test_modeling_falcon_mamba.py
index 8543e7bf4147..7c0b6cf19aa9 100644
--- a/tests/models/falcon_mamba/test_modeling_falcon_mamba.py
+++ b/tests/models/falcon_mamba/test_modeling_falcon_mamba.py
@@ -274,7 +274,6 @@ class FalconMambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTest
    test_torchscript = False  # FIXME let's try to support this @ArthurZucker
    test_missing_keys = False
    test_pruning = False
-    test_head_masking = False  # FalconMamba does not have attention heads
    pipeline_model_mapping = (
        {"feature-extraction": FalconMambaModel, "text-generation": FalconMambaForCausalLM}
        if is_torch_available()
diff --git a/tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py b/tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py
index 6ee0015b9a65..6e7b567c1d91 100644
--- a/tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py
+++ b/tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py
@@ -126,7 +126,6 @@ def prepare_config_and_inputs_for_common(self):
class FastSpeech2ConformerModelTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (FastSpeech2ConformerModel,) if is_torch_available() else ()
    test_pruning = False
-    test_headmasking = False
    test_torchscript = False
    test_resize_embeddings = False
    is_encoder_decoder = True
@@ -563,7 +562,6 @@ def prepare_config_and_inputs_for_common(self):
class FastSpeech2ConformerWithHifiGanTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (FastSpeech2ConformerWithHifiGan,) if is_torch_available() else ()
    test_pruning = False
-    test_headmasking = False
    test_torchscript = False
    test_resize_embeddings = False
    is_encoder_decoder = True
diff --git a/tests/models/flava/test_modeling_flava.py b/tests/models/flava/test_modeling_flava.py
index 896ce256955a..5915d25ee39b 100644
--- a/tests/models/flava/test_modeling_flava.py
+++ b/tests/models/flava/test_modeling_flava.py
@@ -166,7 +166,6 @@ class FlavaImageModelTest(ModelTesterMixin, unittest.TestCase):
    test_pruning = False
    test_torchscript = False
    test_resize_embeddings = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = FlavaImageModelTester(self)
@@ -437,7 +436,6 @@ def prepare_config_and_inputs_for_common(self):
class FlavaTextModelTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (FlavaTextModel,) if is_torch_available() else ()
    test_pruning = False
-    test_head_masking = False
    test_torchscript = False

    def setUp(self):
@@ -575,7 +573,6 @@ def prepare_config_and_inputs_for_common(self):
class FlavaMultimodalModelTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (FlavaMultimodalModel,) if is_torch_available() else ()
    test_pruning = False
-    test_head_masking = False
    test_resize_embeddings = False
    test_torchscript = False
@@ -690,7 +687,6 @@ def prepare_config_and_inputs_for_common(self):
class FlavaImageCodebookTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (FlavaImageCodebook,) if is_torch_available() else ()
    test_pruning = False
-    test_head_masking = False
    test_resize_embeddings = False
    test_torchscript = False
    has_attentions = False
@@ -890,7 +886,6 @@ class FlavaModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    all_model_classes = (FlavaModel,) if is_torch_available() else ()
    pipeline_model_mapping = {"feature-extraction": FlavaModel} if is_torch_available() else {}
    class_for_tester = FlavaModelTester
-    test_head_masking = False
    test_pruning = False
    test_resize_embeddings = False
    test_attention_outputs = False
diff --git a/tests/models/florence2/test_modeling_florence2.py b/tests/models/florence2/test_modeling_florence2.py
index e191bf1032d6..1f1fba185aff 100644
--- a/tests/models/florence2/test_modeling_florence2.py
+++ b/tests/models/florence2/test_modeling_florence2.py
@@ -236,7 +236,6 @@ class Florence2ForConditionalGenerationModelTest(ModelTesterMixin, GenerationTes
        else {}
    )
    test_pruning = False
-    test_head_masking = False
    test_attention_outputs = False
    _is_composite = True
diff --git a/tests/models/fnet/test_modeling_fnet.py b/tests/models/fnet/test_modeling_fnet.py
index f2bc7b099437..d3f145285728 100644
--- a/tests/models/fnet/test_modeling_fnet.py
+++ b/tests/models/fnet/test_modeling_fnet.py
@@ -255,7 +255,6 @@ class FNetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):

    # Skip Tests
    test_pruning = False
-    test_head_masking = False

    # TODO: Fix the failed tests
    def is_pipeline_test_to_skip(
diff --git a/tests/models/focalnet/test_modeling_focalnet.py b/tests/models/focalnet/test_modeling_focalnet.py
index f4dac79f9ca0..f3ea480ed91d 100644
--- a/tests/models/focalnet/test_modeling_focalnet.py
+++ b/tests/models/focalnet/test_modeling_focalnet.py
@@ -245,7 +245,6 @@ class FocalNetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    has_attentions = False
    test_torch_exportable = True
diff --git a/tests/models/fsmt/test_modeling_fsmt.py b/tests/models/fsmt/test_modeling_fsmt.py
index df57cc1dba83..1ba95346b162 100644
--- a/tests/models/fsmt/test_modeling_fsmt.py
+++ b/tests/models/fsmt/test_modeling_fsmt.py
@@ -139,23 +139,12 @@ def prepare_fsmt_inputs_dict(
    config,
    input_ids,
    attention_mask=None,
-    head_mask=None,
-    decoder_head_mask=None,
-    cross_attn_head_mask=None,
):
    if attention_mask is None:
        attention_mask = input_ids.ne(config.pad_token_id)
-    if head_mask is None:
-        head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device)
-    if decoder_head_mask is None:
-        decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
-    if cross_attn_head_mask is None:
-        cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
-        "head_mask": head_mask,
-        "decoder_head_mask": decoder_head_mask,
    }
diff --git a/tests/models/funnel/test_modeling_funnel.py b/tests/models/funnel/test_modeling_funnel.py
index 3d28924bee1c..a55d2565e787 100644
--- a/tests/models/funnel/test_modeling_funnel.py
+++ b/tests/models/funnel/test_modeling_funnel.py
@@ -352,7 +352,6 @@ def prepare_config_and_inputs_for_common(self):

@require_torch
class FunnelModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
-    test_head_masking = False
    test_pruning = False
    all_model_classes = (
        (
@@ -431,7 +430,6 @@ def _mock_init_weights(self, module):

@require_torch
class FunnelBaseModelTest(ModelTesterMixin, unittest.TestCase):
-    test_head_masking = False
    test_pruning = False
    all_model_classes = (
        (FunnelBaseModel, FunnelForMultipleChoice, FunnelForSequenceClassification) if is_torch_available() else ()
diff --git a/tests/models/fuyu/test_modeling_fuyu.py b/tests/models/fuyu/test_modeling_fuyu.py
index 205057a4b447..224d8c46d7dc 100644
--- a/tests/models/fuyu/test_modeling_fuyu.py
+++ b/tests/models/fuyu/test_modeling_fuyu.py
@@ -169,7 +169,6 @@ class FuyuModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin
        {"text-generation": FuyuForCausalLM, "image-text-to-text": FuyuForCausalLM} if is_torch_available() else {}
    )
-    test_head_masking = False
    test_pruning = False
    test_cpu_offload = False
    test_disk_offload = False
diff --git a/tests/models/gemma3/test_modeling_gemma3.py b/tests/models/gemma3/test_modeling_gemma3.py
index 3745629ecd5e..27cfa83f7b33 100644
--- a/tests/models/gemma3/test_modeling_gemma3.py
+++ b/tests/models/gemma3/test_modeling_gemma3.py
@@ -267,7 +267,6 @@ class Gemma3Vision2TextModelTest(ModelTesterMixin, GenerationTesterMixin, unitte
        else ()
    )
    all_generative_model_classes = (Gemma3ForConditionalGeneration,) if is_torch_available() else ()
-    test_headmasking = False
    test_pruning = False
    test_missing_keys = False
    _is_stateful = True
diff --git a/tests/models/gemma3n/test_modeling_gemma3n.py b/tests/models/gemma3n/test_modeling_gemma3n.py
index f94347e362e8..103009e953fe 100644
--- a/tests/models/gemma3n/test_modeling_gemma3n.py
+++ b/tests/models/gemma3n/test_modeling_gemma3n.py
@@ -143,7 +143,6 @@ def prepare_config_and_inputs_for_common(self):
class Gemma3nAudioModelTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (Gemma3nAudioEncoder,) if is_torch_available() else ()
    test_pruning = False
-    test_head_masking = False
    test_missing_keys = False
    is_generative = False
    _is_stateful = True
@@ -668,7 +667,6 @@ def prepare_config_and_inputs_for_common(self):
class Gemma3nVision2TextModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
    all_model_classes = (Gemma3nModel, Gemma3nForConditionalGeneration) if is_torch_available() else ()
    all_generative_model_classes = (Gemma3nForConditionalGeneration,) if is_torch_available() else ()
-    test_headmasking = False
    test_pruning = False
    test_missing_keys = False
    _is_stateful = True
diff --git a/tests/models/git/test_modeling_git.py b/tests/models/git/test_modeling_git.py
index 931b3fcc8f07..3ade347881f4 100644
--- a/tests/models/git/test_modeling_git.py
+++ b/tests/models/git/test_modeling_git.py
@@ -128,7 +128,6 @@ class GitVisionModelTest(ModelTesterMixin, unittest.TestCase):
    fx_compatible = True
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = GitVisionModelTester(self)
diff --git a/tests/models/glm4v/test_modeling_glm4v.py b/tests/models/glm4v/test_modeling_glm4v.py
index 4059fe2f9e99..c16d951171cf 100644
--- a/tests/models/glm4v/test_modeling_glm4v.py
+++ b/tests/models/glm4v/test_modeling_glm4v.py
@@ -172,7 +172,6 @@ def prepare_config_and_inputs_for_common(self):
class Glm4vModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
    all_model_classes = (Glm4vModel, Glm4vForConditionalGeneration) if is_torch_available() else ()
    test_pruning = False
-    test_head_masking = False
    test_torchscript = False
    model_split_percents = [0.7, 0.9]  # model too big to split at 0.5
    _is_composite = True
@@ -191,9 +190,6 @@ def prepare_config_and_inputs_for_generate(self, batch_size=2):
        # We don't want a few model inputs in our model input dictionary for generation tests
        input_keys_to_ignore = [
            # we don't want to mask attention heads
-            "head_mask",
-            "decoder_head_mask",
-            "cross_attn_head_mask",
            # we don't want encoder-decoder models to start from filled decoder ids
            "decoder_input_ids",
            "decoder_attention_mask",
diff --git a/tests/models/glm4v_moe/test_modeling_glm4v_moe.py b/tests/models/glm4v_moe/test_modeling_glm4v_moe.py
index 1881fffa9dd9..ca5747425e7d 100644
--- a/tests/models/glm4v_moe/test_modeling_glm4v_moe.py
+++ b/tests/models/glm4v_moe/test_modeling_glm4v_moe.py
@@ -183,7 +183,6 @@ def prepare_config_and_inputs_for_common(self):
class Glm4vMoeModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
    all_model_classes = (Glm4vMoeModel, Glm4vMoeForConditionalGeneration) if is_torch_available() else ()
    test_pruning = False
-    test_head_masking = False
    test_torchscript = False
    model_split_percents = [0.7, 0.9]  # model too big to split at 0.5
    _is_composite = True
@@ -202,9 +201,6 @@ def prepare_config_and_inputs_for_generate(self, batch_size=2):
        # We don't want a few model inputs in our model input dictionary for generation tests
        input_keys_to_ignore = [
            # we don't want to mask attention heads
-            "head_mask",
-            "decoder_head_mask",
-            "cross_attn_head_mask",
            # we don't want encoder-decoder models to start from filled decoder ids
            "decoder_input_ids",
            "decoder_attention_mask",
diff --git a/tests/models/glpn/test_modeling_glpn.py b/tests/models/glpn/test_modeling_glpn.py
index b98743de3572..26236f209a26 100644
--- a/tests/models/glpn/test_modeling_glpn.py
+++ b/tests/models/glpn/test_modeling_glpn.py
@@ -148,7 +148,6 @@ class GLPNModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
        else {}
    )

-    test_head_masking = False
    test_pruning = False
    test_resize_embeddings = False
    test_torch_exportable = True
diff --git a/tests/models/got_ocr2/test_modeling_got_ocr2.py b/tests/models/got_ocr2/test_modeling_got_ocr2.py
index 3ece8d3aabaf..bfc1d6e26f41 100644
--- a/tests/models/got_ocr2/test_modeling_got_ocr2.py
+++ b/tests/models/got_ocr2/test_modeling_got_ocr2.py
@@ -153,7 +153,6 @@ class GotOcr2ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False

    def setUp(self):
diff --git a/tests/models/gpt2/test_modeling_gpt2.py b/tests/models/gpt2/test_modeling_gpt2.py
index 3c1d82d97805..89a4bf545310 100644
--- a/tests/models/gpt2/test_modeling_gpt2.py
+++ b/tests/models/gpt2/test_modeling_gpt2.py
@@ -70,12 +70,10 @@ def prepare_config_and_inputs(
        if extra_inputs:
            mc_token_ids = ids_tensor([self.batch_size, self.num_choices], self.seq_length)
-            head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)

        config_and_inputs = (
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            mc_token_ids,
            sequence_labels,
@@ -110,8 +108,8 @@ def get_config(self, scale_attn_by_inverse_layer_idx=False, reorder_and_upcast_a
    def prepare_config_and_inputs_for_common(self):
        # Overwritten: we want `token_type_ids` as part of the common inputs
        config_and_inputs = self.prepare_config_and_inputs(extra_inputs=True)
-        config, input_ids, _, head_mask, token_type_ids, _, _, _, _ = config_and_inputs
-        inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "head_mask": head_mask}
+        config, input_ids, _, token_type_ids, _, _, _, _ = config_and_inputs
+        inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids}
        return config, inputs_dict

    def prepare_config_and_inputs_for_decoder(self):
@@ -120,7 +118,6 @@ def prepare_config_and_inputs_for_decoder(self):
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            _,
            sequence_labels,
@@ -135,7 +132,6 @@
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            sequence_labels,
            token_labels,
@@ -202,7 +198,7 @@ def _prepare_for_class(self, inputs_dict, model_class, return_labels=False):
    def test_gpt2_double_lm_head_model(self):
        # extra test: model-specific class
        config_and_inputs = self.model_tester.prepare_config_and_inputs(extra_inputs=True)
-        config, input_ids, input_mask, _, token_type_ids, mc_token_ids, _, _, _ = config_and_inputs
+        config, input_ids, input_mask, token_type_ids, mc_token_ids, _, _, _ = config_and_inputs
        model = GPT2DoubleHeadsModel(config)
        model.to(torch_device)
        model.eval()
diff --git a/tests/models/gpt_neo/test_modeling_gpt_neo.py b/tests/models/gpt_neo/test_modeling_gpt_neo.py
index d39aee3445a0..c0bda02eb285 100644
--- a/tests/models/gpt_neo/test_modeling_gpt_neo.py
+++ b/tests/models/gpt_neo/test_modeling_gpt_neo.py
@@ -122,13 +122,10 @@ def prepare_config_and_inputs(self):
        config = self.get_config()

-        head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)
-
        return (
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            mc_token_ids,
            sequence_labels,
@@ -156,12 +153,12 @@ def get_pipeline_config(self):
        config.vocab_size = 300
        return config

-    def create_and_check_gpt_neo_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_gpt_neo_model(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTNeoModel(config=config)
        model.to(torch_device)
        model.eval()

-        result = model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask)
+        result = model(input_ids, token_type_ids=token_type_ids)
        result = model(input_ids, token_type_ids=token_type_ids)
        result = model(input_ids)
@@ -169,7 +166,7 @@ def create_and_check_gpt_neo_model(self, config, input_ids, input_mask, head_mas
        # past_key_values is not implemented
        # self.parent.assertEqual(len(result.past_key_values), config.n_layer)

-    def create_and_check_gpt_neo_model_past(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_gpt_neo_model_past(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTNeoModel(config=config)
        model.to(torch_device)
        model.eval()
@@ -205,9 +202,7 @@ def create_and_check_gpt_neo_model_past(self, config, input_ids, input_mask, hea
        # test that outputs are equal for slice
        self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))

-    def create_and_check_gpt_neo_model_attention_mask_past(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, *args
-    ):
+    def create_and_check_gpt_neo_model_attention_mask_past(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTNeoModel(config=config)
        model.to(torch_device)
        model.eval()
@@ -247,9 +242,7 @@ def create_and_check_gpt_neo_model_attention_mask_past(
        # test that outputs are equal for slice
        self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))

-    def create_and_check_gpt_neo_model_past_large_inputs(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, *args
-    ):
+    def create_and_check_gpt_neo_model_past_large_inputs(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTNeoModel(config=config)
        model.to(torch_device)
        model.eval()
@@ -285,7 +278,7 @@ def create_and_check_gpt_neo_model_past_large_inputs(
        # test that outputs are equal for slice
        self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))

-    def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_lm_head_model(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTNeoForCausalLM(config)
        model.to(torch_device)
        model.eval()
@@ -295,7 +288,7 @@ def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mas
        self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size))

    def create_and_check_gpt_neo_for_question_answering(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, *args
+        self, config, input_ids, input_mask, token_type_ids, mc_token_ids, sequence_labels, *args
    ):
        config.num_labels = self.num_labels
        model = GPTNeoForQuestionAnswering(config)
@@ -306,7 +299,7 @@ def create_and_check_gpt_neo_for_question_answering(
        self.parent.assertEqual(result.end_logits.shape, (self.batch_size, self.seq_length))

    def create_and_check_gpt_neo_for_sequence_classification(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, *args
+        self, config, input_ids, input_mask, token_type_ids, mc_token_ids, sequence_labels, *args
    ):
        config.num_labels = self.num_labels
        model = GPTNeoForSequenceClassification(config)
@@ -316,7 +309,7 @@ def create_and_check_gpt_neo_for_sequence_classification(
        self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_labels))

    def create_and_check_gpt_neo_for_token_classification(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, *args
+        self, config, input_ids, input_mask, token_type_ids, mc_token_ids, sequence_labels, *args
    ):
        config.num_labels = self.num_labels
        model = GPTNeoForTokenClassification(config)
@@ -326,7 +319,7 @@ def create_and_check_gpt_neo_for_token_classification(
        self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.num_labels))

    def create_and_check_forward_and_backwards(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, *args, gradient_checkpointing=False
+        self, config, input_ids, input_mask, token_type_ids, *args, gradient_checkpointing=False
    ):
        model = GPTNeoForCausalLM(config)
        if gradient_checkpointing:
@@ -345,7 +338,6 @@ def prepare_config_and_inputs_for_common(self):
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            mc_token_ids,
            sequence_labels,
@@ -356,7 +348,6 @@ def prepare_config_and_inputs_for_common(self):
        inputs_dict = {
            "input_ids": input_ids,
            "token_type_ids": token_type_ids,
-            "head_mask": head_mask,
        }

        return config, inputs_dict
diff --git a/tests/models/gpt_neox/test_modeling_gpt_neox.py b/tests/models/gpt_neox/test_modeling_gpt_neox.py
index 54282ab0ed60..5892ed450c00 100644
--- a/tests/models/gpt_neox/test_modeling_gpt_neox.py
+++ b/tests/models/gpt_neox/test_modeling_gpt_neox.py
@@ -287,7 +287,6 @@ class GPTNeoXModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi
    )
    test_pruning = False
    test_missing_keys = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = GPTNeoXModelTester(self)
diff --git a/tests/models/gpt_neox_japanese/test_modeling_gpt_neox_japanese.py b/tests/models/gpt_neox_japanese/test_modeling_gpt_neox_japanese.py
index 0b4e5684f7f5..42115201da03 100644
--- a/tests/models/gpt_neox_japanese/test_modeling_gpt_neox_japanese.py
+++ b/tests/models/gpt_neox_japanese/test_modeling_gpt_neox_japanese.py
@@ -204,7 +204,6 @@ class GPTNeoXModelJapaneseTest(ModelTesterMixin, GenerationTesterMixin, Pipeline
    )
    test_pruning = False
    test_missing_keys = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = GPTNeoXJapaneseModelTester(self)
diff --git a/tests/models/gptj/test_modeling_gptj.py b/tests/models/gptj/test_modeling_gptj.py
index 81f866ee4f57..bc834fcaf829 100644
--- a/tests/models/gptj/test_modeling_gptj.py
+++ b/tests/models/gptj/test_modeling_gptj.py
@@ -124,13 +124,10 @@ def prepare_config_and_inputs(self):
        config = self.get_config()

-        head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)
-
        return (
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            mc_token_ids,
            sequence_labels,
@@ -163,19 +160,19 @@ def get_pipeline_config(self):
        config.vocab_size = 300
        return config

-    def create_and_check_gptj_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_gptj_model(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTJModel(config=config)
        model.to(torch_device)
        model.eval()

-        result = model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask)
+        result = model(input_ids, token_type_ids=token_type_ids)
        result = model(input_ids, token_type_ids=token_type_ids)
        result = model(input_ids)

        self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size))
        self.parent.assertEqual(len(result.past_key_values), config.n_layer)

-    def create_and_check_gptj_model_past(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_gptj_model_past(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTJModel(config=config)
        model.to(torch_device)
        model.eval()
@@ -211,9 +208,7 @@ def create_and_check_gptj_model_past(self, config, input_ids, input_mask, head_m
        # test that outputs are equal for slice
        self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))

-    def create_and_check_gptj_model_attention_mask_past(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, *args
-    ):
+    def create_and_check_gptj_model_attention_mask_past(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTJModel(config=config)
        model.to(torch_device)
        model.eval()
@@ -253,9 +248,7 @@ def create_and_check_gptj_model_attention_mask_past(
        # test that outputs are equal for slice
        self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))

-    def create_and_check_gptj_model_past_large_inputs(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, *args
-    ):
+    def create_and_check_gptj_model_past_large_inputs(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTJModel(config=config)
        model.to(torch_device)
        model.eval()
@@ -291,7 +284,7 @@ def create_and_check_gptj_model_past_large_inputs(
        # test that outputs are equal for slice
        self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3))

-    def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_lm_head_model(self, config, input_ids, input_mask, token_type_ids, *args):
        model = GPTJForCausalLM(config)
        model.to(torch_device)
        model.eval()
@@ -301,7 +294,7 @@ def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mas
        self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size))

    def create_and_check_forward_and_backwards(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, *args, gradient_checkpointing=False
+        self, config, input_ids, input_mask, token_type_ids, *args, gradient_checkpointing=False
    ):
        model = GPTJForCausalLM(config)
        if gradient_checkpointing:
@@ -320,7 +313,6 @@ def prepare_config_and_inputs_for_common(self):
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            mc_token_ids,
            sequence_labels,
@@ -328,7 +320,7 @@ def prepare_config_and_inputs_for_common(self):
            choice_labels,
        ) = config_and_inputs

-        inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "head_mask": head_mask}
+        inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids}

        return config, inputs_dict
@@ -354,7 +346,6 @@ class GPTJModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin
    fx_compatible = True
    test_pruning = False
    test_missing_keys = False
-    test_head_masking = False

    def test_torch_fx(self):
        super().test_torch_fx()
diff --git a/tests/models/granite/test_modeling_granite.py b/tests/models/granite/test_modeling_granite.py
index 12100a8f3e6c..4e6a9cfc5ab8 100644
--- a/tests/models/granite/test_modeling_granite.py
+++ b/tests/models/granite/test_modeling_granite.py
@@ -179,7 +179,6 @@ class GraniteModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
diff --git a/tests/models/granite_speech/test_modeling_granite_speech.py b/tests/models/granite_speech/test_modeling_granite_speech.py
index adb925934548..87e23a81915e 100644
--- a/tests/models/granite_speech/test_modeling_granite_speech.py
+++ b/tests/models/granite_speech/test_modeling_granite_speech.py
@@ -220,7 +220,6 @@ class GraniteSpeechForConditionalGenerationModelTest(ModelTesterMixin, Generatio
    all_model_classes = (GraniteSpeechForConditionalGeneration,) if is_torch_available() else ()

    test_pruning = False
-    test_head_masking = False
    _is_composite = True

    def setUp(self):
diff --git a/tests/models/granitemoe/test_modeling_granitemoe.py b/tests/models/granitemoe/test_modeling_granitemoe.py
index 7a27474b1b49..da553d720bd0 100644
--- a/tests/models/granitemoe/test_modeling_granitemoe.py
+++ b/tests/models/granitemoe/test_modeling_granitemoe.py
@@ -178,7 +178,6 @@ class GraniteMoeModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.Test
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
diff --git a/tests/models/granitemoeshared/test_modeling_granitemoeshared.py b/tests/models/granitemoeshared/test_modeling_granitemoeshared.py
index 2186985408b4..4d3f8c4e45be 100644
--- a/tests/models/granitemoeshared/test_modeling_granitemoeshared.py
+++ b/tests/models/granitemoeshared/test_modeling_granitemoeshared.py
@@ -181,7 +181,6 @@ class GraniteMoeSharedModelTest(ModelTesterMixin, GenerationTesterMixin, unittes
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
diff --git a/tests/models/grounding_dino/test_modeling_grounding_dino.py b/tests/models/grounding_dino/test_modeling_grounding_dino.py
index 1821262b2dec..e5684462e449 100644
--- a/tests/models/grounding_dino/test_modeling_grounding_dino.py
+++ b/tests/models/grounding_dino/test_modeling_grounding_dino.py
@@ -249,7 +249,6 @@ class GroundingDinoModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Tes
    is_encoder_decoder = True
    test_torchscript = False
    test_pruning = False
-    test_head_masking = False
    test_missing_keys = False
    pipeline_model_mapping = (
        {"image-feature-extraction": GroundingDinoModel, "zero-shot-object-detection": GroundingDinoForObjectDetection}
diff --git a/tests/models/groupvit/test_modeling_groupvit.py b/tests/models/groupvit/test_modeling_groupvit.py
index a4d521ff2a7b..dc5238a9275a 100644
--- a/tests/models/groupvit/test_modeling_groupvit.py
+++ b/tests/models/groupvit/test_modeling_groupvit.py
@@ -146,7 +146,6 @@ class GroupViTVisionModelTest(ModelTesterMixin, unittest.TestCase):
    test_pruning = False
    test_torchscript = False
    test_resize_embeddings = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = GroupViTVisionModelTester(self)
@@ -431,7 +430,6 @@ def prepare_config_and_inputs_for_common(self):
class GroupViTTextModelTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (GroupViTTextModel,) if is_torch_available() else ()
    test_pruning = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = GroupViTTextModelTester(self)
@@ -530,7 +528,6 @@ def prepare_config_and_inputs_for_common(self):
class GroupViTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    all_model_classes = (GroupViTModel,) if is_torch_available() else ()
    pipeline_model_mapping = {"feature-extraction": GroupViTModel} if is_torch_available() else {}
-    test_head_masking = False
    test_pruning = False
    test_resize_embeddings = False
    test_attention_outputs = False
diff --git a/tests/models/hgnet_v2/test_modeling_hgnet_v2.py b/tests/models/hgnet_v2/test_modeling_hgnet_v2.py
index 403eb5a5c71f..d6e783677d49 100644
--- a/tests/models/hgnet_v2/test_modeling_hgnet_v2.py
+++ b/tests/models/hgnet_v2/test_modeling_hgnet_v2.py
@@ -182,7 +182,6 @@ class HGNetV2ForImageClassificationTest(ModelTesterMixin, PipelineTesterMixin, u
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True
    has_attentions = False
diff --git a/tests/models/hiera/test_modeling_hiera.py b/tests/models/hiera/test_modeling_hiera.py
index e4c43237584f..c2f933704c9d 100644
--- a/tests/models/hiera/test_modeling_hiera.py
+++ b/tests/models/hiera/test_modeling_hiera.py
@@ -247,7 +247,6 @@ class HieraModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True

    def setUp(self):
diff --git a/tests/models/hubert/test_modeling_hubert.py b/tests/models/hubert/test_modeling_hubert.py
index feec7a1de48d..79c4b4e02118 100644
--- a/tests/models/hubert/test_modeling_hubert.py
+++ b/tests/models/hubert/test_modeling_hubert.py
@@ -314,7 +314,6 @@ class HubertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    )
    fx_compatible = True
    test_pruning = False
-    test_headmasking = False

    def setUp(self):
        self.model_tester = HubertModelTester(self)
@@ -575,7 +574,6 @@ def test_model_from_pretrained(self):
class HubertRobustModelTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (HubertForCTC, HubertForSequenceClassification, HubertModel) if is_torch_available() else ()
    test_pruning = False
-    test_headmasking = False

    def setUp(self):
        self.model_tester = HubertModelTester(
diff --git a/tests/models/ibert/test_modeling_ibert.py b/tests/models/ibert/test_modeling_ibert.py
index 9065a7046b6d..b227f3a25147 100644
--- a/tests/models/ibert/test_modeling_ibert.py
+++ b/tests/models/ibert/test_modeling_ibert.py
@@ -226,7 +226,6 @@ def prepare_config_and_inputs_for_common(self):
class IBertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_torchscript = False
-    test_head_masking = False
    test_resize_embeddings = False

    all_model_classes = (
diff --git a/tests/models/idefics/test_modeling_idefics.py b/tests/models/idefics/test_modeling_idefics.py
index a517d69e18a6..3d724e7ba7d2 100644
--- a/tests/models/idefics/test_modeling_idefics.py
+++ b/tests/models/idefics/test_modeling_idefics.py
@@ -324,7 +324,6 @@ class IdeficsModelTest(ModelTesterMixin, PipelineTesterMixin, GenerationTesterMi
        else {}
    )
    test_pruning = False
-    test_headmasking = False
    test_torchscript = False

    def _prepare_for_class(self, inputs_dict, model_class, return_labels=False):
diff --git a/tests/models/idefics2/test_modeling_idefics2.py b/tests/models/idefics2/test_modeling_idefics2.py
index 6c1f1686515c..96ce8763bad7 100644
--- a/tests/models/idefics2/test_modeling_idefics2.py
+++ b/tests/models/idefics2/test_modeling_idefics2.py
@@ -180,7 +180,6 @@ class Idefics2ModelTest(ModelTesterMixin, unittest.TestCase):
    test_torchscript = False
    test_pruning = False
    test_resize_embeddings = True
-    test_head_masking = False
    _is_composite = True

    def setUp(self):
@@ -372,7 +371,6 @@ class Idefics2ForConditionalGenerationModelTest(GenerationTesterMixin, ModelTest
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = True
-    test_head_masking = False
    test_torchscript = False

    def setUp(self):
diff --git a/tests/models/idefics3/test_modeling_idefics3.py b/tests/models/idefics3/test_modeling_idefics3.py
index 73417318658b..83b0c2b5d052 100644
--- a/tests/models/idefics3/test_modeling_idefics3.py
+++ b/tests/models/idefics3/test_modeling_idefics3.py
@@ -170,7 +170,6 @@ class Idefics3ModelTest(ModelTesterMixin, unittest.TestCase):
    test_torchscript = False
    test_pruning = False
    test_resize_embeddings = True
-    test_head_masking = False

    def setUp(self):
        self.model_tester = Idefics3VisionText2TextModelTester(self)
@@ -337,7 +336,6 @@ class Idefics3ForConditionalGenerationModelTest(GenerationTesterMixin, ModelTest
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = True
-    test_head_masking = False
    test_torchscript = False

    def setUp(self):
diff --git a/tests/models/ijepa/test_modeling_ijepa.py b/tests/models/ijepa/test_modeling_ijepa.py
index cdfaa0ebca35..bb6c1db57797 100644
--- a/tests/models/ijepa/test_modeling_ijepa.py
+++ b/tests/models/ijepa/test_modeling_ijepa.py
@@ -205,7 +205,6 @@ class IJepaModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    test_torch_exportable = True

    def setUp(self):
diff --git a/tests/models/imagegpt/test_modeling_imagegpt.py b/tests/models/imagegpt/test_modeling_imagegpt.py
index 9a43671ad975..853e0bb8fd3e 100644
--- a/tests/models/imagegpt/test_modeling_imagegpt.py
+++ b/tests/models/imagegpt/test_modeling_imagegpt.py
@@ -127,13 +127,10 @@ def prepare_config_and_inputs(
            reorder_and_upcast_attn=reorder_and_upcast_attn,
        )

-        head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2)
-
        return (
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            mc_token_ids,
            sequence_labels,
@@ -168,19 +165,19 @@ def get_pipeline_config(self):
        config.max_position_embeddings = 1024
        return config

-    def create_and_check_imagegpt_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_imagegpt_model(self, config, input_ids, input_mask, token_type_ids, *args):
        model = ImageGPTModel(config=config)
        model.to(torch_device)
        model.eval()

-        result = model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask)
+        result = model(input_ids, token_type_ids=token_type_ids)
        result = model(input_ids, token_type_ids=token_type_ids)
        result = model(input_ids)

        self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size))
        self.parent.assertEqual(len(result.past_key_values), config.n_layer)

-    def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args):
+    def create_and_check_lm_head_model(self, config, input_ids, input_mask, token_type_ids, *args):
        model = ImageGPTForCausalImageModeling(config)
        model.to(torch_device)
        model.eval()
@@ -192,7 +189,7 @@ def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mas
        self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size - 1))

    def create_and_check_imagegpt_for_image_classification(
-        self, config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, *args
+        self, config, input_ids, input_mask, token_type_ids, mc_token_ids, sequence_labels, *args
    ):
        config.num_labels = self.num_labels
        model = ImageGPTForImageClassification(config)
@@ -208,7 +205,6 @@ def prepare_config_and_inputs_for_common(self):
            config,
            input_ids,
            input_mask,
-            head_mask,
            token_type_ids,
            mc_token_ids,
            sequence_labels,
@@ -219,7 +215,6 @@ def prepare_config_and_inputs_for_common(self):
        inputs_dict = {
            "input_ids": input_ids,
            "token_type_ids": token_type_ids,
-            "head_mask": head_mask,
        }

        return config, inputs_dict
diff --git a/tests/models/informer/test_modeling_informer.py b/tests/models/informer/test_modeling_informer.py
index 22e6217c72c1..0743b46dab12 100644
--- a/tests/models/informer/test_modeling_informer.py
+++ b/tests/models/informer/test_modeling_informer.py
@@ -195,7 +195,6 @@ class InformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase
    pipeline_model_mapping = {"feature-extraction": InformerModel} if is_torch_available() else {}
    is_encoder_decoder = True
    test_pruning = False
-    test_head_masking = False
    test_missing_keys = False
    test_torchscript = False
    test_inputs_embeds = False
@@ -343,9 +342,6 @@ def test_forward_signature(self):
                [
                    "future_observed_mask",
                    "decoder_attention_mask",
-                    "head_mask",
-                    "decoder_head_mask",
-                    "cross_attn_head_mask",
                    "encoder_outputs",
                    "past_key_values",
                    "output_hidden_states",
@@ -356,9 +352,6 @@ def test_forward_signature(self):
                if "future_observed_mask" in arg_names
                else [
                    "decoder_attention_mask",
-                    "head_mask",
-                    "decoder_head_mask",
-                    "cross_attn_head_mask",
                    "encoder_outputs",
                    "past_key_values",
                    "output_hidden_states",
diff --git a/tests/models/instructblip/test_modeling_instructblip.py b/tests/models/instructblip/test_modeling_instructblip.py
index 17a54da482a2..1d99067c376e 100644
--- a/tests/models/instructblip/test_modeling_instructblip.py
+++ b/tests/models/instructblip/test_modeling_instructblip.py
@@ -151,7 +151,6 @@ class InstructBlipVisionModelTest(ModelTesterMixin, unittest.TestCase):
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = InstructBlipVisionModelTester(self)
@@ -476,7 +475,6 @@ class InstructBlipForConditionalGenerationDecoderOnlyTest(ModelTesterMixin, Gene
    pipeline_model_mapping = {"image-text-to-text": InstructBlipForConditionalGeneration}
    additional_model_inputs = ["qformer_input_ids", "input_ids"]
    fx_compatible = False
-    test_head_masking = False
    test_pruning = False
    test_resize_embeddings = True
    test_attention_outputs = False
diff --git a/tests/models/instructblipvideo/test_modeling_instructblipvideo.py b/tests/models/instructblipvideo/test_modeling_instructblipvideo.py
index d6336c8c6840..bc231905715e 100644
--- a/tests/models/instructblipvideo/test_modeling_instructblipvideo.py
+++ b/tests/models/instructblipvideo/test_modeling_instructblipvideo.py
@@ -155,7 +155,6 @@ class InstructBlipVideoVisionModelTest(ModelTesterMixin, unittest.TestCase):
    fx_compatible = False
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False

    def setUp(self):
        self.model_tester = InstructBlipVideoVisionModelTester(self)
@@ -491,7 +490,6 @@ class InstructBlipVideoForConditionalGenerationDecoderOnlyTest(
    )
    additional_model_inputs = ["qformer_input_ids", "input_ids"]
    fx_compatible = False
-    test_head_masking = False
    test_pruning = False
    test_resize_embeddings = True
    test_attention_outputs = False
diff --git a/tests/models/internvl/test_modeling_internvl.py b/tests/models/internvl/test_modeling_internvl.py
index 8704fccb6a1c..69ed30495524 100644
--- a/tests/models/internvl/test_modeling_internvl.py
+++ b/tests/models/internvl/test_modeling_internvl.py
@@ -192,7 +192,6 @@ class InternVLModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterM
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False

    def setUp(self):
diff --git a/tests/models/jamba/test_modeling_jamba.py b/tests/models/jamba/test_modeling_jamba.py
index 65e6decf4b29..c00396c697fd 100644
--- a/tests/models/jamba/test_modeling_jamba.py
+++ b/tests/models/jamba/test_modeling_jamba.py
@@ -340,7 +340,6 @@ class JambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixi
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False

    def _check_past_key_values_for_generate(self, batch_size, decoder_past_key_values, cache_length, config):
diff --git a/tests/models/janus/test_modeling_janus.py b/tests/models/janus/test_modeling_janus.py
index 8c184531f33e..35a7b1c98431 100644
--- a/tests/models/janus/test_modeling_janus.py
+++ b/tests/models/janus/test_modeling_janus.py
@@ -195,7 +195,6 @@ class JanusVisionText2TextModelTest(ModelTesterMixin, GenerationTesterMixin, uni
    all_generative_model_classes = (JanusForConditionalGeneration,) if is_torch_available() else ()
    fx_compatible = False
    test_pruning = False
-    test_head_masking = False
    _is_composite = True

    def setUp(self):
@@ -354,7 +353,6 @@ def prepare_config_and_inputs_for_common(self):
@require_torch
class JanusVQModelTest(ModelTesterMixin, unittest.TestCase):
    all_model_classes = (JanusVQVAE,) if is_torch_available() else ()
-    test_head_masking = False
    test_pruning = False
    fx_compatible = False
    has_attentions = False
diff --git a/tests/models/kosmos2/test_modeling_kosmos2.py b/tests/models/kosmos2/test_modeling_kosmos2.py
index 38a769229952..48dae2ec820b 100644
--- a/tests/models/kosmos2/test_modeling_kosmos2.py
+++ b/tests/models/kosmos2/test_modeling_kosmos2.py
@@ -271,7 +271,6 @@ class Kosmos2ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi
        else {}
    )
    fx_compatible = False
-    test_head_masking = False
    test_pruning = False
    test_resize_embeddings = False
    test_attention_outputs = False
diff --git a/tests/models/kosmos2_5/test_modeling_kosmos2_5.py b/tests/models/kosmos2_5/test_modeling_kosmos2_5.py
index b3155915b03d..8a7811c766c1 100644
--- a/tests/models/kosmos2_5/test_modeling_kosmos2_5.py
+++ b/tests/models/kosmos2_5/test_modeling_kosmos2_5.py
@@ -302,7 +302,6 @@ class Kosmos2_5ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTester
        else {}
    )
    fx_compatible = False
-    test_head_masking = False
    test_pruning = False
    test_resize_embeddings = False
    test_attention_outputs = False
diff --git a/tests/models/kyutai_speech_to_text/test_modeling_kyutai_speech_to_text.py b/tests/models/kyutai_speech_to_text/test_modeling_kyutai_speech_to_text.py
index c7ab9f9dc6dd..1bc92b9afac5 100644
--- a/tests/models/kyutai_speech_to_text/test_modeling_kyutai_speech_to_text.py
+++ b/tests/models/kyutai_speech_to_text/test_modeling_kyutai_speech_to_text.py
@@ -253,7 +253,6 @@ class KyutaiSpeechToTextModelTest(ModelTesterMixin, GenerationTesterMixin, Pipel
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False  # Broken by attention refactor cc @Cyrilvallez
diff --git a/tests/models/led/test_modeling_led.py b/tests/models/led/test_modeling_led.py
index bb27f126d98c..a57f8d82e18c 100644
--- a/tests/models/led/test_modeling_led.py
+++ b/tests/models/led/test_modeling_led.py
@@ -55,28 +55,17 @@ def prepare_led_inputs_dict(
    decoder_input_ids,
    attention_mask=None,
    decoder_attention_mask=None,
-    head_mask=None,
-    decoder_head_mask=None,
-    cross_attn_head_mask=None,
):
    if attention_mask is None:
        attention_mask = input_ids.ne(config.pad_token_id)
    if decoder_attention_mask is None:
        decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id)
-    if head_mask is None:
-        head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device)
-    if decoder_head_mask is None:
-        decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
-    if cross_attn_head_mask is None:
-        cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
+
    return {
        "input_ids": input_ids,
        "decoder_input_ids": decoder_input_ids,
        "attention_mask": attention_mask,
        "decoder_attention_mask": decoder_attention_mask,
-        "head_mask": head_mask,
-        "decoder_head_mask": decoder_head_mask,
-        "cross_attn_head_mask": cross_attn_head_mask,
    }
@@ -184,10 +173,9 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict):
        model = LEDModel(config=config).get_decoder().to(torch_device).eval()
        input_ids = inputs_dict["input_ids"]
        attention_mask = inputs_dict["attention_mask"]
-        head_mask = inputs_dict["head_mask"]

        # first forward pass
-        outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True)
+        outputs = model(input_ids, attention_mask=attention_mask, use_cache=True)

        output, past_key_values = outputs.to_tuple()
diff --git a/tests/models/levit/test_modeling_levit.py b/tests/models/levit/test_modeling_levit.py
index 0f12d0b14e2d..c9fb6bc1802f 100644
--- a/tests/models/levit/test_modeling_levit.py
+++ b/tests/models/levit/test_modeling_levit.py
@@ -185,7 +185,6 @@ class LevitModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_torchscript = False
    test_resize_embeddings = False
-    test_head_masking = False
    has_attentions = False

    def setUp(self):
diff --git a/tests/models/lfm2_vl/test_modeling_lfm2_vl.py b/tests/models/lfm2_vl/test_modeling_lfm2_vl.py
index 42c732887af0..c9d39c3ef9ee 100644
--- a/tests/models/lfm2_vl/test_modeling_lfm2_vl.py
+++ b/tests/models/lfm2_vl/test_modeling_lfm2_vl.py
@@ -160,7 +160,6 @@ class Lfm2VlModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase
        if is_torch_available()
        else {}
    )
-    test_headmasking = False
    test_pruning = False
    fx_compatible = False
    model_tester_class = Lfm2VlModelTester
diff --git a/tests/models/lightglue/test_modeling_lightglue.py b/tests/models/lightglue/test_modeling_lightglue.py
index 9342b9a58fb8..5bc482a8780c 100644
--- a/tests/models/lightglue/test_modeling_lightglue.py
+++ b/tests/models/lightglue/test_modeling_lightglue.py
@@ -129,7 +129,6 @@ class LightGlueModelTest(ModelTesterMixin, unittest.TestCase):
    test_pruning = False
    test_resize_embeddings = False
-    test_head_masking = False
    has_attentions = True

    def setUp(self):
diff --git a/tests/models/llava/test_modeling_llava.py b/tests/models/llava/test_modeling_llava.py
index d1e599fa4e00..389d896147a7 100644
--- a/tests/models/llava/test_modeling_llava.py
+++ b/tests/models/llava/test_modeling_llava.py
@@ -184,7 +184,6 @@ class LlavaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTesterM
        else {}
    )
    test_pruning = False
-    test_head_masking = False
    _is_composite = True

    def setUp(self):
diff --git a/tests/models/llava_next/test_modeling_llava_next.py b/tests/models/llava_next/test_modeling_llava_next.py
index a476d34ffc39..bd119be91be7 100644
--- a/tests/models/llava_next/test_modeling_llava_next.py
+++ b/tests/models/llava_next/test_modeling_llava_next.py
@@ -193,7 +193,6 @@ class LlavaNextForConditionalGenerationModelTest(ModelTesterMixin, GenerationTes
    )
    pipeline_model_mapping = {"image-text-to-text": LlavaNextForConditionalGeneration} if is_torch_available() else {}
    test_pruning = False
-    test_head_masking = False
    _is_composite = True

    def setUp(self):
diff --git a/tests/models/llava_next_video/test_modeling_llava_next_video.py b/tests/models/llava_next_video/test_modeling_llava_next_video.py
index 332fdfa59e75..ed9cdb12ab01 100644
--- a/tests/models/llava_next_video/test_modeling_llava_next_video.py
+++ b/tests/models/llava_next_video/test_modeling_llava_next_video.py
@@ -206,7 +206,6 @@ class LlavaNextVideoForConditionalGenerationModelTest(ModelTesterMixin, Generati
        else ()
    )
    test_pruning = False
-    test_head_masking = False
    _is_composite = True

    def setUp(self):
diff --git a/tests/models/llava_onevision/test_modeling_llava_onevision.py b/tests/models/llava_onevision/test_modeling_llava_onevision.py
index e270220dc1a3..c8ce5c364daf 100644
--- a/tests/models/llava_onevision/test_modeling_llava_onevision.py
+++ b/tests/models/llava_onevision/test_modeling_llava_onevision.py
@@ -197,7 +197,6 @@ class LlavaOnevisionForConditionalGenerationModelTest(ModelTesterMixin, Generati
        {"image-text-to-text": LlavaOnevisionForConditionalGeneration} if is_torch_available() else {}
    )
    test_pruning = False
-    test_head_masking = False
    # MP works but offload doesn't work when the MultiheadAttention is offloaded
    # TODO: One potential solution would be to add to set preload_module_classes = ["Siglip2MultiheadAttentionPoolingHead"]
    # in the dispatch_model function
diff --git a/tests/models/luke/test_modeling_luke.py b/tests/models/luke/test_modeling_luke.py
index 5959b1877c82..e228aa9d3e25 100644
--- a/tests/models/luke/test_modeling_luke.py
+++ b/tests/models/luke/test_modeling_luke.py
@@ -616,7 +616,6 @@ class LukeModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    test_pruning = False
    test_torchscript = False
    test_resize_embeddings = True
-    test_head_masking = True

    # TODO: Fix the failed tests
    def is_pipeline_test_to_skip(
diff --git a/tests/models/lxmert/test_modeling_lxmert.py b/tests/models/lxmert/test_modeling_lxmert.py
index 3d9a88d561ce..61d8bf909303 100644
--- a/tests/models/lxmert/test_modeling_lxmert.py
+++ b/tests/models/lxmert/test_modeling_lxmert.py
@@ -530,7 +530,6 @@ class LxmertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    )

    fx_compatible = True
-    test_head_masking = False
    test_pruning = False
    test_torchscript = False
diff --git a/tests/models/mamba/test_modeling_mamba.py b/tests/models/mamba/test_modeling_mamba.py
index 6a77e0f9e866..a78a89868229 100644
--- a/tests/models/mamba/test_modeling_mamba.py
+++ b/tests/models/mamba/test_modeling_mamba.py
@@ -243,7 +243,6 @@ class MambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixi
    test_torchscript = False  # FIXME let's try to support this @ArthurZucker
    test_missing_keys = False
    test_pruning = False
-    test_head_masking = False  # Mamba does not have attention heads
    pipeline_model_mapping = (
        {"feature-extraction": MambaModel, "text-generation": MambaForCausalLM} if is_torch_available() else {}
    )
diff --git a/tests/models/mamba2/test_modeling_mamba2.py b/tests/models/mamba2/test_modeling_mamba2.py
index 603e30d6b076..03a91dcbe559 100644
--- a/tests/models/mamba2/test_modeling_mamba2.py
+++ b/tests/models/mamba2/test_modeling_mamba2.py
@@ -246,7 +246,6 @@ class Mamba2ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMix
    test_torchscript = False  # FIXME I think this should be doable @molbap @ArthurZucker
    test_missing_keys = False
    test_pruning = False
-    test_head_masking = False  # Mamba does not have attention heads
    pipeline_model_mapping = (
        {"feature-extraction": Mamba2Model, "text-generation": Mamba2ForCausalLM} if is_torch_available()
else {} diff --git a/tests/models/marian/test_modeling_marian.py b/tests/models/marian/test_modeling_marian.py index 3387e785ccbe..ab624d3b5264 100644 --- a/tests/models/marian/test_modeling_marian.py +++ b/tests/models/marian/test_modeling_marian.py @@ -58,28 +58,17 @@ def prepare_marian_inputs_dict( decoder_input_ids, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if attention_mask is None: attention_mask = input_ids.ne(config.pad_token_id) if decoder_attention_mask is None: decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id) - if head_mask is None: - head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device) - if decoder_head_mask is None: - decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) - if cross_attn_head_mask is None: - cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) + return { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, } @@ -163,10 +152,9 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict): model = MarianModel(config=config).get_decoder().to(torch_device).eval() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict["attention_mask"] - head_mask = inputs_dict["head_mask"] # first forward pass - outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True) + outputs = model(input_ids, attention_mask=attention_mask, use_cache=True) output, past_key_values = outputs.to_tuple() diff --git a/tests/models/mask2former/test_modeling_mask2former.py b/tests/models/mask2former/test_modeling_mask2former.py index 07a0744dd249..4f78399a2865 100644 --- a/tests/models/mask2former/test_modeling_mask2former.py +++ b/tests/models/mask2former/test_modeling_mask2former.py @@ -204,7 +204,6 @@ class Mask2FormerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC is_encoder_decoder = False test_pruning = False - test_head_masking = False test_missing_keys = False test_torch_exportable = True diff --git a/tests/models/maskformer/test_modeling_maskformer.py b/tests/models/maskformer/test_modeling_maskformer.py index 0501df3b9409..42e565a9e16a 100644 --- a/tests/models/maskformer/test_modeling_maskformer.py +++ b/tests/models/maskformer/test_modeling_maskformer.py @@ -206,7 +206,6 @@ class MaskFormerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCa is_encoder_decoder = False test_pruning = False - test_head_masking = False test_missing_keys = False zero_init_hidden_state = True test_torch_exportable = True diff --git a/tests/models/maskformer/test_modeling_maskformer_swin.py b/tests/models/maskformer/test_modeling_maskformer_swin.py index 978596bf6aba..f1b1c9747438 100644 --- a/tests/models/maskformer/test_modeling_maskformer_swin.py +++ b/tests/models/maskformer/test_modeling_maskformer_swin.py @@ -178,7 +178,6 @@ class MaskFormerSwinModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Te test_torchscript = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/megatron_bert/test_modeling_megatron_bert.py 
b/tests/models/megatron_bert/test_modeling_megatron_bert.py index f8cb9088a655..f795afab605c 100644 --- a/tests/models/megatron_bert/test_modeling_megatron_bert.py +++ b/tests/models/megatron_bert/test_modeling_megatron_bert.py @@ -290,7 +290,6 @@ class MegatronBertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Test ) fx_compatible = True # test_resize_embeddings = False - test_head_masking = False # special case for ForPreTraining model def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): diff --git a/tests/models/metaclip_2/test_modeling_metaclip_2.py b/tests/models/metaclip_2/test_modeling_metaclip_2.py index 19823ba4ac73..b6be33fa8e8b 100644 --- a/tests/models/metaclip_2/test_modeling_metaclip_2.py +++ b/tests/models/metaclip_2/test_modeling_metaclip_2.py @@ -212,7 +212,6 @@ class MetaClip2VisionModelTest(MetaClip2ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = MetaClip2VisionModelTester(self) @@ -407,7 +406,6 @@ class MetaClip2TextModelTest(MetaClip2ModelTesterMixin, unittest.TestCase): all_model_classes = (MetaClip2TextModel, MetaClip2TextModelWithProjection) if is_torch_available() else () fx_compatible = False test_pruning = False - test_head_masking = False model_split_percents = [0.5, 0.8, 0.9] def setUp(self): @@ -539,7 +537,6 @@ class MetaClip2ModelTest(MetaClip2ModelTesterMixin, PipelineTesterMixin, unittes ) additional_model_inputs = ["pixel_values"] fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False @@ -736,7 +733,6 @@ class MetaClip2ForImageClassificationModelTest(MetaClip2ModelTesterMixin, Pipeli all_model_classes = (MetaClip2ForImageClassification,) if is_torch_available() else () pipeline_model_mapping = {"image-classification": MetaClip2ForImageClassification} if is_torch_available() else {} fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/mgp_str/test_modeling_mgp_str.py b/tests/models/mgp_str/test_modeling_mgp_str.py index 1ff9927f89ed..ecb84f144932 100644 --- a/tests/models/mgp_str/test_modeling_mgp_str.py +++ b/tests/models/mgp_str/test_modeling_mgp_str.py @@ -126,7 +126,6 @@ class MgpstrModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_attention_outputs = False def setUp(self): diff --git a/tests/models/mimi/test_modeling_mimi.py b/tests/models/mimi/test_modeling_mimi.py index 33ba9fe17744..54ce4763af5f 100644 --- a/tests/models/mimi/test_modeling_mimi.py +++ b/tests/models/mimi/test_modeling_mimi.py @@ -52,9 +52,6 @@ def prepare_inputs_dict( decoder_input_ids=None, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if input_ids is not None: encoder_dict = {"input_ids": input_ids} @@ -167,7 +164,6 @@ class MimiModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (MimiModel,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False - test_headmasking = False test_resize_embeddings = False test_torchscript = False diff --git a/tests/models/ministral/test_modeling_ministral.py b/tests/models/ministral/test_modeling_ministral.py index 32c7ef206f14..1ff15799afff 100644 --- 
a/tests/models/ministral/test_modeling_ministral.py +++ b/tests/models/ministral/test_modeling_ministral.py @@ -157,7 +157,6 @@ def test_model_8b_long_prompt(self): assistant_model.generation_config.num_assistant_tokens = 2 assistant_model.generation_config.num_assistant_tokens_schedule = "constant" generated_ids = model.generate(input_ids, max_new_tokens=4, temperature=0) - print(generated_ids[0][-2:].tolist()) self.assertEqual(EXPECTED_OUTPUT_TOKEN_IDS, generated_ids[0][-2:].tolist()) del assistant_model @@ -247,7 +246,6 @@ def test_past_sliding_window_generation(self): input_length = inputs.input_ids.shape[1] # around 33k tokens > 32k sliding window outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False) output_text = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True) - print(output_text) self.assertEqual( output_text, " H. Gammarus lives on the continental shelf at depths of 0 – 150 metres ( 0 – 492 ft ) , although not normally deeper than 50 m ( 160 ft ) .", diff --git a/tests/models/mistral3/test_modeling_mistral3.py b/tests/models/mistral3/test_modeling_mistral3.py index ab07dbdf7d9f..5cd0c023e165 100644 --- a/tests/models/mistral3/test_modeling_mistral3.py +++ b/tests/models/mistral3/test_modeling_mistral3.py @@ -176,7 +176,6 @@ class Mistral3ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterM else {} ) _is_composite = True - test_headmasking = False test_pruning = False def setUp(self): diff --git a/tests/models/mlcd/test_modeling_mlcd.py b/tests/models/mlcd/test_modeling_mlcd.py index 9f864ebaf234..81fe4df9051b 100644 --- a/tests/models/mlcd/test_modeling_mlcd.py +++ b/tests/models/mlcd/test_modeling_mlcd.py @@ -124,7 +124,6 @@ class MLCDVisionModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (MLCDVisionModel,) if is_torch_available() else () test_pruning = False - test_head_masking = False test_torchscript = False test_resize_embeddings = False test_torch_exportable = True diff --git a/tests/models/mllama/test_modeling_mllama.py b/tests/models/mllama/test_modeling_mllama.py index 2330684d0d71..cec10deb6ee6 100644 --- a/tests/models/mllama/test_modeling_mllama.py +++ b/tests/models/mllama/test_modeling_mllama.py @@ -126,7 +126,6 @@ class MllamaForCausalLMModelTest(ModelTesterMixin, GenerationTesterMixin, unitte all_model_classes = (MllamaForCausalLM,) if is_torch_available() else () test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = MllamaText2TextModelTester(self) @@ -281,7 +280,6 @@ class MllamaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTester ) pipeline_model_mapping = {"image-text-to-text": MllamaForConditionalGeneration} if is_torch_available() else () test_pruning = False - test_head_masking = False test_torchscript = False _is_composite = True diff --git a/tests/models/mm_grounding_dino/test_modeling_mm_grounding_dino.py b/tests/models/mm_grounding_dino/test_modeling_mm_grounding_dino.py index 22c8f939f704..fed96e23384b 100644 --- a/tests/models/mm_grounding_dino/test_modeling_mm_grounding_dino.py +++ b/tests/models/mm_grounding_dino/test_modeling_mm_grounding_dino.py @@ -251,7 +251,6 @@ class MMGroundingDinoModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.T is_encoder_decoder = True test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False pipeline_model_mapping = ( { diff --git a/tests/models/mobilenet_v1/test_modeling_mobilenet_v1.py b/tests/models/mobilenet_v1/test_modeling_mobilenet_v1.py index 
41a0bdb7e5d1..8f144b95bec1 100644 --- a/tests/models/mobilenet_v1/test_modeling_mobilenet_v1.py +++ b/tests/models/mobilenet_v1/test_modeling_mobilenet_v1.py @@ -152,7 +152,6 @@ class MobileNetV1ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/mobilenet_v2/test_modeling_mobilenet_v2.py b/tests/models/mobilenet_v2/test_modeling_mobilenet_v2.py index 2abcf6aa8f87..9d483ef2f001 100644 --- a/tests/models/mobilenet_v2/test_modeling_mobilenet_v2.py +++ b/tests/models/mobilenet_v2/test_modeling_mobilenet_v2.py @@ -203,7 +203,6 @@ class MobileNetV2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/mobilevit/test_modeling_mobilevit.py b/tests/models/mobilevit/test_modeling_mobilevit.py index 92a2ad87f01c..81e060b923d9 100644 --- a/tests/models/mobilevit/test_modeling_mobilevit.py +++ b/tests/models/mobilevit/test_modeling_mobilevit.py @@ -196,7 +196,6 @@ class MobileViTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCas test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/mobilevitv2/test_modeling_mobilevitv2.py b/tests/models/mobilevitv2/test_modeling_mobilevitv2.py index c6ae351c4858..1f5c8bf2d607 100644 --- a/tests/models/mobilevitv2/test_modeling_mobilevitv2.py +++ b/tests/models/mobilevitv2/test_modeling_mobilevitv2.py @@ -205,7 +205,6 @@ class MobileViTV2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/modernbert/test_modeling_modernbert.py b/tests/models/modernbert/test_modeling_modernbert.py index 4787ad8b8535..2e2fbe77417e 100644 --- a/tests/models/modernbert/test_modeling_modernbert.py +++ b/tests/models/modernbert/test_modeling_modernbert.py @@ -263,7 +263,6 @@ class ModernBertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCa else {} ) fx_compatible = False - test_head_masking = False test_pruning = False model_split_percents = [0.5, 0.8, 0.9] diff --git a/tests/models/moonshine/test_modeling_moonshine.py b/tests/models/moonshine/test_modeling_moonshine.py index 1924be5b0713..ee30b99b011f 100644 --- a/tests/models/moonshine/test_modeling_moonshine.py +++ b/tests/models/moonshine/test_modeling_moonshine.py @@ -144,7 +144,6 @@ class MoonshineModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCas else {} ) test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = MoonshineModelTester(self) diff --git a/tests/models/moshi/test_modeling_moshi.py b/tests/models/moshi/test_modeling_moshi.py index d4815a140d69..a724711e1c22 100644 --- a/tests/models/moshi/test_modeling_moshi.py +++ b/tests/models/moshi/test_modeling_moshi.py @@ -157,7 +157,6 @@ class MoshiDecoderTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi all_model_classes = (MoshiModel, MoshiForCausalLM) if is_torch_available() else () test_pruning = False test_resize_embeddings = True - test_head_masking = False pipeline_model_mapping = ( { "feature-extraction": MoshiModel, @@ -533,7 +532,6 @@ def prepare_config_and_inputs_for_common(self, 
batch_size=None): class MoshiTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase): all_model_classes = (MoshiForConditionalGeneration,) if is_torch_available() else () test_pruning = False # training is not supported yet for Moshi - test_headmasking = False test_resize_embeddings = False test_torchscript = False diff --git a/tests/models/mpt/test_modeling_mpt.py b/tests/models/mpt/test_modeling_mpt.py index 39da9e27d32e..7b39a6eb7970 100644 --- a/tests/models/mpt/test_modeling_mpt.py +++ b/tests/models/mpt/test_modeling_mpt.py @@ -357,7 +357,6 @@ class MptModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, test_missing_keys = False test_pruning = False test_torchscript = False - test_head_masking = False pipeline_model_mapping = ( { "feature-extraction": MptModel, diff --git a/tests/models/mra/test_modeling_mra.py b/tests/models/mra/test_modeling_mra.py index ccb725baa78f..12b7725e6129 100644 --- a/tests/models/mra/test_modeling_mra.py +++ b/tests/models/mra/test_modeling_mra.py @@ -261,7 +261,6 @@ class MraModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else () ) test_pruning = False - test_headmasking = False test_torchscript = False has_attentions = False diff --git a/tests/models/musicgen/test_modeling_musicgen.py b/tests/models/musicgen/test_modeling_musicgen.py index b05eb1a91236..25726fabeecd 100644 --- a/tests/models/musicgen/test_modeling_musicgen.py +++ b/tests/models/musicgen/test_modeling_musicgen.py @@ -573,7 +573,6 @@ class MusicgenTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, # Addition keys that are required for forward. MusicGen isn't encoder-decoder in config so we have to pass decoder ids as additional additional_model_inputs = ["decoder_input_ids"] test_pruning = False # training is not supported yet for MusicGen - test_headmasking = False test_resize_embeddings = False # not to test torchscript as the model tester doesn't prepare `input_values` and `padding_mask` # (and `torchscript` hates `None` values). @@ -756,11 +755,7 @@ def test_forward_signature(self): "decoder_input_ids", "decoder_attention_mask", ] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names - else ["encoder_outputs"] - ) + expected_arg_names.extend(["encoder_outputs"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) # override since changing `gradient_checkpointing` from the top-level model config won't work diff --git a/tests/models/musicgen_melody/test_modeling_musicgen_melody.py b/tests/models/musicgen_melody/test_modeling_musicgen_melody.py index f8e1a0969e92..8700e06388a7 100644 --- a/tests/models/musicgen_melody/test_modeling_musicgen_melody.py +++ b/tests/models/musicgen_melody/test_modeling_musicgen_melody.py @@ -594,7 +594,6 @@ class MusicgenMelodyTest(ModelTesterMixin, GenerationTesterMixin, PipelineTester # Addition keys that are required for forward. MusicGen isn't encoder-decoder in config so we have to pass decoder ids as additional additional_model_inputs = ["decoder_input_ids"] test_pruning = False # training is not supported yet for MusicGen - test_headmasking = False test_resize_embeddings = False # not to test torchscript as the model tester doesn't prepare `input_features` and `padding_mask` # (and `torchscript` hates `None` values). 
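One detail worth noting about the condition deleted in the musicgen hunk above: `"head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names` never checked all three names. `in` binds tighter than `and`, and non-empty string literals are truthy, so only the last membership test was ever evaluated. A standalone illustration:

```python
# Why the deleted condition was misleading: only the final `in` test counts.
arg_names = ["input_ids", "cross_attn_head_mask"]  # "head_mask" is absent

buggy = "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names
intended = all(k in arg_names for k in ("head_mask", "decoder_head_mask", "cross_attn_head_mask"))

print(buggy)     # True  -- the first two strings are just truthy operands
print(intended)  # False -- the check the test presumably meant
```

Dropping the head-mask arguments outright, as this patch does, removes the question entirely; the musicgen_melody hunk just below simplifies the same pattern.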
@@ -760,8 +759,6 @@ def test_forward_signature(self): "decoder_input_ids", "decoder_attention_mask", ] - if "head_mask" and "decoder_head_mask" in arg_names: - expected_arg_names.extend(["head_mask", "decoder_head_mask"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) diff --git a/tests/models/mvp/test_modeling_mvp.py b/tests/models/mvp/test_modeling_mvp.py index f44f9ef87256..cfc865389843 100644 --- a/tests/models/mvp/test_modeling_mvp.py +++ b/tests/models/mvp/test_modeling_mvp.py @@ -56,28 +56,17 @@ def prepare_mvp_inputs_dict( decoder_input_ids=None, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if attention_mask is None: attention_mask = input_ids.ne(config.pad_token_id) if decoder_attention_mask is None: decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id) - if head_mask is None: - head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device) - if decoder_head_mask is None: - decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) - if cross_attn_head_mask is None: - cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) + return { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, } @@ -165,10 +154,9 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict): model = MvpModel(config=config).get_decoder().to(torch_device).eval() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict["attention_mask"] - head_mask = inputs_dict["head_mask"] # first forward pass - outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True) + outputs = model(input_ids, attention_mask=attention_mask, use_cache=True) output, past_key_values = outputs.to_tuple() diff --git a/tests/models/nllb_moe/test_modeling_nllb_moe.py b/tests/models/nllb_moe/test_modeling_nllb_moe.py index 37fdd51e8478..d25e06510953 100644 --- a/tests/models/nllb_moe/test_modeling_nllb_moe.py +++ b/tests/models/nllb_moe/test_modeling_nllb_moe.py @@ -101,30 +101,16 @@ def prepare_nllb_moe_inputs_dict( decoder_input_ids, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if attention_mask is None: attention_mask = input_ids.ne(config.pad_token_id) if decoder_attention_mask is None: decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id) - if head_mask is None: - head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device) - if decoder_head_mask is None: - decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) - if cross_attn_head_mask is None: - cross_attn_head_mask = torch.ones( - config.decoder_layers, config.decoder_attention_heads, device=torch_device - ) return { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, } def prepare_config_and_inputs(self): @@ -180,10 +166,9 @@ def 
create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict): model = NllbMoeModel(config=config).get_decoder().to(torch_device).eval() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict["attention_mask"] - head_mask = inputs_dict["head_mask"] # first forward pass - outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True) + outputs = model(input_ids, attention_mask=attention_mask, use_cache=True) output, past_key_values = outputs.to_tuple() diff --git a/tests/models/nystromformer/test_modeling_nystromformer.py b/tests/models/nystromformer/test_modeling_nystromformer.py index 7966b03c0f60..18214582962e 100644 --- a/tests/models/nystromformer/test_modeling_nystromformer.py +++ b/tests/models/nystromformer/test_modeling_nystromformer.py @@ -240,7 +240,6 @@ class NystromformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Tes else {} ) test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = NystromformerModelTester(self) diff --git a/tests/models/olmo/test_modeling_olmo.py b/tests/models/olmo/test_modeling_olmo.py index d8b26fe02228..2631823ba2f8 100644 --- a/tests/models/olmo/test_modeling_olmo.py +++ b/tests/models/olmo/test_modeling_olmo.py @@ -190,10 +190,6 @@ def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) - @unittest.skip(reason="OLMo does not support head pruning.") - def test_headmasking(self): - pass - def test_model_various_embeddings(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() for type in ["absolute", "relative_key", "relative_key_query"]: diff --git a/tests/models/olmo2/test_modeling_olmo2.py b/tests/models/olmo2/test_modeling_olmo2.py index eddf63ae1e05..f90e45cdc858 100644 --- a/tests/models/olmo2/test_modeling_olmo2.py +++ b/tests/models/olmo2/test_modeling_olmo2.py @@ -191,10 +191,6 @@ def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) - @unittest.skip(reason="OLMo2 does not support head pruning.") - def test_headmasking(self): - pass - def test_model_various_embeddings(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() for type in ["absolute", "relative_key", "relative_key_query"]: diff --git a/tests/models/olmoe/test_modeling_olmoe.py b/tests/models/olmoe/test_modeling_olmoe.py index e9d6bb8df8ba..ad02154567c2 100644 --- a/tests/models/olmoe/test_modeling_olmoe.py +++ b/tests/models/olmoe/test_modeling_olmoe.py @@ -202,10 +202,6 @@ def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) - @unittest.skip(reason="OLMoE does not support head pruning.") - def test_headmasking(self): - pass - def test_model_various_embeddings(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() for type in ["absolute", "relative_key", "relative_key_query"]: diff --git a/tests/models/omdet_turbo/test_modeling_omdet_turbo.py b/tests/models/omdet_turbo/test_modeling_omdet_turbo.py index 224ebd1c6cee..a69ef2b3d6fe 100644 --- a/tests/models/omdet_turbo/test_modeling_omdet_turbo.py +++ b/tests/models/omdet_turbo/test_modeling_omdet_turbo.py @@ -196,7 +196,6 @@ class OmDetTurboModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCa all_model_classes = (OmDetTurboForObjectDetection,) if is_torch_available() else () is_encoder_decoder 
= True test_pruning = False - test_head_masking = False pipeline_model_mapping = ( {"zero-shot-object-detection": OmDetTurboForObjectDetection} if is_torch_available() else {} ) diff --git a/tests/models/oneformer/test_modeling_oneformer.py b/tests/models/oneformer/test_modeling_oneformer.py index 5269b1d155cf..4ae6bba12033 100644 --- a/tests/models/oneformer/test_modeling_oneformer.py +++ b/tests/models/oneformer/test_modeling_oneformer.py @@ -234,7 +234,6 @@ class OneFormerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCas is_encoder_decoder = False test_pruning = False - test_head_masking = False test_missing_keys = False # TODO: Fix the failed tests when this model gets more usage diff --git a/tests/models/openai/test_modeling_openai.py b/tests/models/openai/test_modeling_openai.py index bba4ad8660fb..c27f35656d7e 100644 --- a/tests/models/openai/test_modeling_openai.py +++ b/tests/models/openai/test_modeling_openai.py @@ -114,30 +114,27 @@ def prepare_config_and_inputs(self): pad_token_id=self.pad_token_id, ) - head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2) - return ( config, input_ids, - head_mask, token_type_ids, sequence_labels, token_labels, choice_labels, ) - def create_and_check_openai_gpt_model(self, config, input_ids, head_mask, token_type_ids, *args): + def create_and_check_openai_gpt_model(self, config, input_ids, token_type_ids, *args): model = OpenAIGPTModel(config=config) model.to(torch_device) model.eval() - result = model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask) + result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) - def create_and_check_lm_head_model(self, config, input_ids, head_mask, token_type_ids, *args): + def create_and_check_lm_head_model(self, config, input_ids, token_type_ids, *args): model = OpenAIGPTLMHeadModel(config) model.to(torch_device) model.eval() @@ -146,7 +143,7 @@ def create_and_check_lm_head_model(self, config, input_ids, head_mask, token_typ self.parent.assertEqual(result.loss.shape, ()) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) - def create_and_check_double_lm_head_model(self, config, input_ids, head_mask, token_type_ids, *args): + def create_and_check_double_lm_head_model(self, config, input_ids, token_type_ids, *args): model = OpenAIGPTDoubleHeadsModel(config) model.to(torch_device) model.eval() @@ -155,9 +152,7 @@ def create_and_check_double_lm_head_model(self, config, input_ids, head_mask, to self.parent.assertEqual(result.loss.shape, ()) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) - def create_and_check_openai_gpt_for_sequence_classification( - self, config, input_ids, head_mask, token_type_ids, *args - ): + def create_and_check_openai_gpt_for_sequence_classification(self, config, input_ids, token_type_ids, *args): config.num_labels = self.num_labels model = OpenAIGPTForSequenceClassification(config) model.to(torch_device) @@ -172,7 +167,6 @@ def prepare_config_and_inputs_for_common(self): ( config, input_ids, - head_mask, token_type_ids, sequence_labels, token_labels, @@ -181,7 +175,6 @@ def prepare_config_and_inputs_for_common(self): inputs_dict = { "input_ids": input_ids, "token_type_ids": token_type_ids, - "head_mask": head_mask, } return config, inputs_dict diff 
--git a/tests/models/opt/test_modeling_opt.py b/tests/models/opt/test_modeling_opt.py index 331d1aba498b..d4497559125e 100644 --- a/tests/models/opt/test_modeling_opt.py +++ b/tests/models/opt/test_modeling_opt.py @@ -218,7 +218,6 @@ class OPTModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, fx_compatible = False # Broken by attention refactor cc @Cyrilvallez test_pruning = False test_missing_keys = False - test_head_masking = False # new attn API doesn't support head mask # TODO: Fix the failed tests def is_pipeline_test_to_skip( diff --git a/tests/models/ovis2/test_modeling_ovis2.py b/tests/models/ovis2/test_modeling_ovis2.py index 1d96b3b6b070..9fabd8819430 100644 --- a/tests/models/ovis2/test_modeling_ovis2.py +++ b/tests/models/ovis2/test_modeling_ovis2.py @@ -177,7 +177,6 @@ class Ovis2ModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase) _is_composite = True test_pruning = False test_torchscript = False - test_head_masking = False def setUp(self): self.model_tester = Ovis2VisionText2TextModelTester(self) diff --git a/tests/models/owlv2/test_modeling_owlv2.py b/tests/models/owlv2/test_modeling_owlv2.py index a5e57f5572a1..ba0d041801de 100644 --- a/tests/models/owlv2/test_modeling_owlv2.py +++ b/tests/models/owlv2/test_modeling_owlv2.py @@ -146,7 +146,6 @@ class Owlv2VisionModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = Owlv2VisionModelTester(self) @@ -309,7 +308,6 @@ class Owlv2TextModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (Owlv2TextModel,) if is_torch_available() else () fx_compatible = False test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = Owlv2TextModelTester(self) @@ -427,7 +425,6 @@ class Owlv2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else {} ) fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False @@ -644,7 +641,6 @@ def prepare_config_and_inputs_for_common(self): class Owlv2ForObjectDetectionTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (Owlv2ForObjectDetection,) if is_torch_available() else () fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/owlvit/test_modeling_owlvit.py b/tests/models/owlvit/test_modeling_owlvit.py index 005236564791..fa87b57072c6 100644 --- a/tests/models/owlvit/test_modeling_owlvit.py +++ b/tests/models/owlvit/test_modeling_owlvit.py @@ -144,7 +144,6 @@ class OwlViTVisionModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = OwlViTVisionModelTester(self) @@ -305,7 +304,6 @@ class OwlViTTextModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (OwlViTTextModel,) if is_torch_available() else () fx_compatible = False test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = OwlViTTextModelTester(self) @@ -422,7 +420,6 @@ class OwlViTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else {} ) fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False @@ -637,7 +634,6 @@ def prepare_config_and_inputs_for_common(self): class 
OwlViTForObjectDetectionTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (OwlViTForObjectDetection,) if is_torch_available() else () fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/paligemma/test_modeling_paligemma.py b/tests/models/paligemma/test_modeling_paligemma.py index 6a02a3f31e0e..d052bfb97f81 100644 --- a/tests/models/paligemma/test_modeling_paligemma.py +++ b/tests/models/paligemma/test_modeling_paligemma.py @@ -193,7 +193,6 @@ class PaliGemmaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTes fx_compatible = False test_pruning = False test_torchscript = False - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/paligemma2/test_modeling_paligemma2.py b/tests/models/paligemma2/test_modeling_paligemma2.py index ffb61c2146b2..536eb95bef24 100644 --- a/tests/models/paligemma2/test_modeling_paligemma2.py +++ b/tests/models/paligemma2/test_modeling_paligemma2.py @@ -172,7 +172,6 @@ class PaliGemma2ForConditionalGenerationModelTest(ModelTesterMixin, GenerationTe fx_compatible = False test_pruning = False test_torchscript = False - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/parakeet/test_modeling_parakeet.py b/tests/models/parakeet/test_modeling_parakeet.py index 8b845b213f91..39de3ae9f2ac 100644 --- a/tests/models/parakeet/test_modeling_parakeet.py +++ b/tests/models/parakeet/test_modeling_parakeet.py @@ -167,7 +167,6 @@ class ParakeetEncoderModelTest(ModelTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): @@ -252,7 +251,6 @@ class ParakeetForCTCModelTest(ModelTesterMixin, unittest.TestCase): test_attention_outputs = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True _is_composite = True diff --git a/tests/models/patchtsmixer/test_modeling_patchtsmixer.py b/tests/models/patchtsmixer/test_modeling_patchtsmixer.py index d7a7a4950ac1..1542c44324bc 100644 --- a/tests/models/patchtsmixer/test_modeling_patchtsmixer.py +++ b/tests/models/patchtsmixer/test_modeling_patchtsmixer.py @@ -222,7 +222,6 @@ class PatchTSMixerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Test pipeline_model_mapping = {"feature-extraction": PatchTSMixerModel} if is_torch_available() else {} is_encoder_decoder = False test_pruning = False - test_head_masking = False test_missing_keys = False test_torchscript = False test_inputs_embeds = False diff --git a/tests/models/patchtst/test_modeling_patchtst.py b/tests/models/patchtst/test_modeling_patchtst.py index 960c146d2855..33c47433e1f0 100644 --- a/tests/models/patchtst/test_modeling_patchtst.py +++ b/tests/models/patchtst/test_modeling_patchtst.py @@ -161,7 +161,6 @@ class PatchTSTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase pipeline_model_mapping = {"feature-extraction": PatchTSTModel} if is_torch_available() else {} is_encoder_decoder = False test_pruning = False - test_head_masking = False test_missing_keys = True test_torchscript = False test_inputs_embeds = False diff --git a/tests/models/pegasus/test_modeling_pegasus.py b/tests/models/pegasus/test_modeling_pegasus.py index a8aa93adedbb..9d07acb81723 100644 --- a/tests/models/pegasus/test_modeling_pegasus.py +++ b/tests/models/pegasus/test_modeling_pegasus.py @@ -47,28 +47,17 @@ def 
prepare_pegasus_inputs_dict( decoder_input_ids, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if attention_mask is None: attention_mask = input_ids.ne(config.pad_token_id) if decoder_attention_mask is None: decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id) - if head_mask is None: - head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device) - if decoder_head_mask is None: - decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) - if cross_attn_head_mask is None: - cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) + return { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, } @@ -168,10 +157,9 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict): model = PegasusModel(config=config).get_decoder().to(torch_device).eval() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict["attention_mask"] - head_mask = inputs_dict["head_mask"] # first forward pass - outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True) + outputs = model(input_ids, attention_mask=attention_mask, use_cache=True) output, past_key_values = outputs.to_tuple() diff --git a/tests/models/pegasus_x/test_modeling_pegasus_x.py b/tests/models/pegasus_x/test_modeling_pegasus_x.py index 241fe66f25e3..7ba4598f7837 100644 --- a/tests/models/pegasus_x/test_modeling_pegasus_x.py +++ b/tests/models/pegasus_x/test_modeling_pegasus_x.py @@ -214,7 +214,6 @@ class PegasusXModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterM ) is_encoder_decoder = True test_pruning = False - test_head_masking = False test_missing_keys = False def setUp(self): @@ -851,7 +850,6 @@ class PegasusXStandaloneDecoderModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (PegasusXDecoder,) if is_torch_available() else () test_pruning = False is_encoder_decoder = False - test_head_masking = False def setUp( self, diff --git a/tests/models/perceiver/test_modeling_perceiver.py b/tests/models/perceiver/test_modeling_perceiver.py index 3966f4a3a0d6..c51b1a37baa3 100644 --- a/tests/models/perceiver/test_modeling_perceiver.py +++ b/tests/models/perceiver/test_modeling_perceiver.py @@ -306,7 +306,6 @@ class PerceiverModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCas else {} ) test_pruning = False - test_head_masking = False test_torchscript = False maxDiff = None diff --git a/tests/models/perception_lm/test_modeling_perception_lm.py b/tests/models/perception_lm/test_modeling_perception_lm.py index 79c74c93a682..15738c55a3ea 100644 --- a/tests/models/perception_lm/test_modeling_perception_lm.py +++ b/tests/models/perception_lm/test_modeling_perception_lm.py @@ -178,7 +178,6 @@ class PerceptionLMForConditionalGenerationModelTest(ModelTesterMixin, Generation else () ) test_pruning = False - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/phi4_multimodal/test_modeling_phi4_multimodal.py b/tests/models/phi4_multimodal/test_modeling_phi4_multimodal.py index b8e3232dc005..0363476e0fc6 100644 --- a/tests/models/phi4_multimodal/test_modeling_phi4_multimodal.py +++ 
b/tests/models/phi4_multimodal/test_modeling_phi4_multimodal.py
@@ -202,7 +202,6 @@ class Phi4MultimodalModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.
     all_model_classes = (Phi4MultimodalForCausalLM, Phi4MultimodalModel) if is_torch_available() else ()

     test_pruning = False
-    test_head_masking = False
     _is_composite = True

     def setUp(self):
diff --git a/tests/models/pix2struct/test_modeling_pix2struct.py b/tests/models/pix2struct/test_modeling_pix2struct.py
index 0acda0ddac3d..efeefa6bc155 100644
--- a/tests/models/pix2struct/test_modeling_pix2struct.py
+++ b/tests/models/pix2struct/test_modeling_pix2struct.py
@@ -147,7 +147,6 @@ class Pix2StructVisionModelTest(ModelTesterMixin, unittest.TestCase):
     fx_compatible = False
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False

     def setUp(self):
         self.model_tester = Pix2StructVisionModelTester(self)
@@ -315,7 +314,6 @@ class Pix2StructTextModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (Pix2StructTextModel,) if is_torch_available() else ()
     fx_compatible = False
     test_pruning = False
-    test_head_masking = False

     def setUp(self):
         self.model_tester = Pix2StructTextModelTester(self)
@@ -414,7 +412,6 @@ class Pix2StructModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTeste
         else {}
     )
     fx_compatible = False
-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = True
     test_attention_outputs = False
@@ -478,9 +475,6 @@ def test_forward_signature(self):
             "attention_mask",
             "decoder_input_ids",
             "decoder_attention_mask",
-            "head_mask",
-            "decoder_head_mask",
-            "cross_attn_head_mask",
             "encoder_outputs",
             "past_key_values",
             "labels",
diff --git a/tests/models/pixtral/test_modeling_pixtral.py b/tests/models/pixtral/test_modeling_pixtral.py
index 1a7b2ad01d32..9c97e278c77e 100644
--- a/tests/models/pixtral/test_modeling_pixtral.py
+++ b/tests/models/pixtral/test_modeling_pixtral.py
@@ -111,7 +111,6 @@ class PixtralVisionModelModelTest(ModelTesterMixin, unittest.TestCase):
     all_model_classes = (PixtralVisionModel,) if is_torch_available() else ()
     additional_model_inputs = ["image_sizes"]
     test_pruning = False
-    test_head_masking = False
     test_torchscript = False
     test_resize_embeddings = False
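The `prepare_*_inputs_dict` deletions (plbart just below, matching led, marian, mvp, nllb_moe, and pegasus earlier in the patch) drop defaults that built all-ones head masks. An all-ones mask keeps every attention head, so those defaults were effectively no-ops; a sketch of the semantics, with made-up sizes:

```python
# Sketch of the semantics behind the deleted defaults (sizes are made up).
import torch

decoder_layers, decoder_attention_heads = 2, 4

# What the removed helpers built: an all-ones mask, i.e. "keep every head".
head_mask = torch.ones(decoder_layers, decoder_attention_heads)

# A meaningful mask zeroes entries to silence individual heads:
head_mask[0, 1] = 0.0  # hypothetically drop head 1 of layer 0
```

Per the comment retained in the OPT change further up ("new attn API doesn't support head mask"), only the eager attention path still honors such masks, which is why these testers stop exercising them.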
diff --git a/tests/models/plbart/test_modeling_plbart.py b/tests/models/plbart/test_modeling_plbart.py
index 75f72a19028f..145bbeac0f92 100644
--- a/tests/models/plbart/test_modeling_plbart.py
+++ b/tests/models/plbart/test_modeling_plbart.py
@@ -53,28 +53,17 @@ def prepare_plbart_inputs_dict(
     decoder_input_ids,
     attention_mask=None,
     decoder_attention_mask=None,
-    head_mask=None,
-    decoder_head_mask=None,
-    cross_attn_head_mask=None,
 ):
     if attention_mask is None:
         attention_mask = input_ids.ne(config.pad_token_id)
     if decoder_attention_mask is None:
         decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id)
-    if head_mask is None:
-        head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device)
-    if decoder_head_mask is None:
-        decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
-    if cross_attn_head_mask is None:
-        cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device)
+
     return {
         "input_ids": input_ids,
         "decoder_input_ids": decoder_input_ids,
         "attention_mask": attention_mask,
         "decoder_attention_mask": attention_mask,
-        "head_mask": head_mask,
-        "decoder_head_mask": decoder_head_mask,
-        "cross_attn_head_mask": cross_attn_head_mask,
     }
@@ -154,10 +143,9 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict):
         model = PLBartModel(config=config).get_decoder().to(torch_device).eval()
         input_ids = inputs_dict["input_ids"]
         attention_mask = inputs_dict["attention_mask"]
-        head_mask = inputs_dict["head_mask"]

         # first forward pass
-        outputs = model(input_ids, attention_mask=attention_mask, head_mask=head_mask, use_cache=True)
+        outputs = model(input_ids, attention_mask=attention_mask, use_cache=True)

         output, past_key_values = outputs.to_tuple()
diff --git a/tests/models/poolformer/test_modeling_poolformer.py b/tests/models/poolformer/test_modeling_poolformer.py
index 3964d42631ef..a6a20b12a6cc 100644
--- a/tests/models/poolformer/test_modeling_poolformer.py
+++ b/tests/models/poolformer/test_modeling_poolformer.py
@@ -126,7 +126,6 @@ class PoolFormerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCa
         else {}
     )

-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = False
     test_torchscript = False
diff --git a/tests/models/prompt_depth_anything/test_modeling_prompt_depth_anything.py b/tests/models/prompt_depth_anything/test_modeling_prompt_depth_anything.py
index e0aad3d5d9ef..8fad231091bf 100644
--- a/tests/models/prompt_depth_anything/test_modeling_prompt_depth_anything.py
+++ b/tests/models/prompt_depth_anything/test_modeling_prompt_depth_anything.py
@@ -146,7 +146,6 @@ class PromptDepthAnythingModelTest(ModelTesterMixin, PipelineTesterMixin, unitte

     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False

     def setUp(self):
         self.model_tester = PromptDepthAnythingModelTester(self)
diff --git a/tests/models/pvt/test_modeling_pvt.py b/tests/models/pvt/test_modeling_pvt.py
index 637a21a9d2b1..bea696a36568 100644
--- a/tests/models/pvt/test_modeling_pvt.py
+++ b/tests/models/pvt/test_modeling_pvt.py
@@ -143,7 +143,6 @@ class PvtModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
         else {}
     )

-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = False
     test_torchscript = False
diff --git a/tests/models/pvt_v2/test_modeling_pvt_v2.py b/tests/models/pvt_v2/test_modeling_pvt_v2.py
index 91ec40973938..7cfa0d7bbad5 100644
--- a/tests/models/pvt_v2/test_modeling_pvt_v2.py
+++ b/tests/models/pvt_v2/test_modeling_pvt_v2.py
@@ -149,7 +149,6 @@ class PvtV2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
         else {}
     )

-    test_head_masking = False
     test_pruning = False
     test_resize_embeddings = False
     test_torchscript = False
diff --git a/tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py b/tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py
index 61fa18153902..6e40886abb41 100644
--- a/tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py
+++ b/tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py
@@ -257,7 +257,6 @@ class Qwen2_5OmniThinkerForConditionalGenerationModelTest(ModelTesterMixin, Gene
     all_model_classes = (Qwen2_5OmniThinkerForConditionalGeneration,) if is_torch_available() else ()
     all_generative_model_classes = (Qwen2_5OmniThinkerForConditionalGeneration,) if is_torch_available() else ()
     test_pruning = False
-    test_head_masking = False
     _is_composite = True
     model_split_percents = [0.5, 0.9]
diff --git a/tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py b/tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py
index e2f0e8581837..961486c5db0b 100644
--- a/tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py
+++ b/tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py
@@ -212,7 +212,6 @@ class 
Qwen2_5_VLModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.Test else () ) test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = Qwen2_5_VLVisionText2TextModelTester(self) diff --git a/tests/models/qwen2_audio/test_modeling_qwen2_audio.py b/tests/models/qwen2_audio/test_modeling_qwen2_audio.py index 4d26443f63d6..a22338a8d584 100644 --- a/tests/models/qwen2_audio/test_modeling_qwen2_audio.py +++ b/tests/models/qwen2_audio/test_modeling_qwen2_audio.py @@ -141,7 +141,6 @@ class Qwen2AudioForConditionalGenerationModelTest(ModelTesterMixin, GenerationTe all_model_classes = (Qwen2AudioForConditionalGeneration,) if is_torch_available() else () test_pruning = False - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/qwen2_vl/test_modeling_qwen2_vl.py b/tests/models/qwen2_vl/test_modeling_qwen2_vl.py index 898b98658ecc..977161ec58a4 100644 --- a/tests/models/qwen2_vl/test_modeling_qwen2_vl.py +++ b/tests/models/qwen2_vl/test_modeling_qwen2_vl.py @@ -201,7 +201,6 @@ class Qwen2VLModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCas ) pipeline_model_mapping = {"image-text-to-text": Qwen2VLForConditionalGeneration} test_pruning = False - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/qwen3_omni_moe/test_modeling_qwen3_omni_moe.py b/tests/models/qwen3_omni_moe/test_modeling_qwen3_omni_moe.py index c0870bceda8d..5c72fae9b3d2 100644 --- a/tests/models/qwen3_omni_moe/test_modeling_qwen3_omni_moe.py +++ b/tests/models/qwen3_omni_moe/test_modeling_qwen3_omni_moe.py @@ -264,7 +264,6 @@ class Qwen2_5OmniThinkerForConditionalGenerationModelTest(ModelTesterMixin, Gene all_model_classes = (Qwen3OmniMoeThinkerForConditionalGeneration,) if is_torch_available() else () all_generative_model_classes = (Qwen3OmniMoeThinkerForConditionalGeneration,) if is_torch_available() else () test_pruning = False - test_head_masking = False _is_composite = True model_split_percents = [0.5, 0.9] diff --git a/tests/models/qwen3_vl/test_modeling_qwen3_vl.py b/tests/models/qwen3_vl/test_modeling_qwen3_vl.py index 888d9eb76618..77a679ba4e4d 100644 --- a/tests/models/qwen3_vl/test_modeling_qwen3_vl.py +++ b/tests/models/qwen3_vl/test_modeling_qwen3_vl.py @@ -183,7 +183,6 @@ class Qwen3VLModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCas else () ) test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = Qwen3VLVisionText2TextModelTester(self) diff --git a/tests/models/qwen3_vl_moe/test_modeling_qwen3_vl_moe.py b/tests/models/qwen3_vl_moe/test_modeling_qwen3_vl_moe.py index d5e971041931..e08e184e671a 100644 --- a/tests/models/qwen3_vl_moe/test_modeling_qwen3_vl_moe.py +++ b/tests/models/qwen3_vl_moe/test_modeling_qwen3_vl_moe.py @@ -184,7 +184,6 @@ class Qwen3VLMoeModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.Test else () ) test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = Qwen3VLMoeVisionText2TextModelTester(self) diff --git a/tests/models/reformer/test_modeling_reformer.py b/tests/models/reformer/test_modeling_reformer.py index 48df1559e991..9b11981331da 100644 --- a/tests/models/reformer/test_modeling_reformer.py +++ b/tests/models/reformer/test_modeling_reformer.py @@ -603,7 +603,6 @@ class ReformerLocalAttnModelTest(ReformerTesterMixin, GenerationTesterMixin, Mod else () ) test_pruning = False - test_headmasking = False test_torchscript = False test_sequence_classification_problem_types = True @@ 
-727,7 +726,6 @@ class ReformerLSHAttnModelTest( else {} ) test_pruning = False - test_headmasking = False test_torchscript = False # TODO: Fix the failed tests diff --git a/tests/models/regnet/test_modeling_regnet.py b/tests/models/regnet/test_modeling_regnet.py index bc7be198d145..97702a108dba 100644 --- a/tests/models/regnet/test_modeling_regnet.py +++ b/tests/models/regnet/test_modeling_regnet.py @@ -131,7 +131,6 @@ class RegNetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/resnet/test_modeling_resnet.py b/tests/models/resnet/test_modeling_resnet.py index 42c5aba10446..c764ff6b2e25 100644 --- a/tests/models/resnet/test_modeling_resnet.py +++ b/tests/models/resnet/test_modeling_resnet.py @@ -176,7 +176,6 @@ class ResNetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): fx_compatible = True test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/roberta/test_modeling_roberta.py b/tests/models/roberta/test_modeling_roberta.py index 009e9dfc22c1..e2e1b8e7b0f1 100644 --- a/tests/models/roberta/test_modeling_roberta.py +++ b/tests/models/roberta/test_modeling_roberta.py @@ -578,7 +578,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids( with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) - # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for the minimal set. Hard to manipulate with multimodals etc inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]} # Ensure left padding, to adapt for some models diff --git a/tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py b/tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py index 7605be9e2c84..541f6ba2d8e7 100644 --- a/tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py +++ b/tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py @@ -583,7 +583,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids( with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) - # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for the minimal set. Hard to manipulate with multimodals etc inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]} # Ensure left padding, to adapt for some models diff --git a/tests/models/roc_bert/test_modeling_roc_bert.py b/tests/models/roc_bert/test_modeling_roc_bert.py index 23a6017168a3..09ad188b17b6 100644 --- a/tests/models/roc_bert/test_modeling_roc_bert.py +++ b/tests/models/roc_bert/test_modeling_roc_bert.py @@ -763,7 +763,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids( with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) - # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for the minimal set. 
Hard to manipulate with multimodals etc inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]} # Ensure left padding, to adapt for some models diff --git a/tests/models/rt_detr/test_modeling_rt_detr.py b/tests/models/rt_detr/test_modeling_rt_detr.py index 746d98c138f9..d1969c3c0b6e 100644 --- a/tests/models/rt_detr/test_modeling_rt_detr.py +++ b/tests/models/rt_detr/test_modeling_rt_detr.py @@ -260,7 +260,6 @@ class RTDetrModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): is_encoder_decoder = True test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False test_torch_exportable = True diff --git a/tests/models/rt_detr_v2/test_modeling_rt_detr_v2.py b/tests/models/rt_detr_v2/test_modeling_rt_detr_v2.py index de7414ba6536..80c40195bfd5 100644 --- a/tests/models/rt_detr_v2/test_modeling_rt_detr_v2.py +++ b/tests/models/rt_detr_v2/test_modeling_rt_detr_v2.py @@ -264,7 +264,6 @@ class RTDetrV2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase is_encoder_decoder = True test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False test_torch_exportable = True diff --git a/tests/models/rwkv/test_modeling_rwkv.py b/tests/models/rwkv/test_modeling_rwkv.py index 891c01315376..9a61160a275f 100644 --- a/tests/models/rwkv/test_modeling_rwkv.py +++ b/tests/models/rwkv/test_modeling_rwkv.py @@ -122,7 +122,6 @@ def prepare_config_and_inputs( config, input_ids, input_mask, - None, token_type_ids, mc_token_ids, sequence_labels, @@ -157,7 +156,7 @@ def get_pipeline_config(self): config.vocab_size = 300 return config - def create_and_check_rwkv_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_rwkv_model(self, config, input_ids, input_mask, token_type_ids, *args): config.output_hidden_states = True model = RwkvModel(config=config) model.to(torch_device) @@ -168,7 +167,7 @@ def create_and_check_rwkv_model(self, config, input_ids, input_mask, head_mask, self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) self.parent.assertEqual(len(result.hidden_states), config.num_hidden_layers + 1) - def create_and_check_causl_lm(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_causl_lm(self, config, input_ids, input_mask, token_type_ids, *args): model = RwkvForCausalLM(config) model.to(torch_device) model.eval() @@ -177,7 +176,7 @@ def create_and_check_causl_lm(self, config, input_ids, input_mask, head_mask, to self.parent.assertEqual(result.loss.shape, ()) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) - def create_and_check_state_equivalency(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): + def create_and_check_state_equivalency(self, config, input_ids, input_mask, token_type_ids, *args): model = RwkvModel(config=config) model.to(torch_device) model.eval() @@ -201,7 +200,6 @@ def prepare_config_and_inputs_for_common(self): config, input_ids, input_mask, - head_mask, token_type_ids, mc_token_ids, sequence_labels, @@ -223,7 +221,6 @@ class RwkvModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin fx_compatible = False test_missing_keys = False test_pruning = False - test_head_masking = False # Rwkv does not support head masking def setUp(self): self.model_tester = RwkvModelTester(self) diff --git a/tests/models/sam/test_modeling_sam.py 
b/tests/models/sam/test_modeling_sam.py index 5923ce5bc8a5..67a29d8819b7 100644 --- a/tests/models/sam/test_modeling_sam.py +++ b/tests/models/sam/test_modeling_sam.py @@ -161,7 +161,6 @@ class SamVisionModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False test_torch_exportable = True @@ -517,7 +516,6 @@ class SamModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False _is_composite = True diff --git a/tests/models/sam2/test_modeling_sam2.py b/tests/models/sam2/test_modeling_sam2.py index dcacd3920a7a..5e6c03aae074 100644 --- a/tests/models/sam2/test_modeling_sam2.py +++ b/tests/models/sam2/test_modeling_sam2.py @@ -144,7 +144,6 @@ class Sam2VisionModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False test_torch_exportable = True @@ -468,7 +467,6 @@ class Sam2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False _is_composite = True diff --git a/tests/models/sam_hq/test_modeling_sam_hq.py b/tests/models/sam_hq/test_modeling_sam_hq.py index d008b788f6ad..f91aa0b096a8 100644 --- a/tests/models/sam_hq/test_modeling_sam_hq.py +++ b/tests/models/sam_hq/test_modeling_sam_hq.py @@ -169,7 +169,6 @@ class SamHQVisionModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False test_torch_exportable = True @@ -549,7 +548,6 @@ class SamHQModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False test_cpu_offload = False test_disk_offload_bin = False diff --git a/tests/models/seamless_m4t/test_modeling_seamless_m4t.py b/tests/models/seamless_m4t/test_modeling_seamless_m4t.py index 4690407c7b51..596df6daeda2 100644 --- a/tests/models/seamless_m4t/test_modeling_seamless_m4t.py +++ b/tests/models/seamless_m4t/test_modeling_seamless_m4t.py @@ -343,7 +343,6 @@ class SeamlessM4TModelWithSpeechInputTest(ModelTesterMixin, unittest.TestCase): test_missing_keys = False test_pruning = False test_resize_embeddings = False - test_headmasking = False test_torchscript = False all_model_classes = ( @@ -576,7 +575,6 @@ class SeamlessM4TModelWithTextInputTest(ModelTesterMixin, PipelineTesterMixin, u test_missing_keys = False test_pruning = False test_resize_embeddings = True - test_headmasking = False test_torchscript = False all_model_classes = ( diff --git a/tests/models/seamless_m4t_v2/test_modeling_seamless_m4t_v2.py b/tests/models/seamless_m4t_v2/test_modeling_seamless_m4t_v2.py index 9c73a6ba3b4d..521e5f864e98 100644 --- a/tests/models/seamless_m4t_v2/test_modeling_seamless_m4t_v2.py +++ b/tests/models/seamless_m4t_v2/test_modeling_seamless_m4t_v2.py @@ -369,7 +369,6 @@ class SeamlessM4Tv2ModelWithSpeechInputTest(ModelTesterMixin, unittest.TestCase) test_missing_keys = False test_pruning = False test_resize_embeddings = False - test_headmasking = False test_torchscript = False all_model_classes = ( @@ -601,7 +600,6 @@ class 
SeamlessM4Tv2ModelWithTextInputTest(ModelTesterMixin, unittest.TestCase): test_missing_keys = False test_pruning = False test_resize_embeddings = True - test_headmasking = False test_torchscript = False all_model_classes = ( diff --git a/tests/models/segformer/test_modeling_segformer.py b/tests/models/segformer/test_modeling_segformer.py index fcd6594217cf..fc1d4cc750c6 100644 --- a/tests/models/segformer/test_modeling_segformer.py +++ b/tests/models/segformer/test_modeling_segformer.py @@ -176,7 +176,6 @@ class SegformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCas ) fx_compatible = True - test_head_masking = False test_pruning = False test_resize_embeddings = False test_torch_exportable = True diff --git a/tests/models/seggpt/test_modeling_seggpt.py b/tests/models/seggpt/test_modeling_seggpt.py index 4a30b5cbd8bb..0b4080aade86 100644 --- a/tests/models/seggpt/test_modeling_seggpt.py +++ b/tests/models/seggpt/test_modeling_seggpt.py @@ -171,7 +171,6 @@ class SegGptModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False test_torch_exportable = True diff --git a/tests/models/sew/test_modeling_sew.py b/tests/models/sew/test_modeling_sew.py index 270f91bdf628..70eb4e25095d 100644 --- a/tests/models/sew/test_modeling_sew.py +++ b/tests/models/sew/test_modeling_sew.py @@ -284,7 +284,6 @@ class SEWModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else {} ) test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = SEWModelTester(self) diff --git a/tests/models/sew_d/test_modeling_sew_d.py b/tests/models/sew_d/test_modeling_sew_d.py index 86064250b8f6..c4d97b43a09d 100644 --- a/tests/models/sew_d/test_modeling_sew_d.py +++ b/tests/models/sew_d/test_modeling_sew_d.py @@ -305,7 +305,6 @@ class SEWDModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else {} ) test_pruning = False - test_headmasking = False test_torchscript = False def setUp(self): diff --git a/tests/models/siglip/test_modeling_siglip.py b/tests/models/siglip/test_modeling_siglip.py index 0005c44e634a..047f96bbdf1c 100644 --- a/tests/models/siglip/test_modeling_siglip.py +++ b/tests/models/siglip/test_modeling_siglip.py @@ -178,7 +178,6 @@ class SiglipVisionModelTest(SiglipModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False # MP works but offload doesn't work when the MultiheadAttention is offloaded # TODO: One potential solution would be to add to set preload_module_classes = ["SiglipMultiheadAttentionPoolingHead"] # in the dispatch_model function @@ -348,7 +347,6 @@ class SiglipTextModelTest(SiglipModelTesterMixin, unittest.TestCase): all_model_classes = (SiglipTextModel,) if is_torch_available() else () fx_compatible = False test_pruning = False - test_head_masking = False model_split_percents = [0.5, 0.8, 0.9] # Copied from tests.models.clip.test_modeling_clip.CLIPTextModelTest.setUp with CLIP->Siglip @@ -454,7 +452,6 @@ class SiglipModelTest(SiglipModelTesterMixin, PipelineTesterMixin, unittest.Test all_model_classes = (SiglipModel,) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": SiglipModel} if is_torch_available() else {} fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False @@ -623,7 +620,6 @@ class 
SiglipForImageClassificationModelTest(SiglipModelTesterMixin, PipelineTest all_model_classes = (SiglipForImageClassification,) if is_torch_available() else () pipeline_model_mapping = {"image-classification": SiglipForImageClassification} if is_torch_available() else {} fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/siglip2/test_modeling_siglip2.py b/tests/models/siglip2/test_modeling_siglip2.py index d6054dd8d15d..9524ea1d6221 100644 --- a/tests/models/siglip2/test_modeling_siglip2.py +++ b/tests/models/siglip2/test_modeling_siglip2.py @@ -270,7 +270,6 @@ class Siglip2VisionModelTest(Siglip2ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False # MP works but offload doesn't work when the MultiheadAttention is offloaded # TODO: One potential solution would be to add to set preload_module_classes = ["Siglip2MultiheadAttentionPoolingHead"] # in the dispatch_model function @@ -440,7 +439,6 @@ class Siglip2TextModelTest(Siglip2ModelTesterMixin, unittest.TestCase): fx_compatible = False test_resize_embeddings = False test_pruning = False - test_head_masking = False model_split_percents = [0.5, 0.8, 0.9] def setUp(self): @@ -552,7 +550,6 @@ class Siglip2ModelTest(Siglip2ModelTesterMixin, PipelineTesterMixin, unittest.Te "spatial_shapes", ] fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False @@ -654,7 +651,6 @@ class Siglip2ForImageClassificationModelTest(Siglip2ModelTesterMixin, PipelineTe pipeline_model_mapping = {"image-classification": Siglip2ForImageClassification} if is_torch_available() else {} additional_model_inputs = ["pixel_values", "pixel_attention_mask", "spatial_shapes"] fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/smolvlm/test_modeling_smolvlm.py b/tests/models/smolvlm/test_modeling_smolvlm.py index dd449672551b..b1fd02c5e6e0 100644 --- a/tests/models/smolvlm/test_modeling_smolvlm.py +++ b/tests/models/smolvlm/test_modeling_smolvlm.py @@ -171,7 +171,6 @@ class SmolVLMModelTest(ModelTesterMixin, unittest.TestCase): test_torchscript = False test_pruning = False test_resize_embeddings = True - test_head_masking = False def setUp(self): self.model_tester = SmolVLMVisionText2TextModelTester(self) @@ -335,7 +334,6 @@ class SmolVLMForConditionalGenerationModelTest(GenerationTesterMixin, ModelTeste fx_compatible = False test_pruning = False test_resize_embeddings = True - test_head_masking = False test_torchscript = False def setUp(self): diff --git a/tests/models/speech_to_text/test_modeling_speech_to_text.py b/tests/models/speech_to_text/test_modeling_speech_to_text.py index f8ac098f9296..12aa8b6817a4 100644 --- a/tests/models/speech_to_text/test_modeling_speech_to_text.py +++ b/tests/models/speech_to_text/test_modeling_speech_to_text.py @@ -51,29 +51,18 @@ def prepare_speech_to_text_inputs_dict( decoder_input_ids, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if attention_mask is None: attention_mask = input_features.ne(0) if decoder_attention_mask is None: decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id) - if head_mask is None: - head_mask = torch.ones(config.encoder_layers, 
config.encoder_attention_heads, device=torch_device) - if decoder_head_mask is None: - decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) - if cross_attn_head_mask is None: - cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) + return { # "input_ids": input_features, "input_features": input_features, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, } @@ -360,11 +349,7 @@ def test_forward_signature(self): "decoder_input_ids", "decoder_attention_mask", ] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names - else ["encoder_outputs"] - ) + expected_arg_names.extend(["encoder_outputs"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_hidden_states_output(self): diff --git a/tests/models/speecht5/test_modeling_speecht5.py b/tests/models/speecht5/test_modeling_speecht5.py index 440d64723995..bfb305c38d17 100644 --- a/tests/models/speecht5/test_modeling_speecht5.py +++ b/tests/models/speecht5/test_modeling_speecht5.py @@ -64,9 +64,6 @@ def prepare_inputs_dict( decoder_input_values=None, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if input_ids is not None: encoder_dict = {"input_ids": input_ids} @@ -78,21 +75,11 @@ def prepare_inputs_dict( else: decoder_dict = {"decoder_input_values": decoder_input_values} - if head_mask is None: - head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device) - if decoder_head_mask is None: - decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) - if cross_attn_head_mask is None: - cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) - return { **encoder_dict, **decoder_dict, "attention_mask": attention_mask, "decoder_attention_mask": decoder_attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, } @@ -174,7 +161,6 @@ class SpeechT5ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase ) is_encoder_decoder = True test_pruning = False - test_headmasking = False test_resize_embeddings = False def setUp(self): @@ -203,11 +189,7 @@ def test_forward_signature(self): "decoder_input_values", "decoder_attention_mask", ] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names - else ["encoder_outputs"] - ) + expected_arg_names.extend(["encoder_outputs"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) @unittest.skip(reason="Model has no input_embeds") @@ -373,7 +355,6 @@ class SpeechT5ForSpeechToTextTest(ModelTesterMixin, unittest.TestCase, Generatio all_model_classes = (SpeechT5ForSpeechToText,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = SpeechT5ForSpeechToTextTester(self) @@ -519,11 +500,7 @@ def test_forward_signature(self): 
"decoder_input_ids", "decoder_attention_mask", ] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names - else ["encoder_outputs"] - ) + expected_arg_names.extend(["encoder_outputs"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_hidden_states_output(self): @@ -905,7 +882,6 @@ class SpeechT5ForTextToSpeechTest(ModelTesterMixin, unittest.TestCase): all_generative_model_classes = () is_encoder_decoder = True test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = SpeechT5ForTextToSpeechTester(self) @@ -978,11 +954,7 @@ def test_forward_signature(self): "decoder_input_values", "decoder_attention_mask", ] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names - else ["encoder_outputs"] - ) + expected_arg_names.extend(["encoder_outputs"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_initialization(self): @@ -1454,7 +1426,6 @@ class SpeechT5ForSpeechToSpeechTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (SpeechT5ForSpeechToSpeech,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False - test_headmasking = False test_resize_embeddings = False def setUp(self): @@ -1622,11 +1593,7 @@ def test_forward_signature(self): "decoder_input_values", "decoder_attention_mask", ] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names - else ["encoder_outputs"] - ) + expected_arg_names.extend(["encoder_outputs"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_hidden_states_output(self): @@ -1856,7 +1823,6 @@ class SpeechT5HifiGanTest(ModelTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False test_resize_position_embeddings = False - test_head_masking = False test_mismatched_shapes = False test_missing_keys = False is_encoder_decoder = False diff --git a/tests/models/splinter/test_modeling_splinter.py b/tests/models/splinter/test_modeling_splinter.py index f8a8121c40d1..fbb9d4e7c210 100644 --- a/tests/models/splinter/test_modeling_splinter.py +++ b/tests/models/splinter/test_modeling_splinter.py @@ -347,12 +347,6 @@ def test_multi_gpu_data_parallel_forward(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() - # some params shouldn't be scattered by nn.DataParallel - # so just remove them if they are present. 
- blacklist_non_batched_params = ["head_mask", "decoder_head_mask", "cross_attn_head_mask"] - for k in blacklist_non_batched_params: - inputs_dict.pop(k, None) - # move input tensors to cuda:O for k, v in inputs_dict.items(): if torch.is_tensor(v): diff --git a/tests/models/squeezebert/test_modeling_squeezebert.py b/tests/models/squeezebert/test_modeling_squeezebert.py index 8da8626d7dec..65c6801fd903 100644 --- a/tests/models/squeezebert/test_modeling_squeezebert.py +++ b/tests/models/squeezebert/test_modeling_squeezebert.py @@ -240,7 +240,6 @@ class SqueezeBertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC ) test_pruning = False test_resize_embeddings = True - test_head_masking = False def setUp(self): self.model_tester = SqueezeBertModelTester(self) diff --git a/tests/models/superglue/test_modeling_superglue.py b/tests/models/superglue/test_modeling_superglue.py index cc916c55b826..04f6181f1e9c 100644 --- a/tests/models/superglue/test_modeling_superglue.py +++ b/tests/models/superglue/test_modeling_superglue.py @@ -123,7 +123,6 @@ class SuperGlueModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = True def setUp(self): diff --git a/tests/models/superpoint/test_modeling_superpoint.py b/tests/models/superpoint/test_modeling_superpoint.py index 0f49a8f00bcc..017d746257e3 100644 --- a/tests/models/superpoint/test_modeling_superpoint.py +++ b/tests/models/superpoint/test_modeling_superpoint.py @@ -117,7 +117,6 @@ class SuperPointModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False from_pretrained_id = "magic-leap-community/superpoint" diff --git a/tests/models/swiftformer/test_modeling_swiftformer.py b/tests/models/swiftformer/test_modeling_swiftformer.py index e17114793b49..91a508b97559 100644 --- a/tests/models/swiftformer/test_modeling_swiftformer.py +++ b/tests/models/swiftformer/test_modeling_swiftformer.py @@ -144,7 +144,6 @@ class SwiftFormerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/swin/test_modeling_swin.py b/tests/models/swin/test_modeling_swin.py index 17dac09168b1..9f602438bcd3 100644 --- a/tests/models/swin/test_modeling_swin.py +++ b/tests/models/swin/test_modeling_swin.py @@ -239,7 +239,6 @@ class SwinModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/swin2sr/test_modeling_swin2sr.py b/tests/models/swin2sr/test_modeling_swin2sr.py index f1a143a47e99..0099f3cdc644 100644 --- a/tests/models/swin2sr/test_modeling_swin2sr.py +++ b/tests/models/swin2sr/test_modeling_swin2sr.py @@ -169,7 +169,6 @@ class Swin2SRModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase) fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False test_torch_exportable = True diff --git a/tests/models/swinv2/test_modeling_swinv2.py b/tests/models/swinv2/test_modeling_swinv2.py index 0779236859e7..fefe7bb7f841 100644 --- a/tests/models/swinv2/test_modeling_swinv2.py +++ b/tests/models/swinv2/test_modeling_swinv2.py @@ 
-225,7 +225,6 @@ class Swinv2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/t5gemma/test_modeling_t5gemma.py b/tests/models/t5gemma/test_modeling_t5gemma.py index c102c2c273ca..a62a338dc61a 100644 --- a/tests/models/t5gemma/test_modeling_t5gemma.py +++ b/tests/models/t5gemma/test_modeling_t5gemma.py @@ -592,7 +592,6 @@ class T5GemmaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMi else {} ) - test_headmasking = False test_pruning = False _is_stateful = True is_encoder_decoder = True @@ -1461,7 +1460,6 @@ class T5GemmaEncoderOnlyModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (T5GemmaEncoderModel, T5GemmaForTokenClassification) if is_torch_available() else () test_pruning = False test_resize_embeddings = False - test_headmasking = False _is_stateful = True is_encoder_decoder = False diff --git a/tests/models/table_transformer/test_modeling_table_transformer.py b/tests/models/table_transformer/test_modeling_table_transformer.py index 7d4eb4be4bb8..a9d7d4772961 100644 --- a/tests/models/table_transformer/test_modeling_table_transformer.py +++ b/tests/models/table_transformer/test_modeling_table_transformer.py @@ -204,7 +204,6 @@ class TableTransformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest. is_encoder_decoder = True test_torchscript = False test_pruning = False - test_head_masking = False test_missing_keys = False zero_init_hidden_state = True test_torch_exportable = True @@ -443,12 +442,7 @@ def test_forward_signature(self): arg_names = [*signature.parameters.keys()] if model.config.is_encoder_decoder: - expected_arg_names = ["pixel_values", "pixel_mask"] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" in arg_names - else [] - ) + expected_arg_names = ["pixel_values", "pixel_mask", "decoder_attention_mask", "encoder_outputs"] self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) else: expected_arg_names = ["pixel_values", "pixel_mask"] diff --git a/tests/models/tapas/test_modeling_tapas.py b/tests/models/tapas/test_modeling_tapas.py index 65e5e4d2758a..9bda124a9360 100644 --- a/tests/models/tapas/test_modeling_tapas.py +++ b/tests/models/tapas/test_modeling_tapas.py @@ -432,7 +432,6 @@ class TapasModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): ) test_pruning = False test_resize_embeddings = True - test_head_masking = False def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = copy.deepcopy(inputs_dict) diff --git a/tests/models/textnet/test_modeling_textnet.py b/tests/models/textnet/test_modeling_textnet.py index bf91b360392f..b1a324c7a660 100644 --- a/tests/models/textnet/test_modeling_textnet.py +++ b/tests/models/textnet/test_modeling_textnet.py @@ -215,7 +215,6 @@ class TextNetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase) fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True has_attentions = False diff --git a/tests/models/time_series_transformer/test_modeling_time_series_transformer.py b/tests/models/time_series_transformer/test_modeling_time_series_transformer.py index 02c7a1111c08..12e7b9abe5bc 100644 --- 
a/tests/models/time_series_transformer/test_modeling_time_series_transformer.py +++ b/tests/models/time_series_transformer/test_modeling_time_series_transformer.py @@ -182,7 +182,6 @@ class TimeSeriesTransformerModelTest(ModelTesterMixin, PipelineTesterMixin, unit pipeline_model_mapping = {"feature-extraction": TimeSeriesTransformerModel} if is_torch_available() else {} is_encoder_decoder = True test_pruning = False - test_head_masking = False test_missing_keys = False test_torchscript = False test_inputs_embeds = False @@ -247,9 +246,6 @@ def test_forward_signature(self): [ "future_observed_mask", "decoder_attention_mask", - "head_mask", - "decoder_head_mask", - "cross_attn_head_mask", "encoder_outputs", "past_key_values", "output_hidden_states", @@ -260,9 +256,6 @@ def test_forward_signature(self): if "future_observed_mask" in arg_names else [ "decoder_attention_mask", - "head_mask", - "decoder_head_mask", - "cross_attn_head_mask", "encoder_outputs", "past_key_values", "output_hidden_states", diff --git a/tests/models/timesfm/test_modeling_timesfm.py b/tests/models/timesfm/test_modeling_timesfm.py index e77fbe65ebb5..aa3451227644 100644 --- a/tests/models/timesfm/test_modeling_timesfm.py +++ b/tests/models/timesfm/test_modeling_timesfm.py @@ -149,10 +149,6 @@ def test_sdpa_can_dispatch_on_flash(self): def test_model_get_set_embeddings(self): pass - @unittest.skip(reason="Model does not have head mask") - def test_headmasking(self): - pass - # the main input name is `inputs` def test_model_main_input_name(self): model_signature = inspect.signature(getattr(TimesFmModelForPrediction, "forward")) diff --git a/tests/models/timesformer/test_modeling_timesformer.py b/tests/models/timesformer/test_modeling_timesformer.py index 10aef612fdae..d6ccaf96092b 100644 --- a/tests/models/timesformer/test_modeling_timesformer.py +++ b/tests/models/timesformer/test_modeling_timesformer.py @@ -166,7 +166,6 @@ class TimesformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC test_pruning = False test_torchscript = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/timm_backbone/test_modeling_timm_backbone.py b/tests/models/timm_backbone/test_modeling_timm_backbone.py index d8fc0d53a4cd..f274fd65c956 100644 --- a/tests/models/timm_backbone/test_modeling_timm_backbone.py +++ b/tests/models/timm_backbone/test_modeling_timm_backbone.py @@ -86,7 +86,6 @@ class TimmBackboneModelTest(ModelTesterMixin, BackboneTesterMixin, PipelineTeste all_model_classes = (TimmBackbone,) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": TimmBackbone} if is_torch_available() else {} test_resize_embeddings = False - test_head_masking = False test_pruning = False has_attentions = False diff --git a/tests/models/timm_wrapper/test_modeling_timm_wrapper.py b/tests/models/timm_wrapper/test_modeling_timm_wrapper.py index 3ed21af6507e..33592276a640 100644 --- a/tests/models/timm_wrapper/test_modeling_timm_wrapper.py +++ b/tests/models/timm_wrapper/test_modeling_timm_wrapper.py @@ -94,7 +94,6 @@ class TimmWrapperModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC ) test_resize_embeddings = False - test_head_masking = False test_pruning = False has_attentions = False diff --git a/tests/models/udop/test_modeling_udop.py b/tests/models/udop/test_modeling_udop.py index fe47a6251975..0dc0d970877c 100644 --- a/tests/models/udop/test_modeling_udop.py +++ b/tests/models/udop/test_modeling_udop.py @@ -278,7 
+278,6 @@ class UdopModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin fx_compatible = False test_pruning = False test_torchscript = False - test_head_masking = False test_resize_embeddings = True is_encoder_decoder = True test_cpu_offload = False @@ -348,13 +347,10 @@ def test_forward_signature(self): "attention_mask", "bbox", "cache_position", - "cross_attn_head_mask", "decoder_attention_mask", - "decoder_head_mask", "decoder_input_ids", "decoder_inputs_embeds", "encoder_outputs", - "head_mask", "input_ids", "inputs_embeds", ] @@ -553,7 +549,6 @@ class UdopEncoderOnlyModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (UdopEncoderModel,) if is_torch_available() else () test_pruning = False test_torchscript = False - test_head_masking = False test_resize_embeddings = False def setUp(self): diff --git a/tests/models/umt5/test_modeling_umt5.py b/tests/models/umt5/test_modeling_umt5.py index f8f8beb7fa9c..f97945371f5d 100644 --- a/tests/models/umt5/test_modeling_umt5.py +++ b/tests/models/umt5/test_modeling_umt5.py @@ -109,30 +109,16 @@ def prepare_inputs_dict( decoder_input_ids, attention_mask=None, decoder_attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, ): if attention_mask is None: attention_mask = input_ids.ne(config.pad_token_id) if decoder_attention_mask is None: decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id) - if head_mask is None: - head_mask = torch.ones(config.num_hidden_layers, config.num_attention_heads, device=torch_device) - if decoder_head_mask is None: - decoder_head_mask = torch.ones(config.num_decoder_layers, config.num_attention_heads, device=torch_device) - if cross_attn_head_mask is None: - cross_attn_head_mask = torch.ones( - config.num_decoder_layers, config.num_attention_heads, device=torch_device - ) return { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": decoder_attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, } def prepare_config_and_inputs(self): diff --git a/tests/models/unispeech/test_modeling_unispeech.py b/tests/models/unispeech/test_modeling_unispeech.py index 00614bca7c84..65e108df0dbb 100644 --- a/tests/models/unispeech/test_modeling_unispeech.py +++ b/tests/models/unispeech/test_modeling_unispeech.py @@ -313,7 +313,6 @@ class UniSpeechRobustModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.T else {} ) test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = UniSpeechModelTester( diff --git a/tests/models/unispeech_sat/test_modeling_unispeech_sat.py b/tests/models/unispeech_sat/test_modeling_unispeech_sat.py index 2c5001fbbc58..cc0c8772969d 100644 --- a/tests/models/unispeech_sat/test_modeling_unispeech_sat.py +++ b/tests/models/unispeech_sat/test_modeling_unispeech_sat.py @@ -364,7 +364,6 @@ class UniSpeechSatModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Test else {} ) test_pruning = False - test_headmasking = False test_torchscript = False def setUp(self): @@ -574,7 +573,6 @@ class UniSpeechSatRobustModelTest(ModelTesterMixin, unittest.TestCase): else () ) test_pruning = False - test_headmasking = False test_torchscript = False def setUp(self): diff --git a/tests/models/univnet/test_modeling_univnet.py b/tests/models/univnet/test_modeling_univnet.py index 441df35c236d..e2cd26ed0f26 100644 --- a/tests/models/univnet/test_modeling_univnet.py +++ 
b/tests/models/univnet/test_modeling_univnet.py @@ -107,7 +107,6 @@ class UnivNetModelTest(ModelTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False test_resize_position_embeddings = False - test_head_masking = False # UnivNetModel is not a sequence classification model. test_mismatched_shapes = False # UnivNetModel does not have a base_model_prefix attribute. diff --git a/tests/models/upernet/test_modeling_upernet.py b/tests/models/upernet/test_modeling_upernet.py index 349766fe575e..4bc68977fff2 100644 --- a/tests/models/upernet/test_modeling_upernet.py +++ b/tests/models/upernet/test_modeling_upernet.py @@ -152,7 +152,6 @@ class UperNetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase) fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False has_attentions = False test_torch_exportable = True diff --git a/tests/models/video_llava/test_modeling_video_llava.py b/tests/models/video_llava/test_modeling_video_llava.py index 8bdb87884373..fdf8982c3417 100644 --- a/tests/models/video_llava/test_modeling_video_llava.py +++ b/tests/models/video_llava/test_modeling_video_llava.py @@ -203,7 +203,6 @@ class VideoLlavaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTe fx_compatible = False test_pruning = False test_resize_embeddings = True - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/videomae/test_modeling_videomae.py b/tests/models/videomae/test_modeling_videomae.py index af5b96acad63..9af58504d12e 100644 --- a/tests/models/videomae/test_modeling_videomae.py +++ b/tests/models/videomae/test_modeling_videomae.py @@ -197,7 +197,6 @@ class VideoMAEModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase test_pruning = False test_torchscript = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/vilt/test_modeling_vilt.py b/tests/models/vilt/test_modeling_vilt.py index faffcfccabed..0ac1891a887e 100644 --- a/tests/models/vilt/test_modeling_vilt.py +++ b/tests/models/vilt/test_modeling_vilt.py @@ -230,7 +230,6 @@ class ViltModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else {} ) test_pruning = False - test_headmasking = False test_torchscript = False model_split_percents = [0.5, 0.8, 0.9] diff --git a/tests/models/vipllava/test_modeling_vipllava.py b/tests/models/vipllava/test_modeling_vipllava.py index bf7b43dd4580..02cf3eeb29d9 100644 --- a/tests/models/vipllava/test_modeling_vipllava.py +++ b/tests/models/vipllava/test_modeling_vipllava.py @@ -179,7 +179,6 @@ class VipLlavaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTest fx_compatible = False test_pruning = False test_resize_embeddings = True - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py b/tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py index 8272b7e48fe4..3f03247f2f18 100644 --- a/tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py +++ b/tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py @@ -908,9 +908,7 @@ def prepare_config_and_inputs(self): encoder_config_and_inputs = model_tester_encoder.prepare_config_and_inputs() decoder_config_and_inputs = model_tester_decoder.prepare_config_and_inputs(extra_inputs=True) config, pixel_values, labels = 
encoder_config_and_inputs - decoder_config, decoder_input_ids, decoder_attention_mask, decoder_head_mask, _, _, _, _, _ = ( - decoder_config_and_inputs - ) + decoder_config, decoder_input_ids, decoder_attention_mask, _, _, _, _, _ = decoder_config_and_inputs # make sure that cross attention layers are added decoder_config.add_cross_attention = True @@ -922,7 +920,6 @@ def prepare_config_and_inputs(self): "decoder_config": decoder_config, "decoder_input_ids": decoder_input_ids, "decoder_attention_mask": decoder_attention_mask, - "decoder_head_mask": decoder_head_mask, "labels": decoder_input_ids, } @@ -1022,9 +1019,7 @@ def prepare_config_and_inputs(self): encoder_config_and_inputs = model_tester_encoder.prepare_config_and_inputs() decoder_config_and_inputs = model_tester_decoder.prepare_config_and_inputs(extra_inputs=True) config, pixel_values, labels = encoder_config_and_inputs - decoder_config, decoder_input_ids, decoder_attention_mask, decoder_head_mask, _, _, _, _, _ = ( - decoder_config_and_inputs - ) + decoder_config, decoder_input_ids, decoder_attention_mask, _, _, _, _, _ = decoder_config_and_inputs # make sure that cross attention layers are added decoder_config.add_cross_attention = True @@ -1036,7 +1031,6 @@ def prepare_config_and_inputs(self): "decoder_config": decoder_config, "decoder_input_ids": decoder_input_ids, "decoder_attention_mask": decoder_attention_mask, - "decoder_head_mask": decoder_head_mask, "labels": decoder_input_ids, } diff --git a/tests/models/vit/test_modeling_vit.py b/tests/models/vit/test_modeling_vit.py index 9094e6898804..6bbc0fea1046 100644 --- a/tests/models/vit/test_modeling_vit.py +++ b/tests/models/vit/test_modeling_vit.py @@ -206,7 +206,6 @@ class ViTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/vit_mae/test_modeling_vit_mae.py b/tests/models/vit_mae/test_modeling_vit_mae.py index 689256de2d0d..d06524305a4e 100644 --- a/tests/models/vit_mae/test_modeling_vit_mae.py +++ b/tests/models/vit_mae/test_modeling_vit_mae.py @@ -182,7 +182,6 @@ class ViTMAEModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_torchscript = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/vit_msn/test_modeling_vit_msn.py b/tests/models/vit_msn/test_modeling_vit_msn.py index 8bd6850f1bb1..002345c1ef30 100644 --- a/tests/models/vit_msn/test_modeling_vit_msn.py +++ b/tests/models/vit_msn/test_modeling_vit_msn.py @@ -164,7 +164,6 @@ class ViTMSNModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_torchscript = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/vitdet/test_modeling_vitdet.py b/tests/models/vitdet/test_modeling_vitdet.py index c81fe2415c16..e0d7c9344dc9 100644 --- a/tests/models/vitdet/test_modeling_vitdet.py +++ b/tests/models/vitdet/test_modeling_vitdet.py @@ -167,7 +167,6 @@ class VitDetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/vitmatte/test_modeling_vitmatte.py b/tests/models/vitmatte/test_modeling_vitmatte.py index 
10c36a2dd86f..95d19bd93777 100644 --- a/tests/models/vitmatte/test_modeling_vitmatte.py +++ b/tests/models/vitmatte/test_modeling_vitmatte.py @@ -142,7 +142,6 @@ class VitMatteModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True test_torch_exportable_strictly = get_torch_major_and_minor_version() != "2.7" diff --git a/tests/models/vitpose/test_modeling_vitpose.py b/tests/models/vitpose/test_modeling_vitpose.py index d5dddc74a3bc..8ace557f52cd 100644 --- a/tests/models/vitpose/test_modeling_vitpose.py +++ b/tests/models/vitpose/test_modeling_vitpose.py @@ -154,7 +154,6 @@ class VitPoseModelTest(ModelTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True test_torch_exportable_strictly = get_torch_major_and_minor_version() != "2.7" diff --git a/tests/models/vitpose_backbone/test_modeling_vitpose_backbone.py b/tests/models/vitpose_backbone/test_modeling_vitpose_backbone.py index 6f8ee5eb9ed4..2876d95e1f3e 100644 --- a/tests/models/vitpose_backbone/test_modeling_vitpose_backbone.py +++ b/tests/models/vitpose_backbone/test_modeling_vitpose_backbone.py @@ -127,7 +127,6 @@ class VitPoseBackboneModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): diff --git a/tests/models/vits/test_modeling_vits.py b/tests/models/vits/test_modeling_vits.py index acf9b13dca6d..a6661d6946e6 100644 --- a/tests/models/vits/test_modeling_vits.py +++ b/tests/models/vits/test_modeling_vits.py @@ -161,9 +161,7 @@ class VitsModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): ) is_encoder_decoder = False test_pruning = False - test_headmasking = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False has_attentions = False diff --git a/tests/models/vivit/test_modeling_vivit.py b/tests/models/vivit/test_modeling_vivit.py index f1bd8da01e9b..357ed5b3d551 100644 --- a/tests/models/vivit/test_modeling_vivit.py +++ b/tests/models/vivit/test_modeling_vivit.py @@ -174,7 +174,6 @@ class VivitModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_torchscript = False test_resize_embeddings = False - test_head_masking = False test_torch_exportable = True def setUp(self): @@ -217,8 +216,7 @@ def test_forward_signature(self): # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] - expected_arg_names = ["pixel_values", "head_mask"] - self.assertListEqual(arg_names[:2], expected_arg_names) + self.assertEqual(arg_names[0], "pixel_values") def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() diff --git a/tests/models/vjepa2/test_modeling_vjepa2.py b/tests/models/vjepa2/test_modeling_vjepa2.py index c61cb72bc0a0..191dbc54848e 100644 --- a/tests/models/vjepa2/test_modeling_vjepa2.py +++ b/tests/models/vjepa2/test_modeling_vjepa2.py @@ -162,7 +162,6 @@ class VJEPA2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = VJEPA2ModelTester(self) diff --git a/tests/models/voxtral/test_modeling_voxtral.py b/tests/models/voxtral/test_modeling_voxtral.py index 
d6662ebd5532..9256bf83f983 100644 --- a/tests/models/voxtral/test_modeling_voxtral.py +++ b/tests/models/voxtral/test_modeling_voxtral.py @@ -140,7 +140,6 @@ class VoxtralForConditionalGenerationModelTest(ModelTesterMixin, GenerationTeste else {} ) test_pruning = False - test_head_masking = False _is_composite = True def setUp(self): diff --git a/tests/models/wav2vec2/test_modeling_wav2vec2.py b/tests/models/wav2vec2/test_modeling_wav2vec2.py index 796b2e8d7527..e41d37aa86a6 100644 --- a/tests/models/wav2vec2/test_modeling_wav2vec2.py +++ b/tests/models/wav2vec2/test_modeling_wav2vec2.py @@ -499,7 +499,6 @@ class Wav2Vec2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase ) fx_compatible = True test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = Wav2Vec2ModelTester(self) @@ -831,7 +830,6 @@ class Wav2Vec2RobustModelTest(ModelTesterMixin, unittest.TestCase): else () ) test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = Wav2Vec2ModelTester( diff --git a/tests/models/wav2vec2_bert/test_modeling_wav2vec2_bert.py b/tests/models/wav2vec2_bert/test_modeling_wav2vec2_bert.py index 253daa736ea0..2ada8472b09e 100644 --- a/tests/models/wav2vec2_bert/test_modeling_wav2vec2_bert.py +++ b/tests/models/wav2vec2_bert/test_modeling_wav2vec2_bert.py @@ -421,7 +421,6 @@ class Wav2Vec2BertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Test ) test_pruning = False - test_headmasking = False test_torchscript = False def setUp(self): diff --git a/tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py b/tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py index 9fdfcb8e11ea..4135f2a0e1ac 100644 --- a/tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py +++ b/tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py @@ -411,7 +411,6 @@ class Wav2Vec2ConformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest else {} ) test_pruning = False - test_headmasking = False test_torchscript = False def setUp(self): diff --git a/tests/models/wavlm/test_modeling_wavlm.py b/tests/models/wavlm/test_modeling_wavlm.py index 84855613dd6e..82edabc0bf17 100644 --- a/tests/models/wavlm/test_modeling_wavlm.py +++ b/tests/models/wavlm/test_modeling_wavlm.py @@ -307,7 +307,6 @@ class WavLMModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else {} ) test_pruning = False - test_headmasking = False def setUp(self): self.model_tester = WavLMModelTester(self) diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py index 83fbfce52b4b..a5e8e5345a85 100644 --- a/tests/models/whisper/test_modeling_whisper.py +++ b/tests/models/whisper/test_modeling_whisper.py @@ -586,11 +586,7 @@ def test_forward_signature(self): "decoder_input_ids", "decoder_attention_mask", ] - expected_arg_names.extend( - ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] - if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names - else ["encoder_outputs"] - ) + expected_arg_names.extend(["encoder_outputs"]) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_hidden_states_output(self): @@ -3298,7 +3294,7 @@ def test_forward_signature(self): # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] - expected_arg_names = ["input_features", "head_mask", "encoder_outputs"] + expected_arg_names = ["input_features", 
"encoder_outputs"] self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_forward_pass(self): diff --git a/tests/models/x_clip/test_modeling_x_clip.py b/tests/models/x_clip/test_modeling_x_clip.py index 76e53295620f..78c089af4957 100644 --- a/tests/models/x_clip/test_modeling_x_clip.py +++ b/tests/models/x_clip/test_modeling_x_clip.py @@ -152,7 +152,6 @@ class XCLIPVisionModelTest(ModelTesterMixin, unittest.TestCase): fx_compatible = False test_pruning = False test_resize_embeddings = False - test_head_masking = False def setUp(self): self.model_tester = XCLIPVisionModelTester(self) @@ -290,12 +289,6 @@ def test_attention_outputs(self): def test_multi_gpu_data_parallel_forward(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() - # some params shouldn't be scattered by nn.DataParallel - # so just remove them if they are present. - blacklist_non_batched_params = ["head_mask", "decoder_head_mask", "cross_attn_head_mask"] - for k in blacklist_non_batched_params: - inputs_dict.pop(k, None) - # move input tensors to cuda:O for k, v in inputs_dict.items(): if torch.is_tensor(v): @@ -408,7 +401,6 @@ class XCLIPTextModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (XCLIPTextModel,) if is_torch_available() else () fx_compatible = False test_pruning = False - test_head_masking = False def setUp(self): self.model_tester = XCLIPTextModelTester(self) @@ -529,7 +521,6 @@ class XCLIPModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (XCLIPModel,) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": XCLIPModel} if is_torch_available() else {} fx_compatible = False - test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False diff --git a/tests/models/xcodec/test_modeling_xcodec.py b/tests/models/xcodec/test_modeling_xcodec.py index f1769415f1bc..6e00bf19062e 100644 --- a/tests/models/xcodec/test_modeling_xcodec.py +++ b/tests/models/xcodec/test_modeling_xcodec.py @@ -111,7 +111,6 @@ class XcodecModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (XcodecModel,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False - test_headmasking = False test_resize_embeddings = False test_torchscript = False diff --git a/tests/models/xglm/test_modeling_xglm.py b/tests/models/xglm/test_modeling_xglm.py index 411602148bd1..9f100cb2b738 100644 --- a/tests/models/xglm/test_modeling_xglm.py +++ b/tests/models/xglm/test_modeling_xglm.py @@ -95,13 +95,10 @@ def prepare_config_and_inputs( config = self.get_config(gradient_checkpointing=gradient_checkpointing) - head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2) - return ( config, input_ids, input_mask, - head_mask, ) def get_config( @@ -125,18 +122,18 @@ def get_config( gradient_checkpointing=gradient_checkpointing, ) - def create_and_check_xglm_model(self, config, input_ids, input_mask, head_mask, *args): + def create_and_check_xglm_model(self, config, input_ids, input_mask, *args): model = XGLMModel(config=config) model.to(torch_device) model.eval() - result = model(input_ids, head_mask=head_mask) + result = model(input_ids) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) self.parent.assertEqual(len(result.past_key_values), config.num_hidden_layers) - def create_and_check_xglm_model_past(self, config, input_ids, input_mask, 
head_mask, *args): + def create_and_check_xglm_model_past(self, config, input_ids, input_mask, *args): model = XGLMModel(config=config) model.to(torch_device) model.eval() @@ -166,7 +163,7 @@ def create_and_check_xglm_model_past(self, config, input_ids, input_mask, head_m # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) - def create_and_check_xglm_model_attention_mask_past(self, config, input_ids, input_mask, head_mask, *args): + def create_and_check_xglm_model_attention_mask_past(self, config, input_ids, input_mask, *args): model = XGLMModel(config=config) model.to(torch_device) model.eval() @@ -201,7 +198,7 @@ def create_and_check_xglm_model_attention_mask_past(self, config, input_ids, inp # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) - def create_and_check_xglm_model_past_large_inputs(self, config, input_ids, input_mask, head_mask, *args): + def create_and_check_xglm_model_past_large_inputs(self, config, input_ids, input_mask, *args): model = XGLMModel(config=config) model.to(torch_device) model.eval() @@ -233,7 +230,7 @@ def create_and_check_xglm_model_past_large_inputs(self, config, input_ids, input # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) - def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, *args): + def create_and_check_lm_head_model(self, config, input_ids, input_mask, *args): model = XGLMForCausalLM(config) model.to(torch_device) model.eval() @@ -243,7 +240,7 @@ def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mas self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_forward_and_backwards( - self, config, input_ids, input_mask, head_mask, *args, gradient_checkpointing=False + self, config, input_ids, input_mask, *args, gradient_checkpointing=False ): model = XGLMForCausalLM(config) model.to(torch_device) @@ -270,12 +267,10 @@ def prepare_config_and_inputs_for_common(self): config, input_ids, input_mask, - head_mask, ) = config_and_inputs inputs_dict = { "input_ids": input_ids, - "head_mask": head_mask, } return config, inputs_dict diff --git a/tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py b/tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py index 6ab20ba5feb0..badc6f067c7f 100644 --- a/tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py +++ b/tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py @@ -579,7 +579,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids( with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) - # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for the minimal set. 
Hard to manipulate with multimodals etc inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]} # Ensure left padding, to adapt for some models diff --git a/tests/models/xlstm/test_modeling_xlstm.py b/tests/models/xlstm/test_modeling_xlstm.py index eea931253b81..c25c4f12e3e5 100644 --- a/tests/models/xlstm/test_modeling_xlstm.py +++ b/tests/models/xlstm/test_modeling_xlstm.py @@ -157,7 +157,6 @@ class xLSTMModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixi fx_compatible = False test_torchscript = False test_pruning = False - test_head_masking = False # xLSTM does not have attention heads pipeline_model_mapping = ( {"feature-extraction": xLSTMModel, "text-generation": xLSTMForCausalLM} if is_torch_available() else {} diff --git a/tests/models/xmod/test_modeling_xmod.py b/tests/models/xmod/test_modeling_xmod.py index 298c7ad3a27b..f0b834feaca7 100644 --- a/tests/models/xmod/test_modeling_xmod.py +++ b/tests/models/xmod/test_modeling_xmod.py @@ -593,7 +593,7 @@ def attention_mask_padding_matches_padding_free_with_position_ids( with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) - # Drop all keys except for the minimal set. Hard to manipulate with multimodals/head_mask/etc + # Drop all keys except for the minimal set. Hard to manipulate with multimodals etc inputs_dict = {k: v for k, v in inputs_dict.items() if k in ["input_ids", "attention_mask"]} # Ensure left padding, to adapt for some models diff --git a/tests/models/yolos/test_modeling_yolos.py b/tests/models/yolos/test_modeling_yolos.py index e4de540ef3e8..33a803770b19 100644 --- a/tests/models/yolos/test_modeling_yolos.py +++ b/tests/models/yolos/test_modeling_yolos.py @@ -176,7 +176,6 @@ class YolosModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): test_pruning = False test_resize_embeddings = False - test_head_masking = False test_torchscript = False test_torch_exportable = True diff --git a/tests/models/yoso/test_modeling_yoso.py b/tests/models/yoso/test_modeling_yoso.py index 621cb184e84e..864127fa7c5a 100644 --- a/tests/models/yoso/test_modeling_yoso.py +++ b/tests/models/yoso/test_modeling_yoso.py @@ -260,7 +260,6 @@ class YosoModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): else () ) test_pruning = False - test_headmasking = False test_torchscript = False pipeline_model_mapping = ( diff --git a/tests/models/zamba/test_modeling_zamba.py b/tests/models/zamba/test_modeling_zamba.py index a5580a7814dd..30c36e78dce5 100644 --- a/tests/models/zamba/test_modeling_zamba.py +++ b/tests/models/zamba/test_modeling_zamba.py @@ -301,7 +301,6 @@ class ZambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixi if is_torch_available() else {} ) - test_headmasking = False test_pruning = False def _check_past_key_values_for_generate(self, batch_size, decoder_past_key_values, cache_length, config): diff --git a/tests/models/zamba2/test_modeling_zamba2.py b/tests/models/zamba2/test_modeling_zamba2.py index 8bad77e71b22..2b667dc89b40 100644 --- a/tests/models/zamba2/test_modeling_zamba2.py +++ b/tests/models/zamba2/test_modeling_zamba2.py @@ -313,7 +313,6 @@ class Zamba2ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMix if is_torch_available() else {} ) - test_headmasking = False test_pruning = False def _check_past_key_values_for_generate(self, batch_size, decoder_past_key_values, cache_length, config): diff --git a/tests/models/zoedepth/test_modeling_zoedepth.py 
index 5fcf0e9a2f7f..e8c1aca81226 100644
--- a/tests/models/zoedepth/test_modeling_zoedepth.py
+++ b/tests/models/zoedepth/test_modeling_zoedepth.py
@@ -146,7 +146,6 @@ class ZoeDepthModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase
 
     test_pruning = False
     test_resize_embeddings = False
-    test_head_masking = False
 
     # `strict=True/False` are both failing with torch 2.7, see #38677
     test_torch_exportable = get_torch_major_and_minor_version() != "2.7"
diff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py
index ee51f183f19c..d3a450ba09ed 100755
--- a/tests/test_modeling_common.py
+++ b/tests/test_modeling_common.py
@@ -575,7 +575,6 @@ class ModelTesterMixin:
     test_pruning = True
     test_resize_embeddings = True
     test_resize_position_embeddings = False
-    test_head_masking = True
     test_mismatched_shapes = True
     test_missing_keys = True
     test_torch_exportable = False
@@ -1706,77 +1705,6 @@ def flatten_output(output):
         # (Even with this call, there are still memory leak by ~0.04MB)
         self.clear_torch_jit_class_registry()
 
-    def test_headmasking(self):
-        if not self.test_head_masking:
-            self.skipTest(reason="Model does not support head masking")
-
-        global_rng.seed(42)
-        config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
-        global_rng.seed()
-
-        inputs_dict["output_attentions"] = True
-        config.output_hidden_states = True
-        configs_no_init = _config_zero_init(config)  # To be sure we have no Nan
-        configs_no_init._attn_implementation = "eager"  # head mask works only in eager mode and will be removed soon
-        for model_class in self.all_model_classes:
-            model = model_class(config=configs_no_init)
-            model.to(torch_device)
-            model.eval()
-
-            # Prepare head_mask
-            # Set require_grad after having prepared the tensor to avoid error (leaf variable has been moved into the graph interior)
-            head_mask = torch.ones(
-                self.model_tester.num_hidden_layers,
-                self.model_tester.num_attention_heads,
-                device=torch_device,
-            )
-            head_mask[0, 0] = 0
-            head_mask[-1, :-1] = 0
-            head_mask.requires_grad_(requires_grad=True)
-            inputs = self._prepare_for_class(inputs_dict, model_class).copy()
-            inputs["head_mask"] = head_mask
-            if model.config.is_encoder_decoder:
-                signature = inspect.signature(model.forward)
-                arg_names = [*signature.parameters.keys()]
-                if "decoder_head_mask" in arg_names:  # necessary differentiation because of T5 model
-                    inputs["decoder_head_mask"] = head_mask
-                if "cross_attn_head_mask" in arg_names:
-                    inputs["cross_attn_head_mask"] = head_mask
-            outputs = model(**inputs, return_dict=True)
-
-            # Test that we can get a gradient back for importance score computation
-            output = sum(t.sum() for t in outputs[0])
-            output = output.sum()
-            output.backward()
-            multihead_outputs = head_mask.grad
-
-            self.assertIsNotNone(multihead_outputs)
-            self.assertEqual(len(multihead_outputs), self.model_tester.num_hidden_layers)
-
-            def check_attentions_validity(attentions):
-                # Remove Nan
-                for t in attentions:
-                    self.assertLess(
-                        torch.sum(torch.isnan(t)), t.numel() / 4
-                    )  # Check we don't have more than 25% nans (arbitrary)
-                attentions = [
-                    t.masked_fill(torch.isnan(t), 0.0) for t in attentions
-                ]  # remove them (the test is less complete)
-
-                self.assertAlmostEqual(attentions[0][..., 0, :, :].flatten().sum().item(), 0.0)
-                self.assertNotEqual(attentions[0][..., -1, :, :].flatten().sum().item(), 0.0)
-                if len(attentions) > 2:  # encoder-decoder models have only 2 layers in each module
-                    self.assertNotEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0)
-                self.assertAlmostEqual(attentions[-1][..., -2, :, :].flatten().sum().item(), 0.0)
-                self.assertNotEqual(attentions[-1][..., -1, :, :].flatten().sum().item(), 0.0)
-
-            if model.config.is_encoder_decoder:
-                check_attentions_validity(outputs.encoder_attentions)
-                check_attentions_validity(outputs.decoder_attentions)
-                check_attentions_validity(outputs.cross_attentions)
-            else:
-                check_attentions_validity(outputs.attentions)
-
     def test_head_pruning(self):
         if not self.test_pruning:
             self.skipTest(reason="Pruning is not activated")
@@ -1787,9 +1715,6 @@ def test_head_pruning(self):
             inputs_dict,
         ) = self.model_tester.prepare_config_and_inputs_for_common()
 
-        if "head_mask" in inputs_dict:
-            del inputs_dict["head_mask"]
-
         inputs_dict["output_attentions"] = True
         config.output_hidden_states = False
         config._attn_implementation = "eager"
@@ -1822,9 +1747,6 @@ def test_head_pruning_save_load_from_pretrained(self):
             inputs_dict,
         ) = self.model_tester.prepare_config_and_inputs_for_common()
 
-        if "head_mask" in inputs_dict:
-            del inputs_dict["head_mask"]
-
         inputs_dict["output_attentions"] = True
         config.output_hidden_states = False
         config._attn_implementation = "eager"
@@ -1861,9 +1783,6 @@ def test_head_pruning_save_load_from_config_init(self):
             inputs_dict,
         ) = self.model_tester.prepare_config_and_inputs_for_common()
 
-        if "head_mask" in inputs_dict:
-            del inputs_dict["head_mask"]
-
         inputs_dict["output_attentions"] = True
         config.output_hidden_states = False
         config._attn_implementation = "eager"
@@ -1898,9 +1817,6 @@ def test_head_pruning_integration(self):
             inputs_dict,
         ) = self.model_tester.prepare_config_and_inputs_for_common()
 
-        if "head_mask" in inputs_dict:
-            del inputs_dict["head_mask"]
-
         inputs_dict["output_attentions"] = True
         config.output_hidden_states = False
         config._attn_implementation = "eager"
@@ -2845,12 +2761,6 @@ def test_inputs_embeds_matches_input_ids(self):
 
     def test_multi_gpu_data_parallel_forward(self):
         config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
 
-        # some params shouldn't be scattered by nn.DataParallel
-        # so just remove them if they are present.
-        blacklist_non_batched_params = ["head_mask", "decoder_head_mask", "cross_attn_head_mask"]
-        for k in blacklist_non_batched_params:
-            inputs_dict.pop(k, None)
-
         # move input tensors to accelerator 0
         for k, v in inputs_dict.items():
             if torch.is_tensor(v):