Conversation

@gante (Contributor) commented on Feb 19, 2025

What does this PR do?

Carved from #36238: Removes redundant code and updates outdated docs in qwen2_audio.

In a nutshell, the attention layers were copied from whisper. However, in qwen2_audio the attention layers are used exclusively in the encoder, and thus never use a cache. Removing the cache-related code there spares us a refactor every time we revisit caches, and results in more readable code.
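For illustration, this is roughly what an encoder-only attention forward reduces to once the cache plumbing (past_key_value, cache_position, key_value_states) is gone. A simplified sketch, not the exact code in the PR:

import torch
from torch import nn

class EncoderSelfAttention(nn.Module):
    """Simplified encoder-only self-attention: no past_key_value / cache_position."""

    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, hidden_states, attention_mask=None):
        bsz, seq_len, _ = hidden_states.shape
        shape = (bsz, seq_len, self.num_heads, self.head_dim)
        # No cache: queries, keys and values always come from the same hidden states
        q = self.q_proj(hidden_states).view(shape).transpose(1, 2)
        k = self.k_proj(hidden_states).view(shape).transpose(1, 2)
        v = self.v_proj(hidden_states).view(shape).transpose(1, 2)
        attn = nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attention_mask)
        attn = attn.transpose(1, 2).reshape(bsz, seq_len, -1)
        return self.out_proj(attn)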

The following bits were also updated in this PR:

  • Missing/outdated docs
  • Redundant overrides

def __init__(self, config: Qwen2AudioConfig):
    super().__init__(config)
-   self.audio_tower = AutoModel.from_config(config.audio_config)
+   self.audio_tower = AutoModel.from_config(config.audio_config)  # Usually a `Qwen2AudioEncoder` instance
@gante (Contributor, Author) commented:

There were no references to Qwen2AudioEncoder in our codebase nor on the Hub. However, upon closer inspection of checkpoints, we can see it is used here. Added a comment for clarification.

(btw, Qwen2AudioEncoder is untested 😢 )
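For context, AutoModel.from_config dispatches on the model_type of the config it receives, which is why the audio tower resolves to Qwen2AudioEncoder. A minimal check, assuming a transformers install that ships qwen2_audio:

from transformers import AutoModel, Qwen2AudioConfig

config = Qwen2AudioConfig()
audio_tower = AutoModel.from_config(config.audio_config)
print(type(audio_tower).__name__)  # expected: Qwen2AudioEncoder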

        attention_mask=attention_mask,
    )

    def prepare_inputs_for_generation(
@gante (Contributor, Author) commented on Feb 19, 2025:

Doesn't seem to be needed? No tests fail. (The only custom code is in L1280-L1285, and _merge_input_ids_with_audio_features is related to the legacy processing path.)

It follows the old, outdated pattern, so we would have to rewrite it anyway. If issues come up, we can add it back following the new prepare_inputs_for_generation pattern, with the corrections.
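For reference, the modern pattern in composite models usually delegates to GenerationMixin.prepare_inputs_for_generation and only layers model-specific tweaks on top. A hedged sketch of that shape (the tweak shown is illustrative, not this PR's code):

def prepare_inputs_for_generation(self, *args, **kwargs):
    # Let the shared GenerationMixin implementation build the base inputs
    model_inputs = super().prepare_inputs_for_generation(*args, **kwargs)
    # Model-specific adjustments would go here, e.g. only forwarding audio
    # features on the first generation step (illustrative only)
    return model_inputs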

Comment on lines -1332 to -1333
def _reorder_cache(self, *args, **kwargs):
    return self.language_model._reorder_cache(*args, **kwargs)
@gante (Contributor, Author) commented:

This is only used in models that rely on the legacy cache format, which is not the case here.
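As background, _reorder_cache only matters for the legacy tuple-of-tuples cache, where beam search has to reorder the batch dimension by hand. A sketch of what such an implementation typically does (not code from this model):

def _reorder_cache(past_key_values, beam_idx):
    # Legacy format: one (key, value) tuple per layer, batch on dim 0.
    # Beam search reorders the batch dimension to follow the selected beams.
    return tuple(
        tuple(t.index_select(0, beam_idx.to(t.device)) for t in layer_past)
        for layer_past in past_key_values
    )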

@gante requested review from ArthurZucker and eustlb on February 19, 2025 at 11:27
@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@ArthurZucker (Collaborator) left a comment:

Thanks for cleaning up!

Comment on lines +215 to +217
@deprecate_kwarg("key_value_states", version="4.52")
@deprecate_kwarg("past_key_value", version="4.52")
@deprecate_kwarg("cache_position", version="4.52")
@ArthurZucker (Collaborator) commented:

would be nice to have a single call here hehe
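One way to read the suggestion: a small hypothetical deprecate_kwargs helper that wraps deprecate_kwarg so several kwargs can be deprecated with a single decorator (this helper does not exist in transformers; only deprecate_kwarg does):

from transformers.utils.deprecation import deprecate_kwarg

def deprecate_kwargs(*names, version):
    # Hypothetical helper: apply deprecate_kwarg once per name so call sites
    # stack one decorator instead of three
    def wrap(fn):
        for name in names:
            fn = deprecate_kwarg(name, version=version)(fn)
        return fn
    return wrap

# usage at the call site:
# @deprecate_kwargs("key_value_states", "past_key_value", "cache_position", version="4.52")
# def forward(self, ...): ...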

@gante merged commit 957b05b into huggingface:main on Mar 20, 2025
16 checks passed
@gante deleted the qwen2_audio_fixes branch on March 20, 2025 at 10:54
zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request May 14, 2025