
Conversation

@gante (Contributor) commented Oct 24, 2024

What does this PR do?

`test_eager_matches_sdpa_generate` has failed in our new failure reporting system (here; cc @ydshieh)

Having a look at the test, the cause of the flakiness was clear: we are using randomly initialized models with `generate`, and tiny numerical perturbations can result in a different sampled token, sending generation in a different direction and ultimately failing the check (eager `generate` == SDPA `generate`).

This PR:

  1. Moves the test to `GenerationTesterMixin`, since it calls `generate`
  2. Adds logic to handle the flakiness and perform the correct check: if the generations differ, compare the logits at the first mismatched token. Despite producing a different token, the logits should be nearly identical -- if they are, the test passes (see the sketch after this list).
  3. Removes most overwrites, which only existed to handle flakiness through `is_flaky()`.
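
A minimal sketch of the check in item 2, assuming greedy decoding and that both `generate` calls were made with `return_dict_in_generate=True, output_logits=True`; `eager_out`/`sdpa_out` are placeholder names, not the test's actual variables:

```python
import torch

def check_eager_vs_sdpa(eager_out, sdpa_out, atol=1e-3, rtol=1e-3):
    # Identical sequences: the strict token-level check already passes.
    if torch.equal(eager_out.sequences, sdpa_out.sequences):
        return
    # `logits` is a tuple with one (batch_size, vocab_size) tensor per generated step.
    for eager_logits, sdpa_logits in zip(eager_out.logits, sdpa_out.logits):
        if not torch.equal(eager_logits.argmax(-1), sdpa_logits.argmax(-1)):
            # First step whose selected token differs: the two distributions
            # must still be nearly identical for the divergence to be benign.
            torch.testing.assert_close(eager_logits, sdpa_logits, atol=atol, rtol=rtol)
            return  # later steps see different inputs, so stop comparing here
```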

The following test commands were run:

  1. `RUN_SLOW=1 py.test tests/models/ -k test_eager_matches_sdpa_generate`
  2. `RUN_SLOW=1 py.test tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_eager_matches_sdpa_generate --flake-finder --flake-runs 500`
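
(The `--flake-finder` / `--flake-runs` flags come from the `pytest-flakefinder` plugin, which reruns the selected test many times to surface residual flakiness.)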

@gante gante requested review from ArthurZucker and ydshieh October 24, 2024 16:09
@gante (Author) commented:

(this is mostly copy-paste, going to comment the sections that are changed)

Comment on lines +2059 to +2067
@gante (Author):

Uses `self.prepare_config_and_inputs_for_generate()` instead, which enables us to pass a dictionary of inputs to `generate` (better input control than simply using `inputs_dict[model_class.main_input_name]`).
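
Roughly, the change looks like this (a hedged sketch in the test-mixin context; `model_class`, `torch_device`, and the `generate` arguments stand in for the mixin's real fixtures):

```python
config, inputs_dict = self.prepare_config_and_inputs_for_generate()
model = model_class(config).to(torch_device).eval()

# Before: only the main input tensor reached generate().
# output = model.generate(inputs_dict[model_class.main_input_name], ...)

# After: the full dict is forwarded, so attention masks, pixel values, and
# other model-specific inputs are exercised as well.
output = model.generate(**inputs_dict, max_new_tokens=3, do_sample=False)
```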

Comment on lines 2090 to 2098
@gante (Author):

Uses dictionaries -> more compact code (illustrated below).
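
For instance, one shared kwargs dict instead of repeating the same arguments on every call (an assumed illustration of the pattern, not the exact diff):

```python
generate_kwargs = {
    "max_new_tokens": 3,
    "do_sample": False,
    "return_dict_in_generate": True,
    "output_logits": True,
}
# The same dict drives both calls, keeping the two code paths trivially in sync.
res_eager = model_eager.generate(**inputs_dict, **generate_kwargs)
res_sdpa = model_sdpa.generate(**inputs_dict, **generate_kwargs)
```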

Comment on lines +2100 to +2127
@gante (Author):

Flakiness handling, as explained in the PR description.

@ArthurZucker (Collaborator) left a comment:

Nice!

Comment on lines 2077 to 2101
@ArthurZucker (Collaborator):

With #34282 we won't have to init two models. Also, this is memory-hungry.

@gante (Author):

TBH we can do it one model at a time; going to change the test.

After FlexAttention becomes the norm, we probably won't need this test.
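
A rough sketch of the one-model-at-a-time shape (assumed, per the discussion; `tmpdir`, `model_class`, and `inputs_dict` are placeholders): reload the checkpoint with a different `attn_implementation` and free the previous weights, so only one copy lives in memory at any point.

```python
import gc

import torch

outputs = {}
for attn_implementation in ("eager", "sdpa"):
    model = model_class.from_pretrained(
        tmpdir, attn_implementation=attn_implementation
    ).to(torch_device)
    with torch.no_grad():
        outputs[attn_implementation] = model.generate(
            **inputs_dict, max_new_tokens=3, do_sample=False
        )
    # Drop the weights before loading the next implementation.
    del model
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```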

@HuggingFaceDocBuilderDev (bot) commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@ydshieh (Collaborator) left a comment:

This is sooooo great! ❤️

@gante gante force-pushed the test_eager_matches_sdpa_generate branch from 6ffa18d to 415d009 on October 25, 2024 10:35
@gante gante merged commit 186b8dc into huggingface:main Oct 25, 2024
@gante gante deleted the test_eager_matches_sdpa_generate branch October 25, 2024 10:55
BernardZach pushed a commit to BernardZach/transformers that referenced this pull request Dec 5, 2024
ylacombe added a commit that referenced this pull request on Dec 10, 2024:
* Support BatchNorm in Hubert pos_conv_emb as in fairseq

* Correct the new defaults (#34377)

* Correct the new defaults

* CIs

* add check

* Update utils.py

* Update utils.py

* Add the max_length in generate test checking shape without passing length

* style

* CIs

* fix fx CI issue

* [auto. ping] Avoid sending empty info + add more team members (#34383)

* update

* update

---------

Co-authored-by: ydshieh <[email protected]>

* Fix glm (#34388)

* Fix duplicated

* fix import

* Use non-nested images and batched text in Idefics2/3 (#34222)

* add support for non nested images and add tests

* add tests error scenario

* fix style

* added single and no image to error tests

* Fix onnx non-exportable inplace aten op (#34376)

* fix onnx non-exportable inplace op

* mistral, qwen2, qwen2_vl, starcoder2

* fixup copies

* Fix right padding in LLaVA models (#34305)

* fix right pad llavas

* device mismatch

* no filter (#34391)

* no filter

* no filter

* no filter

---------

Co-authored-by: ydshieh <[email protected]>

* SynthID: better example (#34372)

* better example

* Update src/transformers/generation/configuration_utils.py

* Update src/transformers/generation/logits_process.py

* nits

* Tests: upgrade `test_eager_matches_sdpa_generate` (#34386)

* Fix bnb training test failure (#34414)

* Fix bnb training test: compatibility with OPTSdpaAttention

* Avoid check expected exception when it is on CUDA (#34408)

* update

* update

---------

Co-authored-by: ydshieh <[email protected]>

* Fix typos in agents_advanced.md (#34405)

* [docs] Cache implementations (#34325)

cache

* [run-slow] hubert

* Support BatchNorm in Hubert pos_conv_emb as in fairseq
Add conversion integration test, and make batchnorm explicit variable

* Support BatchNorm in Hubert pos_conv_emb as in fairseq
fix make fixup styling changes

* [run-slow] hubert

* Support BatchNorm in Hubert pos_conv_emb as in fairseq

* [run-slow] hubert

* Support BatchNorm in Hubert pos_conv_emb as in fairseq
Add conversion integration test, and make batchnorm explicit variable

* Support BatchNorm in Hubert pos_conv_emb as in fairseq
fix make fixup styling changes

* [run-slow] hubert

* [run-slow] hubert

---------

Co-authored-by: Cyril Vallez <[email protected]>
Co-authored-by: Yih-Dar <[email protected]>
Co-authored-by: ydshieh <[email protected]>
Co-authored-by: Yoni Gozlan <[email protected]>
Co-authored-by: Ilyas Moutawwakil <[email protected]>
Co-authored-by: Raushan Turganbay <[email protected]>
Co-authored-by: Joao Gante <[email protected]>
Co-authored-by: Matthew Douglas <[email protected]>
Co-authored-by: Rudy Delouya <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: Yoach Lacombe <[email protected]>
