Remove @slow for test_eager_matches_sdpa_inference (#34558)
Changes from all commits: 9c040bd, 72184d9, ac16809, 631cdec, 271306b, 7082ba0, 2779c68, c18c6c6, 69021b5, f709dbb, 296c068
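For context (an editor's sketch of the intent; the class names below are invented for illustration): `slow` from `transformers.testing_utils` skips a test unless the `RUN_SLOW` environment variable is set, so removing it from `test_eager_matches_sdpa_inference` moves the eager-vs-SDPA equivalence check into the regular test run.

```python
# Illustration only: how removing @slow changes when the test runs.
# `DummyBefore`/`DummyAfter` are made-up names; the real test lives in the
# shared model test mixin that the per-model test classes inherit from.
from transformers.testing_utils import require_torch_sdpa, slow


class DummyBefore:
    @require_torch_sdpa
    @slow  # skipped unless RUN_SLOW=1
    def test_eager_matches_sdpa_inference(self):
        ...


class DummyAfter:
    @require_torch_sdpa  # after this PR: runs in the regular (non-slow) suite
    def test_eager_matches_sdpa_inference(self):
        ...
```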
Albert model tests (diff):

@@ -17,10 +17,11 @@
 import unittest
 
 from packaging import version
+from parameterized import parameterized
 
 from transformers import AlbertConfig, AutoTokenizer, is_torch_available
 from transformers.models.auto import get_values
-from transformers.testing_utils import require_torch, slow, torch_device
+from transformers.testing_utils import require_torch, require_torch_sdpa, slow, torch_device
 
 from ...test_configuration_common import ConfigTester
 from ...test_modeling_common import ModelTesterMixin, ids_tensor, random_attention_mask
|
@@ -288,6 +289,12 @@ def setUp(self):
         self.model_tester = AlbertModelTester(self)
         self.config_tester = ConfigTester(self, config_class=AlbertConfig, hidden_size=37)
 
+    @parameterized.expand([("float16",), ("bfloat16",), ("float32",)])
+    @require_torch_sdpa
+    @unittest.skip("Albert requires `head_mask` which is currently not done in this test.")
+    def test_eager_matches_sdpa_inference(self):
+        pass
+
Comment on lines +292 to +296

Contributor: On skips like this, on Albert and other models: the test pulls the main input and the attention mask, manipulates them, and finally sends them to the model. We could …

Collaborator (PR author): Maybe let's merge and do this in a follow-up PR. It's never worked before.
     def test_config(self):
         self.config_tester.run_common_tests()
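A hedged sketch of the follow-up suggested in the review thread above: instead of skipping, the Albert override could add the `head_mask` input on top of the inputs the common test already prepares. The helper below is hypothetical (name, dtype handling, and the all-ones mask are assumptions, not code from this PR):

```python
# Hypothetical helper for a follow-up PR: extend the prepared inputs with a
# head mask so the eager and SDPA code paths receive identical inputs.
import torch


def with_head_mask(inputs_dict, config, dtype=torch.float32):
    """Return a copy of the prepared inputs with an all-ones head mask added.

    An all-ones mask keeps every attention head, so it should not change the
    model output; it only makes sure the head_mask argument is exercised.
    """
    inputs = dict(inputs_dict)
    inputs["head_mask"] = torch.ones(
        config.num_hidden_layers, config.num_attention_heads, dtype=dtype
    )
    return inputs


# Rough usage inside an overridden test_eager_matches_sdpa_inference:
#   inputs = with_head_mask(inputs_dict, model_eager.config)
#   out_eager = model_eager(**inputs)
#   out_sdpa = model_sdpa(**inputs)
#   torch.testing.assert_close(out_eager[0], out_sdpa[0], rtol=..., atol=...)
```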
(Another changed file in this pull request is collapsed: large diffs are not rendered by default.)
Idefics model tests (diff):

@@ -134,7 +134,7 @@ def __init__(
             num_attention_heads=self.vision_num_attention_heads,
             num_hidden_layers=self.vision_num_hidden_layers,
             intermediate_size=self.vision_intermediate_size,
-        )
+        ).to_dict()
 
         self.perceiver_qk_layer_norms_perceiver = perceiver_qk_layer_norms_perceiver
         self.perceiver_resampler_depth = perceiver_resampler_depth
@@ -316,7 +316,6 @@ def prepare_pixel_values(self):
         return floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size])
 
     @require_torch_sdpa
-    @slow
Member: I think this test doesn't need a skip anymore, since we no longer check whether the model has SDPA layers within this test. But it can be skipped due to the same flakiness.

Collaborator (PR author): It still has input preparation issues, for which I added the skip below to another model test class. As mentioned in a reply to Joao's comment, let's try to do it in a follow-up PR.
     @parameterized.expand([("float16",), ("bfloat16",), ("float32",)])
     def test_eager_matches_sdpa_inference(self, torch_dtype: str):
         self.skipTest(reason="Idefics has a hard requirement on SDPA, skipping this test")
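As the author notes above, the remaining blocker is input preparation: the common SDPA-equivalence test only builds text inputs, while IDEFICS also needs image inputs and an image attention mask. A hedged sketch of what that extra preparation could look like in a follow-up (the helper name and all shapes are illustrative assumptions, not code from this PR):

```python
# Hypothetical input-preparation helper for a follow-up PR: attach dummy image
# inputs next to the text inputs so a multimodal model such as IDEFICS can be
# pushed through the eager-vs-SDPA comparison. Shapes are placeholders.
import torch


def with_image_inputs(inputs_dict, num_images=1, num_channels=3, image_size=30):
    inputs = dict(inputs_dict)
    batch_size, seq_len = inputs_dict["input_ids"].shape
    # Random pixel values: `num_images` images per sample in the batch.
    inputs["pixel_values"] = torch.rand(
        batch_size, num_images, num_channels, image_size, image_size
    )
    # All-ones image attention mask: every text token may attend to every
    # image. IDEFICS pads this mask together with the text attention mask.
    inputs["image_attention_mask"] = torch.ones(
        batch_size, seq_len, num_images, dtype=torch.long
    )
    return inputs
```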
@@ -353,6 +352,12 @@ def _prepare_for_class(self, inputs_dict, model_class, return_labels=False):
         return inputs_dict
 
+    @parameterized.expand([("float16",), ("bfloat16",), ("float32",)])
+    @require_torch_sdpa
+    @unittest.skip("Idefics requires both text and image inputs which is currently not done in this test.")
+    def test_eager_matches_sdpa_inference(self):
+        pass
+
     def test_model_outputs_equivalence(self):
         try:
             orig = self.all_model_classes
@@ -602,6 +607,12 @@ def setUp(self):
         )
         self.config_tester = ConfigTester(self, config_class=IdeficsConfig, hidden_size=37)
 
+    @parameterized.expand([("float16",), ("bfloat16",), ("float32",)])
+    @require_torch_sdpa
+    @unittest.skip("Idefics requires both text and image inputs which is currently not done in this test.")
+    def test_eager_matches_sdpa_inference(self, torch_dtype):
+        pass
+
     @pytest.mark.generate
     def test_left_padding_compatibility(self):
         """Overwrite because IDEFICS needs image attention mask to be also padded"""
|
|