Add option to allow missing or additional detected PyTorch test suites #4052
Conversation
@boegelbot please test @ jsc-zen3-a100

@boegel: Request for testing this PR well received on jsczen3l1.int.jsc-zen3.fz-juelich.de, PR test command '
Test results coming soon (I hope)...
Details: notification for comment with ID 3809792319 processed. Message to humans: this is just bookkeeping information for me.
@boegelbot please test @ jsc-zen3

@boegel: Request for testing this PR well received on jsczen3l1.int.jsc-zen3.fz-juelich.de, PR test command '
Test results coming soon (I hope)...
Details: notification for comment with ID 3811375867 processed. Message to humans: this is just bookkeeping information for me.
Test report by @boegelbot
Overview of tested easyconfigs (in order):
Build succeeded for 1 out of 1 (total: 47 hours 36 mins 39 secs) (1 easyconfigs in total)
Test report by @boegelbot
Overview of tested easyconfigs (in order):
Build succeeded for 1 out of 1 (total: 52 hours 4 mins 5 secs) (1 easyconfigs in total)
@boegelbot please test @ jsc-zen3-a100

@boegel: Request for testing this PR well received on jsczen3l1.int.jsc-zen3.fz-juelich.de, PR test command '
Test results coming soon (I hope)...
Details: notification for comment with ID 3880190685 processed. Message to humans: this is just bookkeeping information for me.
(created using `eb --new-pr`)

This relaxes the PyTorch test evaluation a bit. The easyblock parses the XML report files and compares the result against the summary output in the stdout of the test command. There are two cases:

1. There are more failures in the XML files than in the summary -> PyTorch didn't consider something failed that we do. Very weird, and possibly an issue with the XML parser. However, this is only a minor issue, as we counted more failures (from the XML files) than might actually be present. So if the allowed-test-failure-count check still succeeds, we can ignore this, at least for users.
2. The summary shows a failure we have not found in the XML files -> The XML report might be missing because the test crashed or otherwise didn't write its results. This is an issue because one test ("suite") might contain hundreds of test cases, many of which could have failed, but we didn't count any of those failures. Of course there might be only a single failure, but we cannot know for sure, hence we fail.
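To illustrate where the XML-side failure count comes from: PyTorch's test runner writes JUnit-style XML reports, where each `<testcase>` element may contain `<failure>` or `<error>` children. A minimal sketch of counting failures from such a report (the function name and the exact report layout are assumptions for illustration, not the easyblock's actual code):

```python
import xml.etree.ElementTree as ET

def count_xml_failures(xml_text):
    """Count failed or errored test cases in a JUnit-style XML report."""
    root = ET.fromstring(xml_text)
    # A <testcase> counts as failed if it has a <failure> or <error> child.
    return sum(
        1 for case in root.iter("testcase")
        if case.find("failure") is not None or case.find("error") is not None
    )
```

A count obtained this way per suite is what gets compared against the failure count PyTorch itself prints in its stdout summary.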
I added two options, `allow_extra_failures` and `allow_missing_failures`, for those two cases. They can be set to `True`/`False`, but also to a maximum number.
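A minimal sketch of how a bool-or-int option could gate each mismatch case; the option names mirror the ones added in this PR, but the logic below is an illustration under assumed semantics, not the easyblock's implementation:

```python
def within_allowance(mismatch_count, option):
    """Return True if `mismatch_count` mismatches are acceptable.

    `option` may be False (never allow), True (always allow),
    or an integer giving the maximum number of allowed mismatches.
    """
    if option is True:
        return True
    if option is False:
        return mismatch_count == 0
    return mismatch_count <= int(option)

def check_test_counts(xml_failures, summary_failures,
                      allow_extra_failures=False, allow_missing_failures=False):
    """Compare failure counts from XML reports against the stdout summary."""
    errors = []
    # Case 1: XML reports more failures than the summary.
    extra = max(xml_failures - summary_failures, 0)
    # Case 2: summary reports failures missing from the XML reports.
    missing = max(summary_failures - xml_failures, 0)
    if not within_allowance(extra, allow_extra_failures):
        errors.append("More failures found in XML reports than in the summary")
    if not within_allowance(missing, allow_missing_failures):
        errors.append("Summary reports failures missing from the XML reports")
    return errors
```

Treating `True`/`False` and an integer maximum through one helper keeps the two options symmetric: `False` demands an exact match, `True` waives the check, and an integer bounds how far the counts may diverge.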