Flaky and pytest-check don't fail when used together #25
@antoche did you try with the … Edit: tested with the …
This seems like a fundamental incompatibility between these two plugins.
Found a similar plugin that works well with … Adding the …
Tests: …
Results: …
This is something I'd like to fix, just not sure how to.
TBH, I don't think this is fixable. You can fix compatibility with the flaky plugin by using another pytest hook function, but then you'd probably break compatibility with some other plugin. My solution (and what I recommend to others) is to not use a plugin to re-run failures and rather use pytest's options via the CLI.
@okken I would close this issue as won't-fix, simply b/c the fix would just shift the problem to another place.
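The `pytest … || pytest … --last-failed -m flaky` idiom can be modeled without pytest at all. This is a toy sketch, not real code from any plugin: the function names and selection logic are invented for illustration, and only the exit-code convention (0 = all passed, 1 = failures, 5 = no tests collected) comes from pytest itself.

```python
# Toy model of: pytest <dir> || pytest <dir> --last-failed -m flaky
# pytest exit codes assumed here: 0 = all passed, 1 = some tests failed,
# 5 = no tests were collected.

def run_suite(failing_tests):
    """Stand-in for the first pytest invocation over the whole suite."""
    return 1 if failing_tests else 0

def rerun_last_failed_flaky(flaky_failures):
    """Stand-in for `pytest --last-failed -m flaky`.

    If the previous failures contain no flaky-marked tests, nothing is
    collected and pytest exits with 5, not 0.
    """
    if not flaky_failures:
        return 5
    return 0  # optimistic toy assumption: flaky tests pass on retry

def ci_job(failing_tests, flaky_failures):
    # The shell `||` only runs the second command when the first fails.
    # That is what keeps a fully green first pass from hitting the
    # exit-code-5 case (rerun selecting zero tests) and going red.
    rc = run_suite(failing_tests)
    if rc != 0:
        rc = rerun_last_failed_flaky(flaky_failures)
    return rc

print(ci_job(failing_tests=0, flaky_failures=0))  # 0: green, rerun skipped
print(ci_job(failing_tests=2, flaky_failures=2))  # 0: flaky tests retried
print(ci_job(failing_tests=1, flaky_failures=0))  # 5: non-flaky failures remain,
                                                  #    so the job stays non-zero
```

The design point is that `||` short-circuits: on a clean first run the rerun command never executes, so its exit code 5 can never mask a green build.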
Could this be fixed by changing pytest itself so that its hooks are more
compatible with both "flaky tests" and "checks"?
…On Thu, 19 Dec 2019 at 06:30, Joao Coelho ***@***.***> wrote:
TBH, I don't think this is fixable.. you can fix compatibility with the
flaky plugin by using another pytest hook function, but then you'd probably
break compatibility with some other plugin. Case in point: pytest-check
is compatible with pytest-timeout and not with flaky. pytest-rerunfailures
is compatible with flaky but not with pytest-timeout (details here
<pytest-dev/pytest-rerunfailures#99>).
My solution (and what I recommend to others) is to *not* use a plugin to
re-run failures and rather use pytest's options via CLI.
1. Mark flaky tests with @pytest.mark.flaky
2. pytest <file_or_dir> <options> || pytest <file_or_dir> <options>
--last-failed -m flaky
The || is important b/c that portion will not be run if there are no
failures. If it is run when there are no failures, pytest's exit code is 5
(as opposed to 0 for success), which means the script would return an error.
@okken <https://github.com/okken> I would close this issue as won't fix,
simply b/c the fix would simply shift the problem to another place.
Unclear, but I don't think so. pytest exposes those hooks with a clear design for how they work, and plugins are allowed to make potentially breaking changes; that's what's going on. Basically, when you have multiple plugins changing how the execution flow works, you're bound to have incompatibilities.
Having had a closer look at pytest-check, I think this is a design issue in
pytest-check.
Pytest-check swallows and logs the exceptions raised during a test, then
flags the test as failed at makereport() time. So any part of the system
expecting to act on a failed test will not see any failure.
It seems to me it would be more robust to raise the failure around
runtest_call() or runtest_teardown(), i.e., immediately after the test is
finished. This is how I implemented a similar feature in a nose plugin, and
it looks like this is also what pytest-assume does (
https://github.com/astraw38/pytest-assume/blob/master/pytest_assume/plugin.py).
At first glance, pytest-assume would probably be compatible with most other
plugins including flaky.
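The two designs under discussion can be sketched without pytest at all. This is a plugin-free toy model, not the actual code of pytest-check or pytest-assume: the class and function names are invented. A "deferred" soft check swallows failures until report time, so an outer retry layer never sees an exception; an "eager" one re-raises at the end of the test body, so the failure propagates like a normal assertion error.

```python
class SoftCheck:
    """Toy soft-check accumulator: records failures instead of raising."""

    def __init__(self):
        self.failures = []

    def check(self, condition, message=""):
        # Record the failure rather than raising, so the test keeps running.
        if not condition:
            self.failures.append(message)

    def raise_on_failure(self):
        # "Eager" style: call this at the end of the test body so the
        # accumulated failures surface as a normal AssertionError.
        if self.failures:
            raise AssertionError("; ".join(self.failures))


def run_test(test_body):
    """Toy runner: 'failed' if the body raises, else 'passed'.

    An outer retry mechanism (like flaky) only ever re-runs tests whose
    body actually raised.
    """
    try:
        test_body()
    except AssertionError:
        return "failed"
    return "passed"


# Deferred style: the failure stays buried in the accumulator, the runner
# sees no exception, and a retry plugin would never trigger.
deferred = SoftCheck()
print(run_test(lambda: deferred.check(1 == 2, "1 != 2")))  # passed

# Eager style: the recorded failure is re-raised inside the test body,
# so the runner (and any retry layer wrapped around it) sees a real failure.
eager = SoftCheck()

def eager_body():
    eager.check(1 == 2, "1 != 2")
    eager.raise_on_failure()

print(run_test(eager_body))  # failed
```

In this toy model the deferred variant "passes" from the runner's point of view, which mirrors how a makereport-time failure is invisible to anything hooked around the test call itself.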
…On Thu, 19 Dec 2019 at 11:43, Joao Coelho ***@***.***> wrote:
Unclear, but don't think so. pytest exposes those hooks with clear design
on how they work and plugins are allowed to make potential breaking changes
and that's what's going on.
Basically, when you have multiple plugins changing how the execution flow
works, you're bound to have incompatibilities.
pytest-rerunfailures is a reasonable workaround.
Hi,
I've also filed this under box/flaky#162, but since the issue arises only when using both plugins together, I'm not sure where the responsibilities lie, so flagging it with both projects to be sure.
This simple case fails as expected:

But uncomment the `with check` and the test now "passes":

- The "Failed Checks: 1" output disappears.
- The "Flaky Test Report" in the console prints "test_foo passed 1 out of the required 1 times. Success!", whereas the pytest final line says "no tests ran" instead of either the red "1 failed" (expected) or the green "1 passed" (which would be more consistent with the flaky report).
- The exit code from pytest is 0 (i.e., tests passing) instead of either 1 (expected) or 5 (which would be more consistent with the "no tests ran" printout).

This was using Python 3.7.3, pytest 4.6.5, pytest-check 0.3.5 and flaky 3.6.1.
Regards,
A.