WIP Change outcome to 'passed' for xfail unless it's strict #1795
```diff
@@ -230,7 +230,8 @@ def pytest_runtest_makereport(item, call):
     if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
         # we need to translate into how pytest encodes xpass
         rep.wasxfail = "reason: " + repr(item._unexpectedsuccess)
-        rep.outcome = "failed"
+        # TODO: Do we need to check for strict xfail here as well?
+        rep.outcome = "passed"
     elif item.config.option.runxfail:
         pass   # don't interefere
     elif call.excinfo and call.excinfo.errisinstance(pytest.xfail.Exception):
```
```diff
@@ -245,7 +246,12 @@ def pytest_runtest_makereport(item, call):
                 rep.outcome = "skipped"
                 rep.wasxfail = evalxfail.getexplanation()
         elif call.when == "call":
-            rep.outcome = "failed"  # xpass outcome
+            strict_default = item.config.getini('xfail_strict')
+            is_strict_xfail = evalxfail.get('strict', strict_default)
+            if is_strict_xfail:
+                rep.outcome = "failed"
+            else:
+                rep.outcome = "passed"
             rep.wasxfail = evalxfail.getexplanation()
     elif evalskip is not None and rep.skipped and type(rep.longrepr) is tuple:
         # skipped by mark.skipif; change the location of the failure
```

Hmm, I think this is the more correct implementation of "strict xfail" than the one I did... with this, I think we don't even need …
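For context, a hypothetical test module (not part of the diff) illustrating what the new branch means in practice: a plain `xfail` that unexpectedly passes now gets outcome `"passed"` and is shown as XPASS, while `xfail(strict=True)` (or any xfail when the `xfail_strict` ini option is true) keeps outcome `"failed"`.

```python
# Hypothetical illustration of the behaviour implemented above, assuming the
# default xfail_strict = False in the ini file.
import pytest

@pytest.mark.xfail(reason="known bug")
def test_xpass_non_strict():
    # Passes unexpectedly: reported as XPASS, outcome is "passed",
    # so the test run still succeeds.
    assert True

@pytest.mark.xfail(strict=True, reason="known bug")
def test_xpass_strict():
    # Passes unexpectedly: strict xfail, outcome stays "failed",
    # so the test run fails.
    assert True
```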
```diff
@@ -260,7 +266,7 @@ def pytest_report_teststatus(report):
     if hasattr(report, "wasxfail"):
         if report.skipped:
             return "xfailed", "x", "xfail"
-        elif report.failed:
+        elif report.passed:
             return "xpassed", "X", ("XPASS", {'yellow': True})


 # called by the terminalreporter instance/plugin
```

@fabioz I think this will mean you no longer have to do special handling for this in pydev_runfiles_pytest2.py, correct?

@nicoddemus do you think we need to check for …

No need: strict xfails now have report set to …

Okay, that's what I thought. Thanks!
I think you are right, this has to be:
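The snippet referred to here is presumably the `_unexpectedsuccess` branch from the first hunk with the same strict handling applied; a sketch under that assumption, not the reviewer's exact code:

```python
# Sketch only: inside pytest_runtest_makereport, mirror the strict handling
# in the unittest _unexpectedsuccess branch as well.
if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
    rep.wasxfail = "reason: " + repr(item._unexpectedsuccess)
    if item.config.getini('xfail_strict'):
        rep.outcome = "failed"
    else:
        rep.outcome = "passed"
```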
I suggest creating a helper function:
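Something along these lines, presumably (the name and signature below are my guess, not the reviewer's code):

```python
# Possible shape of the suggested helper: decide whether an xfail marker is
# strict, falling back to the xfail_strict ini option when the marker does
# not say explicitly.
def _is_strict_xfail(evalxfail, item):
    strict_default = item.config.getini('xfail_strict')
    return evalxfail.get('strict', strict_default)
```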
And reusing in both places. I also noticed you can reuse this same function in check_strict_xfail in the same module.
Oh OK, I looked some more and noticed that in this case it is just to cover unittest specifically, which doesn't have a strict option:
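A minimal sketch of the kind of suite meant here (my reconstruction, not the original snippet): unittest's decorator takes no arguments, so there is no per-test strict flag to consult in that branch.

```python
# Reconstruction: unittest.expectedFailure has no strict= parameter, so the
# _unexpectedsuccess branch only has the ini default to go by.
import unittest

class TestUnexpectedSuccess(unittest.TestCase):

    @unittest.expectedFailure
    def test_passes_anyway(self):
        self.assertTrue(True)  # an unexpected success

if __name__ == '__main__':
    unittest.main()
```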
Running it:

So I think the original code was the correct one.
Another thing: I noticed that no tests from unittest caught this regression; we should add two new test cases for it. This test suite should pass:
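A sketch of such a suite (my reconstruction of what the comment asks for, not the original snippet): the decorated test fails as expected, so pytest should report an xfail and an overall passing run.

```python
# Reconstruction: the expected failure actually fails, so the run passes.
import unittest

class TestExpectedFailure(unittest.TestCase):

    @unittest.expectedFailure
    def test_fails_as_expected(self):
        self.assertTrue(False)

if __name__ == '__main__':
    unittest.main()
```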
This one should fail, because @expectedFailure works like xfail(strict=True):
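Again a reconstruction of the intended case rather than the original snippet: here the decorated test passes, an unexpected success, which should be reported as a failure.

```python
# Reconstruction: the decorated test passes even though a failure was
# expected; like xfail(strict=True), this should fail the run.
import unittest

class TestUnexpectedSuccess(unittest.TestCase):

    @unittest.expectedFailure
    def test_passes_unexpectedly(self):
        self.assertTrue(True)

if __name__ == '__main__':
    unittest.main()
```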
Additional idea: we could probably parametrize this and run it using py.test and unittest.main to ensure we have the same outcome in both cases.
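A sketch of how that could look inside pytest's own test suite (hypothetical, assuming the testdir fixture; not code from the PR):

```python
# Hypothetical sketch: run the same expectedFailure suite once through
# py.test and once through unittest.main, and check both runs succeed.
import pytest

@pytest.mark.parametrize('runner', ['pytest', 'unittest'])
def test_expected_failure_same_outcome(testdir, runner):
    script = testdir.makepyfile("""
        import unittest

        class TestExpectedFailure(unittest.TestCase):

            @unittest.expectedFailure
            def test_fails_as_expected(self):
                self.assertTrue(False)

        if __name__ == '__main__':
            unittest.main()
    """)
    if runner == 'pytest':
        result = testdir.runpytest(script)
    else:
        result = testdir.runpython(script)
    # In both cases the expected failure should not break the run.
    assert result.ret == 0
```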