
StreamResult wasSuccessful ignores unexpectedSuccess while TestResult method fails on any unexpectedSuccess #273

Open
electrofelix opened this issue Jul 12, 2018 · 0 comments


StreamResult's wasSuccessful() method ignores the value of unexpectedSuccesses when determining whether the overall run was successful; the TestResult class, however, does check this attribute.

TestResult wasSuccessful() method result: https://github.com/testing-cabal/testtools/blob/master/testtools/testresult/real.py#L174
StreamResult wasSuccessful method result: https://github.com/testing-cabal/testtools/blob/master/testtools/testresult/real.py#L977
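For context, here is a minimal, simplified sketch of the two checks. The attribute names follow the real classes, but the class bodies are illustrative stand-ins, not the actual testtools source:

```python
class TestResultSketch:
    """Illustrative stand-in for testtools.TestResult."""
    def __init__(self):
        self.errors = []
        self.failures = []
        self.unexpectedSuccesses = []

    def wasSuccessful(self):
        # An unexpected success counts against the run.
        return not (self.errors or self.failures or self.unexpectedSuccesses)


class StreamSummarySketch:
    """Illustrative stand-in for the StreamResult-side summary."""
    def __init__(self):
        self.errors = []
        self.unexpectedSuccesses = []

    def wasSuccessful(self):
        # unexpectedSuccesses is recorded but never consulted here.
        return not self.errors


stream = StreamSummarySketch()
stream.unexpectedSuccesses.append("test_foo")
print(stream.wasSuccessful())  # True: the unexpected success is ignored

result = TestResultSketch()
result.unexpectedSuccesses.append("test_foo")
print(result.wasSuccessful())  # False: the unexpected success fails the run
```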

When I experimented with changing StreamResult to align its behaviour with TestResult, the tests that validate the behaviour against Python 2.7 started failing.

Three tests start failing once the behaviour is changed. The one that suggests the current StreamResult behaviour is intentional (it is effectively run twice through inheritance) is defined at
https://github.com/testing-cabal/testtools/blob/master/testtools/tests/test_testresult.py#L223-L229
Changing StreamResult to treat unexpectedSuccesses as failures produces the following two failures:
testtools.tests.test_testresult.TestAdaptedStreamResult.test_addUnexpectedSuccess_was_successful
testtools.tests.test_testresult.TestStreamToExtendedContract.test_addUnexpectedSuccess_was_successful

Upstream Python changed unittest so that unexpected successes count against the overall run's success, sometime around 3.4 - https://bugs.python.org/issue20165

It's unclear whether testtools is intentionally keeping its StreamResult class compatible with unittest's TestResult behaviour prior to that change, or whether this is just an oversight and StreamResult should be aligned with TestResult.

Currently this affects stestr's reporting of test results, which I've logged as mtreinish/stestr#189, noting that it is possible to work around the problem by calling the TestResult.wasSuccessful() method directly and passing the StreamResult instance in as the self argument.
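That workaround relies on calling a method through the class with an explicit instance. A self-contained sketch with stand-in classes (not the real testtools types):

```python
class StreamSummaryLike:
    """Stand-in for the StreamResult-side class with the buggy check."""
    def __init__(self):
        self.errors = []
        self.failures = []
        self.unexpectedSuccesses = ["test_foo"]

    def wasSuccessful(self):
        return not self.errors  # ignores unexpectedSuccesses


class TestResultLike:
    """Stand-in for TestResult; only the method matters here."""
    def wasSuccessful(self):
        return not (self.errors or self.failures or self.unexpectedSuccesses)


summary = StreamSummaryLike()
print(summary.wasSuccessful())                # True: the bug
# Call the other class's method, passing the summary in as self:
print(TestResultLike.wasSuccessful(summary))  # False: correct result
```

This works only because the borrowed method reads attributes (errors, failures, unexpectedSuccesses) that the summary object also carries.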

I'm happy to submit a PR if someone can give guidance on the correct outcome here. Does StreamResult need to remain aligned with the Python 2.7 contract, in which case any fix needs version detection and behaviour that differs by Python version, or is this an oversight in the enhancements testtools brings to unittest on 2.7?

mtreinish added a commit to mtreinish/stestr that referenced this issue Aug 3, 2018
This commit addresses a bug when unexpected success results are used
by a test suite. stestr was relying on the wasSuccessful() method in the
StreamResult class from testtools, which doesn't properly account for
unexpected successes. As a result, when stestr was run on a test suite it
wouldn't properly handle tests with an unexpected success result and would
treat the run as successful. This addresses the issue by adjusting our
testtools API usage to call the wasSuccessful() method from the
TestResult class instead, which has the correct behavior.

An issue has been opened in testtools, testing-cabal/testtools#273,
regarding the accounting error with StreamResult.wasSuccessful();
if/when a fix is introduced into a released testtools we can investigate
switching these calls back to use that method. But for the time being
this mitigates the issue.

Fixes #189
mtreinish added a commit to mtreinish/stestr that referenced this issue Aug 3, 2018
This commit addresses a bug when unexpected success results are used
by a test suite. stestr was relying on the wasSuccessful() method in the
StreamResult class from testtools, which doesn't properly account for
unexpected successes. As a result, when stestr was run on a test suite it
wouldn't properly handle tests with an unexpected success result and would
treat the run as successful. This addresses the issue by adjusting our
testtools API usage to manually query the results object for
unexpected success results instead of relying on the existing
wasSuccessful() method from the StreamResult class. This is essentially
the same as the wasSuccessful() method from testtools.TestResult, which
exhibits the correct behavior, but isn't something we're using.

An issue has been opened in testtools, testing-cabal/testtools#273,
regarding the accounting error with StreamResult.wasSuccessful();
if/when a fix is introduced into a released testtools we can investigate
switching these calls back to use that method and deleting our local
helper function. But for the time being this mitigates the issue.

Fixes #189
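The local helper the commit message describes might look roughly like this. This is a hypothetical sketch based only on the description above; the actual stestr helper may differ:

```python
def results_were_successful(result):
    """Hypothetical helper: query the result object directly instead of
    trusting StreamResult.wasSuccessful(), so that unexpected successes
    count against the run."""
    return not (result.errors
                or result.failures
                or getattr(result, "unexpectedSuccesses", []))
```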
mtreinish added a commit to mtreinish/stestr that referenced this issue Aug 6, 2018
mtreinish added a commit to mtreinish/stestr that referenced this issue Aug 6, 2018