Test fail (instead of skip) on empty parameter set #2527
Comments
There isn't a better way yet.
Erroring out by default could be problematic as well, I suppose.
I see this code in pytest's sources:

```python
if not argvalues:
    argvalues = [(NOTSET,) * len(argnames)]
    # we passed a empty list to parameterize, skip that test
    #
    fs, lineno = getfslineno(self.function)
    newmark = pytest.mark.skip(
        reason="got empty parameter set %r, function %s at %s:%d" % (
            argnames, self.function.__name__, fs, lineno))
    newkeywords = [{newmark.markname: newmark}]
```

There, all parameters are set to the NOTSET sentinel value; maybe there is some way to check whether all parameters are NOTSET and then raise? I didn't find any way to get these argvalues from the item passed into pytest_runtest_teardown.
Or maybe there is something else I can return from the provider (not an empty list) so that the test will fail but collection will succeed and all other tests will still run?
Return a senseless parameter set and put a mark onto it that you can detect.
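(Sketched out, that suggestion might look something like this; the `empty_provider` mark name is made up, and `service_request()` is the provider call used elsewhere in this thread.)

```python
import pytest

EMPTY = object()

def parameters_provider():
    values = service_request().result  # the real external call
    # On an empty result, return a single placeholder value carrying a
    # custom mark that a hook can detect later.
    return values or [pytest.param(EMPTY, marks=pytest.mark.empty_provider)]

@pytest.mark.parametrize("parameter", parameters_provider())
def test_something(parameter):
    assert parameter is not EMPTY

# conftest.py: fail any test that received the placeholder parameter.
def pytest_pyfunc_call(pyfuncitem):
    if pyfuncitem.get_closest_marker("empty_provider") is not None:
        pytest.fail("parameter provider returned an empty set")
```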
@RonnyPfannschmidt thanks for the suggestion, I implemented it this way:

```python
import pytest

PARAMETER_NOT_SET = object()

def parameters_provider():
    # HTTP request to some external provider service.
    return service_request().result or [PARAMETER_NOT_SET]

@pytest.mark.parametrize("parameter", parameters_provider())
def test_something(parameter):
    assert parameter is not None

# Hook (e.g. in conftest.py) that fails tests whose parameters are all the sentinel.
def pytest_pyfunc_call(pyfuncitem):
    arg_values = pyfuncitem.funcargs.values()
    if arg_values and all(av is PARAMETER_NOT_SET for av in arg_values):
        pytest.fail("Test function got empty parameter set!")
```
Should we close this then?
If you don't mind, I'd prefer to make it optional: fail on an empty set, or skip. I'll make a pull request for this a bit later if you think it could be useful.
Thanks for the offer to open a PR. My personal opinion is that it should fail by default; adding another option will only complicate things. @RonnyPfannschmidt, what's your take on this?
I'd prefer to avoid complication in that code.
I also agree that it should fail by default, but for some reason it was implemented to skip such tests, which is why I proposed making it an option. If you think that nobody needs the skip and it won't break compatibility, let it be so. Personally, I think there may be many cases where users don't even know that some of their tests never get executed, and currently that's a problem.
I implemented the skip because people demonstrated valid use cases for empty parameter sets a while back.
@RonnyPfannschmidt so is your choice an option for skip/fail, or only fail?
Would xfail without run fit your use case? If an option is introduced, it should enable skip (the default), xfail, fail and ignore, optionally with a marker to choose per test.
In my case xfail won't be enough because it still counts as success, but empty data from a data provider is always an error and must be reported. Do you mean skip as the default for backwards compatibility only? That doesn't seem reasonable and isn't really intuitive. For example, if you run with -k 'unexistent_test', nothing is executed and the exit code is != 0, meaning an error, which is correct; success in this case would be confusing.
The default is for backward compat.
I don't know, an empty parameter set seems like such a rare occurrence to me that it should fail by default, as it is most likely an error. I would propose changing the default to "error", especially since users who intend for it to skip can do this:

```python
@pytest.mark.skipif(not get_parameter_values(), reason='empty parameters')
@pytest.mark.parametrize('a', get_parameter_values())
def test_foo(a):
    pass
```

Then at least whoever wants to keep the existing "skip" behavior can have it. @RonnyPfannschmidt, can you please search for that discussion about the points in favor of "skip"? It would help this discussion for sure.
I need to fail (instead of skip) tests that get an empty parameter set.
My test looks like this:
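(A simplified sketch of that test; `service_request()` here stands in for the real call to the external provider service.)

```python
import pytest

def get_parameters():
    # Ask the external provider for test data; this may come back empty
    # if the provider fails or has nothing to return.
    return service_request().result

@pytest.mark.parametrize("parameter", get_parameters())
def test_something(parameter):
    assert parameter is not None
```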
So if the provider fails to return results, the test gets an empty parameter set and pytest skips it, but for me this is actually an error: effectively no tests ran and nothing was actually checked, so the test must fail.
After some investigation I've found a pretty nasty way to achieve my goal, in conftest.py:
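(A sketch of that kind of hack, not necessarily the original code; the hook choice and the matched string are assumptions.)

```python
# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # pytest marks tests built from an empty parametrize list as skipped
    # with the reason "got empty parameter set ...", so match on that text
    # and turn the skip into a failure.
    if report.skipped and "got empty parameter set" in str(report.longrepr):
        report.outcome = "failed"
```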
I don't like it because it depends on the reason text, which can change in any future pytest release. Is there a better way to do it?
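(As an aside for later readers: recent pytest versions provide an `empty_parameter_set_mark` ini option that accepts `skip` (the default), `xfail`, or `fail_at_collect`, which covers exactly this case; check your pytest version's docs before relying on it.)

```ini
# pytest.ini
[pytest]
empty_parameter_set_mark = fail_at_collect
```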