Don't bail out on test errors by default #10835

Closed
ihnorton wants to merge 1 commit

Conversation

ihnorton
Member

I don't always run tests
But when I do, I ignore the failures.

Seriously though: when I run all tests locally, it is because I want to know the status of all of them for individual investigation later (e.g. #10394). Tests are slow in general, really slow on Windows, and painfully slow with MCJIT on Windows. Stopping the run due to a (possibly random) failure in the middle of the list is unhelpful because it only tells me about one test failure at a time. (I usually go away when running tests because my 2-core 3320 gets really sluggish).

This PR makes the test suite continue through errors by default, and adds a check for the environment variable JL_TESTFAILURE_STOP for use on the CI systems.
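
Roughly, the intended behavior looks like the following sketch (illustrative only, not the actual runtests.jl diff; the hard-coded test list and the inline loop are placeholders):

```julia
# Illustrative sketch of the proposed behavior, not the actual runtests.jl change.
# By default we keep going after a failing test file; setting JL_TESTFAILURE_STOP
# (e.g. on CI) restores stop-on-first-failure.
stop_on_failure = haskey(ENV, "JL_TESTFAILURE_STOP")

tests = ["core", "numbers", "strings"]   # placeholder for whatever choosetests() selects
failed = String[]
for t in tests
    try
        include("$t.jl")                 # run one test file
    catch err
        println("Error in test $t: $err")
        stop_on_failure && rethrow(err)  # fail fast only when the env var is set
        push!(failed, t)
    end
end
# final nonzero exit so CI still notices (what vtjnash suggests below)
isempty(failed) || (println("Failed tests: ", join(failed, ", ")); exit(1))
```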

ihnorton added the "test" label (This change adds or pertains to unit tests) on Apr 15, 2015
@vtjnash
Member

vtjnash commented Apr 15, 2015

FWIW, I think it's good to run to completion on CI also, just so long as any failure poisons the final return value from runtests and causes exit(1).

@ihnorton
Member Author

Sounds fine to me. That is easy to do for the "make all" case, but slightly harder for the "make some selected subset" case, because there we call runtests individually for each specified target. GNU Make has a MAKECMDGOALS magic variable that might be useful, but I'm not sure if it is portable.

cc @tkelman @staticfloat

@staticfloat
Member

I don't see why running make testfoo should act any differently from make testall; we're just changing the set of tests that get returned from choosetests(); whether there are 100 entries in the tests list, or just 1, the reduce() call will still run across all of them, and your all(x -> x == false, results) test should still work.
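
Concretely, the aggregation might look something like this (an illustrative sketch, not the actual runtests.jl code; the runtest helper and the one-element test list are placeholders):

```julia
# Illustrative sketch: whether choosetests() returns one test file or a hundred,
# every entry runs and the exit status is decided once at the end.
function runtest(t)
    try
        include("$t.jl")
        return false        # false == "no failure", matching the convention above
    catch err
        return err
    end
end

tests = ["core"]                          # could just as well be the full testall list
results = map(runtest, tests)             # reduce()/pmap in the real file
all(x -> x == false, results) || exit(1)
```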

This looks like a very positive change to me, and I'm all in favor of it.

@ihnorton
Member Author

The case I am talking about is make testfoo1 testfoo2 testfoo3. There, the Makefile invokes runtests.jl testfoo1, runtests.jl testfoo2, etc. separately (AFAICT) for each target. If any runtests invocation calls exit(1) due to a failure, make will give up on the remainder.

Maybe that is ok? I don't care about the limited multi-target case very much because I'm never going to run more than a handful of tests that way. But I was hoping there would be a simple way to handle it nicely.

@vtjnash
Member

vtjnash commented Apr 15, 2015

I'm not sure there's much you can do about that case. This pull request (or my suggestion) shouldn't change the current behavior of that case anyway, since we should always be calling exit(1) if any test failed.

@amitmurthy
Contributor

Just so that we are aware: in the single-worker case, not all tests will be run if an error occurs, even with err_stop=false, since pmap removes a worker from the scheduling pool whenever an error occurs on that worker.
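
A possible way around that, sketched under the assumption that pmap only evicts a worker when an exception actually escapes the mapped function (the runtest_catching helper and the test list below are placeholders, not code from this PR):

```julia
# Sketch of a workaround: catch the error inside the mapped function and return
# it as a value, so pmap never sees an exception and the (single) worker stays
# in the scheduling pool.  (pmap lived in Base at the time of this PR; on
# Julia 1.x it needs `using Distributed`.)
function runtest_catching(t)
    try
        include("$t.jl")
        return nothing       # nothing == passed
    catch err
        return err           # return the error rather than rethrowing it
    end
end

tests = ["core", "numbers"]                  # placeholder list
results = pmap(runtest_catching, tests)
failures = [t for (t, r) in zip(tests, results) if r !== nothing]
isempty(failures) || println("Failed: ", join(failures, ", "))
```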

Commit message: Adds check for JL_TESTFAILURE_STOP environment variable to control fail-on-error.
@ViralBShah
Member

I really would like a status line at the end that says something like "5 out of 5000 tests failed".

@amitmurthy
Contributor

I think you are confusing @test lines with the .jl test files themselves. The granularity here is at the test file level.

@ViralBShah
Member

But I wish it were at the test level.

@tkelman
Contributor

tkelman commented Apr 16, 2015

@ViralBShah - ref #9398

@ihnorton
Member Author

Fixed by #13062

ihnorton closed this on Sep 23, 2015