
Run Julia tests with resource limits, with worker process, and collate the results #24

Merged
merged 5 commits into master from teh/limit on Feb 14, 2019

Conversation

@timholy (Member) commented Feb 13, 2019

This makes it much more feasible to run "all" of Julia's tests. The key advance is the ability to run a specified maximum number of statements in the interpreter, and abort early when that number is reached. Choosing a number of statements, rather than execution time, should make it easier to compare results on machines with different performance. On my laptop, the default settings run all of Julia's tests in < 5 minutes. This is not a sign that the interpreter is really efficient (it's not), just that it aborts a lot of slow tests.
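To make the mechanism concrete, here is a minimal sketch of a statement-budget loop, assuming a single-stepping function and an opaque frame; the names (`run_with_limit!`, `step!`, `Aborted`) are illustrative placeholders, not this PR's actual API:

```julia
# Illustrative only: count interpreted statements and bail out once a fixed
# budget is exhausted, so a slow test aborts instead of running indefinitely.
struct Aborted end                      # marker for "budget exhausted"

function run_with_limit!(step!, frame; max_statements::Int = 1_000_000)
    for _ in 1:max_statements
        step!(frame) && return frame    # `step!` runs one statement and
    end                                 # returns true when the frame is done
    return Aborted()
end
```

Counting statements rather than wall-clock time is what makes the abort threshold comparable across machines of different speed.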

Sample output is now visible in #13.

@timholy force-pushed the teh/limit branch 2 times, most recently from 1ffffa9 to cfda204, on February 13, 2019 17:22
@KristofferC (Member) commented:
Would bolding the text in the table for those files that somehow fail make sense? Might make it easier to find them in the table and get an overview of the "density" of failures.

@codecov-io commented Feb 13, 2019

Codecov Report

Merging #24 into master will not change coverage.
The diff coverage is 0%.


@@          Coverage Diff          @@
##           master    #24   +/-   ##
=====================================
  Coverage       0%     0%           
=====================================
  Files           6      6           
  Lines        1115   1117    +2     
=====================================
- Misses       1115   1117    +2
Impacted Files            Coverage   Δ
src/interpret.jl          0% <0%>    (ø) ⬆️
src/JuliaInterpreter.jl   0% <0%>    (ø) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update f099f83...6036751.

@timholy (Member, Author) commented Feb 13, 2019

In the long run, I agree, but even if we discount the "aborted" column, a sizable majority of tests have some problem that Base itself does not. So most lines would be bolded. (The only tests that may not need attention are those with zeros in the "Fails" and "Errors" columns.)

@timholy (Member, Author) commented Feb 13, 2019

OK to merge? Given that I've advertised #13, I'd like to make this runnable by others.

@timholy mentioned this pull request on Feb 13, 2019
Some tests cause segfaults, so this insulates the rest from a few bad
apples. As a nice extra benefit, it allows speedup by parallelism.
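As a rough illustration of the worker-process idea (not the actual code in this PR), one could farm test files out to Distributed workers so that a segfault takes down only the worker that hit it, with parallel execution as a side benefit; `run_test_file` and the file list below are hypothetical stand-ins:

```julia
using Distributed

addprocs(4)                                # hypothetical worker count

# Stand-in for the real per-file runner; here it simply includes the file.
@everywhere run_test_file(file) = include(file)

testfiles = ["test/a.jl", "test/b.jl"]     # hypothetical file list

# Each file runs on some worker; if a worker dies (e.g. from a segfault),
# `on_error` records the failure instead of killing the driver process.
results = pmap(testfiles; on_error = err -> (:crashed, err)) do file
    run_test_file(file)
    (:ok, file)
end
```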
@timholy (Member, Author) commented Feb 14, 2019

I now have the suspicion that anonymous functions are the overwhelming reason for the tests with Xs in #13. This is tricky territory; check out the results of `Meta.lower(Main, :(map(x->x^2, [1,2,3])))`. Because you're defining new structs, there are things that have to be done at top level, and as a consequence you can get into world-age issues depending on how you handle this. While this PR tries to solve that problem, I suspect it's not really correct yet. I'm not immediately sure about the best fix.
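For reference, the lowered form mentioned above can be inspected directly; the exact output depends on the Julia version, but it shows the closure definition that has to happen at top level before the `map` call runs:

```julia
# Lowering a call that contains an anonymous function produces a thunk whose
# CodeInfo defines a new closure type before the actual `map` call executes.
expr = Meta.lower(Main, :(map(x -> x^2, [1, 2, 3])))
dump(expr; maxdepth = 3)                   # peek at the generated CodeInfo
```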

@KristofferC (Member) commented:
Feel free to merge this and future PRs at will :)

@timholy (Member, Author) commented Feb 14, 2019

I was just wondering how strongly you felt about the bold text suggestion.

@timholy merged commit edae3f5 into master on Feb 14, 2019
@timholy deleted the teh/limit branch on February 14, 2019 09:53
@timholy mentioned this pull request on Feb 14, 2019
@KristofferC (Member) commented:
weakly [======|                                ] strongly
