Parallel preprocessing #597
With all this work happening in parallel in child processes, and the distinction between CPU and wall time, measuring this can be problematic. These notions might also screw with some of our crappy benchmarking scripts; I'm not sure how they all do their measurements.
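Since the comment above worries about CPU vs wall time, here is a minimal OCaml sketch (illustrative only, not Goblint code) of why the two diverge once work moves into child processes: the parent's own CPU time (`Unix.tms_utime`/`tms_stime`) misses the children entirely, `Unix.times` also reports the CPU time of waited-for children, and `Unix.gettimeofday` gives wall-clock time.

```ocaml
(* Illustrative only: a benchmarking script that reads just the parent's
   CPU time will miss all the work done in subprocesses. *)
let () =
  let wall0 = Unix.gettimeofday () in
  (match Unix.fork () with
   | 0 ->
       (* child: burn some CPU while the parent just waits *)
       let s = ref 0. in
       for i = 1 to 50_000_000 do s := !s +. sqrt (float_of_int i) done;
       ignore !s;
       exit 0
   | pid -> ignore (Unix.waitpid [] pid));
  let t = Unix.times () in
  Printf.printf "parent CPU %.2fs, children CPU %.2fs, wall %.2fs\n"
    (t.Unix.tms_utime +. t.Unix.tms_stime)    (* ~0: parent was idle *)
    (t.Unix.tms_cutime +. t.Unix.tms_cstime)  (* the child's work *)
    (Unix.gettimeofday () -. wall0)
```

Here the children's CPU time roughly equals the wall time; with 16 children running in parallel, children's CPU time can exceed wall time many times over, which is exactly what confuses naive measurement scripts.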
The changes here that …

I'm now thinking that actually it might be safer to have … It would also be nicer for regression testing, because now that script itself will start Goblints on all cores, each of which also wants to run multiple preprocessors, at which point the CPU is overloaded anyway.
Implements the first half of #589.
The separate calling of `Unix.open_process` and `Unix.close_process` was extremely simple, but trying it out on OpenSSL revealed a problem: it crashed with the Unix error `EMFILE` (too many open files by the process). Apparently starting 1009 child processes at the same time isn't a good idea.

Of course `ulimit -n` could've worked around the problem, but I thought I'd be nice and implement a `ProcessPool` module, which runs at most `jobs` subprocesses in parallel.

By default (`-j 0`), the number of parallel jobs is derived from the number of CPU cores. Otherwise the `-j` command line argument (which is really the `jobs` option) controls the limit.

**OpenSSL**
Trying this on OpenSSL (just preprocessing, no parsing or merging), with `-j 1` it takes:

Whereas with `-j 16` it takes:

So here the preprocessing takes 6.4 times less wall time!
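The process-pool idea described above can be sketched as follows. This is a hypothetical illustration, not the actual `ProcessPool` module from this PR: it runs shell commands with at most `jobs` children alive at once, waiting for one to exit before starting the next, which avoids the `EMFILE` failure mode of spawning everything up front.

```ocaml
(* Hypothetical sketch, not the ProcessPool module from this PR. *)
module ProcessPool = struct
  (* Run [commands] with at most [jobs] subprocesses in parallel. *)
  let run ~jobs commands =
    let running = ref 0 in
    let start cmd =
      let _pid =
        Unix.create_process "/bin/sh" [| "/bin/sh"; "-c"; cmd |]
          Unix.stdin Unix.stdout Unix.stderr
      in
      incr running
    in
    let wait_one () =
      ignore (Unix.wait ());
      decr running
    in
    List.iter
      (fun cmd ->
         if !running >= jobs then wait_one ();  (* block until a slot frees *)
         start cmd)
      commands;
    while !running > 0 do wait_one () done      (* drain remaining children *)
end

(* For the [-j 0] default, OCaml 5's Domain.recommended_domain_count ()
   could supply the CPU-core count (an assumption; the PR may derive it
   differently). Example use: *)
let () =
  ProcessPool.run ~jobs:4
    (List.init 16 (fun i -> Printf.sprintf "echo %d >/dev/null" i))
```

A simple counter suffices here because `Unix.wait` blocks on any child; a real implementation would likely track pids to report per-command exit statuses.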