Automatic validation #1396
Conversation
Nice :) I really like this, it will be of so much help!
I finally managed to install magick with all the dependencies on my computer. Everything seems to work, but it is still running.
Everything worked on my computer! It is ready to merge in my opinion.
Description
Although Lethe is covered by extensive testing through application_tests, these tests are sometimes not enough to catch changes in the models that do not break the code but yield poorer results. This PR introduces an additional way to validate Lethe. The idea is to leverage the examples of Lethe for which reference solutions are available and to use them to validate the software on complex cases. The validation loop is meant to be run manually and generates artifacts, logs and a PDF report that can be reviewed. We aim to launch these validation cases manually every week or so and to progressively increase the number of validation tests, providing additional validation and comparisons to measure and track the performance of the code.
The structure is relatively simple. Examples that are used for the validation now have a validate.sh script within their folder. Furthermore, there is a list of cases in contrib/validation/validation_cases.txt. For every case, the path of the case and the number of cores to run it on are specified (see the sketch below). Right now, running all five cases takes about 8 hours on my machine, but these test cases are extensive and provide a lot of useful information. Once we acquire our own node, we will be able to launch many more test cases and the validation will become much more robust. Adding a new case is very cheap, and a nice side effect is that the validation cases reuse the Python scripts that post-process the results of the regular examples, which keeps the examples in a very mature state.
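As an illustration, an entry in contrib/validation/validation_cases.txt could look like the sketch below. The case paths are taken from examples touched by this PR, the core counts are placeholders, and the exact layout of the file in the PR may differ.

```
# case folder                                        cores
examples/incompressible-flow/2d-taylor-couette        4
examples/incompressible-flow/3d-taylor-green-vortex   8
```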
Testing
I have tested it on my machine extensively while developing it. This is not an intrusive change: it does not alter the code itself.
Documentation
I had to use the argparse module pretty much everywhere in the Python scripts to enable dumping the data files. This forced me to slightly modify and improve some of the Python scripts. I updated the documentation accordingly.
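For context, here is a minimal sketch of what such an argparse-based interface in a post-processing script could look like. The flag names and the output file are illustrative assumptions, not necessarily the ones used in the actual Lethe scripts.

```python
# Illustrative sketch only: the flag names and the output file are assumptions,
# not necessarily those of the actual Lethe post-processing scripts.
import argparse

parser = argparse.ArgumentParser(description="Post-process a Lethe example")
parser.add_argument("-f", "--folder", default=".",
                    help="path to the simulation output folder")
parser.add_argument("--validate", action="store_true",
                    help="dump data files for the validation report instead of "
                         "showing interactive plots")
args = parser.parse_args()

# ... load and post-process the simulation results from args.folder here ...

if args.validate:
    # Write the post-processed data to a file so that the validation loop
    # can collect it as an artifact (hypothetical file name).
    with open("solution.dat", "w") as data_file:
        data_file.write("# post-processed data\n")
else:
    # Default behaviour: show the figures interactively as before.
    pass
```

With this pattern, the validation loop can call the same script with --validate to produce data files, while running the script without the flag preserves the original interactive behaviour of the example.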
Code related list:
Pull request related list: