[WIP] Automated examples testing to prevent regression #34
Conversation
There are two goals for doing this:

* Users will have the output of all examples ready; they can compare the output with the code in the example without being forced to run the examples.
* Since the examples are a showcase for the features we support, we should always make sure the examples work as expected. One quick and easy way is to make sure the output does not change between commits.
flask_restful.mochi.out and timer.mochi.out have been rebuilt to avoid false positives (testexamples expects a different current dir than the one used when these two .out files were originally built). As of now, the following example failures are observed (with Docker image tlvu/mochi:0.2.4.2-20150508). The reason is simply that these examples do not produce identical output between runs. I am not sure how to fix them.

```
$ ./testexamples
18c18
< 0.0025186538696289062 5
---
> 0.002317190170288086 5
24c24
< <function _gs126.<locals>.fafa at 0x7fbd67d047b8>
---
> <function _gs126.<locals>.fafa at 0x7f90df7e47b8>
ERROR: etc.mochi changed
4c4
< time: 0.07205533981323242
---
> time: 0.0695199966430664
ERROR: fact.mochi changed
4c4
< time: 0.15062212944030762
---
> time: 0.1504371166229248
ERROR: fact_pattern.mochi changed
2c2
< pmap({'a': 1, 'b': 2})
---
> pmap({'b': 2, 'a': 1})
5c5
< pmap({'a': 1, 'b': 2})
---
> pmap({'b': 2, 'a': 1})
ERROR: keyword_arguments.mochi changed
1c1
< 23.061758756637573 1
---
> 22.92900848388672 1
ERROR: tak.mochi changed
3a4
> 明後日 雨のち晴
7a9
> 明後日 雨のち晴
11a14
> 明後日 晴れ
ERROR: urlopen.mochi changed
```
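For readers unfamiliar with the approach: the idea is a golden-file check, where each example's output is diffed against a committed .out file. The real testexamples is a shell script (see the discussion below); this is only a minimal Python sketch of the same idea, and the `examples/` layout and the `mochi` command invocation are assumptions, not taken verbatim from the repository.

```python
#!/usr/bin/env python3
"""Sketch of a diff-based golden-file check for the examples.

Assumes each examples/NAME.mochi has a committed examples/NAME.mochi.out
file holding its expected output.
"""
import pathlib
import subprocess
import sys

failures = 0
for example in sorted(pathlib.Path("examples").glob("*.mochi")):
    expected = pathlib.Path(str(example) + ".out")  # e.g. etc.mochi.out
    if not expected.exists():
        continue  # no golden output recorded for this example yet
    result = subprocess.run(["mochi", str(example)],
                            capture_output=True, text=True)
    if result.stdout != expected.read_text():
        print("ERROR: {} changed".format(example.name))
        failures += 1

sys.exit(1 if failures else 0)
```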
Here are the testing guidelines (TESTS.rst): how do they play with your testing approach?
@pya I was not aware you wrote that TESTS.rst document, sorry. I guess the proper way is to convert all the examples to your format (with a matching result_* function)? I recall seeing that you wrote tests based on some of the examples. Maybe we should consolidate that and add all the matching result_* functions directly in the example files? Should we go this route instead? I was just doing this because I needed a quick and simple way to check that the examples still produce the same output.
Having all the matching result_* functions in the example files would also satisfy my needs above. As for the shell script not working for Windows users: I totally forgot about them, because I wanted to use this to test new Docker images (docker is the "default" mochi command in the script).
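To make the idea concrete, here is a hypothetical sketch of an example file carrying its own result_* function. All names here are invented for illustration, and the actual TESTS.rst convention may well differ; this is plain Python, not Mochi.

```python
# Each example file carries both the code users read (example_*) and a
# function returning the output the test harness should expect (result_*).

def example_fact():
    def fact(n):
        return 1 if n <= 1 else n * fact(n - 1)
    return fact(10)

def result_fact():
    return 3628800  # what the harness expects example_fact() to return

def run_pairs(namespace):
    """Match every example_* with its result_* and compare the two."""
    for name in sorted(namespace):
        if not name.startswith("example_"):
            continue
        expected = namespace["result_" + name[len("example_"):]]()
        actual = namespace[name]()
        print("{}: {}".format("ok" if actual == expected else "FAIL", name))

run_pairs(globals())  # prints "ok: example_fact"
```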
@tlvu Sounds like a good idea.
Something like a unique naming or numbering of tests might be useful. This should be primarily for human consumption, but also machine-readable and searchable. Just using commit IDs does not really seem to work. Maybe use categories and numbers, or only numbers plus some kind of database with a two-way number-description mapping. This should not make things more complicated, so it needs to be somehow (semi-)automatic: registering a test somewhere should give you a new number automatically. Of course, the description has to be provided by a human. Maybe naming conventions for files, functions, and docstrings can help here. "Conventions before configuration." Challenge: provide useful functionality, yet keep it simple to use. It should just work and stay out of the way. Ideas welcome.
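For instance, such semi-automatic registration could look something like the following sketch (all names are invented, nothing here exists in the project): a decorator hands out the next free number and maintains the two-way number-description mapping.

```python
_tests_by_id = {}        # id -> (function, human-written description)
_ids_by_description = {}

def registered_test(description):
    def decorator(func):
        test_id = len(_tests_by_id) + 1  # number assigned automatically
        _tests_by_id[test_id] = (func, description)
        _ids_by_description[description] = test_id
        func.test_id = test_id
        return func
    return decorator

@registered_test("factorial of 10 is 3628800")
def test_factorial():
    import math
    assert math.factorial(10) == 3628800

print(test_factorial.test_id)                             # -> 1
print(_ids_by_description["factorial of 10 is 3628800"])  # -> 1
```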
This might be relevant: https://github.com/boxed/pytest-readme. It has the really nice feature that the line numbers in the tests match those in the README.
As a new contributor, it will be very easy for me to introduce regressions because I obviously do not know the code well. I would rather discover regressions myself than send a broken PR.
I know that Mike is working on some unit testing. Not sure what coverage level he is at.
This PR is a quick and simple way to automatically test that all the examples still produce the same output; if the output changes, it means I might have introduced a regression while working on the code.
I chose to focus my effort on the examples because of the two goals described above.
My naive, quick and simplistic approach does have a problem: it does not work well when the output is not consistent between runs (see details in the commit description above). I am not sure what we should do about it: make the test more complex to handle this, or fix the examples so they produce consistent output.
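If we go the first route, one option (just a sketch, assuming we diff normalized output instead of raw output) would be to scrub the known-unstable parts before comparing. The patterns below are guesses based on the failures listed in the commit description; note that ordering differences such as pmap({'a': 1, 'b': 2}) vs pmap({'b': 2, 'a': 1}) cannot be fixed by a regex and would need a change in the example itself.

```python
import re

# Scrub unstable parts of example output before diffing, so timings and
# object addresses no longer cause false failures.
_SCRUBBERS = [
    (re.compile(r"\b\d+\.\d+\b"), "<float>"),  # timings like 0.0720553...
    (re.compile(r"0x[0-9a-f]+"), "<addr>"),    # addresses in function reprs
]

def normalize(output):
    for pattern, placeholder in _SCRUBBERS:
        output = pattern.sub(placeholder, output)
    return output
```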
The problem with timer.mochi.out is that my Docker image is missing RxPY; I will add it later.