Before we get into writing tests, please make sure you have the pre-commit hooks for the styling tools set up, so CI won't fail on those.
Instructions here
Writing tests involves running tests locally (duh). So let's get that set up. (You'll only have to do this once.)
Tests can leave fake records behind, which will pollute your local setup. So, get yourself a test site. You can get these commands from the CI workflow file too, but I'll save you some time. You can name the site and set the passwords to whatever.

```shell
bench new-site --db-root-password admin --admin-password admin test_site
bench --site test_site install-app press
bench --site test_site add-to-hosts # in case you wanna call APIs
bench --site test_site set-config allow_tests true
```
Finally, you need to start bench, as some of the tests may trigger background jobs, which would fail if background workers aren't there:

```shell
bench start
```
As you write tests, you'll want to wipe all the test data from your test site from time to time. So, here ya go:

```shell
bench --site test_site reinstall --yes
```
This is the hard part. Because of Press's dependencies on the outside world, it's hard to isolate unit tests to this project. Regardless, it's still possible with plain old Python's built-in libraries.
The majority of this is done with the help of Python's `unittest.mock` library. We use this library to mock parts of the code that reference things out of Press's control.
Eg: We can mock all Agent Job creation calls by decorating the TestCase class like so:

```python
import unittest
from unittest.mock import Mock, patch

@patch.object(AgentJob, "enqueue_http_request", new=Mock())
class TestSite(unittest.TestCase):
    ...
```
We use the `patch.object` decorator here so that every instance of `AgentJob` will have its `enqueue_http_request` method replaced by whatever we pass in the `new` argument, which in this case is `Mock()`, which does nothing. You can think of it as a `pass`. But it has other uses, as you'll find if you keep reading.
Note: Class decorators aren't inherited, so you'll have to do this on every class for which you want to mock HTTP request creation for Agent Jobs.
There's also a decorator you can use to fake the result of an agent job. For example, you may do it like so:
press/press/api/tests/test_site.py, lines 243 to 247 in 983631c
This way, you can fake a response for a job by referring to its job type name.
You may also fake the output obtained from the job, which you can then use to test the callback that consumes it:
press/press/api/tests/test_site.py, lines 305 to 323 in 983631c
It is also possible to fake multiple jobs in the same context, for when multiple jobs are processed in the same request or job:
press/press/press/doctype/site_migration/test_site_migration.py, lines 29 to 77 in 983631c
Note that with this, you can't fake 2 results for the same type of job. This is still a limitation. As a workaround, you can use multiple `with` statements for such cases.
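The workaround is just `unittest.mock` context managers in sequence. Here's a minimal, Press-free sketch of the pattern (the `Agent` class and method names are made up for illustration, not Press's actual API):

```python
from unittest.mock import patch

class Agent:
    def fetch_status(self):
        # In reality this would make an HTTP request.
        return "real"

# Fake two different results for the same call by using
# successive `with` blocks, each with its own return value:
with patch.object(Agent, "fetch_status", return_value="pending"):
    assert Agent().fetch_status() == "pending"

with patch.object(Agent, "fetch_status", return_value="success"):
    assert Agent().fetch_status() == "success"

# Outside the blocks, the original behaviour is restored.
assert Agent().fetch_status() == "real"
```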
This is all done with the help of the `responses` library, by intercepting the HTTP requests involved.
Note that you shouldn't mock `AgentJob.enqueue_http_request` when using the above decorator, as that will interfere with the request interception needed to fake the job results.
Now that we've learned to mock external things, we can go about mocking internal things. This forms the basis of testing:
- Make test records
- Perform the operation (i.e. run the code that will run on production)
- Check the test records for results
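The three steps above map directly onto the body of a test method. A trivial, Press-free sketch (the dict stands in for test records; nothing here is Press code):

```python
import unittest

class TestCounter(unittest.TestCase):
    def test_increment_increases_count_by_one(self):
        # 1. Make test records
        records = {"count": 0}
        # 2. Perform the operation (the code under test)
        records["count"] += 1
        # 3. Check the test records for results
        self.assertEqual(records["count"], 1)

# Run the case programmatically and confirm it passes.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestCounter).run(result)
assert result.wasSuccessful()
```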
Making test records is also kind of a pain, as we have validations all around the code that will need to pass every time you create a doc. This is too much cognitive load. Therefore, we create utility functions (with sensible defaults) to make test records of the corresponding DocType in their own corresponding test files (for organization reasons). These functions do the bare minimum to make a valid document of that doctype.
Eg: `create_test_bench` in `test_bench.py` can be imported and used whenever you need a valid bench (which itself has dependencies on many other doctypes).
You can also add default args to these utility functions as you come across the need. Just append them to the end so you won't have to rewrite pre-existing tests.
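The pattern, sketched abstractly (the helper names and fields here are invented, and plain dicts stand in for Frappe documents; Press's real helpers insert actual docs):

```python
# Hypothetical sketch of the utility-function pattern with sensible defaults.
def create_test_server(ip: str = "127.0.0.1", cluster: str = "Default"):
    # A real helper would build and insert a Frappe doc; a dict
    # stands in for it here.
    return {"doctype": "Server", "ip": ip, "cluster": cluster}

def create_test_bench(server=None):
    # Dependencies get sensible defaults too, so callers only
    # override what the test actually cares about.
    server = server or create_test_server()
    return {"doctype": "Bench", "server": server["ip"]}
```

New args appended at the end (with defaults) keep old call sites like `create_test_bench()` working unchanged.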
You write a test by writing a method in the TestCase. Make the method name as long as you want. Test methods are supposed to test a specific case. When the test eventually breaks (serving its purpose), the reader should be able to tell what it's supposed to test without even having to read the code. Making the method name small is pointless; we're never going to reference this method anywhere in code, ever. Eg:
press/press/press/doctype/site/test_site.py, lines 215 to 228 in 2503e52
You can also go the extra mile and write a function docstring. This docstring will be shown in the output when the test runner detects that the test has failed.
Not a real word, but I like to be able to re-run my tests without having to nuke the database. Leaving the database in an "empty state" after every test is a very easy way to achieve this. This also makes testing for things like count of docs super easy. Lucky for us, there's a method in `TestCase` that's run after every individual test in the class. It's called `tearDown`.
We can easily do:

```python
def tearDown(self):
    frappe.db.rollback()
```
And every doc you create (in the foreground at least) will not be committed to the database.
Note: If the code you're testing calls `frappe.db.commit`, be sure to mock it, because otherwise docs will get committed up to that point regardless.
You can mock certain lines while testing a piece of code with the `patch` decorator too. Eg:

```python
import unittest
from unittest.mock import MagicMock, patch

# this will mock all the frappe.db.commit calls in server.py while in this test suite
@patch("press.press.doctype.server.server.frappe.db.commit", new=MagicMock)
class TestBench(unittest.TestCase):
    ...
```
You can use the patch decorator on test methods too. Eg:
press/press/tests/test_cleanup.py, lines 280 to 290 in 6dd6b2c
When you don't pass the `new` argument, the `patch` decorator passes the created mock object along as an argument, so you can later do asserts on it (if you want to).
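A minimal, self-contained illustration of the mock arriving as an argument (the `Notifier` class and method names are invented for the example):

```python
import unittest
from unittest.mock import patch

class Notifier:
    def send(self, message):
        # Imagine this hitting an external service.
        raise RuntimeError("no network in tests, please")

class TestNotifier(unittest.TestCase):
    @patch.object(Notifier, "send")
    def test_send_called_with_message(self, mock_send):
        # With no `new` argument, patch hands the MagicMock
        # to the test method as `mock_send`.
        Notifier().send("hello")
        mock_send.assert_called_once_with("hello")

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestNotifier).run(result)
assert result.wasSuccessful()
```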
You can even use the decorator as a context manager if you don't want to mock things for the entirety of the test.
press/press/tests/test_audit.py, lines 97 to 102 in 6dd6b2c
Here, we're actually faking the output of the function, which usually calls a remote endpoint that's out of our control, by adding the `new` argument to the method.
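The same idea in miniature (the `DNS` class and function names are made up; in Press the faked function would be one that hits a remote endpoint):

```python
from unittest.mock import Mock, patch

class DNS:
    @staticmethod
    def get_record(domain):
        # Stand-in for a call to a remote endpoint we don't control.
        raise RuntimeError("would query a real DNS server")

def domain_points_to(domain, ip):
    return DNS.get_record(domain) == ip

# Fake the function's output only inside this block by passing `new`:
with patch.object(DNS, "get_record", new=Mock(return_value="1.2.3.4")):
    assert domain_points_to("example.com", "1.2.3.4")
```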
Note: When you use asserts on Mock objects, Document comparisons will mostly work as expected, as we override `__eq__` of the Document class during tests (check `before_test.py`). This is needed because, by default, when 2 Document objects are compared only their `id()` is checked, which will return False as the objects will be different in memory.
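The identity-comparison pitfall, in miniature (a toy `Document` class, not Frappe's, and a deliberately simplified `__eq__`):

```python
class Document:
    def __init__(self, doctype, name):
        self.doctype = doctype
        self.name = name

a = Document("Site", "test-site")
b = Document("Site", "test-site")

# Default object equality is identity, so equal-looking docs differ:
assert a != b

# Overriding __eq__ (as before_test.py does for real documents)
# makes field-wise comparison work, e.g. in mock asserts:
Document.__eq__ = lambda self, other: (
    self.doctype == other.doctype and self.name == other.name
)
assert a == b
```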
Note: If you need to mock some Callable while preserving its function (in case you want to do asserts on it), you can use the `wraps` kwarg instead of `new`. Eg:
Here, we check what args the Ansible constructor was called with.
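A self-contained sketch of the `wraps` technique (the `Playbook`/`Runner` classes are invented stand-ins, not Press's Ansible wrapper):

```python
from unittest.mock import patch

class Playbook:
    def __init__(self, name):
        self.name = name

    def run(self):
        return f"ran {self.name}"

class Runner:
    playbook_class = Playbook

    def deploy(self):
        return self.playbook_class("ping.yml").run()

# `wraps` records the call but still delegates to the real class,
# so behaviour is preserved while asserts become possible:
with patch.object(Runner, "playbook_class", wraps=Playbook) as mock_cls:
    assert Runner().deploy() == "ran ping.yml"
    mock_cls.assert_called_once_with("ping.yml")
```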
That's pretty much all you need to write safe, rerunnable tests for Press. You can check out https://docs.python.org/3/library/unittest.mock.html for more things you can do with the standard Python libraries. If your editor and plugins are configured nicely, you can even do TDD with ease.
Protip: When you have test records you want across a whole TestCase, you can simply create them in its `setUp` method and assign them to member variables. Eg:

```python
def setUp(self):
    self.team = create_test_team()
```
Since background jobs are forked off in a different process, our mocks and patches are not going to hold there. Not only that, but we can't control or predict when the background job will run and finish. So, when your code involves creating a background job, we can simply mock the call so that it runs in the foreground instead. There's a utility method you can use to achieve this with ease:
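The idea behind that utility, sketched without Frappe (the `Queue.enqueue` stand-in and all names here are invented; Press's actual helper lives in its test utilities):

```python
from unittest.mock import patch

class Queue:
    @staticmethod
    def enqueue(func, *args, **kwargs):
        # Stand-in for a real job queue: normally the function would
        # run later, in a separate worker process, beyond our mocks.
        raise RuntimeError("would run in a separate worker process")

results = []

def create_backup(site):
    results.append(f"backed up {site}")

def request_backup(site):
    Queue.enqueue(create_backup, site)

# Patch enqueue so the "background" job runs immediately in the
# foreground, where our mocks and the db transaction still apply:
def run_in_foreground(func, *args, **kwargs):
    func(*args, **kwargs)

with patch.object(Queue, "enqueue", new=run_in_foreground):
    request_backup("test-site")

assert results == ["backed up test-site"]
```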
You can run all of the tests with the following command:

```shell
bench --site test_site run-tests --app press
```
But you'll never have to. That's what CI is for. Instead, you'll mostly want to use:

```shell
bench --site test_site run-tests --app press --module press.press.doctype.some_doctype.test_some_doctype
```
This is because while writing bugs, your changes will mostly affect that one module only and since we don't have many tests to begin with, it won't take very long to run a module's test by itself anyway. Give your eyes a break while this happens.
You can also run an individual test with:

```shell
bench --site test_site run-tests --module press.press.doctype.some_doctype.test_some_doctype --test test_very_specific_thing
```
You most likely won't enjoy running commands manually like this. So you'd want to check out this vim plugin or this vscode plugin
Note: The frappe_test plugin doesn't populate vim's quickfix list yet. Though Frappe's test runner output isn't very pyunit-errorformat friendly, you can still make it work with a custom errorformat and some hacks to set makeprg.