
Historic: Testing circa 2015

Robert Sparks edited this page May 1, 2023 · 1 revision


Testing

At the time of writing (March 2015), the test support in the project consists of the facilities described on this page.

PyFlakes

A Django management command which runs PyFlakes over the datatracker code is available for standalone PyFlakes testing:

   $ ietf/manage.py pyflakes

This lets you do a quick check for obvious problems with new code, without running the full test suite.

PyFlakes testing is also part of the full test suite, so you don't need to run this separately if you're running the full test suite anyhow.

Test Suite

We're doing automated testing using Django's test support (see [http://docs.djangoproject.com/en/dev/topics/testing/ Testing Django applications]), with two different types of test cases:

  • Python UnitTests: some functionality (such as filling out a form) can't be adequately tested by just fetching a URL. See, for example, [/browser/branch/2.00/ietf/ipr/tests.py ipr/tests.py].

  • Python DocTests: currently used only in [/browser/trunk/ietf/doc/templatetags/ietf_filters.py doc/templatetags/ietf_filters.py] (see [/browser/trunk/ietf/doc/tests.py doc/tests.py] for how to tell Django about them).
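As an illustration of the doctest style, a template filter with embedded doctests might look like the sketch below. The `firstword` filter is hypothetical, invented for this example; the real filters live in doc/templatetags/ietf_filters.py.

```python
import doctest

# Hypothetical template filter with embedded doctests; not one of the
# actual filters in ietf/doc/templatetags/ietf_filters.py.
def firstword(value):
    """Return the first whitespace-separated word of a string.

    >>> firstword("Internet-Draft status change")
    'Internet-Draft'
    >>> firstword("")
    ''
    """
    parts = value.split()
    return parts[0] if parts else ""

if __name__ == "__main__":
    # Running the module directly checks the examples in the docstrings.
    failures, _ = doctest.testmod()
    print("doctest failures:", failures)
```

Django's test runner can be told to collect such docstring examples as test cases, which is what doc/tests.py does for ietf_filters.py.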

To run all test suite tests (except the Selenium tests, which do JavaScript testing), just do:

  $ ietf/manage.py test --settings=settings_sqlitetest

To run tests for a single application (subdirectory under ietf/), do:

  $ ietf/manage.py test --settings=settings_sqlitetest ipr

Alternatively, you can run the tests against the regular database engine by omitting the settings file which specifies an SQLite3 database, but the run will be much slower.

Coverage Testing

At the end of the test suite, there are three special tests which compare coverage data gathered from the current run of the test suite with data gathered from the latest release, checked out from the repository.

We are gathering data on template loading, URL pattern matching, and Python code coverage. The fact that a template has been loaded doesn't mean that all branches through it have been tested, and the fact that a URL pattern has been matched doesn't mean that all variations on the pattern regex have been tested, but it's a start.

For code coverage testing, we are using the coverage.py tool, which has been integrated into the test suite.

If any of the three tests shows coverage lower than the coverage of the latest release, the test will fail. Hopefully this will encourage everyone to write tests for any new code committed.

At the end of a test suite run, the coverage number from the current run will be output, together with the coverage at the time of the latest release. It should look something like this:

Test coverage data:
      Template coverage:  65.06%  (5.12.2:  65.06%)
           Url coverage:  51.64%  (5.12.2:  51.64%)
          Code coverage:  66.20%  (5.12.2:  66.20%)
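The release-comparison check these special tests perform can be sketched roughly as follows. This is a simplification under assumed data (the figures are taken from the sample output above); the real tests read recorded release data from the repository.

```python
# Rough sketch of the release-comparison check; the numbers are the
# illustrative 5.12.2 baseline figures from the sample output above.
def coverage_regressed(current, release):
    """True if coverage in the current run fell below the release baseline."""
    return current < release

release_5_12_2 = {"template": 65.06, "url": 51.64, "code": 66.20}
current_run = {"template": 65.06, "url": 51.64, "code": 66.20}

for kind, baseline in release_5_12_2.items():
    status = "FAIL" if coverage_regressed(current_run[kind], baseline) else "ok"
    print(f"{kind:>8} coverage: {current_run[kind]:.2f}% ({status})")
```

A run that matches the baseline exactly, as above, passes; only a drop below the release figure fails the test.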

Code Coverage Details

Once you've run the full test suite, raw data for the code coverage will be written to a file .coverage in the root of your working copy, and html pages showing per-file line-by-line coverage will be available under /static/coverage/index.html in your development server.

Coverage Changes

More comprehensive code, template, and URL coverage data than the summary information mentioned above is written to the file latest-coverage.json in the root of your working copy. Coverage data for releases from 5.11.0 and onwards are available in release-coverage.json.
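A minimal sketch of such a comparison, assuming a flat metric-to-percentage mapping in the JSON; the actual layout of latest-coverage.json and release-coverage.json may well differ (e.g. per-file detail rather than flat summaries):

```python
import json

# Assumed, simplified data layout; the real coverage JSON files may be
# structured differently.
latest = json.loads('{"template": 65.50, "url": 51.64, "code": 66.80}')
release = json.loads('{"template": 65.06, "url": 51.64, "code": 66.20}')

def coverage_delta(latest, release):
    """Return metric -> (latest - release) for metrics present in both."""
    return {k: round(latest[k] - release[k], 2) for k in latest if k in release}

print(coverage_delta(latest, release))
```

The coverage_changes management command produces a far more detailed report than this, down to individual files, templates, and URL patterns.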

You can compare the latest run with the most recent release coverage with the management command

   $ ietf/manage.py coverage_changes

Test Crawler

We have a test crawler which can be set to crawl all internal links which can be found from a set of initial pages: [/browser/trunk/bin/test-crawl bin/test-crawl]. It takes more than two hours to complete, so it isn't normally run as part of daily bug-fixing or coding, but it is always run before doing a release.

To run the test crawler, from the top of the working copy tree:

   $ bin/test-crawl

Expect the run to take between 2 and 4 hours, with one line of output for each URL visited.
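The core of such a crawler — extracting same-host links from each fetched page so that every internal URL is visited once — can be sketched like this. This is a simplification with the actual fetching stubbed out; bin/test-crawl is the authoritative implementation, and the hostname below is made up.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def internal_links(base_url, html):
    """Resolve hrefs against base_url and keep only same-host links."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    links = set()
    for href in parser.hrefs:
        url = urljoin(base_url, href)
        if urlparse(url).netloc == host:
            links.add(url.split("#")[0])  # ignore fragment-only variations
    return links

page = '<a href="/doc/all/">docs</a> <a href="http://example.org/x">ext</a>'
print(internal_links("https://datatracker.example/", page))
```

The real crawler additionally keeps a queue of unvisited URLs, records HTTP status codes, and prints one line per page fetched, which is where the hours go.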

BuildBot

As of March 14th, 2015, we again have [BuildBot](https://github.com/ietf-tools/datatracker/wiki/BuildBot) support: see the [/buildbot/waterfall BuildBot Waterfall] display.

The BuildBot runs the three tests described earlier, triggered by repository commits:

  • A PyFlakes check is triggered after 10 seconds without repository activity following a commit
  • A Test Suite run is triggered after 5 minutes without repository activity following a commit
  • A Test Crawler run is triggered after 4 hours without repository activity following a commit
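The "quiet period" logic above works as a debounce: a run fires only once enough time has passed since the most recent commit, so a burst of commits triggers a single build. A minimal sketch of that mechanism, with times as plain numbers for illustration:

```python
def should_trigger(commit_times, now, quiet_period):
    """True if the most recent commit is at least quiet_period old."""
    return bool(commit_times) and now - max(commit_times) >= quiet_period

# PyFlakes check: 10-second quiet period, commits at t=100 and t=105.
print(should_trigger([100, 105], now=116, quiet_period=10))  # enough quiet time
print(should_trigger([100, 105], now=110, quiet_period=10))  # too soon
```

In BuildBot itself this is a scheduler configuration setting rather than hand-written code; the sketch only shows the behaviour the bullet list describes.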

The buildbot is also available on its own standalone site: https://trac.tools.ietf.org:8010/ .

[BuildBot](https://github.com/ietf-tools/datatracker/wiki/BuildBot) is a continuous integration tool written in Python which is very configurable and extensible, and is in use by many software projects (see the BuildBot showcases).

(The datatracker project also used BuildBot earlier, during the intensive skunkworks project to rewrite the public datatracker from Perl to Python/Django, but it had been offline for some years due to BuildBot version changes and changes in our test suite.)
