25 Sep 2014.

stbt batch run works really well for stress-testing and soak-testing, but it’s lacking a few features that would be useful for functional acceptance testing:

  • Run each test once, then re-run each failing test.
  • Run the tests in a random order, or specify ordering dependencies between tests.
  • Update an external bug database with the results.
  • For a continuous integration service: only run the fast tests, and don’t fail the build if a test fails due to a known defect.

The list goes on.

Stb-tester.com Ltd is currently working with a global TV manufacturer to provide testing services for their latest development project. To support this work we have developed a lot of exciting new features in stb-tester. We’ve been far too busy to release these features officially, but over the coming months you should see them making their way into an stb-tester release.

pytest

Instead of adding all the above features to stbt batch run (which is currently a 300-line shell script that’s already pushing the limits of what can be comfortably maintained in shell), we’re investigating ways to use pytest as our test runner.

pytest is a Python test runner similar to nose but with a stronger emphasis on a stable plugin API. The plan is to write a pytest plugin that will save a test’s output (logfiles, exit status, etc.) in the format expected by stbt batch report. We’ll keep the same interactive html reports; the only change is to the command-line you use to run the tests. For backwards compatibility we might keep stbt batch run as a wrapper around pytest, though it won’t have all the capabilities of using pytest directly.
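To give a feel for what such a plugin might look like, here is a minimal sketch that uses pytest’s pytest_runtest_logreport hook from a conftest.py file. The directory layout and filenames below (“results/…”, “exit-status”, and so on) are placeholders for illustration only; the real plugin will have to write exactly the files that stbt batch report expects.

    # conftest.py -- illustrative sketch only: the "results/..." directory
    # layout and filenames are placeholders, not the actual format that
    # "stbt batch report" expects.
    import os

    def pytest_runtest_logreport(report):
        # pytest calls this hook for the setup, call and teardown phase of
        # every test; we only care about the outcome of the test body.
        if report.when != "call":
            return
        testdir = os.path.join(
            "results", report.nodeid.replace("/", "_").replace("::", "."))
        os.makedirs(testdir, exist_ok=True)
        with open(os.path.join(testdir, "exit-status"), "w") as f:
            f.write("0\n" if report.passed else "1\n")
        with open(os.path.join(testdir, "duration"), "w") as f:
            f.write("%.2f\n" % report.duration)
        with open(os.path.join(testdir, "stdout.log"), "w") as f:
            f.write(report.capstdout)

Dropping a file like this into the test-pack’s conftest.py is enough for pytest to pick it up as a local plugin; no packaging or installation step is needed.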

pytest has a well-defined plugin interface; pytest’s own built-in configuration, test collection, running and reporting are implemented as plugins. The built-in plugins include support for the following (see the example after this list):

  • Skipping slow tests.
  • Skipping a test, or marking a failure as an “expected failure”, if certain conditions are met.
  • A general mechanism for adding arbitrary “markers” to tests – you can then choose to run only the tests that match certain markers.
  • Parameterized tests, so that you can run the same test multiple times with different parameters.
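For example (the marker names and test bodies here are invented for illustration, not taken from a real stb-tester test-pack):

    # test_markers.py -- a sketch of the built-in markers listed above.
    import os
    import pytest

    # Hypothetical flag: skip tests that need an infrared blaster when the
    # hardware isn't attached.
    HAVE_IR_BLASTER = os.environ.get("IR_BLASTER") == "1"

    @pytest.mark.slow  # arbitrary marker; select these tests with: pytest -m slow
    def test_full_channel_scan():
        assert True

    @pytest.mark.skipif(not HAVE_IR_BLASTER, reason="no IR blaster attached")
    def test_standby_via_remote_control():
        assert True

    @pytest.mark.xfail(reason="known defect: EPG takes too long to load")
    def test_epg_loads_within_five_seconds():
        assert True

    @pytest.mark.parametrize("channel", ["BBC ONE", "BBC TWO", "ITV"])
    def test_channel_logo_is_shown(channel):
        assert isinstance(channel, str)

A continuous integration job could then run only the quick tests with pytest -m "not slow", and a known defect marked xfail is reported as an expected failure instead of failing the build.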

Running pip search pytest- reveals 75 third-party plugins, which add the ability to time out hanging tests, randomize the order of tests, automatically re-run failing tests, integrate with Bugzilla and JIRA, and more.

All of the above is probably also possible with nose, but pytest’s killer feature is:

Assertion introspection

pytest will show the values of subexpressions in a failing assert:

>    assert set1 == set2
E    assert set(['0', '1', '3', '8']) == set(['0', '3', '5', '8'])
E      Extra items in the left set:
E      '1'
E      Extra items in the right set:
E      '5'

This makes test failures much easier to triage.
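The output above comes from a plain assert statement; there’s no need for assertEqual-style helper methods. A test as simple as this (hypothetical) one produces it:

    def test_recorded_channels_match_expected():
        set1 = set(['0', '1', '3', '8'])
        set2 = set(['0', '3', '5', '8'])
        assert set1 == set2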

We’ll need to customise this somewhat: for example, for assert stbt.match(...) we want to inspect the MatchResult that stbt.match returns, extract the MatchResult.frame, and save it to disk as “screenshot.png”.
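One possible way to do that — a sketch only, not necessarily the implementation we’ll ship — is a conftest.py that wraps stbt.match to remember the most recent MatchResult, and saves that result’s frame when the test fails. It assumes MatchResult.frame is a numpy image (as in the stbt API) and uses OpenCV to write it to disk:

    # conftest.py -- illustrative sketch only.
    import functools
    import cv2
    import pytest
    import stbt

    _last_match_result = [None]

    @pytest.fixture(autouse=True)
    def record_match_results(monkeypatch):
        # Wrap stbt.match so that we remember the most recent MatchResult.
        _last_match_result[0] = None
        real_match = stbt.match

        @functools.wraps(real_match)
        def wrapper(*args, **kwargs):
            result = real_match(*args, **kwargs)
            _last_match_result[0] = result
            return result

        monkeypatch.setattr(stbt, "match", wrapper)
        yield

    def pytest_runtest_makereport(item, call):
        # After the test body has run, if it failed and we saw a MatchResult,
        # save the frame it was matched against to help triage the failure.
        if call.when == "call" and call.excinfo is not None:
            result = _last_match_result[0]
            if result is not None:
                cv2.imwrite("screenshot.png", result.frame)

In practice we’d write the screenshot into the same per-test results directory that stbt batch report reads, alongside the test’s logs.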

Comments? Join the discussion on the project mailing list.