HTTP REST API v1

The HTTP REST API allows you to control the stb-tester ONE appliance programmatically using HTTP requests and JSON.

This API is intended for integrating the stb-tester ONE with larger test systems, such as a continuous integration system.

Grab screenshot

GET /api/v1/device/screenshot.png

Grab a screenshot from the device under test.

Example usage with curl:

Save a screenshot to screenshot.png:

curl -O http://stb-tester-one-example.local/api/v1/device/screenshot.png

Grab screenshot thumbnail

GET /api/v1/device/thumbnail.jpg

Grab a small thumbnail of the screenshot from the device under test.
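
Example usage with Python requests (a minimal sketch, following the same pattern as the screenshot endpoint above). Save the thumbnail to thumbnail.jpg:

import requests

response = requests.get(
    'http://stb-tester-one-example.local/api/v1/device/thumbnail.jpg')
with open('thumbnail.jpg', 'wb') as f:
    f.write(response.content)  # raw JPEG bytes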


List test cases

GET /api/v1/test_pack/(test_pack_commit_sha)/test_case_names

List test cases found at a particular revision (test_pack_commit_sha) of your test-pack. These names can be passed to /api/v1/run_tests.

Example usage with Python requests:

List the test cases:

print requests.get(
    "http://stb-tester-one-example.local/api/v1/test_pack/c2c82cad/test_case_names").json()
['tests/menu.py::test_that_menu_appears_when_menu_key_is_pressed',
 'tests/menu.py::test_that_close_button_closes_menu',
 'tests/live_tv.py::test_that_pause_button_pauses_live_tv']

List the test cases in your local git checkout HEAD:

my_sha = subprocess.check_output(['git', 'rev-parse', 'HEAD']).strip()
tests = requests.get(
    'http://stb-tester-one-example.local/api/v1/test_pack/%s/test_case_names'
    % my_sha).json()
for test_name in tests:
    print test_name
tests/menu.py::test_that_menu_appears_when_menu_key_is_pressed
tests/menu.py::test_that_close_button_closes_menu
tests/live_tv.py::test_that_pause_button_pauses_live_tv

Run tests

POST /api/v1/run_tests

Run the specified test cases from the given git commit. The parameters are encoded in the request body as a JSON object. Returns a job_uid that can be used to poll for job completion and test results.

Example usage with Python requests:

Start a test job with a single test:

response = requests.post(
    'http://stb-tester-one-example.local/api/v1/run_tests',
    data=json.dumps({
        "test_pack_revision": "c2c82cad",
        "test_cases": ["tests/menu.py::test_that_menu_appears_when_menu_key_is_pressed"],
        }))
print response.json()
{'job_id': 972,
 'job_uid': '/stb-tester-one-example/0a23/972',
 'job_url': 'http://stb-tester-one-example.local/api/v1/jobs/stb-tester-one-example/0a23/972',
 'status': 'running',
 'start_time': '2015-05-28T14:46:35.354791Z',
 'end_time': None,
 'result_counts': {'pass': 0, 'fail': 0, 'error': 0, 'total': 1},
}

Run all the test cases in the test-pack and wait for the job to complete, printing PASSED if no tests failed and at least one test passed. This is the kind of code you’d use for integration with a CI system:

import json
import sys
import time

import requests

commit_sha = "c2c82cad08114f973e2c36f0bbfbfb0a78dad911"
all_test_cases = requests.get(
    'http://stb-tester-one-example.local/api/v1/test_pack/%s/test_case_names'
    % commit_sha).json()
job = requests.post(
    'http://stb-tester-one-example.local/api/v1/run_tests',
    data=json.dumps({
        "test_pack_revision": commit_sha,
        "test_cases": all_test_cases})).json()

# Wait for job to complete:
while requests.get(job['job_url']).json()['status'] == "running":
    time.sleep(0.1)

# Inspect the results
counts = requests.get(job['job_url']).json()['result_counts']
if counts['pass'] + counts['fail'] > 0:
    if counts['fail'] == 0:
        print "PASSED"
        sys.exit(0)
    else:
        print "FAILED"
        sys.exit(1)
else:
    print "ERROR: No tests ran to completion"
    sys.exit(2)
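
The optional request parameters documented below (category, soak, shuffle and remote_control) can be combined in the same request body. A hedged sketch, reusing commit_sha and all_test_cases from the previous example; the remote-control name is the example from the remote_control description below, so substitute a configuration file from your own test-pack:

requests.post(
    'http://stb-tester-one-example.local/api/v1/run_tests',
    data=json.dumps({
        "test_pack_revision": commit_sha,
        "test_cases": all_test_cases,
        "category": "soak test of release x",  # free-form results category
        "soak": "run forever",                 # repeat until explicitly stopped
        "shuffle": True,                       # randomise the test order
        "remote_control": "Sony_RM-ED022",     # example name; use your own config
    }))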
Request JSON Object:
 
  • test_pack_revision (string) – (mandatory) Git commit SHA in test-pack repository identifying the version of the tests to run. Can also be the name of a git branch or tag.
  • test_cases (array) –

    (mandatory) List of tests to run.

    Testcase names have the form (filename)::(function_name) where filename is given relative to the root of the test-pack repository and function_name identifies a function within that file; for example tests/my_test.py::test_that_blah_dee_blah.

    The list of all possible testcase names for a given revision can be retrieved using /api/v1/test_pack/(test_pack_commit_sha)/test_case_names.

  • category (string) –

    (optional) Category to save the results in. Defaults to "default".

    When you are viewing test results you can filter by this string so that you don’t see results that you aren’t interested in. For example you might set this to “soak test of release x”.

    This must be a valid UTF-8 string (non-ASCII characters are accepted). It must not contain /, newline (\n), colon (:), NUL (\0), $, or any other control characters; it must not start with . or -; and it must not be empty.

  • soak (string) –

    (optional) Soak-testing mode. The allowed values are:

    • "run once" – this is the default mode; it runs each of the specified testcases once.
    • "run forever" – runs each of the testcases once, then repeats, forever. Use /api/v1/jobs/(job_uid)/stop to stop it.
  • shuffle (bool) –

    (optional) Randomise the order in which the tests are run. The allowed values are:

    • true – Randomise the order in which the test cases are run, weighting towards running the faster test cases more often. See Run forever in random order for more information.
    • false – Run the test cases in the order in which they appear in test_cases. This is the default if shuffle is omitted.
  • remote_control (string) –

    (optional) The remote control infrared configuration to use when running the tests.

    This should match the name of a remote control configuration file in your test-pack git repository. For example if your test-pack has config/remote-control/Sony_RM-ED022.lircd.conf, then you should specify "Sony_RM-ED022".

    If not specified, this defaults to the configuration setting test_pack.default_remote_control. If that isn’t specified, this defaults to the first remote control configuration file in alphabetical order.

Response JSON Object:
 
  • job_uid (string) – Identifier that can be used later to refer to this test job (see the job status and stop endpoints below). This identifier is unique across all stb-tester ONE devices in your test farm.
  • job_url (string) – URL of the job status endpoint for this job. Included for convenience.
  • job_id (int) – Deprecated, included for backwards compatibility only. Use job_uid to identify this test job.
  • status – See /api/v1/jobs/(job_uid).
  • start_time – See /api/v1/jobs/(job_uid).
  • end_time – See /api/v1/jobs/(job_uid).
  • result_counts – See /api/v1/jobs/(job_uid).

Added in v17-1: The job_uid response parameter.

Added in v22.4: The job_url, status, start_time, end_time, and result_counts response parameters.

Added in v23.3: The shuffle request parameter.

Inspect test job progress

GET /api/v1/jobs/(job_uid)

Find out the current status of the job started with /api/v1/run_tests.
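
Example usage with Python requests. A minimal sketch that checks on the job started in the /api/v1/run_tests example above (the job_uid path segment is the example value from that response):

status = requests.get(
    'http://stb-tester-one-example.local'
    '/api/v1/jobs/stb-tester-one-example/0a23/972').json()
if status['status'] == "exited":
    print status['result_counts']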

Parameters:
  • job_uid – The desired job’s universal identifier, as returned from /api/v1/run_tests.
Response JSON Object:
 
  • job_uid (string) – Identifier that refers to this test job. This is unique across all stb-tester ONE devices in your test farm.
  • job_url (string) – The URL of this endpoint. This is included here for consistency with /api/v1/run_tests.
  • status (string) – Current status of the job. This can be either "running" or "exited".
  • start_time (string) – The time at which the job started as an ISO8601 formatted datetime (for example "2015-05-28T14:46:35.354791Z").
  • end_time (string) – The time at which the job finished as an ISO8601 formatted datetime, or null if the job is still running.
  • result_counts (object) –

    A summary of the test results. This is a dictionary with the following keys:

    • pass – number of tests that completed and passed.
    • fail – number of tests that completed but failed.
    • error – number of tests that couldn’t complete due to an error (see Exceptions in the Python API reference).
    • total – the total number of tests in this job. This can be greater than pass+fail+error if the job is still running or was stopped prematurely.

Added in v22.4.

Stop a job in progress

POST /api/v1/jobs/(job_uid)/stop

Stop a job started with /api/v1/run_tests.

By the time this endpoint returns a response, the job will be stopped. This makes it safe to use /api/v1/run_tests immediately after you receive the HTTP response from this endpoint.

This endpoint will respond with HTTP status 200 even if the job has already exited. This makes it safe to stop a job any number of times.
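
Example usage with Python requests. A minimal sketch, reusing the job object returned by the /api/v1/run_tests examples above (job['job_url'] points at the job status endpoint, so appending /stop gives this endpoint):

requests.post(job['job_url'] + '/stop')

# Safe to start another job straight away:
requests.post(
    'http://stb-tester-one-example.local/api/v1/run_tests',
    data=json.dumps({"test_pack_revision": commit_sha,
                     "test_cases": all_test_cases}))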

Parameters:
  • job_uid – The desired job’s universal identifier, as returned from /api/v1/run_tests.

Added in v22.4.

Get list of test results

GET /api/v1/results

Retrieve test results for each test run, optionally filtered by a search expression.

Notes:

  • The response will not include a test-run that is currently in progress.
  • The response only includes the first 2,000 matching test-runs. Use the filter and sort parameters to retrieve older results (for example, sort by date, then make a second request using a filter to exclude results newer than the last result in the first response).

Example usage with Python requests:

response = requests.get(
    'http://stb-tester-one-example.local/api/v1/results',
    params={'filter': 'job:/stb-tester-one-example/0a23/972'})
print response.json()
[{'result_id': '/stb-tester-one-example/0a23/972/2015-05-28_14.46.34',
  'job_uid': '/stb-tester-one-example/0a23/972',
  'result_url': 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.34',
  'start_time': '2015-05-28T14:46:35.354791Z',
  'end_time': '2015-05-28T14:46:42.325432Z',
  'test_pack_sha': 'c2c82cad08114f973e2c36f0bbfbfb0a78dad911',
  'test_case': 'tests/epg.py::test_that_epg_is_populated',
  'result': 'fail',
  'failure_reason': "EPG is empty"
 },
 {'result_id': '/stb-tester-one-example/0a23/972/2015-05-28_14.46.44',
  'job_uid': '/stb-tester-one-example/0a23/972',
  'result_url': 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.44',
  'start_time': '2015-05-28T14:46:45.431553Z',
  'end_time': '2015-05-28T14:46:49.334224Z',
  'test_pack_sha': 'c2c82cad08114f973e2c36f0bbfbfb0a78dad911',
  'test_case': 'tests/epg.py::test_that_epg_is_populated',
  'result': 'pass',
  'failure_reason': None
 },
]
Query Parameters:
 
  • filter – A search expression in the same format as the interactive search box in the test-results web interface, documented here. If not specified it will return all results (up to a limit of 2,000 results).
  • sort – “<fieldname>:asc” or “<fieldname>:desc”. Sort results by the specified field in ascending or descending order. <fieldname> can be “category”, “date”, “duration”, “job”, “testcase”, or “result”. Defaults to “date:desc”.
  • tz – A name like “America/Denver” from the Olson timezone database. This is the default timezone used for any dates specified in the filter parameter if the timezone isn’t explicit in those dates. Defaults to “UTC”.
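
A hedged sketch combining these query parameters with Python requests (the filter expression is the same job filter used in the example above; adjust it to your own search):

response = requests.get(
    'http://stb-tester-one-example.local/api/v1/results',
    params={'filter': 'job:/stb-tester-one-example/0a23/972',
            'sort': 'date:asc',        # oldest first
            'tz': 'America/Denver'})   # timezone for dates in the filter
for result in response.json():
    print result['start_time'], result['test_case'], result['result']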

The response consists of a JSON array of JSON objects. Each object corresponds to a test run. Each object contains a subset of the fields from /api/v1/result/(result_id)/:

Response JSON Object:
 
  • result_id (string) – Identifier that refers to this test result. This is unique across all stb-tester ONE devices in your test farm.
  • result_url (string) – The URL of this test result. This can be used later to retrieve more detailed information, or to fetch test artifacts and screenshots by appending /artifacts/(filename) to the URL.
  • job_uid (string) – Identifier that refers to this test job. See /api/v1/jobs/(job_uid).
  • start_time (string) – The time at which the test run started as an ISO8601 formatted datetime (for example "2015-05-28T14:46:35.354791Z").
  • end_time (string) – The time at which the test run finished as an ISO8601 formatted datetime.
  • test_case (string) – The name of the test case as given to /api/v1/run_tests.
  • test_pack_sha (string) – The git sha of the test-pack revision as given to /api/v1/run_tests.
  • result (string) –

    One of the following values:

    • pass – the test case completed and passed.
    • fail – the test case completed but failed.
    • error – the test case couldn’t complete due to an error (see Exceptions in the Python API reference).
  • failure_reason (string) – The exception message when the test failed. null if the test passed.

Added in v27.4.

Get list of results for a job

GET /api/v1/jobs/(job_uid)/results

Retrieve the results of each of the test runs in the given job. This is the same as /api/v1/results with a filter of “job:job_uid”; it is kept for backwards compatibility.
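
Example usage with Python requests. A minimal sketch, reusing the job object returned by the /api/v1/run_tests examples above:

results = requests.get(job['job_url'] + '/results').json()
failures = [r for r in results if r['result'] == 'fail']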

Added in v22.5.

Get detailed information about a test run

GET /api/v1/result/(result_id)/

Retrieve detailed information about a test result.

Example usage with Python requests:

response = requests.get(
    'http://stb-tester-one-example.local/api/v1/result/' +
    'stb-tester-one-example/0a23/972/2015-05-28_14.46.34')
print response.json()
{'result_id': '/stb-tester-one-example/0a23/972/2015-05-28_14.46.34',
 'job_uid': '/stb-tester-one-example/0a23/972',
 'result_url': 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.34',
 'start_time': '2015-05-28T14:46:35.354791Z',
 'end_time': '2015-05-28T14:46:42.325432Z',
 'test_pack_sha': 'c2c82cad08114f973e2c36f0bbfbfb0a78dad911',
 'test_case': 'tests/epg.py::test_that_epg_is_populated',
 'result': 'fail',
 'failure_reason': 'EPG is empty',
 'traceback': '''Traceback (most recent call last):
  File "/usr/lib/stbt/stbt-run", line 88, in <module>
    function()
  File "tests/epg.py", line 24, in test_that_epg_is_populated
    assert epg_is_populated(), "EPG is empty"
AssertionError: EPG is empty''',
 'artifacts': {
     'combined.log': {'size': 39381},
     'duration': {'size': 3},
     'exit-status': {'size': 2},
     'failure-reason': {'size': 17},
     'git-commit': {'size': 8},
     'index.html': {'size': 2254},
     'screenshot.png': {'size': 196119},
     'stbt-version.log': {'size': 17},
     'stderr.log': {'size': 39161},
     'stdout.log': {'size': 220},
     'test-args': {'size': 1},
     'test-name': {'size': 70},
     'thumbnail.jpg': {'size': 15220},
     'video.webm': {'size': 262863},
     'my-custom-logfile.log': {'size': 52023},
     'a/file in a subdirectory/another.log': {'size': 2433}
 }
}
Parameters:
  • result_id – Identifier that refers to this test result, as returned from /api/v1/results. This is unique across all stb-tester ONE devices in your test farm.

The response consists of a JSON object, with the following elements:

Response JSON Object:
 
  • result_id (string) – Identifier that refers to this test result. This is unique across all stb-tester ONE devices in your test farm.
  • result_url (string) – The URL of this test result.
  • job_uid (string) – Identifier that refers to this test job. See /api/v1/jobs/(job_uid).
  • start_time (string) – The time at which the test run started as an ISO8601 formatted datetime (for example "2015-05-28T14:46:35.354791Z").
  • end_time (string) – The time at which the test run finished as an ISO8601 formatted datetime.
  • test_case (string) – The name of the test case as given to /api/v1/run_tests.
  • test_pack_sha (string) – The git sha of the test-pack revision as given to /api/v1/run_tests.
  • result (string) –

    One of the following values:

    • pass – the test case completed and passed.
    • fail – the test case completed but failed.
    • error – the test case couldn’t complete due to an error (see Exceptions in the Python API reference).
  • failure_reason (string) – The exception message when the test failed. null if the test passed.
  • job_category (string) – The category as passed to /api/v1/run_tests when the tests were run.
  • traceback (string) – The traceback of the exception that caused the test to fail. null if the test passed.
  • artifacts (object) – The files created during the test run. See /api/v1/result/(result_id)/artifacts/ for details.

Added in v22.5.

Get log output from test run

GET /api/v1/result/(result_id)/stbt.log

Retrieve the output of the test run. This includes logging from stb-tester and any lines printed (written to stderr or stdout) by your test case during the test run.

Each line consists of:

iso8601_datetime <space> line

e.g.:

2015-06-18T14:59:00.321421+00:00 A line from the test script

The file is served as UTF-8, but there is no guarantee that the contents are valid UTF-8.

Example usage with Python requests:

Show the output of a test run:

result_url = 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.44'
response = requests.get('%s/stbt.log' % result_url)
print response.text
2015-06-16T07:29:17.162452+00:00 stbt-run: Arguments:
2015-06-16T07:29:17.162615+00:00 control: lirc:172.17.0.133:8765:vstb
2015-06-16T07:29:17.162646+00:00 args: ['test_that_when_pressing_back_i_go_back_to_where_i_was_before_action_panel_epg']
2015-06-16T07:29:17.162673+00:00 verbose: 1
2015-06-16T07:29:17.162701+00:00 source_pipeline: ximagesrc use-damage=false remote=true show-pointer=false display-name=:10 ! video/x-raw,framerate=24/1
2015-06-16T07:29:17.162728+00:00 script: /var/lib/stbt/test-pack/tests/brand_rollups.py
2015-06-16T07:29:17.162754+00:00 restart_source: False
2015-06-16T07:29:17.162781+00:00 sink_pipeline: fakesink sync=false
2015-06-16T07:29:17.162808+00:00 write_json: stbt-run.json
2015-06-16T07:29:17.162835+00:00 save_video: video.webm
2015-06-16T07:29:17.212650+00:00 stbt-run: Saving video to 'video.webm'
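
Each line can be split into its timestamp and message parts; a minimal sketch reusing the response object from the example above:

for line in response.text.splitlines():
    timestamp, _, message = line.partition(' ')  # split at the first space
    print message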
Parameters:
  • result_id – The desired result’s identifier, as returned from /api/v1/results.

Added in v22.5.

Get screenshot saved at end of test run

GET /api/v1/result/(result_id)/screenshot.png

At the end of each test-run stb-tester saves a screenshot (in a lossless format) from the device under test. This can be useful for performing automated triage or as a source of new template images.

This endpoint retrieves that screenshot.

Example usage:

Save the screenshot to disk as screenshot.png, using curl from the command line:

curl -O "http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.44/screenshot.png"

Save the screenshot to disk from Python code, using the urllib module from the Python standard library:

import urllib
result_url = 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.44'
urllib.urlretrieve('%s/screenshot.png' % result_url, 'screenshot.png')

View the screenshot in an IPython notebook, using the Python requests library:

import IPython.display, requests
result_url = 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.44'
response = requests.get('%s/screenshot.png' % result_url)
IPython.display.Image(data=response.content)

Try a stbt.match against the saved screenshot:

import cv2, numpy, requests, stbt
result_url = 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.44'
response = requests.get('%s/screenshot.png' % result_url)
image = cv2.imdecode(numpy.frombuffer(response.content, dtype='uint8'), 1)
print stbt.match('my-template.png', frame=image)
MatchResult(timestamp=None, match=False, region=Region(x=82, y=48, width=54, height=54), first_pass_result=0.942756436765, frame=1280x720x3, image='sony-menu-search-selected.png')
Parameters:
  • result_id – The desired result’s identifier, as returned from /api/v1/results.

Added in v22.5.

List artifacts produced during a test run

GET /api/v1/result/(result_id)/artifacts/

List the files saved during a test run. This includes files written by stb-tester and any files written by your test scripts or pre/post_run scripts into the current working directory while the test is running.

Note: The files that stb-tester writes may change depending on the value of test_pack.stbt_version in .stbt.conf. It is therefore recommended to rely on this endpoint only for artifacts that your test scripts explicitly write, and to use the fields of /api/v1/result/(result_id) for data provided by stb-tester.

Example usage with Python requests:

List the files produced by a test run.

result_url = 'http://stb-tester-one-example.local/api/v1/result/stb-tester-one-example/0a23/972/2015-05-28_14.46.44'
response = requests.get('%s/artifacts/' % result_url)
print response.json()
{
  'combined.log': {'size': 39381},
  'duration': {'size': 3},
  'exit-status': {'size': 2},
  'failure-reason': {'size': 17},
  'git-commit': {'size': 8},
  'index.html': {'size': 2254},
  'screenshot.png': {'size': 196119},
  'stbt-version.log': {'size': 17},
  'stderr.log': {'size': 39161},
  'stdout.log': {'size': 220},
  'test-args': {'size': 1},
  'test-name': {'size': 70},
  'thumbnail.jpg': {'size': 15220},
  'video.webm': {'size': 262863},
  'my-custom-logfile.log': {'size': 52023},
  'a/file in a subdirectory/another.log': {'size': 2433}
}
Parameters:
  • result_id – The desired result’s identifier, as returned from /api/v1/results.

The response is a JSON object. The keys of the object are filenames and the values are dictionaries containing additional information about the files. Files in subdirectories are also listed, but the directories themselves won’t appear in the listing.

The information about each file is:

Response JSON Object:
 
  • size (int) – The size of the file in bytes.

Added in v22.5.

Get an artifact produced during a test run

GET /api/v1/result/(result_id)/artifacts/(filename)

Retrieve files recorded during a test run.

For example usage refer to the examples for the screenshot endpoint.
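
As a further sketch, this downloads one of the custom log files shown in the artifact listing above using Python requests (the result_url is the same example value used elsewhere on this page):

result_url = ('http://stb-tester-one-example.local/api/v1/result/'
              'stb-tester-one-example/0a23/972/2015-05-28_14.46.44')
response = requests.get('%s/artifacts/my-custom-logfile.log' % result_url)
with open('my-custom-logfile.log', 'wb') as f:
    f.write(response.content)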

Parameters:
  • result_id – The desired result’s identifier, as returned from /api/v1/results.
  • filename – The name of the file to retrieve.

The response consists of the contents of the file given by filename, as recorded during the test run.

Response Headers:
 
  • Content-Type – Varies depending on the filename extension.

Added in v22.5.