stbt Python API

Testcases are Python functions stored in the test-pack git repository under tests/*.py. The function name must begin with test_.

Example

import stbt

# You can import your own helper libraries from the test-pack.
import dialogues


def test_that_pressing_EPG_opens_the_guide():
    # We recommend starting each testcase with setup steps so that
    # the testcase can be run no matter what state the device-under-
    # test is in. Note that you can call other Python functions
    # defined elsewhere in your test-pack.
    if dialogues.modal_dialogue_is_up():
        dialogues.close_modal_dialogue()

    # Send an infrared keypress:
    stbt.press("KEY_EPG")

    # Verify that the device-under-test has reacted appropriately:
    stbt.wait_for_match("guide.png")

Controlling the system-under-test

Remote control

Network-based protocols

Some devices (such as the Roku and some Smart TVs) can be controlled via HTTP or other network protocols. You can use any Python networking library to make network requests to such devices (to install third-party Python libraries see Customising the test-run environment). We recommend the Python requests library, which is already installed.
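For example, a Roku can be driven over its External Control Protocol (ECP). This is a minimal sketch using only the Python standard library so that it is self-contained; ecp_keypress_url and roku_press are illustrative helper names, and in a real test-pack you would more likely use the recommended requests library:

```python
import urllib.request

def ecp_keypress_url(device_ip, key):
    # Roku's External Control Protocol (ECP) listens on port 8060 and
    # accepts keypresses as "POST /keypress/<key>".
    return "http://%s:8060/keypress/%s" % (device_ip, key)

def roku_press(device_ip, key):
    # An empty POST body is sufficient; ECP ignores the payload.
    request = urllib.request.Request(ecp_keypress_url(device_ip, key),
                                     data=b"", method="POST")
    urllib.request.urlopen(request, timeout=5)
```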

Alexa and Google Home

  • stbt.play_audio_file: Play an audio clip (for example “Alexa, play Teletubbies”) to test integration of your device with voice-controlled devices like Alexa or Google Home.

Verifying the system-under-test’s behaviour

Searching for an image

For a more flexible alternative to stbt.wait_for_match, use stbt.match with assert and stbt.wait_until. For example, to wait for an image to disappear:

stbt.press("KEY_CLOSE")
assert stbt.wait_until(lambda: not stbt.match("guide.png"))

Searching for text using OCR (optical character recognition)

Searching for motion

Miscellaneous video APIs

Audio APIs

Audio input:

  • stbt.get_rms_volume: Calculate the average RMS volume over a given duration.

  • stbt.wait_for_volume_change: Wait for changes in the RMS audio volume. Can detect the start of content playback or unmuting; bleeps or clicks while navigating the UI; or beeps in an A/V sync video.

  • stbt.audio_chunks: Low-level API to get raw audio samples for custom analysis.

Audio output:

  • stbt.play_audio_file: Play an audio file through the Stb-tester Node’s “audio out” jack. Useful for testing integration of your device with Alexa or Google Home.

Custom image processing

Stb-tester can give you raw video frames for you to do your own image processing with OpenCV’s “cv2” Python API. Stb-tester’s video frames are numpy.ndarray objects, which is the same format that OpenCV uses.

To save a frame to disk, use cv2.imwrite. Note that any file you write to the current working directory will appear as an artifact in the test-run results.
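For example, here is a minimal sketch of custom analysis over a frame using plain numpy operations (mean_brightness and its (x, y, width, height) region format are illustrative, not part of the stbt API):

```python
import numpy

def mean_brightness(frame, region):
    # frame: a BGR image as a numpy.ndarray, like stbt.Frame.
    # region: an (x, y, width, height) tuple to analyse.
    x, y, width, height = region
    crop = frame[y:y + height, x:x + width]
    return float(crop.mean())
```

You could call this with a frame from stbt.get_frame() and log the result with stbt.draw_text.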

Logging

  • stbt.draw_text: Write the specified text on this test-run’s video recording.

Anything you write to stdout or stderr appears in the test-run’s logfile in stb-tester’s test-results viewer.

Metrics

For some customers we run Prometheus and Grafana on your Stb-tester Portal. (Prometheus is an open-source time-series database for metrics; Grafana is an open-source dashboard & reporting tool driven by the data in Prometheus.) If this is enabled on your Portal, you can log metrics to Prometheus using the following APIs:

Utilities

Exceptions

If your testcase raises one of the following exceptions, it is considered a test failure:

Any other exception is considered a test error. For details see Test failures vs. errors.

API reference

stbt.android.adb

stbt.android.adb(args, **subprocess_kwargs) → CompletedProcess

Send commands to an Android device using ADB.

This is a convenience function. It will construct an AdbDevice with the default parameters (taken from your config files) and call AdbDevice.adb with the parameters given here.

stbt.android.AdbDevice

class stbt.android.AdbDevice(address=None, adb_server=None, adb_binary=None, tcpip=None)

Send commands to an Android device using ADB.

Default values for each parameter can be specified in your “stbt.conf” config file under the “[android]” section.

Parameters
  • address (string) – IP address (if using Network ADB) or serial number (if connected via USB) of the Android device. You can get the serial number by running adb devices -l. If not specified, there must be only one Android device connected by USB.

  • adb_server (string) – The ADB server (that is, the PC connected to the Android device). Defaults to localhost.

  • adb_binary (string) – The path to the ADB client executable. Defaults to “adb”.

  • tcpip (bool) – The ADB server communicates with the Android device via TCP/IP, not USB. This requires that you have enabled Network ADB access on the device. Defaults to True if address is an IP address, False otherwise.

adb(args, *, timeout=None, **subprocess_kwargs) → CompletedProcess

Run any ADB command.

For example, the following code will use “adb shell am start” to launch an app on the device:

d = AdbDevice(...)
d.adb(["shell", "am", "start", "-S",
       "com.example.myapp/com.example.myapp.MainActivity"])

Any keyword arguments are passed on to subprocess.run.

Returns

subprocess.CompletedProcess from subprocess.run.

Raises

subprocess.CalledProcessError if check is true and the adb process returns a non-zero exit status.

Raises

AdbError if adb connect fails.

devices() → str

Output of adb devices -l.

get_frame(coordinate_system=None) → stbt.Frame

Take a screenshot using ADB.

If you are capturing video from the Android device via another method (such as HDMI capture), sometimes it can be useful to capture a frame via ADB for debugging. This function will scale and/or rotate the ADB screenshot to match the screenshots from your main video-capture method as closely as possible.

Returns

A stbt.Frame, that is, an image in OpenCV format. Note that the time attribute won’t be very accurate (only to within 0.5s or so).

press(key) → None

Send a keypress.

Parameters

key (str) – An Android keycode as listed in <https://developer.android.com/reference/android/view/KeyEvent.html>. Also accepts standard Stb-tester key names like “KEY_HOME” and “KEY_BACK”.

swipe(start_position, end_position) → None

Swipe from one point to another point.

Parameters
  • start_position – A stbt.Region or (x, y) tuple of coordinates at which to start.

  • end_position – A stbt.Region or (x, y) tuple of coordinates at which to stop.

Example:

d.swipe((100, 100), (100, 400))

tap(position) → None

Tap on a particular location.

Parameters

position – A stbt.Region, or an (x,y) tuple.

Example:

d.tap((100, 20))
d.tap(stbt.match(...).region)

logcat(filename='logcat.log', logcat_args=None)

Run adb logcat and stream the logs to filename.

This is a context manager. You can use it as a decorator on your test-case functions, and adb logcat will run for the duration of the decorated function:

adb = stbt.android.AdbDevice()

@adb.logcat()
def test_launching_my_androidtv_app():
    ...

Parameters
  • filename (str) – Where the logs are written.

  • logcat_args (list) – Optional arguments to pass on to adb logcat, such as filter expressions. For example: logcat_args=["ActivityManager:I", "MyApp:D", "*:S"]. See the logcat documentation.

stbt.android.AdbError

exception stbt.android.AdbError

Bases: Exception

Exception raised by AdbDevice.adb.

Variables
  • returncode (int) – Exit status of the adb command.

  • cmd (list) – The command that failed, as given to AdbDevice.adb.

  • output (str) – The output from adb.

  • devices (str) – The output from “adb devices -l” (useful for debugging connection errors).

stbt.apply_ocr_corrections

stbt.apply_ocr_corrections(text, corrections=None)

Applies the same corrections as stbt.ocr’s corrections parameter.

This is available as a separate function so that you can use it to post-process old test artifacts using new corrections.

Parameters
  • text (str) – The text to correct.

  • corrections (dict) – The corrections to apply, in the same format as stbt.ocr’s corrections parameter.

stbt.as_precondition

stbt.as_precondition(message)

Context manager that replaces test failures with test errors.

Stb-tester’s reports show test failures (that is, UITestFailure or AssertionError exceptions) as red results, and test errors (that is, unhandled exceptions of any other type) as yellow results. Note that wait_for_match, wait_for_motion, and similar functions raise a UITestFailure when they detect a failure. By running such functions inside an as_precondition context, any UITestFailure or AssertionError exceptions they raise will be caught, and a PreconditionError will be raised instead.

When running a single testcase hundreds or thousands of times to reproduce an intermittent defect, it is helpful to mark unrelated failures as test errors (yellow) rather than test failures (red), so that you can focus on diagnosing the failures that are most likely to be the particular defect you are looking for. For more details see Test failures vs. errors.

Parameters

message (str) – A description of the precondition. Word this positively: “Channels tuned”, not “Failed to tune channels”.

Raises

PreconditionError if the wrapped code block raises a UITestFailure or AssertionError.

Example:

def test_that_the_on_screen_id_is_shown_after_booting():
    channel = 100

    with stbt.as_precondition("Tuned to channel %s" % channel):
        mainmenu.close_any_open_menu()
        channels.goto_channel(channel)
        power.cold_reboot()
        assert channels.is_on_channel(channel)

    stbt.wait_for_match("on-screen-id.png")

stbt.audio_chunks

stbt.audio_chunks(time_index=None)

Low-level API to get raw audio samples.

audio_chunks returns an iterator of AudioChunk objects. Each one contains 100ms to 5s of mono audio samples (see AudioChunk for the data format).

audio_chunks keeps a buffer of 10s of audio samples. time_index allows the caller to access these old samples. If you read from the returned iterator too slowly you may miss some samples. The returned iterator will skip these old samples and silently re-sync you at -10s. You can detect this situation by comparing the .end_time of the previous chunk to the .time of the current one.

Parameters

time_index (int or float) – Time from which audio samples should be yielded. This is an epoch time compatible with time.time(). Defaults to the current time as given by time.time().

Returns

An iterator yielding AudioChunk objects

Return type

Iterator[AudioChunk]

stbt.AudioChunk

class stbt.AudioChunk(array, dtype=None, order=None, time=None, rate=48000)

A sequence of audio samples.

An AudioChunk object is what you get from audio_chunks. It is a subclass of numpy.ndarray. An AudioChunk is a 1-D array containing audio samples in 32-bit floating point format (numpy.float32) between -1.0 and 1.0.

In addition to the members inherited from numpy.ndarray, AudioChunk defines the following attributes:

Variables
  • time (float) – The wall-clock time of the first audio sample in this chunk, as number of seconds since the unix epoch (1970-01-01T00:00:00Z). This is the same format used by the Python standard library function time.time.

  • rate (int) – Number of samples per second. This will typically be 48000.

  • duration (float) – The duration of this audio chunk in seconds.

  • end_time (float) – time + duration.

AudioChunk supports slicing using Python’s [x:y] syntax, so the above attributes will be updated appropriately on the returned slice.
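Because an AudioChunk is an ordinary float32 ndarray, custom measurements can use plain numpy; for example, this illustrative RMS calculation (the built-in stbt.get_rms_volume covers the common case):

```python
import numpy

def rms(samples):
    # samples: a 1-D numpy.float32 array of values between -1.0 and 1.0,
    # such as an AudioChunk.
    return float(numpy.sqrt(numpy.mean(numpy.square(samples))))
```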

stbt.Color

class stbt.Color(hexstring: str)
class stbt.Color(blue: int, green: int, red: int)
class stbt.Color(bgr: Tuple[int, int, int])

A BGR color, optionally with an alpha (transparency) value.

A Color can be created from an HTML-style hex string:

>>> Color('#f77f00')
Color('#f77f00')

Or from Blue, Green, Red values in the range 0-255:

>>> Color(0, 127, 247)
Color('#f77f00')

Note: When you specify the colors in this way, the BGR order is the opposite of the HTML-style RGB order. This is for compatibility with the way OpenCV stores colors.

stbt.color_diff

stbt.color_diff(frame=None, *, background_color=None, foreground_color=None, threshold=0.05, erode=False)

Calculate euclidean color distance in a perceptually uniform colorspace.

Calculates the distance of each pixel in frame against the color specified in background_color or foreground_color. The output is a binary (black and white) image.

Parameters
  • frame (stbt.Frame) – The video frame to process.

  • background_color (Color) – The color to diff against. Output pixels will be white where the color distance is greater than threshold. Use this to remove a background of a particular color.

  • foreground_color (Color) – The color to diff against. Output pixels will be white where the color distance is smaller than threshold. Use this to find a foreground feature of a particular color, such as text or the selection/focus.

  • threshold (float) – Binarization threshold in the range [0., 1.]. Foreground pixels will be set to white, background pixels to black. A value of 0.01 means a barely-noticeable difference to human perception. To disable binarization set threshold=None; the output will be a grayscale image.

  • erode (bool) – Run the thresholded differences through an erosion algorithm to remove noise or small differences (less than 3px).

Return type

numpy.ndarray

Returns

Binary (black & white) image, or grayscale image if threshold=None.

Added in v33.

stbt.ConfigurationError

exception stbt.ConfigurationError

Bases: Exception

An error with your stbt configuration file.

stbt.ConfirmMethod

class stbt.ConfirmMethod(value)

An enum. See MatchParameters for documentation of these values.

NONE = 'none'
ABSDIFF = 'absdiff'
NORMED_ABSDIFF = 'normed-absdiff'

stbt.crop

stbt.crop(frame, region)

Returns an image containing the specified region of frame.

Parameters

frame (stbt.Frame or numpy.ndarray) – An image in OpenCV format (for example as returned by frames, get_frame and load_image, or the frame parameter of MatchResult).

Returns

An OpenCV image (numpy.ndarray) containing the specified region of the source frame. This is a view onto the original data, so if you want to modify the cropped image call its copy() method first.
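The view semantics can be seen with plain numpy slicing, which behaves the same way (the frame below is a synthetic black image, not captured from a device):

```python
import numpy

# A black 720p BGR frame, in the same format as stbt.Frame.
frame = numpy.zeros((720, 1280, 3), dtype=numpy.uint8)

# Slicing returns a view onto the same data (as stbt.crop does)...
view = frame[100:200, 100:200]
view[:] = 255  # ...so writing to the view also modifies `frame`.

# A copy is independent of the original:
copy = view.copy()
copy[:] = 0  # `frame` is unaffected.
```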

stbt.detect_motion

stbt.detect_motion(timeout_secs=10, noise_threshold=None, mask=Region.ALL, region=Region.ALL, frames=None)

Generator that yields a sequence of one MotionResult for each frame processed from the device-under-test’s video stream.

The MotionResult indicates whether any motion was detected.

Use it in a for loop like this:

for motionresult in stbt.detect_motion():
    ...

In most cases you should use wait_for_motion instead.

Parameters
  • timeout_secs (int or float or None) – A timeout in seconds. After this timeout the iterator will be exhausted. That is, a for loop like for m in detect_motion(timeout_secs=10) will terminate after 10 seconds. If timeout_secs is None then the iterator will yield frames forever. Note that you can stop iterating (for example with break) at any time.

  • noise_threshold (float) –

    The amount of noise to ignore. This is only useful with noisy analogue video sources. Valid values range from 0 (all differences are considered noise; a value of 0 will never report motion) to 1.0 (any difference is considered motion).

    This defaults to 0.84. You can override the global default value by setting noise_threshold in the [motion] section of .stbt.conf.

  • mask (str|numpy.ndarray|Mask|Region) – A Region or a mask that specifies which parts of the image to analyse. This accepts anything that can be converted to a Mask using stbt.load_mask. See Regions and Masks.

  • region (Region) – Deprecated synonym for mask. Use mask instead.

  • frames (Iterator[stbt.Frame]) – An iterable of video-frames to analyse. Defaults to stbt.frames().

Changed in v33: mask accepts anything that can be converted to a Mask using load_mask. The region parameter is deprecated; pass your Region to mask instead. You can’t specify mask and region at the same time.
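For example, here is a sketch of a helper that consumes the iterator until motion has stopped (wait_for_motion_to_stop is illustrative; it only relies on each result being truthy when motion was detected, so you could pass it stbt.detect_motion()):

```python
def wait_for_motion_to_stop(results, consecutive=10):
    # results: an iterable of MotionResult-like objects whose truthiness
    # indicates whether motion was detected in that frame.
    still_count = 0
    for result in results:
        still_count = 0 if result else still_count + 1
        if still_count >= consecutive:
            return True
    return False  # the iterator was exhausted before motion stopped
```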

stbt.detect_pages

stbt.detect_pages(frame=None, candidates=None, test_pack_root='')

Find Page Objects that match the given frame.

This function tries each of the Page Objects defined in your test-pack (that is, subclasses of stbt.FrameObject) and returns an instance of each Page Object that is visible (according to the object’s is_visible property).

This is a Python generator that yields 1 Page Object at a time. If your code only consumes the first object (like in the example below), detect_pages will try each Page Object class until it finds a match, yield it to your code, and then it won’t waste time trying other Page Object classes:

page = next(stbt.detect_pages())

To get all the matching pages you can iterate like this:

for page in stbt.detect_pages():
    print(type(page))

Or create a list like this:

pages = list(stbt.detect_pages())

Parameters
  • frame (stbt.Frame) – The video frame to process; if not specified, a new frame is grabbed from the device-under-test by calling stbt.get_frame.

  • candidates (Sequence[Type[stbt.FrameObject]]) – The Page Object classes to try. Note that this is a list of the classes themselves, not instances of those classes. If candidates isn’t specified, detect_pages will use static analysis to find all of the Page Objects defined in your test-pack.

  • test_pack_root (str) – A subdirectory of your test-pack to search for Page Object definitions, used when candidates isn’t specified. Defaults to the entire test-pack.

Return type

Iterator[stbt.FrameObject]

Returns

An iterator of Page Object instances that match the given frame.

Added in v32.

stbt.Direction

class stbt.Direction(value)

An enumeration.

HORIZONTAL = 'horizontal'

Process the image from left to right

VERTICAL = 'vertical'

Process the image from top to bottom

stbt.draw_text

stbt.draw_text(text, duration_secs=3)

Write the specified text to the output video.

Parameters
  • text (str) – The text to write.

  • duration_secs (int or float) – The number of seconds to display the text.

stbt.find_file

stbt.find_file(filename: str) → str

Searches for the given filename relative to the directory of the caller.

When Stb-tester runs a test, the “current working directory” is not the same as the directory of the test-pack git checkout. If you want to read a file that’s committed to git (for example a CSV file with data that your test needs) you can use this function to find it. For example:

f = open(stbt.find_file("my_data.csv"))

If the file is not found in the directory of the Python file that called find_file, this will continue searching in the directory of that function’s caller, and so on, until it finds the file. This allows you to use find_file in a helper function that takes a filename from its caller.

This is the same algorithm used by load_image.

Parameters

filename (str) – A relative filename.

Return type

str

Returns

Absolute filename.

Raises

FileNotFoundError if the file can’t be found.

Added in v33.

stbt.find_regions_by_color

stbt.find_regions_by_color(color, *, frame=None, threshold=0.05, erode=False, mask=Region.ALL, min_size=(20, 20), max_size=None)

Find contiguous regions of a particular color.

Parameters
  • color (Color) – The color to search for.

  • frame (stbt.Frame) – The video frame to process; if not specified, a new frame is grabbed from the device-under-test.

  • threshold (float) – Binarization threshold in the range [0., 1.], as in color_diff.

  • erode (bool) – Remove noise or small differences, as in color_diff.

  • mask (str|numpy.ndarray|Mask|Region) – A Region or a mask that specifies which parts of the image to analyse. See Regions and Masks.

  • min_size (tuple) – The minimum size (width, height) of each region; smaller regions are ignored.

  • max_size (tuple) – The maximum size (width, height) of each region; larger regions are ignored.

Return type

list[stbt.Region]

Returns

A list of stbt.Region instances.

Added in v33.

stbt.find_selection_from_background

stbt.find_selection_from_background(image, max_size, min_size=None, frame=None, mask=Region.ALL, threshold=25, erode=True)

Checks whether frame matches image, calculating the region where there are any differences. The region where frame doesn’t match the image is assumed to be the selection. This allows us to simultaneously detect the presence of a screen (used to implement a stbt.FrameObject class’s is_visible property) as well as finding the selection.

For example, to find the selection of an on-screen keyboard, image would be a screenshot of the keyboard without any selection. You may need to construct this screenshot artificially in an image editor by merging two different screenshots.

Unlike stbt.match, image must be the same size as frame.

Parameters
  • image (str|stbt.Image) –

    The background to match against. It can be the filename of a PNG file on disk, or an image previously loaded with stbt.load_image.

    If it has an alpha channel, any transparent pixels are masked out (that is, the alpha channel is ANDed with mask). This image must be the same size as frame.

  • max_size (stbt.Size) – The maximum size (width, height) of the differing region. If the differences between image and frame are larger than this in either dimension, the function will return a falsey result.

  • min_size (stbt.Size) – The minimum size (width, height) of the differing region (optional). If the differences between image and frame are smaller than this in either dimension, the function will return a falsey result.

  • frame (stbt.Frame) – If this is specified it is used as the video frame to search in; otherwise a new frame is grabbed from the device-under-test. This is an image in OpenCV format (for example as returned by stbt.frames and stbt.get_frame).

  • mask (str|numpy.ndarray|Mask|Region) – A Region or a mask that specifies which parts of the image to analyse. This accepts anything that can be converted to a Mask using stbt.load_mask. See Regions and Masks.

  • threshold (int) – Threshold for differences between image and frame for it to be considered a difference. This is a colour distance between pixels in image and frame. 0 means the colours have to match exactly. 255 would mean that even white (255, 255, 255) would match black (0, 0, 0).

  • erode (bool) – By default we pass the thresholded differences through an erosion algorithm to remove noise or small anti-aliasing differences. If your selection is a single line less than 3 pixels wide, set this to False.

Returns

An object that will evaluate to true if image and frame matched with a difference smaller than max_size. The object has the following attributes:

  • matched (bool) – True if the image and the frame matched with a difference smaller than max_size.

  • region (stbt.Region) – The bounding box that contains the selection (that is, the differences between image and frame).

  • mask_region (stbt.Region) – The region of the frame that was analysed, as given in the function’s mask parameter.

  • image (stbt.Image) – The reference image given to find_selection_from_background.

  • frame (stbt.Frame) – The video-frame that was analysed.

Added in v32.

Changed in v33: mask accepts anything that can be converted to a Mask using load_mask (previously it only accepted a Region).

stbt.Frame

class stbt.Frame(array, dtype=None, order=None, time=None, _draw_sink=None)

A frame of video.

A Frame is what you get from stbt.get_frame and stbt.frames. It is a subclass of numpy.ndarray, which is the type that OpenCV uses to represent images. Data is stored in 8-bit, 3 channel BGR format.

In addition to the members inherited from numpy.ndarray, Frame defines the following attributes:

Variables

time (float) – The wall-clock time when this video-frame was captured, as number of seconds since the unix epoch (1970-01-01T00:00:00Z). This is the same format used by the Python standard library function time.time.

stbt.FrameObject

class stbt.FrameObject(frame=None)

Base class for user-defined Page Objects.

FrameObjects are Stb-tester’s implementation of the Page Object pattern. A FrameObject is a class that uses Stb-tester APIs like stbt.match() and stbt.ocr() to extract information from the screen, and it provides a higher-level API in the vocabulary and user-facing concepts of your own application.

(Diagram: the Frame Object pattern, based on Martin Fowler’s PageObject diagram.)

Stb-tester uses a separate instance of your FrameObject class for each frame of video captured from the device-under-test (hence the name “Frame Object”). Stb-tester provides additional tooling for writing, testing, and maintenance of FrameObjects.

To define your own FrameObject class:

  • Derive from stbt.FrameObject.

  • Define an is_visible property (using Python’s @property decorator) that returns True or False.

  • Define any other properties for information that you want to extract from the frame.

  • Inside each property, when you call an image-processing function (like stbt.match or stbt.ocr) you must specify the parameter frame=self._frame.

The following behaviours are provided automatically by the FrameObject base class:

  • Truthiness: A FrameObject instance is considered “truthy” if it is visible. Any other properties (apart from is_visible) will return None if the object isn’t visible.

  • Immutability: FrameObjects are immutable, because they represent information about a specific frame of video – in other words, an instance of a FrameObject represents the state of the device-under-test at a specific point in time. If you define any methods that change the state of the device-under-test, they should return a new FrameObject instance instead of modifying self.

  • Caching: Each property will be cached the first time it is used. This allows writing testcases in a natural way, while expensive operations like ocr will only be done once per frame.

The FrameObject base class defines several convenient methods and attributes (see below).

For more details see Object Repository in the Stb-tester manual.

_fields

A tuple containing the names of the public properties.

__init__(frame=None)

The default constructor takes an optional frame of video; if the frame is not provided, it will grab a frame from the device-under-test.

If you override the constructor in your derived class (for example to accept additional parameters), make sure to accept an optional frame parameter and supply it to the super-class’s constructor.

__repr__()

The object’s string representation shows all its public properties.

We only print properties we have already calculated, to avoid triggering expensive calculations.

__bool__()

Delegates to is_visible. The object will only be considered True if it is visible.

__eq__(other)

Two instances of the same FrameObject type are considered equal if the values of all the public properties match, even if the underlying frame is different. All falsey FrameObjects of the same type are equal.

__hash__()

Two instances of the same FrameObject type are considered equal if the values of all the public properties match, even if the underlying frame is different. All falsey FrameObjects of the same type are equal.

refresh(frame=None, **kwargs)

Returns a new FrameObject instance with a new frame. self is not modified.

refresh is used by navigation functions that modify the state of the device-under-test.

By default refresh returns a new object of the same class as self, but you can override the return type by implementing refresh in your derived class.

Any additional keyword arguments are passed on to __init__.

stbt.frames

stbt.frames(timeout_secs=None)

Generator that yields video frames captured from the device-under-test.

For example:

for frame in stbt.frames():
    # Do something with each frame here.
    # Remember to add a termination condition to `break` or `return`
    # from the loop, or specify `timeout_secs` — otherwise you'll have
    # an infinite loop!
    ...

See also stbt.get_frame.

Parameters

timeout_secs (int or float or None) – A timeout in seconds. After this timeout the iterator will be exhausted. That is, a for loop like for f in stbt.frames(timeout_secs=10) will terminate after 10 seconds. If timeout_secs is None (the default) then the iterator will yield frames forever but you can stop iterating (for example with break) at any time.

Return type

Iterator[stbt.Frame]

Returns

An iterator of frames in OpenCV format (stbt.Frame).

stbt.get_config

stbt.get_config(section, key, default=NoDefault, type_=str)

Read the value of key from section of the test-pack configuration file.

For example, if your configuration file looks like this:

[test_pack]
stbt_version = 30

[my_company_name]
backend_ip = 192.168.1.23

then you can read the value from your test script like this:

backend_ip = stbt.get_config("my_company_name", "backend_ip")

This searches in the .stbt.conf file at the root of your test-pack, and in the config/test-farm/<hostname>.conf file matching the hostname of the stb-tester device where the script is running. Values in the host-specific config file override values in .stbt.conf. See Configuration files for more details.

Test scripts can use get_config to read tags that you specify at run-time: see Automatic configuration keys. For example:

my_tag_value = stbt.get_config("result.tags", "my tag name")

Raises ConfigurationError if the specified section or key is not found, unless default is specified (in which case default is returned).

Changed in v32: Allow specifying None as the default value (previously None would be treated as if you hadn’t specified any default value).

stbt.get_frame

stbt.get_frame()

Grabs a video frame from the device-under-test.

Return type

stbt.Frame

Returns

The most recent video frame in OpenCV format.

Most Stb-tester APIs (stbt.match, stbt.FrameObject constructors, etc.) will call get_frame if a frame isn’t specified explicitly.

If you call get_frame twice very quickly (faster than the video-capture framerate) you might get the same frame twice. To block until the next frame is available, use stbt.frames.

To save a frame to disk pass it to cv2.imwrite. Note that any file you write to the current working directory will appear as an artifact in the test-run results.

stbt.get_rms_volume

stbt.get_rms_volume(duration_secs=3, stream=None) → RmsVolumeResult

Calculate the average RMS volume of the audio over the given duration.

For example, to check that your mute button works:

stbt.press('KEY_MUTE')
time.sleep(1)  # <- give it some time to take effect
assert stbt.get_rms_volume().amplitude < 0.001  # -60 dB

Parameters
  • duration_secs (int or float) – The window over which you should average, in seconds. Defaults to 3s in accordance with short-term loudness from the EBU TECH 3341 specification.

  • stream (Iterator[AudioChunk]) – Audio stream to measure. Defaults to audio_chunks().

Raises

ZeroDivisionError – If duration_secs is shorter than one sample or stream contains no samples.

Return type

RmsVolumeResult

stbt.Grid

class stbt.Grid(region, cols=None, rows=None, data=None)

A grid with items arranged left to right, then down.

For example a keyboard, or a grid of posters, arranged like this:

ABCDE
FGHIJ
KLMNO

All items must be the same size, and the spacing between them must be consistent.

This class is useful for converting between pixel coordinates on a screen, to x & y indexes into the grid positions.

Parameters
  • region (Region) – Where the grid is on the screen.

  • cols (int) – Width of the grid, in number of columns.

  • rows (int) – Height of the grid, in number of rows.

  • data – A 2D array (list of lists) containing data to associate with each cell. The data can be of any type. For example, if you are modelling a grid-shaped keyboard, the data could be the letter at each grid position. If data is specified, then cols and rows are optional.

class Cell(index, position, region, data)

A single cell in a Grid.

Don’t construct Cells directly; create a Grid instead.

Variables
  • index (int) – The cell’s 1D index into the grid, starting from 0 at the top left, counting along the top row left to right, then the next row left to right, etc.

  • position (Position) –

    The cell’s 2D index (x, y) into the grid (zero-based). For example in this grid “I” is index 8 and position (x=3, y=1):

    ABCDE
    FGHIJ
    KLMNO
    

  • region (Region) – Pixel coordinates (relative to the entire frame) of the cell’s bounding box.

  • data – The data corresponding to the cell, if data was specified when you created the Grid.

get(index=None, position=None, region=None, data=None)

Retrieve a single cell in the Grid.

For example, let’s say that you’re looking for the selected item in a grid by matching a reference image of the selection border. Then you can find the (x, y) position in the grid of the selection, like this:

selection = stbt.match("selection.png")
cell = grid.get(region=selection.region)
position = cell.position

You must specify one (and only one) of index, position, region, or data. For the meaning of these parameters see Grid.Cell.

A negative index counts backwards from the end of the grid (so -1 is the bottom right position).

region doesn’t have to match the cell’s pixel coordinates exactly; instead, this returns the cell that contains the center of the given region.

Returns

The Grid.Cell that matches the specified query; raises IndexError if the index/position/region is out of bounds or the data is not found.

stbt.Image

class stbt.Image

An image, possibly loaded from disk.

This is a subclass of numpy.ndarray, which is the type that OpenCV uses to represent images.

In addition to the members inherited from numpy.ndarray, Image defines the following attributes:

Variables
  • filename (str or None) – The filename that was given to stbt.load_image.

  • absolute_filename (str or None) – The absolute path resolved by stbt.load_image.

  • relative_filename (str or None) – The path resolved by stbt.load_image, relative to the root of the test-pack git repo.

Added in v32.

stbt.is_screen_black

stbt.is_screen_black(frame: Optional[Frame] = None, mask: Mask | Region | str = Region.ALL, threshold: Optional[int] = None, region: Region = Region.ALL) → _IsScreenBlackResult

Check for the presence of a black screen in a video frame.

Parameters
  • frame (Frame) – If this is specified it is used as the video frame to check; otherwise a new frame is grabbed from the device-under-test. This is an image in OpenCV format (for example as returned by frames and get_frame).

  • mask (str|numpy.ndarray|Mask|Region) – A Region or a mask that specifies which parts of the image to analyse. This accepts anything that can be converted to a Mask using stbt.load_mask. See Regions and Masks.

  • threshold (int) – Even when a video frame appears to be black, the intensity of its pixels is not always 0. To differentiate almost-black from non-black pixels, a binary threshold is applied to the frame. The threshold value is in the range 0 (black) to 255 (white). The global default (20) can be changed by setting threshold in the [is_screen_black] section of .stbt.conf.

  • region (Region) – Deprecated synonym for mask. Use mask instead.

Returns

An object that will evaluate to true if the frame was black, or false if not black. The object has the following attributes:

  • black (bool) – True if the frame was black.

  • frame (stbt.Frame) – The video frame that was analysed.

Changed in v33: mask accepts anything that can be converted to a Mask using load_mask. The region parameter is deprecated; pass your Region to mask instead. You can’t specify mask and region at the same time.
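The thresholding idea behind the threshold parameter can be shown with a simplified pure-Python sketch on a flat list of grayscale pixel intensities (not the actual implementation, which operates on full video frames):

```python
# Simplified sketch of the black-screen check (not stbt's
# implementation): a frame is "black" if every pixel's intensity is
# below the binary threshold.

def is_black(pixels, threshold=20):
    """True if every pixel (0-255) is darker than `threshold`."""
    return all(p < threshold for p in pixels)

print(is_black([0, 3, 12, 19]))   # → True  (all almost-black)
print(is_black([0, 3, 12, 200]))  # → False (one bright pixel)
```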

stbt.Keyboard

class stbt.Keyboard(*, mask=Region.ALL, navigate_timeout=60)

Models the behaviour of an on-screen keyboard.

You customise it for the appearance & behaviour of the keyboard you’re testing by specifying two things:

  • A Directed Graph that specifies the navigation between every key on the keyboard. For example: When A is selected, pressing KEY_RIGHT on the remote control goes to B, and so on.

  • A Page Object that tells you which key is currently selected on the screen. See the page parameter to enter_text and navigate_to.

The constructor takes the following parameters:

Parameters
  • mask (str|numpy.ndarray|Mask|Region) – A mask to use when calling stbt.press_and_wait to determine when the current selection has finished moving. If the search page has a blinking cursor you need to mask out the region where the cursor can appear, as well as any other regions with dynamic content (such as a picture-in-picture with live TV). See stbt.press_and_wait for more details about the mask.

  • navigate_timeout (int or float) – Timeout (in seconds) for navigate_to. In practice navigate_to should only time out if you have a bug in your model or in the real keyboard under test.

For example, let’s model the lowercase keyboard from the YouTube search page on Apple TV:

[Image: the YouTube search keyboard on Apple TV]
# 1. Specify the keyboard's navigation model
# ------------------------------------------

kb = stbt.Keyboard()

# The 6x6 grid of letters & numbers:
kb.add_grid(stbt.Grid(stbt.Region(x=125, y=175, right=425, bottom=475),
                      data=["abcdef",
                            "ghijkl",
                            "mnopqr",
                            "stuvwx",
                            "yz1234",
                            "567890"]))
# The 3x1 grid of special keys:
kb.add_grid(stbt.Grid(stbt.Region(x=125, y=480, right=425, bottom=520),
                      data=[[" ", "DELETE", "CLEAR"]]))

# The `add_grid` calls (above) defined the transitions within each grid.
# Now we need to specify the transitions from the bottom row of numbers
# to the larger keys below them:
#
#     5 6 7 8 9 0
#     ↕ ↕ ↕ ↕ ↕ ↕
#     SPC DEL CLR
#
# Note that `add_transition` adds the symmetrical transition (KEY_UP)
# by default.
kb.add_transition("5", " ", "KEY_DOWN")
kb.add_transition("6", " ", "KEY_DOWN")
kb.add_transition("7", "DELETE", "KEY_DOWN")
kb.add_transition("8", "DELETE", "KEY_DOWN")
kb.add_transition("9", "CLEAR", "KEY_DOWN")
kb.add_transition("0", "CLEAR", "KEY_DOWN")

# 2. A Page Object that describes the appearance of the keyboard
# --------------------------------------------------------------

class SearchKeyboard(stbt.FrameObject):
    """The YouTube search keyboard on Apple TV"""

    @property
    def is_visible(self):
        # Implementation left to the reader. Should return True if the
        # keyboard is visible and focused.
        ...

    @property
    def selection(self):
        """Returns the selected key.

        Used by `Keyboard.enter_text` and `Keyboard.navigate_to`.

        Note: The reference image (selection.png) is carefully cropped
        so that it will match the normal keys as well as the larger
        "SPACE", "DELETE" and "CLEAR" keys. The middle of the image
        (where the key's label appears) is transparent so that it will
        match any key.
        """
        m = stbt.match("selection.png", frame=self._frame)
        if m:
            return kb.find_key(region=m.region)
        else:
            return None

    # Your Page Object can also define methods for your test scripts to
    # use:

    def enter_text(self, text):
        return kb.enter_text(text.lower(), page=self)

    def clear(self):
        page = kb.navigate_to("CLEAR", page=self)
        stbt.press_and_wait("KEY_OK")
        return page.refresh()

For a detailed tutorial, including an example that handles multiple keyboard modes (lowercase, uppercase, and symbols) see our article Testing on-screen keyboards with Stb-tester.

stbt.Keyboard was added in v31.

Changed in v32:

  • Added support for keyboards with different modes (such as uppercase, lowercase, and symbols).

  • Changed the internal representation of the Directed Graph. Manipulating the networkx graph directly is no longer supported.

  • Removed stbt.Keyboard.parse_edgelist and stbt.grid_to_navigation_graph. Instead, first create the Keyboard object, and then use add_key, add_transition, add_edgelist, and add_grid to build the model of the keyboard.

  • Removed the stbt.Keyboard.Selection type. Instead, your Page Object’s selection property should return a Key value obtained from find_key.

Changed in v33:

  • Added class stbt.Keyboard.Key (the type returned from find_key). This used to be a private API, but now it is public so that you can use it in type annotations for your Page Object’s selection property.

  • Tries to recover from missed or double keypresses. To disable this behaviour specify retries=0 when calling enter_text or navigate_to.

  • Increased default navigate_timeout from 20 to 60 seconds.

class Key(name: Optional[str] = None, text: Optional[str] = None, region: Optional[Region] = None, mode: Optional[str] = None)

Represents a key on the on-screen keyboard.

This is returned by stbt.Keyboard.find_key. Don’t create instances of this class directly.

It has attributes name, text, region, and mode. See Keyboard.add_key.

add_key(name, text=None, region=None, mode=None)

Add a key to the model (specification) of the keyboard.

Parameters
  • name (str) – The text or label you can see on the key.

  • text (str) – The text that will be typed if you press OK on the key. If not specified, defaults to name if name is exactly 1 character long, otherwise it defaults to "" (an empty string). An empty string indicates that the key doesn’t type any text when pressed (for example a “caps lock” key to change modes).

  • region (stbt.Region) – The location of this key on the screen. If specified, you can look up a key’s name & text by region using find_key(region=...).

  • mode (str) – The mode that the key belongs to (such as “lowercase”, “uppercase”, “shift”, or “symbols”) if your keyboard supports different modes. Note that the same key, if visible in different modes, needs to be modelled as separate keys (for example (name=" ", mode="lowercase") and (name=" ", mode="uppercase")) because their navigation connections are totally different: pressing up from the former goes to lowercase “c”, but pressing up from the latter goes to uppercase “C”. mode is optional if your keyboard doesn’t have modes, or if you only need to use the default mode.

Returns

The added key (stbt.Keyboard.Key). This is an object that you can use with add_transition.

Raises

ValueError if the key is already present in the model.

find_key(name=None, text=None, region=None, mode=None)

Find a key in the model (specification) of the keyboard.

Specify one or more of name, text, region, and mode (as many as are needed to uniquely identify the key).

For example, your Page Object’s selection property would do some image processing to find the selection on screen, and then use find_key to identify the current key based on the region of that selection.

Returns

A stbt.Keyboard.Key object that unambiguously identifies the key in the model. It has “name”, “text”, “region”, and “mode” attributes. You can use this object as the source or target parameter of add_transition.

Raises

ValueError if the key does not exist in the model, or if it can’t be identified unambiguously (that is, if two or more keys match the given parameters).

find_keys(name=None, text=None, region=None, mode=None)

Find matching keys in the model of the keyboard.

This is like find_key, but it returns a list containing any keys that match the given parameters. For example, if there is a space key in both the lowercase and uppercase modes of the keyboard, calling find_keys(text=" ") will return a list of 2 objects [Key(text=" ", mode="lowercase"), Key(text=" ", mode="uppercase")].

This method doesn’t raise an exception; the list will be empty if no keys matched.

add_transition(source, target, keypress, mode=None, symmetrical=True)

Add a transition to the model (specification) of the keyboard.

For example: To go from “A” to “B”, press “KEY_RIGHT” on the remote control.

Parameters
  • source – The starting key. This can be a Key object returned from add_key or find_key; or it can be a dict that contains one or more of “name”, “text”, “region”, and “mode” (as many as are needed to uniquely identify the key using find_key). For convenience, a single string is treated as “name” (but this may not be enough to uniquely identify the key if your keyboard has multiple modes).

  • target – The key you’ll land on after pressing the button on the remote control. This accepts the same types as source.

  • keypress (str) – The name of the key you need to press on the remote control, for example “KEY_RIGHT”.

  • mode (str) –

    Optional keyboard mode that applies to both source and target. For example, the two following calls are the same:

    add_transition("c", " ", "KEY_DOWN", mode="lowercase")
    
    add_transition({"name": "c", "mode": "lowercase"},
                   {"name": " ", "mode": "lowercase"},
                   "KEY_DOWN")
    

  • symmetrical (bool) – By default, if the keypress is “KEY_LEFT”, “KEY_RIGHT”, “KEY_UP”, or “KEY_DOWN”, this will automatically add the opposite transition. For example, if you call add_transition("a", "b", "KEY_RIGHT") this will also add the transition ("b", "a", "KEY_LEFT"). Set this parameter to False to disable this behaviour. This parameter has no effect if keypress is not one of the 4 directional keys.

Raises

ValueError if the source or target keys do not exist in the model, or if they can’t be identified unambiguously.

add_edgelist(edgelist, mode=None, symmetrical=True)

Add keys and transitions specified in a string in “edgelist” format.

Parameters
  • edgelist (str) –

    A multi-line string where each line is in the format <source_name> <target_name> <keypress>. For example, the specification for a qwerty keyboard might look like this:

    '''
    Q W KEY_RIGHT
    Q A KEY_DOWN
    W E KEY_RIGHT
    ...
    '''
    

    The name “SPACE” will be converted to the space character (" "). This is because space is used as the field separator; otherwise it wouldn’t be possible to specify the space key using this format.

    Lines starting with “###” are ignored (comments).

  • mode (str) – Optional mode that applies to all the keys specified in edgelist. See add_key for more details about modes. It isn’t possible to specify transitions between different modes using this edgelist format; use add_transition for that.

  • symmetrical (bool) – See add_transition.
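The edgelist format described above is easy to parse; here is a standalone sketch (not stbt's implementation) showing the rules in action: one transition per line, "SPACE" decodes to the space key, and "###" lines are comments:

```python
def parse_edgelist(edgelist):
    """Return a list of (source, target, keypress) transitions."""
    def decode(name):
        # "SPACE" stands in for the space key, because space is the
        # field separator.
        return " " if name == "SPACE" else name
    transitions = []
    for line in edgelist.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("###"):
            continue  # blank line or comment
        source, target, keypress = line.split()
        transitions.append((decode(source), decode(target), keypress))
    return transitions

print(parse_edgelist('''
### Top-left corner of a qwerty keyboard:
Q W KEY_RIGHT
Q A KEY_DOWN
SPACE Q KEY_UP
'''))
# → [('Q', 'W', 'KEY_RIGHT'), ('Q', 'A', 'KEY_DOWN'), (' ', 'Q', 'KEY_UP')]
```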

add_grid(grid, mode=None)

Add keys, and transitions between them, to the model of the keyboard.

If the keyboard (or part of the keyboard) is arranged in a regular grid, you can use stbt.Grid to easily specify the positions of those keys. This only works if the columns & rows are all of the same size.

If your keyboard has keys outside the grid, you will still need to specify the transitions from the edge of the grid onto the outside keys, using add_transition. See the example above.

Parameters
  • grid (stbt.Grid) – The grid to model. The data associated with each cell will be used for the key’s “name” attribute (see add_key).

  • mode (str) – Optional mode that applies to all the keys specified in grid. See add_key for more details about modes.

Returns

A new stbt.Grid where each cell’s data is a key object that can be used with add_transition (for example to define additional transitions from the edges of this grid onto other keys).

enter_text(text, page, verify_every_keypress=False, retries=2)

Enter the specified text using the on-screen keyboard.

Parameters
  • text (str) – The text to enter. If your keyboard only supports a single case then you need to convert the text to uppercase or lowercase, as appropriate, before passing it to this method.

  • page (stbt.FrameObject) –

    An instance of a stbt.FrameObject sub-class that describes the appearance of the on-screen keyboard. It must implement the following:

    • selection (Key) — property that returns a Key object, as returned from find_key.

    When you call enter_text, page must represent the current state of the device-under-test.

  • verify_every_keypress (bool) –

    If True, we will read the selected key after every keypress and assert that it matches the model. If False (the default) we will only verify the selected key corresponding to each of the characters in text. For example: to get from A to D you need to press KEY_RIGHT three times. The default behaviour will only verify that the selected key is D after the third keypress. This is faster, and closer to the way a human uses the on-screen keyboard.

    Set this to True to help debug your model if enter_text is behaving incorrectly.

  • retries (int) – Number of recovery attempts if a keypress doesn’t have the expected effect according to the model. Allows recovering from missed keypresses and double keypresses.

Returns

A new FrameObject instance of the same type as page, reflecting the device-under-test’s new state after the keyboard navigation completed.

Typically your FrameObject will provide its own enter_text method, so your test scripts won’t call this Keyboard class directly. See the example above.

navigate_to(target, page, verify_every_keypress=False, retries=2)

Move the selection to the specified key.

This won’t press KEY_OK on the target; it only moves the selection there.

Parameters
  • target – This can be a Key object returned from find_key, or it can be a dict that contains one or more of “name”, “text”, “region”, and “mode” (as many as are needed to identify the key using find_keys). If more than one key matches the given parameters, navigate_to will navigate to the closest one. For convenience, a single string is treated as “name”.

  • page (stbt.FrameObject) – See enter_text.

  • verify_every_keypress (bool) – See enter_text.

  • retries (int) – See enter_text.

Returns

A new FrameObject instance of the same type as page, reflecting the device-under-test’s new state after the keyboard navigation completed.
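Conceptually, navigate_to finds the shortest sequence of keypresses through the directed graph of transitions. A minimal illustrative sketch using breadth-first search (the real implementation also verifies the on-screen selection after each keypress, handles modes, and retries after missed keypresses):

```python
from collections import deque

def shortest_path(transitions, start, goal):
    """transitions: {(source, keypress): target}. Returns a list of
    keypresses from start to goal, or None if goal is unreachable."""
    # Build an adjacency list from the transition table:
    graph = {}
    for (source, keypress), target in transitions.items():
        graph.setdefault(source, []).append((keypress, target))
    # Breadth-first search guarantees the fewest keypresses:
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        key, presses = queue.popleft()
        if key == goal:
            return presses
        for keypress, target in graph.get(key, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, presses + [keypress]))
    return None

transitions = {("A", "KEY_RIGHT"): "B", ("B", "KEY_RIGHT"): "C",
               ("B", "KEY_LEFT"): "A", ("C", "KEY_LEFT"): "B"}
print(shortest_path(transitions, "A", "C"))  # → ['KEY_RIGHT', 'KEY_RIGHT']
```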

stbt.last_keypress

stbt.last_keypress()

Returns information about the last key-press sent to the device under test.

See the return type of stbt.press.

Added in v32.

stbt.load_image

stbt.load_image(filename, flags=None, color_channels=None) → Image

Find & read an image from disk.

If given a relative filename, this will search in the directory of the Python file that called load_image, then in the directory of that file’s caller, and so on, until it finds the file. This allows you to use load_image in a helper function that takes a filename from its caller.

Finally this will search in the current working directory. This allows loading an image that you had previously saved to disk during the same test run.

This is the same search algorithm used by stbt.match and similar functions.
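The search order can be sketched as a plain directory lookup (a simplified stand-in for the real algorithm; `find_file` and its parameters are hypothetical):

```python
import os

def find_file(filename, caller_dirs):
    """Search each caller's directory in order, then the current
    working directory; return the first match or None."""
    if os.path.isabs(filename):
        return filename if os.path.isfile(filename) else None
    for d in list(caller_dirs) + [os.getcwd()]:
        candidate = os.path.join(d, filename)
        if os.path.isfile(candidate):
            return candidate
    return None

# e.g. find_file("guide.png", [dir_of_caller, dir_of_callers_caller])
```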

Parameters
  • filename (str) – A relative or absolute filename.

  • flags – Flags to pass to cv2.imread. Deprecated; use color_channels instead.

  • color_channels (Tuple[int]) –

    Tuple of acceptable numbers of color channels for the output image: 1 for grayscale, 3 for color, and 4 for color with an alpha (transparency) channel. For example, color_channels=(3, 4) will accept color images with or without an alpha channel. Defaults to (3, 4).

    If the image doesn’t match the specified color_channels it will be converted to the specified format.

Return type

stbt.Image

Returns

An image in OpenCV format — that is, a numpy.ndarray of 8-bit values. With the default color_channels parameter this will be 3 channels BGR, or 4 channels BGRA if the file has transparent pixels.

Raises

IOError if the specified path doesn’t exist or isn’t a valid image file.

  • Changed in v32: Return type is now stbt.Image, which is a numpy.ndarray sub-class with additional attributes filename, relative_filename and absolute_filename.

  • Changed in v32: Allows passing an image (numpy.ndarray or stbt.Image) instead of a string, in which case this function returns the given image.

  • Changed in v33: Added the color_channels parameter and deprecated flags. The image will always be converted to the format specified by color_channels (previously it was only converted to the format specified by flags if it was given as a filename, not as a stbt.Image or numpy array). The returned numpy array is read-only.

stbt.load_mask

stbt.load_mask(mask: Mask | Region | str) → Mask

Used to load a mask from disk, or to create a mask from a Region.

A mask is a black & white image (the same size as the video-frame) that specifies which parts of the frame to process: White pixels select the area to process, black pixels the area to ignore.

In most cases you don’t need to call load_mask directly; Stb-tester’s image-processing functions such as is_screen_black, press_and_wait, and wait_for_motion will call load_mask with their mask parameter. This function is a public API so that you can use it if you are implementing your own image-processing functions.

Note that you can pass a Region directly to the mask parameter of stbt functions, and you can create more complex masks by adding, subtracting, or inverting Regions (see Regions and Masks).

Parameters

mask (str|Region) –

A relative or absolute filename of a mask PNG image. If given a relative filename, this uses the algorithm from load_image to find the file.

Or, a Region that specifies the area to process.

Returns

A mask as used by is_screen_black, press_and_wait, wait_for_motion, and similar image-processing functions.

Added in v33.

stbt.Mask

class stbt.Mask

Internal representation of a mask.

Most users will never need to use this type directly; instead, pass a filename or a Region to the mask parameter of APIs like stbt.wait_for_motion. See Regions and Masks.

to_array(region: Region, color_channels: int = 1) → Tuple[Optional[ndarray], Region]

Materialize the mask to a numpy array of the specified size.

Most users will never need to call this method; it’s for people who are implementing their own image-processing algorithms.

Parameters
  • region (stbt.Region) – A Region matching the size of the frame that you are processing.

  • color_channels (int) – The number of channels required (1 or 3), according to your image-processing algorithm’s needs. All channels will be identical — for example with 3 channels, pixels will be either [0, 0, 0] or [255, 255, 255].

Return type

Tuple[Optional[numpy.ndarray], Region]

Returns

A tuple of:

  • An image (numpy array), where masked-in pixels are white (255) and masked-out pixels are black (0). The array is the same size as the region in the second member of this tuple.

  • A bounding box (stbt.Region) around the masked-in area. If most of the frame is masked out, limiting your image-processing operations to this region will be faster.

If the mask is just a Region, the first member of the tuple (the image) will be None because the bounding-box is sufficient.

stbt.match

stbt.match(image, frame=None, match_parameters=None, region=Region.ALL)

Search for an image in a single video frame.

Parameters
  • image (string or numpy.ndarray) –

    The image to search for. It can be the filename of a png file on disk, or a numpy array containing the pixel data in 8-bit BGR format. If the image has an alpha channel, any transparent pixels are ignored.

    Filenames should be relative paths. See stbt.load_image for the path lookup algorithm.

    8-bit BGR numpy arrays are the same format that OpenCV uses for images. This allows generating reference images on the fly (possibly using OpenCV) or searching for images captured from the device-under-test earlier in the test script.

  • frame (stbt.Frame or numpy.ndarray) – If this is specified it is used as the video frame to search in; otherwise a new frame is grabbed from the device-under-test. This is an image in OpenCV format (for example as returned by frames and get_frame).

  • match_parameters (MatchParameters) – Customise the image matching algorithm. See MatchParameters for details.

  • region (Region) – Only search within the specified region of the video frame.

Returns

A MatchResult, which will evaluate to true if a match was found, false otherwise.

stbt.match_all

stbt.match_all(image, frame=None, match_parameters=None, region=Region.ALL)

Search for all instances of an image in a single video frame.

Arguments are the same as match.

Returns

An iterator of zero or more MatchResult objects (one for each position in the frame where image matches).

Examples:

all_buttons = list(stbt.match_all("button.png"))
for match_result in stbt.match_all("button.png"):
    # do something with match_result here
    ...

stbt.match_text

stbt.match_text(text, frame=None, region=Region.ALL, mode=OcrMode.PAGE_SEGMENTATION_WITHOUT_OSD, lang=None, tesseract_config=None, case_sensitive=False, upsample=True, text_color=None, text_color_threshold=None, engine=None, char_whitelist=None)

Search for the specified text in a single video frame.

This can be used as an alternative to match, searching for text instead of an image.

Parameters
  • text (str) – The text to search for.

  • frame – See ocr.

  • region – See ocr.

  • mode – See ocr.

  • lang – See ocr.

  • tesseract_config – See ocr.

  • upsample – See ocr.

  • text_color – See ocr.

  • text_color_threshold – See ocr.

  • engine – See ocr.

  • char_whitelist – See ocr.

  • case_sensitive (bool) – Ignore case if False (the default).

Returns

A TextMatchResult, which will evaluate to true if the text was found, false otherwise.

For example, to select a button in a vertical menu by name (in this case “TV Guide”):

m = stbt.match_text("TV Guide")
assert m.match
while not stbt.match('selected-button.png').region.contains(m.region):
    stbt.press('KEY_DOWN')

Added in v31: The char_whitelist parameter.

stbt.MatchMethod

class stbt.MatchMethod(value)

An enum. See MatchParameters for documentation of these values.

SQDIFF = 'sqdiff'
SQDIFF_NORMED = 'sqdiff-normed'
CCORR_NORMED = 'ccorr-normed'
CCOEFF_NORMED = 'ccoeff-normed'

stbt.MatchParameters

class stbt.MatchParameters(match_method=None, match_threshold=None, confirm_method=None, confirm_threshold=None, erode_passes=None)

Parameters to customise the image processing algorithm used by match, wait_for_match, and press_until_match.

You can change the default values for these parameters by setting a key (with the same name as the corresponding python parameter) in the [match] section of .stbt.conf. But we strongly recommend that you don’t change the default values from what is documented here.

You should only need to change these parameters when you’re trying to match a reference image that isn’t actually a perfect match – for example if there’s a translucent background with live TV visible behind it; or if you have a reference image of a button’s background and you want it to match even if the text on the button doesn’t match.

Parameters
  • match_method (MatchMethod) – The method to be used by the first pass of stb-tester’s image matching algorithm, to find the most likely location of the reference image within the larger source image. For details see OpenCV’s cv2.matchTemplate. Defaults to MatchMethod.SQDIFF.

  • match_threshold (float) – Overall similarity threshold for the image to be considered a match. This threshold applies to the average similarity across all pixels in the image. Valid values range from 0 (anything is considered to match) to 1 (the match has to be pixel perfect). Defaults to 0.98.

  • confirm_method (ConfirmMethod) –

    The method to be used by the second pass of stb-tester’s image matching algorithm, to confirm that the region identified by the first pass is a good match.

    The first pass often gives false positives: It can report a “match” for an image with obvious differences, if the differences are local to a small part of the image. The second pass is more CPU-intensive, but it only checks the position of the image that the first pass identified. The allowed values are:

    ConfirmMethod.NONE

    Do not confirm the match. This is useful if you know that the reference image is different in some of the pixels. For example to find a button, even if the text inside the button is different.

    ConfirmMethod.ABSDIFF

    Compare the absolute difference of each pixel from the reference image against its counterpart from the candidate region in the source video frame.

    ConfirmMethod.NORMED_ABSDIFF

    Normalise the pixel values from both the reference image and the candidate region in the source video frame, then compare the absolute difference as with ABSDIFF.

    This method is better at noticing differences in low-contrast images (compared to the ABSDIFF method), but it isn’t suitable for reference images that don’t have any structure (that is, images that are a single solid color without any lines or variation).

    This is the default method, with a default confirm_threshold of 0.70.

  • confirm_threshold (float) –

    The minimum allowed similarity between any given pixel in the reference image and the corresponding pixel in the source video frame, as a fraction of the pixel’s total luminance range.

    Unlike match_threshold, this threshold applies to each pixel individually: Any pixel that exceeds this threshold will cause the match to fail (but see erode_passes below).

    Valid values range from 0 (less strict) to 1.0 (more strict). Useful values tend to be around 0.84 for ABSDIFF, and 0.70 for NORMED_ABSDIFF. Defaults to 0.70.

  • erode_passes (int) – After the ABSDIFF or NORMED_ABSDIFF absolute difference is taken, stb-tester runs an erosion algorithm that removes single-pixel differences to account for noise and slight rendering differences. Useful values are 1 (the default) and 0 (to disable this step).
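To make the two thresholds concrete, here is a simplified, illustrative sketch of the per-pixel ABSDIFF confirmation idea on grayscale values (not stb-tester's actual implementation, which works on full images and also applies erosion):

```python
# Per-pixel confirmation: a pixel's similarity is 1 minus its absolute
# difference as a fraction of the 0-255 luminance range. Any single
# pixel below confirm_threshold fails the match.

def absdiff_confirm(reference, candidate, confirm_threshold=0.84):
    similarities = [1 - abs(r - c) / 255
                    for r, c in zip(reference, candidate)]
    return all(s >= confirm_threshold for s in similarities)

# Small differences on every pixel pass; one large local difference
# fails, even though the average similarity is still high:
print(absdiff_confirm([100, 150, 200], [102, 149, 200]))  # → True
print(absdiff_confirm([100, 150, 200], [100, 150, 0]))    # → False
```

This is why the second pass catches the first pass's false positives: match_threshold averages over all pixels, whereas confirm_threshold applies to each pixel individually.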

stbt.MatchResult

class stbt.MatchResult

The result from match.

Variables
  • time (float) – The time at which the video-frame was captured, in seconds since 1970-01-01T00:00Z. This timestamp can be compared with system time (time.time()).

  • match (bool) – True if a match was found. This is the same as evaluating MatchResult as a bool. That is, if result: will behave the same as if result.match:.

  • region (Region) – Coordinates where the image was found (or of the nearest match, if no match was found).

  • first_pass_result (float) – Value between 0 (poor) and 1.0 (excellent match) from the first pass of stb-tester’s image matching algorithm (see MatchParameters for details).

  • frame (Frame) – The video frame that was searched, as given to match.

  • image (Image) – The reference image that was searched for, as given to match.

Changed in v32: The type of the image attribute is now stbt.Image. Previously it was a string or a numpy array.

stbt.MatchTimeout

exception stbt.MatchTimeout

Bases: UITestFailure

Exception raised by wait_for_match.

Variables
  • screenshot (Frame) – The last video frame that wait_for_match checked before timing out.

  • expected (str) – Filename of the image that was being searched for.

  • timeout_secs (int or float) – Number of seconds that the image was searched for.

stbt.MotionResult

class stbt.MotionResult

The result from detect_motion and wait_for_motion.

Variables
  • time (float) – The time at which the video-frame was captured, in seconds since 1970-01-01T00:00Z. This timestamp can be compared with system time (time.time()).

  • motion (bool) – True if motion was found. This is the same as evaluating MotionResult as a bool. That is, if result: will behave the same as if result.motion:.

  • region (Region) – Bounding box where the motion was found, or None if no motion was found.

  • frame (Frame) – The video frame in which motion was (or wasn’t) found.

stbt.MotionTimeout

exception stbt.MotionTimeout

Bases: UITestFailure

Exception raised by wait_for_motion.

Variables
  • screenshot (Frame) – The last video frame that wait_for_motion checked before timing out.

  • mask (Mask or None) – The mask that was used, if any.

  • timeout_secs (int or float) – Number of seconds that motion was searched for.

stbt.MultiPress

class stbt.MultiPress(key_mapping=None, interpress_delay_secs=None, interletter_delay_secs=1)

Helper for entering text using multi-press on a numeric keypad.

In some apps, the search page allows entering text by pressing the keys on the remote control’s numeric keypad: press the number “2” once for “A”, twice for “B”, etc.:

1.,     ABC2    DEF3
GHI4    JKL5    MNO6
PQRS7   TUV8    WXYZ9
      [space]0

To enter text with this mechanism, create an instance of this class and call its enter_text method. For example:

multipress = stbt.MultiPress()
multipress.enter_text("teletubbies")

The constructor takes the following parameters:

Parameters
  • key_mapping (dict) –

    The mapping from number keys to letters. The default mapping is:

    {
        "KEY_0": " 0",
        "KEY_1": "1.,",
        "KEY_2": "abc2",
        "KEY_3": "def3",
        "KEY_4": "ghi4",
        "KEY_5": "jkl5",
        "KEY_6": "mno6",
        "KEY_7": "pqrs7",
        "KEY_8": "tuv8",
        "KEY_9": "wxyz9",
    }
    

    This matches the arrangement of the letters A-Z on the digit keys specified by ITU E.161 / ISO 9995-8.

    The value you pass in this parameter is merged with the default mapping. For example, to override the punctuation characters you can specify key_mapping={"KEY_1": "@1.,-_"}.

    The dict’s key names must match the remote-control key names accepted by stbt.press. Each dict value is a string (or other sequence) of the corresponding letters, in the order that they are entered when pressing that key.

  • interpress_delay_secs (float) – The time to wait between every key-press, in seconds. This defaults to 0.3, the same default as stbt.press.

  • interletter_delay_secs (float) – The time to wait between letters on the same key, in seconds. For example, to enter “AB” you need to press key “2” once, then wait, then press it again twice. If you don’t wait, the device-under-test would see three consecutive keypresses which mean the letter “C”.

enter_text(text)

Enter the specified text using multi-press on the numeric keypad.

Parameters

text (str) – The text to enter. The case doesn’t matter (uppercase and lowercase are treated the same).
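To illustrate the mechanism (presses_for below is a hypothetical helper, not part of the stbt API), the sequence of keypresses needed for a given string can be derived from the mapping like this:

```python
# presses_for() is a hypothetical helper, not part of the stbt API.
# It uses the same default mapping as stbt.MultiPress.
DEFAULT_MAPPING = {
    "KEY_0": " 0",
    "KEY_1": "1.,",
    "KEY_2": "abc2",
    "KEY_3": "def3",
    "KEY_4": "ghi4",
    "KEY_5": "jkl5",
    "KEY_6": "mno6",
    "KEY_7": "pqrs7",
    "KEY_8": "tuv8",
    "KEY_9": "wxyz9",
}

def presses_for(text, key_mapping=DEFAULT_MAPPING):
    """Return the (key_name, press_count) pairs needed to enter `text`."""
    out = []
    for letter in text.lower():
        for key, letters in key_mapping.items():
            if letter in letters:
                out.append((key, letters.index(letter) + 1))
                break
        else:
            raise ValueError("No key for %r" % letter)
    return out

presses_for("ab9")  # [('KEY_2', 1), ('KEY_2', 2), ('KEY_9', 5)]
```

Note that the two consecutive presses of "KEY_2" for "ab" are what interletter_delay_secs separates.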

stbt.ocr

stbt.ocr(frame=None, region=Region.ALL, mode=OcrMode.PAGE_SEGMENTATION_WITHOUT_OSD, lang=None, tesseract_config=None, tesseract_user_words=None, tesseract_user_patterns=None, upsample=True, text_color=None, text_color_threshold=None, engine=None, char_whitelist=None, corrections=None)

Return the text present in the video frame as a Unicode string.

Perform OCR (Optical Character Recognition) using the “Tesseract” open-source OCR engine.

Parameters
  • frame (Frame) – If this is specified it is used as the video frame to process; otherwise a new frame is grabbed from the device-under-test.

  • region (Region) – Only search within the specified region of the video frame.

  • mode (OcrMode) – Tesseract’s layout analysis mode.

  • lang (str) – The three-letter ISO-639-3 language code of the language you are attempting to read; for example “eng” for English or “deu” for German. More than one language can be specified by joining with ‘+’; for example “eng+deu” means that the text to be read may be in a mixture of English and German. This defaults to “eng” (English). You can override the global default value by setting lang in the [ocr] section of .stbt.conf. You may need to install the tesseract language pack; see installation instructions here.

  • tesseract_config (dict) – Allows passing configuration down to the underlying OCR engine. See the tesseract documentation for details.

  • tesseract_user_words (unicode string, or list of unicode strings) – List of words to be added to the tesseract dictionary. To replace the tesseract system dictionary altogether, also set tesseract_config={'load_system_dawg': False, 'load_freq_dawg': False}.

  • tesseract_user_patterns (unicode string, or list of unicode strings) –

    List of patterns to add to the tesseract dictionary. The tesseract pattern language corresponds roughly to the following regular expressions:

    tesseract  regex
    =========  ===========
    \c         [a-zA-Z]
    \d         [0-9]
    \n         [a-zA-Z0-9]
    \p         [:punct:]
    \a         [a-z]
    \A         [A-Z]
    \*         *
    

  • upsample (bool) – Upsample the image 3x before passing it to tesseract. This helps to preserve information in the text’s anti-aliasing that would otherwise be lost when tesseract binarises the image. This defaults to True; you should only disable it if you are doing your own pre-processing on the image.

  • text_color (Color) – Color of the text. Specifying this can improve OCR results when tesseract’s default thresholding algorithm doesn’t detect the text, for example white text on a light-colored background or text on a translucent overlay with dynamic content underneath.

  • text_color_threshold (int) – The threshold to use with text_color, between 0 and 255. Defaults to 25. You can override the global default value by setting text_color_threshold in the [ocr] section of .stbt.conf.

  • engine (OcrEngine) – The OCR engine to use. Defaults to OcrEngine.TESSERACT. You can override the global default value by setting engine in the [ocr] section of .stbt.conf.

  • char_whitelist (str) – String of characters that are allowed. Useful when you know that the text is only going to contain numbers or IP addresses, for example so that tesseract won’t think that a zero is the letter o. Note that Tesseract 4.0’s LSTM engine ignores char_whitelist.

  • corrections (dict) –

    Dictionary of corrections to replace known OCR mis-reads. Each key of the dict is the text to search for; the value is the corrected string to replace the matching key. If the key is a string, it is treated as plain text and it will only match at word boundaries (for example the string "he saw" won’t match "the saw" nor "he saws"). If the key is a regular expression pattern (created with re.compile) it can match anywhere, and the replacement string can contain backreferences such as "\1" which are replaced with the corresponding group in the pattern (same as Python’s re.sub). Example:

    corrections={'bad': 'good',
                 re.compile(r'[oO]'): '0'}
    

    Plain strings are replaced first, followed by regular expressions (each in the order they are specified).

    The default value for this parameter can be set with stbt.set_global_ocr_corrections. If global corrections have been set and this corrections parameter is specified, the corrections in this parameter are applied first.

Added in v31: The char_whitelist parameter.
Added in v32: The corrections parameter.
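As an illustration of the corrections semantics described above (apply_corrections below is a simplified stand-in, not stbt’s actual implementation):

```python
import re

def apply_corrections(text, corrections):
    # Simplified stand-in for illustration only. Plain-string keys
    # match only at word boundaries; compiled patterns match anywhere.
    for key, replacement in corrections.items():
        if isinstance(key, str):
            text = re.sub(r"\b%s\b" % re.escape(key), replacement, text)
        else:
            text = key.sub(replacement, text)
    return text

apply_corrections("he saw the saw", {"he saw": "she saw"})
# → "she saw the saw" (no match inside "the saw")

apply_corrections("Oops", {re.compile(r"[oO]"): "0"})
# → "00ps"
```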

stbt.OcrEngine

class stbt.OcrEngine(value)

An enumeration.

TESSERACT = 0

Tesseract’s “legacy” OCR engine (v3). Recommended.

LSTM = 1

Tesseract v4’s “Long Short-Term Memory” neural network. Not recommended for reading menus, buttons, prices, numbers, times, etc, because it hallucinates text that isn’t there when the input isn’t long prose.

TESSERACT_AND_LSTM = 2

Combine results from Tesseract legacy & LSTM engines. Not recommended because it favours the result from the LSTM engine too heavily.

DEFAULT = 3

Default engine, based on what is installed.

stbt.OcrMode

class stbt.OcrMode(value)

Options to control layout analysis and assume a certain form of image.

For a (brief) description of each option, see the tesseract(1) man page.

ORIENTATION_AND_SCRIPT_DETECTION_ONLY = 0
PAGE_SEGMENTATION_WITH_OSD = 1
PAGE_SEGMENTATION_WITHOUT_OSD_OR_OCR = 2
PAGE_SEGMENTATION_WITHOUT_OSD = 3
SINGLE_COLUMN_OF_TEXT_OF_VARIABLE_SIZES = 4
SINGLE_UNIFORM_BLOCK_OF_VERTICALLY_ALIGNED_TEXT = 5
SINGLE_UNIFORM_BLOCK_OF_TEXT = 6
SINGLE_LINE = 7
SINGLE_WORD = 8
SINGLE_WORD_IN_A_CIRCLE = 9
SINGLE_CHARACTER = 10
SPARSE_TEXT = 11
SPARSE_TEXT_WITH_OSD = 12
RAW_LINE = 13

stbt.play_audio_file

stbt.play_audio_file(filename)

Play an audio file through the Stb-tester Node’s “audio out” jack.

Useful for testing integration of your device with Alexa or Google Home.

Parameters

filename (str) –

The audio file to play (for example a WAV or MP3 file committed to your test-pack).

Filenames should be relative paths. This uses the same path lookup algorithm as stbt.load_image.

stbt.Position

class stbt.Position(x, y)

A point with x and y coordinates.

stbt.PreconditionError

exception stbt.PreconditionError

Exception raised by as_precondition.

stbt.press

stbt.press(key, interpress_delay_secs=None, hold_secs=None)

Send the specified key-press to the device under test.

Parameters
  • key (str) –

    The name of the key/button.

    If you are using infrared control, this is a key name from your lircd.conf configuration file.

    If you are using HDMI CEC control, see the available key names here. Note that some devices might not understand all of the CEC commands in that list.

  • interpress_delay_secs (int or float) –

    The minimum time to wait after a previous key-press, in order to accommodate the responsiveness of the device-under-test.

    This defaults to 0.3. You can override the global default value by setting interpress_delay_secs in the [press] section of .stbt.conf.

  • hold_secs (int or float) – Hold the key down for the specified duration (in seconds). Currently this is implemented for the infrared, HDMI CEC, and Roku controls. There is a maximum limit of 60 seconds.

Returns

An object with the following attributes:

  • key (str) – the name of the key that was pressed.

  • start_time (float) – the time just before the keypress started (in seconds since the unix epoch, like time.time() and stbt.Frame.time).

  • end_time (float) – the time when transmission of the keypress signal completed.

  • frame_before (stbt.Frame) – the most recent video-frame just before the keypress started. Typically this is used by functions like stbt.press_and_wait to detect when the device-under-test reacted to the keypress.

Changed in v33: The key argument can be an Enum (we’ll use the Enum’s value, which must be a string).

stbt.press_and_wait

stbt.press_and_wait(key, region=stbt.Region.ALL, mask=None, timeout_secs=10, stable_secs=1, min_size=None)

Press a key, then wait for the screen to change, then wait for it to stop changing.

This can be used to wait for a menu selection to finish moving before attempting to OCR at the selection’s new position; or to measure the duration of animations; or to measure how long it takes for a screen (such as an EPG) to finish populating.

Parameters
  • key (str) – The name of the key to press (passed to stbt.press).

  • mask (str|numpy.ndarray|Mask|Region) – A Region or a mask that specifies which parts of the image to analyse. This accepts anything that can be converted to a Mask using stbt.load_mask. See Regions and Masks.

  • region (Region) – Deprecated synonym for mask. Use mask instead.

  • timeout_secs (int or float) – A timeout in seconds. This function will return a falsey value if the transition didn’t complete within this number of seconds from the key-press.

  • stable_secs (int|float) – A duration in seconds. The screen must stay unchanged (within the specified region or mask) for this long, for the transition to be considered “complete”.

  • min_size (Tuple[int, int]) – A tuple of (width, height), in pixels, for differences to be considered as “motion”. Use this to ignore small differences, such as the blinking text cursor in a search box.

Returns

An object that will evaluate to true if the transition completed, false otherwise. It has the following attributes:

  • key (str) – The name of the key that was pressed.

  • frame (stbt.Frame) – If successful, the first video frame when the transition completed; if timed out, the last frame seen.

  • status (stbt.TransitionStatus) – Either START_TIMEOUT (the transition didn’t start – nothing moved), STABLE_TIMEOUT (the transition didn’t end – movement didn’t stop), or COMPLETE (the transition started and then stopped). If it’s COMPLETE, the whole object will evaluate as true.

  • started (bool) – The transition started (movement was seen after the keypress). Implies that status is either COMPLETE or STABLE_TIMEOUT.

  • complete (bool) – The transition completed (movement started and then stopped). Implies that status is COMPLETE.

  • stable (bool) – The screen is stable (no movement). Implies complete or not started.

  • press_time (float) – When the key-press completed.

  • animation_start_time (float) – When animation started after the key-press (or None if timed out).

  • end_time (float) – When animation completed (or None if timed out).

  • duration (float) – Time from press_time to end_time (or None if timed out).

  • animation_duration (float) – Time from animation_start_time to end_time (or None if timed out).

All times are measured in seconds since 1970-01-01T00:00Z; the timestamps can be compared with system time (the output of time.time()).
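For example, if the key-press completed at t=100.0, the animation started at t=100.4 and stopped at t=101.9 (illustrative timestamps, not real measurements):

```python
press_time = 100.0            # when the key-press completed
animation_start_time = 100.4  # when the screen started changing
end_time = 101.9              # when the screen stopped changing

duration = end_time - press_time                      # 1.9
animation_duration = end_time - animation_start_time  # 1.5
```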

Changed in v32: Use the same difference-detection algorithm as wait_for_motion.

Added in v33: The started, complete and stable attributes of the returned value.

Changed in v33: mask accepts anything that can be converted to a Mask using load_mask. The region parameter is deprecated; pass your Region to mask instead. You can’t specify mask and region at the same time.

stbt.pressing

stbt.pressing(key, interpress_delay_secs=None)

Context manager that will press and hold the specified key for the duration of the with code block.

For example, this will hold KEY_RIGHT until wait_for_match finds a match or times out:

with stbt.pressing("KEY_RIGHT"):
    stbt.wait_for_match("last-page.png")

The same limitations apply as stbt.press’s hold_secs parameter.

stbt.press_until_match

stbt.press_until_match(key, image, interval_secs=None, max_presses=None, match_parameters=None, region=Region.ALL)

Call press as many times as necessary to find the specified image.

Parameters
  • key – See press.

  • image – See match.

  • interval_secs (int or float) –

    The number of seconds to wait for a match before pressing again. Defaults to 3.

    You can override the global default value by setting interval_secs in the [press_until_match] section of .stbt.conf.

  • max_presses (int) –

    The number of times to try pressing the key and looking for the image before giving up and raising MatchTimeout. Defaults to 10.

    You can override the global default value by setting max_presses in the [press_until_match] section of .stbt.conf.

  • match_parameters – See match.

  • region – See match.

Returns

MatchResult when the image is found.

Raises

MatchTimeout if no match is found after max_presses presses.

stbt.prometheus.Counter

class stbt.prometheus.Counter(name, description)

Log a cumulative metric that increases over time, to the Prometheus database on your Stb-tester Portal.

Prometheus is an open-source monitoring & alerting tool. A Prometheus Counter tracks counts of events or running totals. See Metric Types and instrumentation best practices in the Prometheus documentation.

Example use cases for Counters:

  • Number of times the “buffering” indicator or “loading” spinner has appeared.

  • Number of frames seen with visual glitches or blockiness.

  • Number of VoD assets that failed to play.

Parameters
  • name (str) – A unique identifier for the metric. See Metric names in the Prometheus documentation.

  • description (str) – A longer description of the metric.

Added in v32.

inc(value=1, labels=None)

Increment the Counter by the given amount.

Parameters
  • value (int) – The amount to increase.

  • labels (Mapping[str,str]) –

    Optional dict of label_name: label_value entries. See Labels in the Prometheus documentation.

    Warning

    Every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of memory required to store the data on the Stb-tester Node, on the Stb-tester Portal, and on your Prometheus server. Do not use labels to store dimensions with high cardinality (many different label values), such as programme names or other unbounded sets of values.

stbt.prometheus.Gauge

class stbt.prometheus.Gauge(name, description)

Log a numerical value that can go up and down, to the Prometheus database on your Stb-tester Portal.

Prometheus is an open-source monitoring & alerting tool. A Prometheus Gauge tracks values like temperatures or current memory usage.

Parameters
  • name (str) – A unique identifier for the metric. See Metric names in the Prometheus documentation.

  • description (str) – A longer description of the metric.

Added in v32.

set(value, labels=None)

Set the Gauge to the given value.

Parameters
  • value (float) – The value to set the Gauge to.

  • labels (Mapping[str,str]) – Optional dict of label_name: label_value entries. See Counter.inc, including the warning about high-cardinality labels.

stbt.prometheus.Histogram

class stbt.prometheus.Histogram(name, description, buckets)

Log measurements, in buckets, to the Prometheus database on your Stb-tester Portal.

Prometheus is an open-source monitoring & alerting tool. A Prometheus Histogram counts measurements (such as sizes or durations) into configurable buckets.

Prometheus Histograms are commonly used for performance measurements:

  • Channel zapping time.

  • App launch time.

  • Time for VoD content to start playing.

Prometheus Histograms allow reporting & alerting on particular quantiles. For example you could configure an alert if the 90th percentile of the above measurements exceeds a certain threshold (that is, if the slowest 10% of measurements are slower than the threshold).

Parameters
  • name (str) – A unique identifier for the metric. See Metric names in the Prometheus documentation.

  • description (str) – A longer description of the metric.

  • buckets (Sequence[float]) – A list of numbers in increasing order, where each number is the upper bound of the corresponding bucket in the Histogram. With Prometheus you must specify the buckets up-front because the raw measurements aren’t stored, only the counts of how many measurements fall into each bucket.

Added in v32.

log(value, labels=None)

Store the given value into the Histogram.

Parameters
  • value (float) – The measurement to record.

  • labels (Mapping[str,str]) – Optional dict of label_name: label_value entries. See Counter.inc, including the warning about high-cardinality labels.
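A minimal sketch of the bucketing behaviour described above (illustrative; not the stbt or Prometheus implementation):

```python
import bisect

buckets = [1.0, 2.0, 5.0, 10.0]    # upper bounds (e.g. seconds)
counts = [0] * (len(buckets) + 1)  # last slot is the implicit +Inf bucket

def log(value):
    # Count the measurement in the first bucket whose upper bound
    # is >= the value.
    counts[bisect.bisect_left(buckets, value)] += 1

for zap_time in [0.8, 1.5, 3.2, 12.0]:
    log(zap_time)

counts  # [1, 1, 1, 0, 1]
```

Only these per-bucket counts are stored, which is why the bucket boundaries must be chosen up-front.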

stbt.Region

class stbt.Region(x, y, width=None, height=None, right=None, bottom=None)

Region(x, y, width=width, height=height) or Region(x, y, right=right, bottom=bottom)

Rectangular region within the video frame.

For example, given the following regions a, b, and c:

- 01234567890123
0 ░░░░░░░░
1 ░a░░░░░░
2 ░░░░░░░░
3 ░░░░░░░░
4 ░░░░▓▓▓▓░░▓c▓
5 ░░░░▓▓▓▓░░▓▓▓
6 ░░░░▓▓▓▓░░░░░
7 ░░░░▓▓▓▓░░░░░
8     ░░░░░░b░░
9     ░░░░░░░░░
>>> a = Region(0, 0, width=8, height=8)
>>> b = Region(4, 4, right=13, bottom=10)
>>> c = Region(10, 4, width=3, height=2)
>>> a.right
8
>>> b.bottom
10
>>> b.center
Position(x=8, y=7)
>>> b.contains(c), a.contains(b), c.contains(b), c.contains(None)
(True, False, False, False)
>>> b.contains(c.center), a.contains(b.center)
(True, False)
>>> b.extend(x=6, bottom=-4) == c
True
>>> a.extend(right=5).contains(c)
True
>>> a.width, a.extend(x=3).width, a.extend(right=-3).width
(8, 5, 5)
>>> c.replace(bottom=10)
Region(x=10, y=4, right=13, bottom=10)
>>> Region.intersect(a, b)
Region(x=4, y=4, right=8, bottom=8)
>>> Region.intersect(a, b) == Region.intersect(b, a)
True
>>> Region.intersect(c, b) == c
True
>>> print(Region.intersect(a, c))
None
>>> print(Region.intersect(None, a))
None
>>> Region.intersect(a)
Region(x=0, y=0, right=8, bottom=8)
>>> Region.intersect()
Region.ALL
>>> quadrant = Region(x=float("-inf"), y=float("-inf"), right=0, bottom=0)
>>> quadrant.translate(2, 2)
Region(x=-inf, y=-inf, right=2, bottom=2)
>>> c.translate(x=-9, y=-3)
Region(x=1, y=1, right=4, bottom=3)
>>> Region(2, 3, 2, 1).translate(b)
Region(x=6, y=7, right=8, bottom=8)
>>> Region.intersect(Region.ALL, c) == c
True
>>> Region.ALL
Region.ALL
>>> print(Region.ALL)
Region.ALL
>>> c.above()
Region(x=10, y=-inf, right=13, bottom=4)
>>> c.below()
Region(x=10, y=6, right=13, bottom=inf)
>>> a.right_of()
Region(x=8, y=0, right=inf, bottom=8)
>>> a.right_of(width=2)
Region(x=8, y=0, right=10, bottom=8)
>>> c.left_of()
Region(x=-inf, y=4, right=10, bottom=6)
x

The x coordinate of the left edge of the region, measured in pixels from the left of the video frame (inclusive).

y

The y coordinate of the top edge of the region, measured in pixels from the top of the video frame (inclusive).

right

The x coordinate of the right edge of the region, measured in pixels from the left of the video frame (exclusive).

bottom

The y coordinate of the bottom edge of the region, measured in pixels from the top of the video frame (exclusive).

width

The width of the region, measured in pixels.

height

The height of the region, measured in pixels.

x, y, right, bottom, width and height can be infinite — that is, float("inf") or -float("inf").

center

A stbt.Position specifying the x & y coordinates of the region’s center.

static from_extents(x, y, right, bottom)

Create a Region using right and bottom extents rather than width and height.

Typically you’d use the right and bottom parameters of the Region constructor instead, but this factory function is useful if you need to create a Region from a tuple.

>>> extents = (4, 4, 13, 10)
>>> Region.from_extents(*extents)
Region(x=4, y=4, right=13, bottom=10)
static bounding_box(*args)
Returns

The smallest region that contains all the given regions.

>>> a = Region(50, 20, right=60, bottom=40)
>>> b = Region(20, 30, right=30, bottom=50)
>>> c = Region(55, 25, right=70, bottom=35)
>>> Region.bounding_box(a, b)
Region(x=20, y=20, right=60, bottom=50)
>>> Region.bounding_box(b, b)
Region(x=20, y=30, right=30, bottom=50)
>>> Region.bounding_box(None, b)
Region(x=20, y=30, right=30, bottom=50)
>>> Region.bounding_box(b, None)
Region(x=20, y=30, right=30, bottom=50)
>>> Region.bounding_box(b, Region.ALL)
Region.ALL
>>> print(Region.bounding_box(None, None))
None
>>> print(Region.bounding_box())
None
>>> Region.bounding_box(b)
Region(x=20, y=30, right=30, bottom=50)
>>> Region.bounding_box(a, b, c) == \
...     Region.bounding_box(a, Region.bounding_box(b, c))
True
static intersect(*args)
Returns

The intersection of the passed regions, or None if the regions don’t intersect.

Any parameter can be None (an empty Region) so intersect is commutative and associative.

to_slice()

A 2-dimensional slice suitable for indexing a stbt.Frame.

contains(other)
Returns

True if other (a Region or Position) is entirely contained within self.

translate(x=None, y=None)
Returns

A new region with the position of the region adjusted by the given amounts. The width and height are unaffected.

translate accepts separate x and y arguments, or a single Region.

For example, move the region 1px right and 2px down:

>>> b = Region(4, 4, 9, 6)
>>> b.translate(1, 2)
Region(x=5, y=6, right=14, bottom=12)

Move the region 1px to the left:

>>> b.translate(x=-1)
Region(x=3, y=4, right=12, bottom=10)

Move the region 3px up:

>>> b.translate(y=-3)
Region(x=4, y=1, right=13, bottom=7)

Move the region by another region. This can be helpful if TITLE defines a region relative to another UI element on screen. You can then combine the two like so:

>>> TITLE = Region(20, 5, 160, 40)
>>> CELL = Region(140, 45, 200, 200)
>>> TITLE.translate(CELL)
Region(x=160, y=50, right=320, bottom=90)
extend(x=0, y=0, right=0, bottom=0)
Returns

A new region with the edges of the region adjusted by the given amounts.

replace(x=None, y=None, width=None, height=None, right=None, bottom=None)
Returns

A new region with the edges of the region set to the given coordinates.

This is similar to extend, but it takes absolute coordinates within the image instead of adjusting by a relative number of pixels.

dilate(n)

Expand the region by n px in all directions.

>>> Region(20, 30, right=30, bottom=50).dilate(3)
Region(x=17, y=27, right=33, bottom=53)
erode(n)

Shrink the region by n px in all directions.

>>> Region(20, 30, right=30, bottom=50).erode(3)
Region(x=23, y=33, right=27, bottom=47)
>>> print(Region(20, 30, 10, 20).erode(5))
None
above(height=inf)
Returns

A new region above the current region, extending to the top of the frame (or to the specified height).

below(height=inf)
Returns

A new region below the current region, extending to the bottom of the frame (or to the specified height).

right_of(width=inf)
Returns

A new region to the right of the current region, extending to the right edge of the frame (or to the specified width).

left_of(width=inf)
Returns

A new region to the left of the current region, extending to the left edge of the frame (or to the specified width).

stbt.RmsVolumeResult

class stbt.RmsVolumeResult

The result from get_rms_volume.

Variables
  • amplitude (float) – The RMS amplitude over the specified window. This is a value between 0.0 (absolute silence) and 1.0 (a full-range square wave).

  • time (float) – The start of the window, as number of seconds since the unix epoch (1970-01-01T00:00Z). This is compatible with time.time() and stbt.Frame.time.

  • duration_secs (int|float) – The window size in seconds, as given to get_rms_volume.

dBov(noise_floor_amplitude=0.0003) → float

The RMS amplitude converted to dBov.

Decibels are a logarithmic measurement; human perception of loudness is also logarithmic, so decibels are a useful way to measure loudness.

This is a value between -70 (silence, or near silence) and 0 (the loudest possible signal, a full-scale square wave).

Parameters

noise_floor_amplitude – This is used to avoid ZeroDivisionError exceptions. We consider 0 amplitude to be this non-zero value instead. It defaults to ~0.0003 (-70dBov).
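A sketch of the conversion, assuming the standard definition of dBov (illustrative; the actual stbt implementation may differ in details):

```python
import math

def dbov(amplitude, noise_floor_amplitude=0.0003):
    # Clamp to the noise floor so silence gives roughly -70 dBov
    # instead of a math domain error on log10(0).
    amplitude = max(amplitude, noise_floor_amplitude)
    return 20 * math.log10(amplitude)

dbov(1.0)  # 0.0 -- full-scale square wave
dbov(0.0)  # about -70.5 -- clamped to the noise floor
```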

stbt.Roku

class stbt.Roku(address: str)

Helper for interacting with Roku devices over the network.

This uses Roku’s External Control Protocol.

To find the Roku’s IP address and to enable the Roku’s network control protocol see <https://stb-tester.com/kb/roku>.

Parameters

address (str) – IP address of the Roku.

Or, use Roku.from_config() to create an instance using the address configured in the test-pack’s configuration files.

Added in v33.

static from_config() → Roku

Create a Roku instance from the test-pack’s configuration files.

Expects that the Roku’s IP address is specified in device_under_test.ip_address. This configuration belongs in your Stb-tester Node’s Node-specific configuration file. For example:

config/test-farm/stb-tester-00044b80ebeb.conf:

[device_under_test]
device_type = roku
ip_address = 192.168.1.7
Raises

ConfigurationError – If Roku IP address not configured.

save_logs(filename: str = 'roku.log') → Generator[None, None, None]

Stream logs from the Roku’s debug console to filename.

This is a context manager. You can use it as a decorator on your test-case functions:

import stbt
roku = stbt.Roku.from_config()

@roku.save_logs()
def test_launching_my_roku_app():
    ...
query_apps() → Dict[str, str]

Returns a dict of application_id: name with all the apps installed on the Roku device.

launch_app(id_or_name) → None

Launches the specified app. Accepts the app’s ID or name.

Use Roku.query_apps to find the IDs & names of the apps installed on the Roku.

stbt.segment

stbt.segment(frame, *, region=Region.ALL, initial_direction=Direction.VERTICAL, steps=1, narrow=True, light_background=False)

Segment (partition) the image into a list of contiguous foreground regions.

This uses an adaptive threshold algorithm to binarize the image into foreground vs. background pixels. For finer control, you can do the binarization yourself (for example with stbt.color_diff) and pass the binarized image to segment.

Parameters
  • frame (Frame) – The video-frame or image to process.

  • region (Region) – Only search in this region.

  • initial_direction (Direction) – Start scanning in this direction (left-to-right or top-to-bottom).

  • steps (int) – Do another segmentation within each region found in the previous step, altering direction between VERTICAL and HORIZONTAL each step. For example, the default values steps=1, initial_direction=stbt.Direction.VERTICAL will find lines of text; steps=2 will recursively perform segmentation horizontally within each line to find each character in the line (assuming the characters don’t overlap due to kerning; overlapping characters will be segmented as a single region).

  • narrow (bool) – At the last step, narrow each region in the opposite direction. For example: if you are segmenting lines of text with steps=1, initial_direction=stbt.Direction.VERTICAL, narrow=False you will get regions with y & bottom matching the top & bottom of each line, but with x & right set to the left & right edges of the frame (0 and the frame’s width, respectively). With narrow=True, each region’s x & right will be the leftmost / rightmost edge of the line.

  • light_background (bool) – By default, the adaptive threshold algorithm assumes foreground pixels are light-coloured and background pixels are dark. Set light_background=True if foreground pixels are dark (for example black text on a light background).

Return type

list[stbt.Region]

Returns

A list of stbt.Region instances.

Added in v33.
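To illustrate one VERTICAL segmentation step (segment_rows below is a hypothetical helper, not the stbt implementation), consider a 1-D projection recording which rows of the binarized image contain any foreground pixels:

```python
def segment_rows(row_has_foreground):
    """Group consecutive foreground rows into (start, end) ranges
    (end exclusive), like one VERTICAL step finding lines of text.

    segment_rows() is a hypothetical helper for illustration only.
    """
    regions, start = [], None
    for i, fg in enumerate(row_has_foreground):
        if fg and start is None:
            start = i
        elif not fg and start is not None:
            regions.append((start, i))
            start = None
    if start is not None:
        regions.append((start, len(row_has_foreground)))
    return regions

segment_rows([False, True, True, False, False, True, True, True])
# → [(1, 3), (5, 8)]
```

With steps=2, the same grouping is then applied horizontally within each of the regions found here.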

stbt.set_global_ocr_corrections

stbt.set_global_ocr_corrections(corrections)

Specify default OCR corrections that apply to all calls to stbt.ocr and stbt.apply_ocr_corrections.

See the corrections parameter of stbt.ocr for more details.

We recommend calling this function from tests/__init__.py to ensure it is called before any test script is executed.

stbt.Size

class stbt.Size(width, height)

Size of a rectangle with width and height.

count(value, /)

Return number of occurrences of value.

height

Alias for field number 1

index(value, start=0, stop=9223372036854775807, /)

Return first index of value.

Raises ValueError if the value is not present.

width

Alias for field number 0

stbt.stop_job

stbt.stop_job(reason: Optional[str] = None) → None

Stop this job after the current testcase exits.

If you are running a job with multiple testcases, or a soak-test, the job will stop when the current testcase exits. Any remaining testcases (that you specified when you started the job) will not be run.

Parameters

reason (str) – Optional message that will be logged.

Added in v31.

stbt.TextMatchResult

class stbt.TextMatchResult

The result from match_text.

Variables
  • time (float) – The time at which the video-frame was captured, in seconds since 1970-01-01T00:00Z. This timestamp can be compared with system time (time.time()).

  • match (bool) – True if a match was found. This is the same as evaluating TextMatchResult as a bool. That is, if result: will behave the same as if result.match:.

  • region (Region) – Bounding box where the text was found, or None if the text wasn’t found.

  • frame (Frame) – The video frame that was searched, as given to match_text.

  • text (str) – The text that was searched for, as given to match_text.

stbt.TransitionStatus

class stbt.TransitionStatus(value)

An enumeration.

START_TIMEOUT = 0

The transition didn’t start (nothing moved).

STABLE_TIMEOUT = 1

The transition didn’t end (movement didn’t stop).

COMPLETE = 2

The transition started and then stopped.

stbt.UITestFailure

exception stbt.UITestFailure

Bases: Exception

The test failed because the device under test didn’t behave as expected.

Inherit from this if you need to define your own test-failure exceptions.

stbt.VolumeChangeDirection

class stbt.VolumeChangeDirection(value)

An enumeration.

LOUDER = 1
QUIETER = -1

stbt.VolumeChangeTimeout

exception stbt.VolumeChangeTimeout

Bases: AssertionError

stbt.wait_for_match

stbt.wait_for_match(image, timeout_secs=10, consecutive_matches=1, match_parameters=None, region=Region.ALL, frames=None)

Search for an image in the device-under-test’s video stream.

Parameters
  • image – The image to search for. See match.

  • timeout_secs (int or float or None) – A timeout in seconds. This function will raise MatchTimeout if no match is found within this time.

  • consecutive_matches (int) – Forces this function to wait for several consecutive frames with a match found at the same x,y position. Increase consecutive_matches to avoid false positives due to noise, or to wait for a moving selection to stop moving.

  • match_parameters – See match.

  • region – See match.

  • frames (Iterator[stbt.Frame]) – An iterable of video-frames to analyse. Defaults to stbt.frames().

Returns

MatchResult when the image is found.

Raises

MatchTimeout if no match is found after timeout_secs seconds.

stbt.wait_for_motion

stbt.wait_for_motion(timeout_secs=10, consecutive_frames=None, noise_threshold=None, mask=Region.ALL, region=Region.ALL, frames=None)

Search for motion in the device-under-test’s video stream.

“Motion” means a difference in pixel values between two successive frames.

Parameters
  • timeout_secs (int or float or None) – A timeout in seconds. This function will raise MotionTimeout if no motion is detected within this time.

  • consecutive_frames (int or str) –

    Considers the video stream to have motion if there were differences between the specified number of consecutive frames. This can be:

    • a positive integer value, or

    • a string in the form “x/y”, where “x” is the number of frames with motion detected out of a sliding window of “y” frames.

    This defaults to “10/20”. You can override the global default value by setting consecutive_frames in the [motion] section of .stbt.conf.

  • noise_threshold (float) – See detect_motion.

  • mask (str|numpy.ndarray|Mask|Region) – See detect_motion.

  • region (Region) – See detect_motion.

  • frames (Iterator[stbt.Frame]) – See detect_motion.

Returns

MotionResult when motion is detected. The MotionResult’s time and frame attributes correspond to the first frame in which motion was detected.

Raises

MotionTimeout if no motion is detected after timeout_secs seconds.

Changed in v33: mask accepts anything that can be converted to a Mask using load_mask. The region parameter is deprecated; pass your Region to mask instead. You can’t specify mask and region at the same time.
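The “x/y” form of consecutive_frames describes a sliding window: motion is reported once at least x of the last y frames differed from their predecessor. A simplified sketch of that rule (plain Python, not stbt’s actual implementation; has_motion and the per-frame booleans are hypothetical):

```python
from collections import deque

# Simplified sketch of the "x/y" `consecutive_frames` rule: report
# motion once at least x of the last y frames showed a difference.
def has_motion(frame_diffs, consecutive_frames="10/20"):
    """`frame_diffs` is an iterable of booleans, one per frame: did this
    frame differ from the previous one?"""
    if isinstance(consecutive_frames, int):
        x, y = consecutive_frames, consecutive_frames
    else:
        x, y = (int(n) for n in consecutive_frames.split("/"))
    window = deque(maxlen=y)  # sliding window of the last y frames
    for diff in frame_diffs:
        window.append(diff)
        if sum(window) >= x:
            return True
    return False

print(has_motion([True] * 9 + [False] * 20, "10/20"))  # False
print(has_motion([True, False] * 10, "10/20"))         # True
```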

stbt.wait_for_transition_to_end

stbt.wait_for_transition_to_end(initial_frame=None, region=stbt.Region.ALL, mask=None, timeout_secs=10, stable_secs=1, min_size=None)

Wait for the screen to stop changing.

In most cases you should use press_and_wait to measure a complete transition, but if you need to measure several points during a single transition you can use wait_for_transition_to_end as the last measurement. For example:

keypress = stbt.press("KEY_OK")  # Launch my app
m = stbt.wait_for_match("my-app-home-screen.png")
time_to_first_frame = m.time - keypress.start_time
end = stbt.wait_for_transition_to_end(m.frame)
time_to_fully_populated = end.end_time - keypress.start_time

Parameters and Returns

See press_and_wait.

stbt.wait_for_volume_change

stbt.wait_for_volume_change(direction=VolumeChangeDirection.LOUDER, stream=None, window_size_secs=0.4, threshold_db=10.0, noise_floor_amplitude=0.0003, timeout_secs=10)

Wait for changes in the RMS audio volume.

This can be used to listen for the start of content, or for bleeps and bloops when navigating the UI. It returns after the first significant volume change. This function tries hard to give accurate timestamps for when the volume changed. It works best for sudden changes like a beep.

This function detects changes in volume using a rolling window. The RMS volume is calculated over a rolling window of size window_size_secs. For every sample this function compares the RMS volume in the window preceding the sample to the RMS volume in the window following the sample. The ratio of the two volumes determines whether the volume change is significant or not.
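The before/after window comparison can be sketched in plain Python (not stbt’s actual implementation; rms and change_db are hypothetical helpers, and the audio is a bare list of amplitude samples):

```python
import math

# Simplified sketch of the rolling-window comparison: RMS of the window
# before a sample vs. the window after it, expressed as a dB ratio.
def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def change_db(samples, i, window, noise_floor=0.0003):
    # Clamp to the noise floor to avoid a division by zero (see the
    # noise_floor_amplitude parameter below).
    before = max(rms(samples[i - window:i]), noise_floor)
    after = max(rms(samples[i:i + window]), noise_floor)
    return 20 * math.log10(after / before)  # > 0 means louder

# Silence followed by a loud beep:
audio = [0.0] * 100 + [0.5] * 100
print(change_db(audio, 100, window=100) > 10)  # True
```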

Example: Measure the latency of the mute button:

keypress = stbt.press('KEY_MUTE')
quiet = wait_for_volume_change(
    direction=VolumeChangeDirection.QUIETER,
    stream=audio_chunks(time_index=keypress.start_time))
print("MUTE latency: %0.3f s" % (quiet.time - keypress.start_time))

Example: Measure A/V sync between “beep.png” being displayed and a beep being heard:

video = wait_for_match("beep.png")
audio = wait_for_volume_change(
    stream=audio_chunks(time_index=video.time - 0.5),
    window_size_secs=0.01)
print("a/v sync: %i ms" % ((video.time - audio.time) * 1000))
Parameters
  • direction (VolumeChangeDirection) – Whether we should wait for the volume to increase or decrease. Defaults to VolumeChangeDirection.LOUDER.

  • stream (Iterator returned by audio_chunks) – Audio stream to listen to. Defaults to audio_chunks(). Postcondition: the stream will be positioned at the time of the volume change.

  • window_size_secs (float) – The time over which the RMS volume should be averaged. Defaults to 0.4 (400ms) in accordance with momentary loudness from the EBU TECH 3341 specification. Decrease this if you want to detect bleeps shorter than 400ms duration.

  • threshold_db (float) – This controls sensitivity to volume changes. A volume change is considered significant if the ratio between the volume before and the volume afterwards is greater than threshold_db. With threshold_db=10 (the default) and direction=VolumeChangeDirection.LOUDER the RMS volume must increase by 10 dB (a factor of 3.16 in amplitude). With direction=VolumeChangeDirection.QUIETER the RMS volume must fall by 10 dB.

  • noise_floor_amplitude (float) – This is used to avoid ZeroDivisionError exceptions. The change from an amplitude of 0 to 0.1 is ∞ dB. This isn’t very practical to deal with so we consider 0 amplitude to be this non-zero value instead. It defaults to ~0.0003 (-70dBov). Increase this value if there is some sort of background noise that you want to ignore.

  • timeout_secs (float) – Timeout in seconds. If no significant volume change is found within this time, VolumeChangeTimeout will be raised and your test will fail.

Raises

VolumeChangeTimeout – If no volume change is detected before timeout_secs.

Returns

An object with the following attributes:

  • direction (VolumeChangeDirection) – This will be either VolumeChangeDirection.LOUDER or VolumeChangeDirection.QUIETER as given to wait_for_volume_change.

  • rms_before (RmsVolumeResult) – The RMS volume averaged over the window immediately before the volume change. Use result.rms_before.amplitude to get the RMS amplitude as a float.

  • rms_after (RmsVolumeResult) – The RMS volume averaged over the window immediately after the volume change.

  • difference_db (float) – Ratio between rms_after and rms_before, in decibels.

  • difference_amplitude (float) – Absolute difference between the rms_after and rms_before. This is a number in the range -1.0 to +1.0.

  • time (float) – The time of the volume change, as the number of seconds since the unix epoch (1970-01-01T00:00:00Z). This is the same format used by the Python standard library function time.time() and stbt.Frame.time.

  • window_size_secs (float) – The size of the window over which the volume was averaged, in seconds.
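The dB values used throughout this API are amplitude ratios (dB = 20·log10(after / before)), so the figures quoted in the parameter descriptions above can be checked directly:

```python
import math

# 10 dB corresponds to an amplitude factor of about 3.16, as stated in
# the threshold_db description:
ratio_for_10db = 10 ** (10 / 20)
print(round(ratio_for_10db, 2))  # 3.16

# The default noise_floor_amplitude (~0.0003), expressed in dBov
# (dB relative to a full-scale amplitude of 1.0), is about -70:
print(round(20 * math.log10(0.0003)))  # -70
```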

stbt.wait_until

stbt.wait_until(callable_, timeout_secs=10, interval_secs=0, predicate=None, stable_secs=0)

Wait until a condition becomes true, or until a timeout.

Calls callable_ repeatedly (with a delay of interval_secs seconds between successive calls) until it succeeds (that is, it returns a truthy value) or until timeout_secs seconds have passed.

Parameters
  • callable_ – any Python callable (such as a function or a lambda expression) with no arguments.

  • timeout_secs (int or float, in seconds) – After this timeout elapses, wait_until will return the last value that callable_ returned, even if it’s falsey.

  • interval_secs (int or float, in seconds) – Delay between successive invocations of callable_.

  • predicate – A function that takes a single value. It will be given the return value from callable_. The return value of this function will then be used to determine truthiness. If the predicate test succeeds, wait_until will still return the original value from callable_, not the predicate value.

  • stable_secs (int or float, in seconds) – Wait for callable_’s return value to remain the same (as determined by ==) for this duration before returning. If predicate is also given, the values returned from predicate will be compared.

Returns

The return value from callable_ (which will be truthy if it succeeded, or falsey if wait_until timed out). If the value was truthy when the timeout was reached but it failed the predicate or stable_secs conditions (if any) then wait_until returns None.
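These return-value semantics can be modelled with a simplified pure-Python sketch (not the real implementation; stable_secs is omitted for brevity):

```python
import time

# Simplified model of wait_until's return-value semantics.
def wait_until(callable_, timeout_secs=10, interval_secs=0, predicate=None):
    end = time.time() + timeout_secs
    while True:
        value = callable_()
        ok = predicate(value) if predicate else value
        if ok:
            return value  # the original value, not the predicate's result
        if time.time() >= end:
            # A truthy value that failed the predicate becomes None, so
            # that `assert wait_until(...)` still fails meaningfully.
            return None if (value and predicate) else value
        time.sleep(interval_secs)

print(wait_until(lambda: 42, timeout_secs=0.01))   # 42 (truthy: success)
print(wait_until(lambda: 0, timeout_secs=0.01))    # 0 (falsey: timed out)
print(wait_until(lambda: 7, timeout_secs=0.01,
                 predicate=lambda x: x > 10))      # None (failed predicate)
```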

After you send a remote-control signal to the device-under-test it usually takes a few frames to react, so a test script like this would probably fail:

stbt.press("KEY_EPG")
assert stbt.match("guide.png")

Instead, use this:

import stbt
from stbt import wait_until

stbt.press("KEY_EPG")
assert wait_until(lambda: stbt.match("guide.png"))

wait_until allows composing more complex conditions, such as:

# Wait until something disappears:
assert wait_until(lambda: not stbt.match("xyz.png"))

# Assert that something doesn't appear within 10 seconds:
assert not wait_until(lambda: stbt.match("xyz.png"))

# Assert that two images are present at the same time:
assert wait_until(lambda: stbt.match("a.png") and stbt.match("b.png"))

# Wait but don't raise an exception if the image isn't present:
if not wait_until(lambda: stbt.match("xyz.png")):
    do_something_else()

# Wait for a menu selection to change. Here ``Menu`` is a `FrameObject`
# subclass with a property called `selection` that returns the name of
# the currently-selected menu item. The return value (``menu``) is an
# instance of ``Menu``.
menu = wait_until(Menu, predicate=lambda x: x.selection == "Home")

# Wait for a match to stabilise position, returning the first stable
# match. Used in performance measurements, for example to wait for a
# selection highlight to finish moving:
keypress = stbt.press("KEY_DOWN")
match_result = wait_until(lambda: stbt.match("selection.png"),
                          predicate=lambda x: x and x.region,
                          stable_secs=2)
assert match_result
match_time = match_result.time  # this is the first stable frame
print("Transition took %s seconds" % (match_time - keypress.end_time))

Release notes

Changes to the stbt core Python API are version-controlled. You can specify the version you want to use in your .stbt.conf file. See test_pack.stbt_version in the Configuration Reference.

We generally expect that upgrading to new versions will be safe; we strive to maintain backwards compatibility. But there may be some edge cases that affect some users, so this mechanism allows you to upgrade in a controlled manner, and to test the upgrade on a branch first.

v33

13 July 2022

Major new features:

  • v33 has a new test-runner environment based on Ubuntu 22.04 and Python 3.10. The most notable changes are:

    Software                 v32                       v33
    Ubuntu                   18.04                     22.04
    Python                   3.6                       3.10
    Tesseract (OCR engine)   4.00~git2288-10f4998a-2   4.1.1
    OpenCV                   3.2                       4.5
    Pylint                   1.8                       2.14
    ADB                      8.1                       29.0

    Any third-party packages that you install using apt in your setup script may be upgraded to a newer version too.

  • Python 2 is not supported. Customers who are still using Python 2 should continue using v32 or earlier. To upgrade see our guide: Porting a test-pack to Python 3.

  • New Mask API to construct masks from regions. You can add, subtract or invert regions to construct a mask. This is much more convenient than creating a mask PNG in an image editor. See Regions and Masks for examples.

    Any API with a mask parameter can take a single Region, or a complex mask constructed from Regions as described above, or a filename of a black-and-white PNG image. Previously, the mask parameter could only take a filename.

  • Any API that takes a color can take a web-style “#rrggbb” string (for example "#f76600"). Previously colors had to be specified as a (blue, green, red) tuple of ints. Now both formats are accepted.

  • stbt.find_regions_by_color: New API to find any GUI element of a solid color (for example a button, or a “focus” or “selection” border).

  • stbt.segment: New API to find distinct foreground elements (such as lines of text). It’s pronounced like the verb (segMENT) not the noun.

  • New stbt.android API for interacting with AndroidTV devices using ADB.

  • New stbt.Roku API for interacting with Roku devices over the network using Roku’s External Control Protocol.

  • stbt.find_file: New API to resolve a filename relative to the python file that’s calling it. It uses the same search algorithm as stbt.load_image, but it works with any type of file.

  • stbt.MultiPress: New API to enter text using a numeric keypad using the ITU-T E.161 mapping.

Changes in behaviour since v32:

  • Dropped support for Python 2.

  • Debug logging from stbt APIs uses Python’s logging framework. Each debug line now starts with the logger name and logging level (for example “DEBUG:stbt:”).

  • Reworked stbt.android API. Previously this was a private, undocumented API in _stbt.android but a small number of customers were already using it. The changes are:

    • Changed order & defaults of AdbDevice constructor arguments to make it more suitable for Android TV devices (as opposed to mobile devices).

    • Made AdbDevice.adb more like subprocess.run.

    • Support ADB host key committed to test-pack, to support stateless test-runners. From the root of your test-pack (git checkout) run adb keygen config/android/adbkey and commit the config/android directory to git. Warning: Anyone with read access to your test-pack git repository and with network access to your AndroidTV devices will be able to control them using ADB.

  • stbt.find_selection_from_background: The region attribute of the return value is set even if it failed the max_size / min_size checks. Previously region would have been None.

    If you have code like this:

    @property
    def selection(self):
        return stbt.find_selection_from_background(...).region  # Wrong in v33!
    

    …rewrite it to this:

    @property
    def selection(self):
        sel = stbt.find_selection_from_background(...)
        if sel:
            return sel.region
    
  • stbt.Keyboard: Removed the first parameter of the constructor. Since v32 it would raise an exception if you passed this parameter, so nobody should be using it by now. All the remaining parameters must be specified by keyword.

  • stbt.load_image: The returned image is read only; call its copy() method to make a writeable copy if you need to modify it.

Deprecated APIs:

Minor additions, bugfixes & improvements:

  • stbt.Frame and stbt.Image:

    • Add width, height, and region properties.

    • Fix IndexError in __repr__ when the Frame or Image has undergone numpy operations that change its shape (such as numpy.max, which will preserve the stbt.Image or stbt.Frame type of its argument).

  • stbt.FrameObject:

    • The object’s __repr__ only prints the values of properties that have already been calculated. That is, it doesn’t trigger evaluation of all public properties.

    • The __repr__ shows the object’s frame (self._frame) so you can see the timestamp of the frame associated with each FrameObject instance.

    • Fix comparison operators (== and !=). Previously they would raise TypeError if either operand had a property that returned None; or they could return the wrong result if comparing an instance of a class F against an instance of F’s subclass.

    • Remove ordering operators (<, etc). They were buggy and there’s no use-case for ordering Page Object instances.

  • stbt.RmsVolumeResult: Added dBov method to convert the RMS amplitude to decibels.

  • stbt.Keyboard:

    • Added type stbt.Keyboard.Key: This is the type returned from Keyboard.find_key. Previously it was an opaque, private type; now it is a public, documented API. This is so that you can use it in type annotations for your FrameObject properties.

    • Better support for slow/laggy keyboards:

      • Recover from missed or double keypresses by re-calculating the path from the current state of the device-under-test. To disable this behaviour specify retries=0 when calling Keyboard.enter_text or Keyboard.navigate_to. retries defaults to 2.

      • Increased default navigate_timeout from 20 to 60.

      • Wait longer for the selection to reach the final target when we’re not verifying every keypress.

    • Better error message when user’s Page Object’s selection property returns None (it’s a bug in your Page Object if it says is_visible==True but selection==None).

  • stbt.load_image:

    • Cache the last 5 loaded images. This will avoid repeating the same PNG decoding for every frame when you do something like stbt.wait_until(lambda: stbt.match("reference.png")).

    • New color_channels parameter, replacing flags which is now deprecated.

    • Raise FileNotFoundError with the correct errno, instead of IOError without an errno. Note that FileNotFoundError is a subclass of IOError.

    • Normalize the alpha channel (if any) so that each pixel is either fully transparent (0) or fully opaque (255). Previously this normalization was done in stbt.match.

  • stbt.match: Fixed the position of the region drawn in the Object Repository “Debug” tool.

  • stbt.ocr: corrections parameter: Fix matching non-word characters at word boundaries.

  • stbt.press: The key argument can be an Enum (press will use the Enum’s value, which must be a string).

  • stbt.press_and_wait: The return value has new started, complete, and stable attributes. This is often clearer than checking the value of the status attribute:

    transition = stbt.press_and_wait("KEY_OK")
    if not transition.started:
        ...
    # versus:
    # if transition.status == stbt.TransitionStatus.START_TIMEOUT:
    
  • stbt.Size: New helper type. It’s a tuple with width and height.

  • pylint plugin: Increase Astroid’s inference limit to fix various false positives.

v32

1 October 2020.

Warning

When upgrading the stb-tester package to v32 locally with pip (for your IDE) please follow these steps to uninstall the previous version first:

pip uninstall stb-tester stbt-premium-stubs stbt-extra-stubs
pip install stb-tester

Major new features:

  • stbt.Keyboard: Support keyboards with multiple modes (for example lowercase, uppercase, and symbols).

  • stbt.find_selection_from_background: New function to detect if a page is visible, and simultaneously find the position of the current “selection” or “highlight” on the page.

  • stbt.ocr:

    • Calls to Tesseract are cached if all the parameters are identical (including all the pixels in the frame & region). This cache is persisted on disk between test-jobs. This can greatly speed up calls to ocr when reading common text, for example when navigating menus.

    • New corrections parameter: A dict of {bad: good} mappings to correct known OCR mistakes.

    • New function stbt.apply_ocr_corrections to apply the same corrections to any string — useful for post-processing old test artifacts using new corrections.

    • New function stbt.set_global_ocr_corrections to specify the default value for ocr’s corrections parameter. Call this early in your tests, for example in the top-level of tests/__init__.py.

  • stbt.press_and_wait: New parameter min_size to ignore motion in small regions (useful when you can’t predict the exact position of those regions by specifying a mask).

  • stbt.Region:

  • stbt.detect_pages: New function to find the Page Objects that are relevant for the current video frame.

  • stbt.last_keypress: New function that returns information about the last key-press sent to the device under test.

  • stbt.stop_job: New function to stop a job of multiple testcases or a soak-test.

  • Pylint plugin new checker: Check that the return value from FrameObject.refresh is used (FrameObjects are immutable, so refresh() returns a new object instead of modifying the object it’s called on).

Changes in behaviour since v31:

  • stbt.crop: Implicitly clamp at the edges of the frame, if the region extends beyond the frame. Previously, this would have raised an exception. It still raises ValueError if the region is entirely outside of the frame.

  • stbt.draw_text: Also write text to stderr.

  • stbt.get_config: Allow None as a default value.

  • stbt.is_screen_black: Increase default threshold from 10 to 20.

  • stbt.Keyboard:

    • Changed the internal representation of the Directed Graph. Manipulating the networkx graph directly is no longer supported.

    • Removed Keyboard.parse_edgelist and grid_to_navigation_graph. Instead, first create the Keyboard object, and then use its add_key, add_transition, add_edgelist, and add_grid methods to build the model of the keyboard.

    • Removed the Keyboard.Selection type. Instead, your Page Object’s selection property should return a Key value obtained from Keyboard.find_key.

    • The edgelist format now allows key names with “#” in them. Previously anything starting with “#” was treated as a comment. Now comments are lines starting with “###” (three hashes), optionally preceded by whitespace.

    • Keyboard.enter_text adds a short inter-press delay when entering the same letter twice, because some keyboard implementations ignore the second keypress if pressed too quickly.

  • stbt.load_image:

    • Fix UnicodeDecodeError when filename is utf8-encoded bytes.

    • Allow passing a numpy array.

    • Return type changed from numpy.ndarray to stbt.Image, which is a sub-class of numpy.ndarray with the additional attributes filename, relative_filename, and absolute_filename.

  • stbt.match: Disable the “pyramid” performance optimisation if the reference image has too few non-transparent pixels. This fixes false negatives when the reference image is mostly transparent (for example a thin border of opaque pixels around a large transparent centre).

  • stbt.MatchResult (the return value from stbt.match): The image attribute is now an instance of stbt.Image. Previously it was a string or a numpy array, depending on what you had passed to stbt.match.

  • stbt.ocr and stbt.match_text: If region is entirely outside the frame, raise ValueError instead of returning an empty string. (This is likely to be an error in your test-script’s logic, not desired behaviour.) This is now consistent with all the other image-processing APIs such as stbt.match.

  • stbt.press_and_wait:

    • Now uses the same difference-detection algorithm as stbt.wait_for_motion. This algorithm is more tolerant of small noise-like differences (less than 3 pixels wide). To use the previous algorithm, run the following code early in your test script (for example at the top level of tests/__init__.py):

      stbt.press_and_wait.differ = stbt.StrictDiff
      
    • If you were passing a numpy array for the mask parameter, now it needs to be a single-channel image (greyscale) not a 3-channel image (BGR). (But if you were passing the mask as the filename of an image on disk, you don’t need to change anything.)

v31

19 September 2019.

Major new features:

  • Supports test-scripts written in Python 3. Python 2 is still supported, too. When upgrading your test-pack to v31 you will need to specify a Python version in your test-pack’s .stbt.conf file like this:

    [test_pack]
    stbt_version = 31
    python_version = 3
    

    Valid values are 2 or 3. The test-run environment is Ubuntu 18.04, so you get Python 2.7 or 3.6.

    We recommend Python 3 for all new test-packs. For large existing test-packs we will continue to support Python 2 until all our large customers have migrated.

  • stbt.Keyboard: New API for testing & navigating on-screen keyboards.

  • stbt.Grid: New API for describing grid-like layouts.

Minor additions, bugfixes & improvements:

  • stbt.match: Fix false negative when using MatchMethod.SQDIFF and a reference image that is mostly transparent except around the edges (for example to find a “highlight” or “selection” around some dynamic content).

  • stbt.match: Improve error message when you give it an explicit region that is smaller than the reference image.

  • stbt.ocr: New parameter char_whitelist. Useful when you’re reading text of a specific format, like the time from a clock, a serial number, or a passcode.

  • stbt.press_and_wait: Ignore small moiré-like differences between frames (temporal dithering?) seen with Apple TV.

  • stbt.press_and_wait: Draw motion bounding-box on output video (similar to stbt.wait_for_motion).

  • stbt.press_and_wait: Add key attribute (the name of the key that was pressed) to the return value.

  • stbt.Region: The static methods intersect and bounding_box will fail if called on an instance. That is, instead of calling self.intersect(other) you must call stbt.Region.intersect(self, other). Previously, if called on an instance it would silently return a wrong value.

  • stbt.wait_for_motion: More sensitive to slow motion (such as a slow fade to black) by comparing against the last frame since significant differences were seen, instead of always comparing against the previous frame.

  • stbt lint improvements:

    • New checker stbt-frame-object-get-frame: FrameObject properties must use self._frame, not stbt.get_frame().

    • New checker stbt-frame-object-property-press: FrameObject properties must not have side-effects that change the state of the device-under-test by calling stbt.press() or stbt.press_and_wait().

    • New checker stbt-assert-true: “assert True” has no effect.

    • Teach pylint that assert False is the same as raise AssertionError. This fixes incorrect behaviour of pylint’s “unreachable code” and “inconsistent return statements” checkers.

v30

25 February 2019.

Minimum version supported by the Stb-tester Platform.