stbt Python API

Test cases are Python functions stored in the test-pack git repository under tests/*.py. The function name must begin with test_.

Example

import stbt

# You can import your own helper libraries from the test-pack.
import dialogues


def test_that_pressing_EPG_opens_the_guide():
    # We recommend starting each testcase with setup steps so that
    # the testcase can be run no matter what state the device-under-
    # test is in. Note that you can call other Python functions
    # defined elsewhere in your test-pack.
    if dialogues.modal_dialogue_is_up():
        dialogues.close_modal_dialogue()

    # Send an infrared keypress:
    stbt.press("KEY_EPG")

    # Verify that the device-under-test has reacted appropriately:
    stbt.wait_for_match("guide.png")

Controlling the system-under-test

  • press: Send the specified key-press to the system-under-test
  • press_until_match: Call press as many times as necessary to find the specified image

Some devices (such as the Roku and some Smart TVs) can be controlled via HTTP or other network protocols. You can use your favourite Python library to make network requests to such devices (for example, the Python requests library). To install third-party Python libraries see Customising the test-run environment.
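For example, Roku devices expose an HTTP "External Control Protocol" (ECP) on port 8060 that accepts key-presses as HTTP POSTs. The sketch below is illustrative only: the IP address and the `ecp_url`/`roku_press` helper names are our own inventions, not part of stbt.

```python
ROKU_IP = "192.168.1.23"  # hypothetical address of the device-under-test


def ecp_url(key, ip=ROKU_IP):
    # Roku's External Control Protocol listens on port 8060 and accepts
    # key-presses as HTTP POSTs to /keypress/<key>.
    return "http://%s:8060/keypress/%s" % (ip, key)


def roku_press(key, ip=ROKU_IP):
    # Third-party library; see "Customising the test-run environment".
    import requests
    requests.post(ecp_url(key, ip)).raise_for_status()
```

You would call `roku_press("Home")` from a testcase just as you would call `stbt.press`.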

Verifying the system-under-test’s behaviour

Searching for an image

  • wait_for_match: Search for the specified image in the video stream; raise MatchTimeout if not found within a certain timeout.
  • match: Search for the specified image in a single video frame; return a truthy/falsey MatchResult.
  • match_all: Search for all instances of the specified image in a single video frame.

Use match with assert and wait_until for a more flexible alternative to wait_for_match. For example, to wait for an image to disappear:

stbt.press("KEY_CLOSE")
assert wait_until(lambda: not stbt.match("guide.png"))

Searching for text using OCR (optical character recognition)

  • match_text: Search for the specified text in a single video frame; return a truthy/falsey TextMatchResult.
  • ocr: Read the text present in a video frame.

Searching for motion

  • wait_for_motion: Search for motion in the video stream; raise MotionTimeout if no motion is found within a certain timeout.
  • detect_motion: Generator that yields a sequence of one MotionResult for each frame processed from the system-under-test, indicating whether any motion was detected.

Miscellaneous

  • is_screen_black: Check for the presence of a black screen in a single video frame.

Custom image processing

Stb-tester can give you raw video frames for you to do your own image processing with OpenCV’s “cv2” Python API. Stb-tester’s video frames are numpy.ndarray objects, which is the same format that OpenCV uses.

  • frames: Generator that yields frames captured from the system-under-test. Example usage: for (frame, timestamp) in stbt.frames(): ...
  • get_frame: Return the latest video frame.

To save a frame to disk, use cv2.imwrite. Note that any file you write to the current working directory will appear as an artifact in the test-run results.

Region, mask, and frame

Some of these functions take an optional region parameter that allows you to restrict the search to a specific rectangular region of the video frame. See Region.

Some of these functions take an optional mask parameter that allows you to specify a more complex region than the single rectangle you can specify with region. A mask is a black & white image where white pixels specify which parts of the frame to check, and black pixels specify which parts of the frame to ignore.
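For instance, a mask that checks only the left half of a 720p frame can be generated with numpy (an illustrative sketch; the region chosen here is arbitrary):

```python
import numpy

# A mask is a black & white image: white (255) = check, black (0) = ignore.
# Here we ignore a hypothetical "live TV" area on the right half of the frame.
mask = numpy.zeros((720, 1280), dtype=numpy.uint8)
mask[:, :640] = 255  # check only the left half of the frame

# Save it next to your test scripts, then pass the filename as mask=...
# cv2.imwrite("left-half-mask.png", mask)
```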

The functions that operate on a single frame at a time (match, match_text, ocr, etc) take an optional frame parameter. This is in the OpenCV BGR format, as returned by frames and get_frame or by OpenCV’s cv2.imread. If frame is not specified, a frame will be grabbed from the system-under-test. This is useful for writing unit-tests (self-tests) for those functions. If you write your own helper functions we recommend that you follow this pattern.

Logging

  • draw_text: Write the specified text on this test-run’s video recording.

Anything you write to stdout or stderr appears in the test-run’s logfile in stb-tester’s test-results viewer.

Utilities

  • as_precondition: Mark test failures as test errors in some parts of your testcase.
  • FrameObject: Base class for user-defined Frame Objects. The Frame Object pattern simplifies testcase development and maintenance; Frame Objects are a layer of abstraction between your testcases and the stbt image processing APIs.
  • get_config: Read a value from the test-pack’s Configuration files.
  • wait_until: Wait until a condition becomes true, or until a timeout.
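The polling behaviour of wait_until can be sketched in plain Python (illustrative only; stbt's real implementation and default values may differ):

```python
import time


def wait_until(callable_, timeout_secs=10, interval_secs=0.1):
    # Illustrative sketch: poll `callable_` until it returns a truthy value
    # or the timeout expires; return the last value either way, so that
    # `assert wait_until(...)` fails when the condition never became true.
    expiry = time.time() + timeout_secs
    value = callable_()
    while not value and time.time() < expiry:
        time.sleep(interval_secs)
        value = callable_()
    return value
```

This is why `assert wait_until(lambda: not stbt.match("guide.png"))` works: the lambda's last (falsey) value is returned on timeout and the assert fails.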

Exceptions

If your testcase raises one of the following exceptions, it is considered a test failure:

  • UITestFailure (or a subclass such as MatchTimeout or MotionTimeout)
  • AssertionError (raised by Python's assert statement)

Any other exception is considered a test error. For details see Test failures vs. errors.

API reference

stbt.as_precondition

stbt.as_precondition(message)

Context manager that replaces test failures with test errors.

Stb-tester’s reports show test failures (that is, UITestFailure or AssertionError exceptions) as red results, and test errors (that is, unhandled exceptions of any other type) as yellow results. Note that wait_for_match, wait_for_motion, and similar functions raise a UITestFailure when they detect a failure. By running such functions inside an as_precondition context, any UITestFailure or AssertionError exceptions they raise will be caught, and a PreconditionError will be raised instead.

When running a single testcase hundreds or thousands of times to reproduce an intermittent defect, it is helpful to mark unrelated failures as test errors (yellow) rather than test failures (red), so that you can focus on diagnosing the failures that are most likely to be the particular defect you are looking for. For more details see Test failures vs. errors.

Parameters: message (str) – A description of the precondition. Word this positively: “Channels tuned”, not “Failed to tune channels”.
Raises: PreconditionError if the wrapped code block raises a UITestFailure or AssertionError.

Example:

def test_that_the_on_screen_id_is_shown_after_booting():
    channel = 100

    with stbt.as_precondition("Tuned to channel %s" % channel):
        mainmenu.close_any_open_menu()
        channels.goto_channel(channel)
        power.cold_reboot()
        assert channels.is_on_channel(channel)

    stbt.wait_for_match("on-screen-id.png")
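The replace-failure-with-error behaviour described above can be sketched as a plain Python context manager (the exception classes here are illustrative stand-ins, not stbt's own implementations):

```python
from contextlib import contextmanager


class UITestFailure(Exception):
    pass  # stand-in for stbt's test-failure base class


class PreconditionError(Exception):
    def __init__(self, message, cause):
        super(PreconditionError, self).__init__(
            "Didn't meet precondition '%s' (%s)" % (message, cause))


@contextmanager
def as_precondition(message):
    # Catch test failures (UITestFailure / AssertionError) and re-raise
    # them as a test error (PreconditionError).
    try:
        yield
    except (UITestFailure, AssertionError) as e:
        raise PreconditionError(message, e)
```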

stbt.ConfigurationError

exception stbt.ConfigurationError

Bases: exceptions.Exception

An error with your stbt configuration file.

stbt.detect_motion

stbt.detect_motion(timeout_secs=10, noise_threshold=None, mask=None)

Generator that yields a sequence of one MotionResult for each frame processed from the system-under-test’s video stream.

The MotionResult indicates whether any motion was detected – that is, any difference between two consecutive frames.

Parameters:
  • timeout_secs (int or float or None) – A timeout in seconds. After this timeout the iterator will be exhausted. That is, a for loop like for m in detect_motion(timeout_secs=10) will terminate after 10 seconds. If timeout_secs is None then the iterator will yield frames forever. Note that you can stop iterating (for example with break) at any time.
  • noise_threshold (float) –

    The amount of noise to ignore. This is only useful with noisy analogue video sources. Valid values range from 0 (all differences are considered noise; a value of 0 will never report motion) to 1.0 (any difference is considered motion).

    This defaults to 0.84. You can override the global default value by setting noise_threshold in the [motion] section of stbt.conf.

  • mask (str) – The filename of a black & white image that specifies which part of the image to search for motion. White pixels select the area to search; black pixels select the area to ignore.
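The underlying idea – a difference between consecutive frames, above a noise threshold – can be sketched with numpy. This is illustrative only and not stbt's actual algorithm:

```python
import numpy


def simple_motion(prev_frame, frame, noise_threshold=0.84, mask=None):
    # Illustrative only: report motion if any pixel differs by more than
    # (1 - noise_threshold) of the full 8-bit range between two frames.
    diff = numpy.abs(frame.astype(int) - prev_frame.astype(int))
    if mask is not None:
        diff[mask == 0] = 0  # ignore areas that are black in the mask
    return bool((diff > (1.0 - noise_threshold) * 255).any())
```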

stbt.draw_text

stbt.draw_text(text, duration_secs=3)

Write the specified text to the output video.

Parameters:
  • text (str) – The text to write.
  • duration_secs (int or float) – The number of seconds to display the text.

stbt.Frame

class stbt.Frame

A frame of video.

A Frame is what you get from stbt.get_frame and stbt.frames. It is a subclass of numpy.ndarray, which is the type that OpenCV uses to represent images. Data is stored in 8-bit, 3 channel BGR format.

In addition to the members inherited from numpy.ndarray, Frame defines the following attributes:

  • time (float) - the wall-clock time that this video-frame was captured, as the number of seconds since the unix epoch (1970-01-01T00:00:00Z). This is the same format used by the Python standard library function time.time.

Frame was added in stb-tester v26.

stbt.FrameObject

class stbt.FrameObject(frame=None)

Base class for user-defined Frame Objects.

The Frame Object pattern is used to simplify testcase development and maintenance. Frame Objects are a layer of abstraction between your testcases and the stbt image processing APIs. They are easy to write and cheap to maintain.

A Frame Object extracts information from a frame of video, typically by calling stbt.ocr or stbt.match. All of your testcases use these objects rather than using ocr or match directly. A Frame Object translates from the vocabulary of low-level image processing functions and regions (like stbt.ocr(region=stbt.Region(213, 23, 200, 36))) to the vocabulary of high-level features and user-facing concepts (like programme_title).

FrameObject is a base class that makes it easier to create well-behaved Frame Objects. Your own Frame Object classes should:

  1. Derive from FrameObject.
  2. Define an is_visible property that returns True or False.
  3. Define any other properties for information that you want to extract from the frame.
  4. Take care to pass self._frame into any image processing function you call.

A Frame Object instance is considered “truthy” if it is visible. Any other properties (apart from is_visible) will return None if the object isn’t visible.

Frame Objects are immutable, because they represent information about a specific frame of video. If you define any methods that change the state of the device-under-test, they should return a new Frame Object instead of modifying self.

Each property will be cached the first time it is referenced. This allows writing test cases in a natural way, while ensuring that expensive operations like ocr are only performed once per frame.

The FrameObject base class defines the following methods:

  • __init__ – The default constructor takes an optional frame; if the frame is not provided, it will grab a frame from the device-under-test.
  • __nonzero__ – Delegates to is_visible. The object will only be considered True if it is visible.
  • __repr__ – The object’s string representation includes all the user-defined public properties.
  • __hash__ and __cmp__ – Two instances of the same FrameObject type are considered equal if the values of all the public properties match, even if the underlying frame is different.

For more background information on Frame Objects see Improve black-box testing agility: meet the Frame Object pattern.
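The caching and truthiness behaviour can be sketched in plain Python (an illustrative stand-in, not stbt's actual base class):

```python
class cached_property(object):
    # Illustrative: compute a property once per instance, then cache it
    # in the instance's __dict__ under "_cache".
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, _type=None):
        if obj is None:
            return self
        cache = obj.__dict__.setdefault("_cache", {})
        if self.func.__name__ not in cache:
            cache[self.func.__name__] = self.func(obj)
        return cache[self.func.__name__]


class SketchFrameObject(object):
    # Minimal stand-in for stbt.FrameObject: truthiness delegates to
    # is_visible, which subclasses must define as a property.
    def __init__(self, frame=None):
        self._frame = frame

    def __bool__(self):
        return bool(self.is_visible)

    __nonzero__ = __bool__  # Python 2 spelling
```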

Example

We’ll create a Frame Object class for the dialog box we see in this image that we’ve captured from our (hypothetical) set-top box:

screenshot of dialog box

Here’s our Frame Object class:

>>> class Dialog(FrameObject):
...     @property
...     def is_visible(self):
...         return bool(self._info)
...
...     @property
...     def title(self):
...         return ocr(region=Region(396, 249, 500, 50), frame=self._frame)
...
...     @property
...     def message(self):
...         right_of_info = Region(
...             x=self._info.region.right, y=self._info.region.y,
...             width=390, height=self._info.region.height)
...         return ocr(region=right_of_info, frame=self._frame) \
...                .replace('\n', ' ')
...
...     @property
...     def _info(self):
...         return match('../tests/info.png', frame=self._frame)

Let’s take this line by line:

class Dialog(FrameObject):

We create a class deriving from the FrameObject base class.

@property
def is_visible(self):
    return bool(self._info)

All Frame Objects must define the is_visible property, which will determine the truthiness of the object. Returning True from this property indicates that this Frame Object class can be used with the provided frame and that the values of the other properties are likely to be valid.

In this example we only return True if we see the “info” icon that appears on each dialog box. The actual work is delegated to the private property _info defined below.

It’s a good idea to return simple types from these properties rather than a MatchResult, to make the __repr__ cleaner and to preserve equality properties.

@property
def title(self):
    return ocr(region=Region(396, 249, 500, 50), frame=self._frame)

The base class provides a self._frame member. Here we’re using stbt.ocr to extract the dialog’s title text from this frame. This is the basic form that many Frame Object properties will take.

This property demonstrates an advantage of Frame Objects. Your testcases now look like this:

assert Dialog().title == "Information"

instead of this:

assert stbt.ocr(region=stbt.Region(396, 249, 500, 50)) == "Information"

This is clearer because it reveals the intention of the testcase author (we’re looking for the word in the title of the dialog). It is also easier (cheaper) to maintain: If the position of the title moves, you only need to update the implementation of Dialog.title; you won’t need to change any of your testcases.

When defining Frame Objects you must take care to pass self._frame into every call to an image processing function (like our title property does when it calls ocr, above). Otherwise the return values won’t correspond to the frame you were expecting.

@property
def message(self):
    right_of_info = Region(
        x=self._info.region.right, y=self._info.region.y,
        width=390, height=self._info.region.height)
    return ocr(region=right_of_info, frame=self._frame) \
           .replace('\n', ' ')

This property demonstrates an advantage of Frame Objects over stand-alone helper functions. We are using the position of the “info” icon to find this message. Because the private _info property is shared between this property and is_visible we don’t need to compute it twice – the FrameObject base class will remember the value from the first time it was computed.

@property
def _info(self):
    return match('../tests/info.png', frame=self._frame)

This is a private property because its name starts with _. It will not appear in __repr__ nor count toward equality comparisons, but the result from it will still be cached. This is useful for sharing intermediate values between your public properties, particularly if they are expensive to calculate. In this example we use _info from is_visible and message.

You wouldn’t want this to be a public property because it returns a MatchResult which includes the entire frame passed into match.

Using our new Frame Object class

The default constructor will grab a frame from the device-under-test. This allows you to use Frame Objects with wait_until like this:

dialog = wait_until(Dialog)
assert 'great' in dialog.message

We can also explicitly pass in a frame. This is mainly useful for unit-testing your Frame Objects.

The examples below will use these example frames:

  • dialog
  • no_dialog
  • dialog_bunnies
  • no_dialog_bunnies
  • dialog_fab

Some basic operations:

>>> print dialog.message
This set-top box is great
>>> print dialog_fab.message
This set-top box is fabulous

FrameObject defines truthiness of your objects based on the mandatory is_visible property:

>>> bool(dialog)
True
>>> bool(no_dialog)
False

If is_visible is falsey, all the rest of the properties will be None:

>>> print no_dialog.message
None

This enables usage like:

assert wait_until(lambda: Dialog().title == 'Information')

FrameObject defines __repr__ so that you don’t have to. It looks like this:

>>> dialog
Dialog(is_visible=True, message=u'This set-top box is great', title=u'Information')
>>> dialog_fab
Dialog(is_visible=True, message=u'This set-top box is fabulous', title=u'Information')
>>> no_dialog
Dialog(is_visible=False)

This makes it convenient to use doctests for unit-testing your Frame Objects.

Frame Objects with identical property values are equal, even if the backing frames are not:

>>> assert dialog == dialog
>>> assert dialog == dialog_bunnies
>>> assert dialog != dialog_fab
>>> assert dialog != no_dialog

This can be useful for detecting changes in the UI (while ignoring live TV in the background) or waiting for the UI to stop changing before interrogating it.

All falsey Frame Objects of the same type are equal:

>>> assert no_dialog == no_dialog
>>> assert no_dialog == no_dialog_bunnies

FrameObject defines __hash__ too so you can store them in a set or in a dict:

>>> {dialog}
set([Dialog(is_visible=True, message=u'This set-top box is great', title=u'Information')])
>>> len({no_dialog, dialog, dialog, dialog_bunnies})
2

FrameObject was added in stb-tester v25.

stbt.frames

stbt.frames(timeout_secs=None)

Generator that yields video frames captured from the system-under-test.

Parameters: timeout_secs (int or float or None) – A timeout in seconds. After this timeout the iterator will be exhausted. That is, a for loop like for f, t in frames(timeout_secs=10) will terminate after 10 seconds. If timeout_secs is None (the default) then the iterator will yield frames forever. Note that you can stop iterating (for example with break) at any time.
Returns: A (frame, timestamp) tuple for each video frame:
  • frame is a stbt.Frame (that is, an OpenCV image).
  • timestamp (int): DEPRECATED. Timestamp in nanoseconds. Use frame.time instead.

Changed in stb-tester v26: The first item of the tuple is a stbt.Frame instead of a numpy.ndarray.

stbt.get_config

stbt.get_config(section, key, default=None, type_=str)

Read the value of key from section of the test-pack configuration file.

For example, if your configuration file looks like this:

[test_pack]
stbt_version = 27

[my_company_name]
stb_ip = 192.168.1.23

then you can read the value from your test script like this:

stb_ip = stbt.get_config("my_company_name", "stb_ip")

This searches in the .stbt.conf file at the root of your test-pack, and in the config/test-farm/<hostname>.conf file matching the hostname of the stb-tester device where the script is running. Values in the host-specific config file override values in .stbt.conf. See Configuration files for more details.

Raises ConfigurationError if the specified section or key is not found, unless default is specified (in which case default is returned).
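The lookup-with-override behaviour can be sketched with Python's standard ConfigParser (illustrative only; `read_config`, its `filenames` parameter, and the file handling are our own stand-ins, not stbt's implementation):

```python
try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import SafeConfigParser as ConfigParser  # Python 2


class ConfigurationError(Exception):
    pass  # stand-in for stbt.ConfigurationError


def read_config(section, key, default=None, filenames=(".stbt.conf",)):
    # Illustrative sketch: files later in `filenames` override earlier
    # ones, mirroring how the host-specific config overrides .stbt.conf.
    parser = ConfigParser()
    parser.read(filenames)  # silently skips files that don't exist
    if parser.has_option(section, key):
        return parser.get(section, key)
    if default is not None:
        return default
    raise ConfigurationError("[%s] %s not found" % (section, key))
```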

stbt.get_frame

stbt.get_frame()

Grabs a video frame captured from the system-under-test.

Returns: The latest video frame in OpenCV format (a stbt.Frame).

Changed in stb-tester v26: Returns a stbt.Frame instead of a numpy.ndarray.

stbt.is_screen_black

stbt.is_screen_black(frame=None, mask=None, threshold=None)

Check for the presence of a black screen in a video frame.

Parameters:
  • frame (numpy.ndarray) – If this is specified it is used as the video frame to check; otherwise a new frame is grabbed from the system-under-test. This is an image in OpenCV format (for example as returned by frames and get_frame).
  • mask (str) – The filename of a black & white image mask. It must have white pixels for parts of the frame to check and black pixels for any parts to ignore.
  • threshold (int) – Even when a video frame appears to be black, the intensity of its pixels is not always 0. To differentiate almost-black from non-black pixels, a binary threshold is applied to the frame. The threshold value is in the range 0 (black) to 255 (white). The global default can be changed by setting threshold in the [is_screen_black] section of stbt.conf.

Before stb-tester v22, the frame parameter had to be passed in explicitly by the caller.
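The thresholding idea can be sketched with numpy (illustrative only; the default threshold of 10 used here is an assumption, not stbt's configured default):

```python
import numpy


def simple_is_black(frame, mask=None, threshold=10):
    # Illustrative only: a frame is "black" if every checked pixel's
    # intensity is at or below the binary threshold (0 = black, 255 = white).
    checked = frame if mask is None else frame[mask > 0]
    return bool((checked <= threshold).all())
```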

stbt.match

stbt.match(image, frame=None, match_parameters=None, region=Region.ALL)

Search for an image in a single video frame.

Parameters:
  • image (string or numpy.ndarray) –

    The image to search for. It can be the filename of a png file on disk, or a numpy array containing the pixel data in 8-bit BGR format.

    8-bit BGR numpy arrays are the same format that OpenCV uses for images. This allows generating templates on the fly (possibly using OpenCV) or searching for images captured from the system-under-test earlier in the test script.

  • frame (numpy.ndarray) – If this is specified it is used as the video frame to search in; otherwise a new frame is grabbed from the system-under-test. This is an image in OpenCV format (for example as returned by frames and get_frame).
  • match_parameters (MatchParameters) – Customise the image matching algorithm. See MatchParameters for details.
  • region (Region) – Only search within the specified region of the video frame.
Returns: A MatchResult, which will evaluate to true if a match was found, false otherwise.

stbt.match_all

stbt.match_all(image, frame=None, match_parameters=None, region=Region.ALL)

Search for all instances of an image in a single video frame.

Arguments are the same as match.

Returns: An iterator of zero or more MatchResult objects (one for each position in the frame where image matches).

Examples:

all_buttons = list(stbt.match_all("button.png"))
for match_result in stbt.match_all("button.png"):
    # do something with match_result here
    ...

match_all was added in stb-tester v25.

stbt.match_text

stbt.match_text(text, frame=None, region=Region.ALL, mode=OcrMode.PAGE_SEGMENTATION_WITHOUT_OSD, lang='eng', tesseract_config=None, case_sensitive=False)

Search for the specified text in a single video frame.

This can be used as an alternative to match, searching for text instead of an image.

Parameters:
  • text (unicode) – The text to search for.
  • frame – See ocr.
  • region – See ocr.
  • mode – See ocr.
  • lang – See ocr.
  • tesseract_config – See ocr.
  • case_sensitive (bool) – Ignore case if False (the default).
Returns: A TextMatchResult, which will evaluate to true if the text was found, false otherwise.

For example, to select a button in a vertical menu by name (in this case “TV Guide”):

m = stbt.match_text("TV Guide")
assert m.match
while not stbt.match('selected-button.png').region.contains(m.region):
    stbt.press('KEY_DOWN')

The case_sensitive parameter was added in v27. Previously it always ignored case.

stbt.MatchParameters

class stbt.MatchParameters(match_method=None, match_threshold=None, confirm_method=None, confirm_threshold=None, erode_passes=None)

Parameters to customise the image processing algorithm used by match, wait_for_match, and press_until_match.

You can change the default values for these parameters by setting a key (with the same name as the corresponding python parameter) in the [match] section of stbt.conf. But we strongly recommend that you don’t change the default values from what is documented here.

You should only need to change these parameters when you’re trying to match a template image that isn’t actually a perfect match – for example if there’s a translucent background with live TV visible behind it; or if you have a template image of a button’s background and you want it to match even if the text on the button doesn’t match.

Parameters:
  • match_method (str) –

    The method to be used by the first pass of stb-tester’s image matching algorithm, to find the most likely location of the “template” image within the larger source image.

    Allowed values are “sqdiff-normed”, “ccorr-normed”, and “ccoeff-normed”. For the meaning of these parameters, see OpenCV’s cvMatchTemplate.

    We recommend that you don’t change this from its default value of “sqdiff-normed”.

  • match_threshold (float) – How strong a result from the first pass must be, to be considered a match. Valid values range from 0 (anything is considered to match) to 1 (the match has to be pixel perfect). This defaults to 0.8.
  • confirm_method (str) –

    The method to be used by the second pass of stb-tester’s image matching algorithm, to confirm that the region identified by the first pass is a good match.

    The first pass often gives false positives (it reports a “match” for an image that shouldn’t match). The second pass is more CPU-intensive, but it only checks the position of the image that the first pass identified. The allowed values are:

    • “none”: Do not confirm the match. Assume that the potential match found is correct.
    • “absdiff”: Compare the absolute difference of each pixel from the template image against its counterpart from the candidate region in the source video frame.
    • “normed-absdiff”: Normalise the pixel values from both the template image and the candidate region in the source video frame, then compare the absolute difference as with “absdiff”. This gives better results with low-contrast images. We recommend setting this as the default confirm_method in stbt.conf, with a confirm_threshold of 0.30.

  • confirm_threshold (float) –

    The maximum allowed difference between any given pixel from the template image and its counterpart from the candidate region in the source video frame, as a fraction of the pixel’s total luminance range.

    Valid values range from 0 (more strict) to 1.0 (less strict). Useful values tend to be around 0.16 for the “absdiff” method, and 0.30 for the “normed-absdiff” method.

  • erode_passes (int) – After the “absdiff” or “normed-absdiff” absolute difference is taken, stb-tester runs an erosion algorithm that removes single-pixel differences to account for noise. Useful values are 1 (the default) and 0 (to disable this step).
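The second-pass “absdiff” confirmation can be sketched with numpy (illustrative only; the erosion step is omitted and this is not stbt's exact implementation):

```python
import numpy


def absdiff_confirm(template, candidate, confirm_threshold=0.16):
    # Illustrative sketch of the "absdiff" confirm method: every pixel of
    # the candidate region must be within confirm_threshold of its
    # counterpart in the template, as a fraction of the 0-255 range.
    diff = numpy.abs(template.astype(int) - candidate.astype(int)) / 255.0
    return bool((diff <= confirm_threshold).all())
```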

stbt.MatchResult

class stbt.MatchResult

The result from match.

  • time (float): The time at which the video-frame was captured in seconds since 1970-01-01T00:00Z. This timestamp can be compared with system time (time.time()).
  • match: Boolean result, the same as evaluating MatchResult as a bool. That is, if match_result: will behave the same as if match_result.match:.
  • region: The Region in the video frame where the image was found.
  • first_pass_result: Value between 0 (poor) and 1.0 (excellent match) from the first pass of stb-tester’s two-pass image matching algorithm (see MatchParameters for details).
  • frame (Frame or numpy.ndarray): The video frame that was searched, as given to match.
  • image: The template image that was searched for, as given to match.
  • timestamp (int): DEPRECATED. Timestamp in nanoseconds. Use time instead.

The time attribute was added in stb-tester v26.

stbt.MatchTimeout

exception stbt.MatchTimeout

Bases: _stbt.core.UITestFailure

Exception raised by wait_for_match.

  • screenshot: The last video frame that wait_for_match checked before timing out.
  • expected: Filename of the image that was being searched for.
  • timeout_secs: Number of seconds that the image was searched for.

stbt.MotionResult

class stbt.MotionResult

The result from detect_motion and wait_for_motion.

  • time (float): The time at which the video-frame was captured in seconds since 1970-01-01T00:00Z. This timestamp can be compared with system time (time.time()).
  • motion: Boolean result, the same as evaluating MotionResult as a bool. That is, if result: will behave the same as if result.motion:.
  • region: The Region of the video frame that contained the motion. None if no motion detected.
  • timestamp (int): DEPRECATED. Timestamp in nanoseconds. Use time instead.

The time attribute was added in stb-tester v26.

stbt.MotionTimeout

exception stbt.MotionTimeout

Bases: _stbt.core.UITestFailure

Exception raised by wait_for_motion.

  • screenshot: The last video frame that wait_for_motion checked before timing out.
  • mask: Filename of the mask that was used, if any.
  • timeout_secs: Number of seconds that motion was searched for.

stbt.ocr

stbt.ocr(frame=None, region=Region.ALL, mode=OcrMode.PAGE_SEGMENTATION_WITHOUT_OSD, lang='eng', tesseract_config=None, tesseract_user_words=None, tesseract_user_patterns=None)

Return the text present in the video frame as a Unicode string.

Perform OCR (Optical Character Recognition) using the “Tesseract” open-source OCR engine.

Parameters:
  • frame – The video frame to process. If not specified, take a frame from the system-under-test.
  • region (Region) – Only search within the specified region of the video frame.
  • mode (OcrMode) – Tesseract’s layout analysis mode.
  • lang (str) – The three-letter ISO-639-3 language code of the language you are attempting to read; for example “eng” for English or “deu” for German. More than one language can be specified by joining with ‘+’; for example “eng+deu” means that the text to be read may be in a mixture of English and German. Defaults to English.
  • tesseract_config (dict) – Allows passing configuration down to the underlying OCR engine. See the tesseract documentation for details.
  • tesseract_user_words (list of unicode strings) – List of words to be added to the tesseract dictionary. To replace the tesseract system dictionary altogether, also set tesseract_config={'load_system_dawg': False, 'load_freq_dawg': False}.
  • tesseract_user_patterns (list of unicode strings) –

    List of patterns to add to the tesseract dictionary. The tesseract pattern language corresponds roughly to the following regular expressions:

    tesseract  regex
    =========  ===========
    \c         [a-zA-Z]
    \d         [0-9]
    \n         [a-zA-Z0-9]
    \p         [:punct:]
    \a         [a-z]
    \A         [A-Z]
    \*         *
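The table above can be turned into a small converter for experimenting with patterns locally. This is a hypothetical helper, not part of stbt; the POSIX [:punct:] class is approximated here with Python's string.punctuation:

```python
import re
import string

# Approximate Python-regex equivalents of the tesseract pattern classes.
_CLASSES = {
    "c": "[a-zA-Z]",
    "d": "[0-9]",
    "n": "[a-zA-Z0-9]",
    "p": "[%s]" % re.escape(string.punctuation),  # approximates [:punct:]
    "a": "[a-z]",
    "A": "[A-Z]",
    "*": "*",
}


def tesseract_pattern_to_regex(pattern):
    # Hypothetical helper: translate a tesseract user-pattern into a
    # Python regular expression. Literal characters pass through escaped.
    out = []
    chars = iter(pattern)
    for ch in chars:
        if ch == "\\":
            out.append(_CLASSES[next(chars)])
        else:
            out.append(re.escape(ch))
    return "".join(out)
```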
    

stbt.OcrMode

class stbt.OcrMode

Options to control layout analysis and assume a certain form of image.

For a (brief) description of each option, see the tesseract(1) man page.

ORIENTATION_AND_SCRIPT_DETECTION_ONLY = 0
PAGE_SEGMENTATION_WITH_OSD = 1
PAGE_SEGMENTATION_WITHOUT_OSD_OR_OCR = 2
PAGE_SEGMENTATION_WITHOUT_OSD = 3
SINGLE_COLUMN_OF_TEXT_OF_VARIABLE_SIZES = 4
SINGLE_UNIFORM_BLOCK_OF_VERTICALLY_ALIGNED_TEXT = 5
SINGLE_UNIFORM_BLOCK_OF_TEXT = 6
SINGLE_LINE = 7
SINGLE_WORD = 8
SINGLE_WORD_IN_A_CIRCLE = 9
SINGLE_CHARACTER = 10

stbt.PreconditionError

exception stbt.PreconditionError

Exception raised by as_precondition.

stbt.press

stbt.press(key, interpress_delay_secs=None)

Send the specified key-press to the system under test.

Parameters:
  • key (str) –

    The name of the key/button.

    If you are using infrared control, this is a key name from your lircd.conf configuration file.

    If you are using HDMI CEC control, see the available key names here. Note that some devices might not understand all of the CEC commands in that list.

  • interpress_delay_secs (int or float) –

    The minimum time to wait after a previous key-press, in order to accommodate the responsiveness of the device-under-test.

    This defaults to 0.3. You can override the global default value by setting interpress_delay_secs in the [press] section of stbt.conf.

stbt.press_until_match

stbt.press_until_match(key, image, interval_secs=None, max_presses=None, match_parameters=None)

Call press as many times as necessary to find the specified image.

Parameters:
  • key – See press.
  • image – See match.
  • interval_secs (int or float) –

    The number of seconds to wait for a match before pressing again. Defaults to 3.

    You can override the global default value by setting interval_secs in the [press_until_match] section of stbt.conf.

  • max_presses (int) –

    The number of times to try pressing the key and looking for the image before giving up and raising MatchTimeout. Defaults to 10.

    You can override the global default value by setting max_presses in the [press_until_match] section of stbt.conf.

  • match_parameters – See match.
Returns:

MatchResult when the image is found.

Raises:

MatchTimeout if no match is found after max_presses presses of the key.
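For example, to scroll down a menu until a particular entry becomes visible (a sketch; the key name and the "settings-icon.png" reference image are assumptions):

```python
import stbt

# Press KEY_DOWN up to 15 times, checking for the image after each
# press; raises MatchTimeout if the image never appears:
stbt.press_until_match("KEY_DOWN", "settings-icon.png",
                       interval_secs=2, max_presses=15)
```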

stbt.Region

class stbt.Region

Region(x, y, width=width, height=height) or Region(x, y, right=right, bottom=bottom)

Rectangular region within the video frame.

For example, given the following regions a, b, and c:

- 01234567890123
0 ░░░░░░░░
1 ░a░░░░░░
2 ░░░░░░░░
3 ░░░░░░░░
4 ░░░░▓▓▓▓░░▓c▓
5 ░░░░▓▓▓▓░░▓▓▓
6 ░░░░▓▓▓▓░░░░░
7 ░░░░▓▓▓▓░░░░░
8     ░░░░░░b░░
9     ░░░░░░░░░
>>> a = Region(0, 0, width=8, height=8)
>>> b = Region(4, 4, right=13, bottom=10)
>>> c = Region(10, 4, width=3, height=2)
>>> a.right
8
>>> b.bottom
10
>>> b.contains(c), a.contains(b), c.contains(b)
(True, False, False)
>>> b.extend(x=6, bottom=-4) == c
True
>>> a.extend(right=5).contains(c)
True
>>> a.width, a.extend(x=3).width, a.extend(right=-3).width
(8, 5, 5)
>>> c.replace(bottom=10)
Region(x=10, y=4, right=13, bottom=10)
>>> Region.intersect(a, b)
Region(x=4, y=4, right=8, bottom=8)
>>> Region.intersect(a, b) == Region.intersect(b, a)
True
>>> Region.intersect(c, b) == c
True
>>> print(Region.intersect(a, c))
None
>>> print(Region.intersect(None, a))
None
>>> quadrant = Region(x=float("-inf"), y=float("-inf"), right=0, bottom=0)
>>> quadrant.translate(2, 2)
Region(x=-inf, y=-inf, right=2, bottom=2)
>>> c.translate(x=-9, y=-3)
Region(x=1, y=1, right=4, bottom=3)
>>> Region.intersect(Region.ALL, c) == c
True
>>> Region.ALL
Region.ALL
>>> print(Region.ALL)
Region.ALL
>>> c.above()
Region(x=10, y=-inf, right=13, bottom=4)
>>> c.below()
Region(x=10, y=6, right=13, bottom=inf)
>>> a.right_of()
Region(x=8, y=0, right=inf, bottom=8)
>>> a.right_of(width=2)
Region(x=8, y=0, right=10, bottom=8)
>>> c.left_of()
Region(x=-inf, y=4, right=10, bottom=6)
x

The x coordinate of the left edge of the region, measured in pixels from the left of the video frame (inclusive).

y

The y coordinate of the top edge of the region, measured in pixels from the top of the video frame (inclusive).

right

The x coordinate of the right edge of the region, measured in pixels from the left of the video frame (exclusive).

bottom

The y coordinate of the bottom edge of the region, measured in pixels from the top of the video frame (exclusive).

x, y, right, and bottom can be infinite – that is, float("inf") or -float("inf").

width

The width of the region, measured in pixels.

height

The height of the region, measured in pixels.

static intersect(a, b)
Returns: The intersection of regions a and b, or None if the regions don’t intersect.

Either a or b can be None so intersect is commutative and associative.

contains(other)
Returns: True if other is entirely contained within self.

extend(x=0, y=0, right=0, bottom=0)
Returns: A new region with the edges of the region adjusted by the given amounts.

replace(x=None, y=None, width=None, height=None, right=None, bottom=None)
Returns: A new region with the edges of the region set to the given coordinates.

This is similar to extend, but it takes absolute coordinates within the image instead of adjusting by a relative number of pixels.

Region.replace was added in stb-tester v24.

translate(x=0, y=0)
Returns: A new region with the position of the region adjusted by the given amounts.

above(height=inf)
Returns: A new region above the current region, extending to the top of the frame (or to the specified height).

below(height=inf)
Returns: A new region below the current region, extending to the bottom of the frame (or to the specified height).

right_of(width=inf)
Returns: A new region to the right of the current region, extending to the right edge of the frame (or to the specified width).

left_of(width=inf)
Returns: A new region to the left of the current region, extending to the left edge of the frame (or to the specified width).
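A common use of Region is to restrict where match or match_text searches, either with absolute coordinates or relative to a previous match. A sketch (the 1280x720 frame size and the image names are our own assumptions):

```python
import stbt

# Only search the top half of a 720p frame:
top_half = stbt.Region(x=0, y=0, right=1280, bottom=360)
assert stbt.match("logo.png", region=top_half)

# Search relative to a previous match -- for example, for the
# text immediately below a menu heading:
heading = stbt.match("menu-heading.png")
assert stbt.match_text("Settings", region=heading.region.below(height=100))
```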

stbt.TextMatchResult

class stbt.TextMatchResult

The result from match_text.

  • time (float): The time at which the video-frame was captured in seconds since 1970-01-01T00:00Z. This timestamp can be compared with system time (time.time()).
  • match: Boolean result, the same as evaluating the TextMatchResult as a bool. That is, if result: behaves the same as if result.match:.
  • region: The Region (bounding box) of the text found, or None if no text was found.
  • frame (Frame or numpy.ndarray): The video frame that was searched, as given to match_text.
  • text: The text (unicode string) that was searched for, as given to match_text.
  • timestamp (int): DEPRECATED. Timestamp in nanoseconds. Use time instead.

The time attribute was added in stb-tester v26.
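Because the result is truthy/falsey you can use it directly in an assert or an if statement, then read its attributes. A sketch:

```python
import stbt

result = stbt.match_text("Play")
if result:
    # result.region is the bounding box of the matched text:
    print("Found %r at %r" % (result.text, result.region))
else:
    print("Text not found")
```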

stbt.UITestFailure

exception stbt.UITestFailure

Bases: exceptions.Exception

The test failed because the system under test didn’t behave as expected.

Inherit from this if you need to define your own test-failure exceptions.
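For example, a helper library in your test-pack might define its own failure type so that test reports show a domain-specific error (a sketch; the class name and message are our own):

```python
import stbt

class EpgNotVisible(stbt.UITestFailure):
    """Raised when the EPG guide doesn't appear on screen."""
    def __init__(self):
        super(EpgNotVisible, self).__init__(
            "Expected the EPG guide to be on screen, but it wasn't")
```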

stbt.wait_for_match

stbt.wait_for_match(image, timeout_secs=10, consecutive_matches=1, match_parameters=None, region=Region.ALL)

Search for an image in the system-under-test’s video stream.

Parameters:
  • image – The image to search for. See match.
  • timeout_secs (int or float or None) – A timeout in seconds. This function will raise MatchTimeout if no match is found within this time.
  • consecutive_matches (int) – Forces this function to wait for several consecutive frames with a match found at the same x,y position. Increase consecutive_matches to avoid false positives due to noise, or to wait for a moving selection to stop moving.
  • match_parameters – See match.
  • region – See match.
Returns:

MatchResult when the image is found.

Raises:

MatchTimeout if no match is found after timeout_secs seconds.

The region parameter to wait_for_match was added in stb-tester v24.
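For example, to wait for a moving selection highlight to come to rest before proceeding (a sketch; "selection.png" is an assumed reference image):

```python
import stbt

stbt.press("KEY_DOWN")
# Require 5 consecutive frames matching at the same position, so we
# don't proceed while the highlight is still animating:
stbt.wait_for_match("selection.png", timeout_secs=10, consecutive_matches=5)
```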

stbt.wait_for_motion

stbt.wait_for_motion(timeout_secs=10, consecutive_frames=None, noise_threshold=None, mask=None)

Search for motion in the system-under-test’s video stream.

“Motion” is a difference in pixel values between two consecutive frames.

Parameters:
  • timeout_secs (int or float or None) – A timeout in seconds. This function will raise MotionTimeout if no motion is detected within this time.
  • consecutive_frames (int or str) –

    Considers the video stream to have motion if there were differences between the specified number of consecutive frames. This can be:

    • a positive integer value, or
    • a string in the form “x/y”, where “x” is the number of frames with motion detected out of a sliding window of “y” frames.

    This defaults to “10/20”. You can override the global default value by setting consecutive_frames in the [motion] section of stbt.conf.

  • noise_threshold (float) – See detect_motion.
  • mask (str) – See detect_motion.
Returns:

MotionResult when motion is detected. The MotionResult’s time attribute is the time of the first frame in which motion was detected.

Raises:

MotionTimeout if no motion is detected after timeout_secs seconds.
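For example, to verify that video playback starts after pressing play (a sketch; the key name is an assumption):

```python
import stbt

stbt.press("KEY_PLAY")
# Require motion in 10 out of a sliding window of 20 frames, to avoid
# false positives caused by noise or an animated spinner:
stbt.wait_for_motion(timeout_secs=20, consecutive_frames="10/20")
```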

stbt.wait_until

stbt.wait_until(callable_, timeout_secs=10, interval_secs=0)

Wait until a condition becomes true, or until a timeout.

callable_ is any Python callable, such as a function or a lambda expression. It will be called repeatedly (with a delay of interval_secs seconds between successive calls) until it succeeds (that is, it returns a truthy value) or until timeout_secs seconds have passed. In both cases, wait_until returns the value that callable_ returns.

After you send a remote-control signal to the system-under-test it usually takes a few frames to react, so a test script like this would probably fail:

press("KEY_EPG")
assert match("guide.png")

Instead, use this:

press("KEY_EPG")
assert wait_until(lambda: match("guide.png"))

Note that instead of the above assert wait_until(...) you could use wait_for_match("guide.png"). wait_until is a generic solution that also works with stbt’s other functions, like match_text and is_screen_black.

wait_until allows composing more complex conditions, such as:

# Wait until something disappears:
assert wait_until(lambda: not match("xyz.png"))

# Assert that something doesn't appear within 10 seconds:
assert not wait_until(lambda: match("xyz.png"))

# Assert that two images are present at the same time:
assert wait_until(lambda: match("a.png") and match("b.png"))

# Wait but don't raise an exception:
if not wait_until(lambda: match("xyz.png")):
    do_something_else()

There are some drawbacks to using assert instead of wait_for_match:

  • The exception message won’t contain the reason why the match failed (unless you specify it as a second parameter to assert, which is tedious and we don’t expect you to do it), and
  • The exception won’t have the offending video-frame attached (so the screenshot in the test-run artifacts will be a few frames later than the frame that actually caused the test to fail).

We hope to solve both of the above drawbacks at some point in the future.

wait_until was added in stb-tester v22.