Detection Match Configuration

The Detection match configuration menu contains settings that allow you to adjust the flexibility of the robot’s detection capabilities (in other words, to be more or less accepting of images that differ from the recorded image). This menu also allows you to view the results of the latest wizard run and whether it succeeded or failed. The menu includes the following tabs:

  • Detection Method: Allows you to determine the image/text detection method, the detection match threshold(s), and the size of the search area

  • Detection Behavior: Allows you to include color inversions, limit detection to a specific color and font weight, and optimize detection when screen blocking is activated

  • Last Run Results: Shows you the match percentages detected the last time the wizard or sensor was run from Studio (useful for optimization and debugging purposes)

Accessing the Detection match configuration menu

To access the Detection match configuration menu:

  1. Access the core action’s Position tab

  2. In the Position dropdown list, verify that Detected Object is selected

  3. At the top right of the Detection match area, click Configure

Detection Method

The Detection Method tab allows you to determine the image/text detection method, the detection match threshold, and the size of the search area.

Method
  • Image and text: The robot attempts to identify the click position using: (1) the detected object image; (2) the text inside the image; and (3) the text around the image (default setting)

  • Text only: The robot attempts to identify the click position using only the text inside the detected object image

If all else fails...

When the image and text method is selected, you can elect to use the text only method in case of image match failure. To do so, tick the If image matching fails, use text matching checkbox, and configure the desired settings for this alternate method: inner text match and search area.
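
Conceptually, this fallback is a two-stage check: image matching runs first, and text matching runs only if the image stage fails and the checkbox is ticked. The short Python sketch below illustrates that order with made-up scores and thresholds; the function and parameter names are hypothetical and are not part of the product.

    # Minimal sketch of the image-to-text fallback (hypothetical names and values).
    def locate_click_position(image_score, inner_text_score,
                              image_threshold=0.84, text_threshold=0.70,
                              text_fallback_enabled=True):
        """Return which method identified the click position, or None."""
        # Stage 1: the image and text method.
        if image_score >= image_threshold:
            return "image and text"
        # Stage 2: runs only when "If image matching fails, use text matching" is ticked.
        if text_fallback_enabled and inner_text_score >= text_threshold:
            return "text only (fallback)"
        return None  # the step fails object detection

    # The image match falls short, but the inner text still identifies the position:
    print(locate_click_position(image_score=0.78, inner_text_score=0.91))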

Match threshold(s)

Match threshold represents the accuracy the robot will require when matching the detected object at runtime to the object detected during recording.

  • The higher the match threshold percentage, the closer the match required

  • If the specified match threshold(s) are not met, the step fails object detection

Once you select a detection method, you can set an individual match threshold for each element the robot uses to identify the click position.
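
At its core, each threshold is a simple percentage comparison: the score detected at runtime must meet or exceed the configured value. The lines below illustrate this in Python with arbitrary numbers; they are not the product's code.

    # Illustrative only: a match passes when its runtime score meets the threshold.
    def meets_threshold(runtime_score, threshold):
        return runtime_score >= threshold

    print(meets_threshold(0.86, 0.84))  # True: 86% detected against an 84% threshold
    print(meets_threshold(0.86, 0.90))  # False: a stricter threshold fails the same score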

Image and text

When the image and text method is selected, you can configure the following match thresholds:

  • Image match: The visual image of the object and of any text in it

    • Default threshold = 84%

    • If the image is small, increasing the default percentage is recommended

Decreasing the image match threshold below 70% is not recommended, and in some cases restricted. If the threshold is set below 70%, the robot might click the wrong area of the screen, in which case you must increase the percentage so that the wizard runs properly.

If you use 70% or above and the robot is still unable to detect the correct click position, adding a fallback is recommended.

  • Inner text match: OCR (image-to-text conversion) for text within the detected object (e.g., a sentence on the screen or text inside a button, such as Continue)

    • Default threshold = approximately 60% to 80%, depending on text length

    • You can elect not to use inner text match by clearing its checkbox

    • If the clicked image does not contain text, the inner text match checkbox will be automatically cleared

  • Outer text match: OCR for text surrounding the detected object (used to confirm the location of the object when there are similar images nearby)

    • Default threshold = 50%

    • You can elect not to use outer text match by clearing its checkbox

    • The outer text match threshold affects only a handful of scenarios and, in most cases, won’t be used

    • If outer text could be dynamic, clearing this checkbox is recommended
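
To see how these thresholds combine, the sketch below models the image and text method as a set of optional checks that must all pass, using the default percentages quoted above (84% image, roughly 60% to 80% inner text, 50% outer text). The class and field names are hypothetical and do not reflect the product's configuration format.

    # Hypothetical model of the image and text thresholds (not the product's format).
    from dataclasses import dataclass

    @dataclass
    class ImageAndTextThresholds:
        image: float = 0.84        # default image match threshold
        inner_text: float = 0.70   # default falls roughly between 60% and 80%
        outer_text: float = 0.50   # default outer text match threshold
        use_inner_text: bool = True
        use_outer_text: bool = True

    def step_matches(scores, cfg):
        """Every enabled check must meet its threshold for detection to succeed."""
        checks = [scores["image"] >= cfg.image]
        if cfg.use_inner_text:
            checks.append(scores["inner_text"] >= cfg.inner_text)
        if cfg.use_outer_text:
            checks.append(scores["outer_text"] >= cfg.outer_text)
        return all(checks)

    # Outer text is dynamic on this screen, so its checkbox is cleared:
    cfg = ImageAndTextThresholds(use_outer_text=False)
    print(step_matches({"image": 0.88, "inner_text": 0.75, "outer_text": 0.20}, cfg))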

Text only

  • Inner text match: OCR (image-to-text conversion) for text within the detected object (e.g., a sentence on the screen or text inside a button, such as Continue)

    • Default threshold = approximately 60% to 80%, depending on text length

Search area

The Search area setting instructs the robot which area(s) of the screen to include when searching for the detected object:

  • Click: Search is limited to the click area only

  • Click, Stripe: First the click area is searched. If the click search fails, the search area is expanded to the stripe in which the object is docked

    • Stripe search is applicable only if either vertical or horizontal docking is set. If both or neither direction is set, stripe search is not applicable.

  • Click, Stripe, Full: First the click area is searched. If the click search fails, the stripe area is searched. If both the click and stripe search fail, the search area is expanded to the entire screen.

Smaller search = better performance

For the best performance, use the smallest search area that consistently produces accurate results. If the object location never (or rarely) changes, limiting the search to the click area improves robot speed. If the object location is expected to change, you can expand the search area to include the entire screen to maximize the robot's opportunity to detect it.
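
The escalation above is a simple fallback chain: each area is tried in order and the search stops at the first hit. The Python sketch below illustrates that chain; the area names and the stand-in for the actual image/text search are hypothetical.

    # Hypothetical sketch of the Click -> Stripe -> Full escalation.
    def find_object(search_setting, found_in):
        """Return the first area, in escalation order, where the object is found."""
        order = {
            "Click": ["click"],
            "Click, Stripe": ["click", "stripe"],
            "Click, Stripe, Full": ["click", "stripe", "full screen"],
        }[search_setting]
        for area in order:
            if area in found_in:  # stands in for the real image/text search
                return area
        return None               # detection fails for this step

    # The object has drifted out of the click area but is still inside its stripe:
    print(find_object("Click", found_in={"stripe"}))          # None -> step fails
    print(find_object("Click, Stripe", found_in={"stripe"}))  # "stripe"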

Detection Behavior

The Detection Behavior tab allows you to customize various additional settings to maximize object detection accuracy.

  • Support dual contrast: Ensures detection if an object’s colors are inverted

Example

This tab, when idle (that is, not clicked), displays dark text on a light background.

When the tab is clicked, its colors are inverted, resulting in light text on a dark background.

The Support dual contrast checkbox allows the robot to detect the object in either state, idle or clicked.
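
One way to picture dual contrast support is to compare the on-screen region against both the recorded image and its color-inverted counterpart, and accept whichever scores higher. The sketch below does this for a tiny grayscale patch; the similarity measure and pixel values are made up for illustration and do not describe the actual detection engine.

    # Conceptual illustration of dual contrast matching (made-up values).
    def similarity(a, b):
        """Crude similarity between two equal-length grayscale pixel lists (0-255)."""
        diff = sum(abs(x - y) for x, y in zip(a, b)) / (255 * len(a))
        return 1 - diff

    def dual_contrast_score(region, recorded):
        inverted = [255 - p for p in recorded]  # light-on-dark version of the recording
        return max(similarity(region, recorded), similarity(region, inverted))

    recorded = [20, 20, 230, 230]   # dark text on a light background (idle tab)
    clicked  = [235, 235, 25, 25]   # the same tab with inverted colors (clicked)
    print(similarity(clicked, recorded))           # low score: plain matching fails
    print(dual_contrast_score(clicked, recorded))  # high score: the inverted match is accepted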

  • Match image color: Ensures detection of an object only when its image color matches that of the recorded object. Allows you to prevent detection of objects with similar, but not identical, colors and to exclude minor color variations.

    • When this setting is enabled, a color match percentage is also specified (default = 70%). The higher the percentage, the closer the match required. Adjust this setting to increase or decrease the required color match between the recorded object and the object detected at wizard runtime.

  • Match font weight: Ensures detection of text with the same font weight as that recorded. Allows you to prevent detection of objects whose font weight does not match that of the recorded font (e.g., the robot will detect an object when its text is bold, but ignore it if it is not).

    • When this setting is enabled, a font-weight match percentage is also specified (default = 70%). Adjust this setting to increase or decrease the required font-weight match between the recorded object and the object detected at wizard runtime.

  • Hide gray glass during detection: For rare cases in which the gray glass effect used for screen blocking might interfere with object detection, this setting enables you to temporarily disable the effect only for the duration of object detection

    • Relevant only for steps in which screen blocking is activated

  • Auto adjust offset for web browsers: Adjusts offset during runtime to account for the web browser's zoom-level

  • Language type: Identifies the language type of text in the recorded object to improve accuracy of the text match at runtime
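
For reference, the sketch below gathers these settings into one hypothetical structure so their relationships are easier to see. Only the 70% color match and font-weight defaults come from the text above; every name, type, and checkbox default is a placeholder rather than the product's configuration format.

    # Hypothetical grouping of the Detection Behavior settings (placeholder names and defaults).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DetectionBehavior:
        support_dual_contrast: bool = False
        match_image_color: bool = False
        color_match_threshold: float = 0.70       # used only when match_image_color is enabled
        match_font_weight: bool = False
        font_weight_threshold: float = 0.70       # used only when match_font_weight is enabled
        hide_gray_glass_during_detection: bool = False  # relevant only when screen blocking is activated
        auto_adjust_offset_for_web_browsers: bool = False  # accounts for browser zoom at runtime
        language_type: Optional[str] = None       # language type of the recorded text

    # Example: strict color matching for a button whose color is what distinguishes it
    # from otherwise similar buttons nearby.
    behavior = DetectionBehavior(match_image_color=True, color_match_threshold=0.85)
    print(behavior)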

Last Run Results

The Last Run Results tab displays the match percentages detected the last time the wizard was played from Studio. This data can be used for optimization and debugging purposes by helping you determine the desired match percentages and fix detection errors.
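
A practical way to use these percentages is to compare each last-run score against its configured threshold: a score sitting just below its threshold suggests lowering the threshold slightly or re-recording the object, while a large margin means the setting can safely stay strict. The helper below is a hypothetical illustration of that comparison, not a feature of the product.

    # Hypothetical helper for reading last-run percentages against thresholds.
    def review_last_run(last_run, thresholds):
        for name, score in last_run.items():
            threshold = thresholds[name]
            margin = score - threshold
            status = "passed" if margin >= 0 else "failed"
            print(f"{name}: {score:.0%} vs threshold {threshold:.0%} -> {status} "
                  f"(margin {margin:+.0%})")

    review_last_run(
        last_run={"image": 0.81, "inner_text": 0.92},
        thresholds={"image": 0.84, "inner_text": 0.70},
    )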