Core Action Position
The Nintex RPA Platform uses a proprietary, patented visual detection algorithm to detect objects on the screen and identify where to perform each step's core action.
For a description of how robots detect objects on the screen, see the Object Detection chapter.
From the Position tab of the Properties Pane, you can fine-tune object detection to improve both the speed and the accuracy of each step.
Position dropdown list
By default, robots use the Nintex visual detection algorithm to identify the object to use for each step. The Position dropdown list allows you to disable this default visual detection method (on a per-step basis) and instead set the robot to click at a fixed or relative distance from the window borders. The following methods are available for determining the core action position:
- Detected Object: The robot detects the object to click by its visual appearance (an image or text in the object), regardless of its position on the screen. This is the default setting.
- Fixed Position: The robot clicks a position based on a fixed distance from a selected corner of the window. The fixed distance is determined by the recorded click position.
- Relative Position: The robot clicks a position based on a proportion of the window size. The proportion is determined by the recorded click position. (The sketch after this list illustrates how fixed and relative positioning differ when a window is resized.)
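To make the difference between Fixed Position and Relative Position concrete, the following sketch (illustrative Python, not Nintex code) replays a recorded click both ways. The Window class, function names, and coordinates are hypothetical and chosen only to show the arithmetic.

```python
from dataclasses import dataclass

@dataclass
class Window:
    left: int
    top: int
    width: int
    height: int

def fixed_position(window: Window, offset_x: int, offset_y: int) -> tuple[int, int]:
    """Click at a fixed pixel distance from the window's top-left corner.

    The offset stays the same even if the window is resized.
    """
    return window.left + offset_x, window.top + offset_y

def relative_position(window: Window, ratio_x: float, ratio_y: float) -> tuple[int, int]:
    """Click at a proportion of the window's size.

    The ratios (0.0-1.0) are derived from the recorded click, so the
    click point scales with the window when it is resized.
    """
    return (round(window.left + window.width * ratio_x),
            round(window.top + window.height * ratio_y))

# Recorded click at (300, 150) inside an 800x600 window at the screen origin.
recorded = Window(left=0, top=0, width=800, height=600)
print(fixed_position(recorded, 300, 150))        # (300, 150)
print(relative_position(recorded, 0.375, 0.25))  # (300, 150)

# After the window is resized to 1200x900, the two methods diverge.
resized = Window(left=0, top=0, width=1200, height=900)
print(fixed_position(resized, 300, 150))         # (300, 150) -- same pixel offset
print(relative_position(resized, 0.375, 0.25))   # (450, 225) -- scales with the window
```

As the resized example shows, a fixed position keeps clicking the same pixel offset from the chosen corner, while a relative position shifts the click so it stays at the same proportion of the window.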
Some customizable properties on the Position tab vary based on the selected method of determining the core action position. For detailed information, see:
Other customizable properties are available regardless of which method is selected. For detailed information, see: