Chat

The Chat component provides an interface for end users to exchange messages with a chat source. This chat source is typically an artificial intelligence (AI) chat service that accepts the user's input and uses a large language model (LLM) to produce a response. However, the component can also render model records as chat messages, using models to parse existing messages and create new ones. This is useful for asynchronous messaging systems (like Salesforce Chatter).

Usage

The Chat component can be dropped into the canvas beside any other component. The Chat source property determines whether the component exchanges messages directly with a connection's service or whether it parses and creates model records as chat messages.

Chatting with an AI service

Important: 

Currently only the OpenAI connector is supported.

To use the Chat component as an interface with an AI service, you must:

  1. In the admin UI, create a connection for that service.

  2. In the Composer, set the component's Chat source to Connection response.

  3. Select the appropriate connection.

  4. Select the appropriate language model.

Once configured, end users can enter messages, which are then sent to the AI service for processing. The messages returned by that service are displayed as replies within the component.
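For illustration only, the exchange behind the Connection response source resembles a standard chat completion request to the AI service. Because only the OpenAI connector is currently supported, the sketch below uses OpenAI's public chat completions endpoint; the model name, API key handling, and function name are assumptions for the example and do not describe the component's actual internals.

```typescript
// Illustrative sketch: roughly what happens when an end user submits a message.
// The endpoint and payload follow OpenAI's public chat completions API;
// the model name and API key handling are placeholders.
async function sendChatMessage(userMessage: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The connection created in the admin UI supplies the credential.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // the Language model selected in the Composer
      messages: [{ role: "user", content: userMessage }],
    }),
  });
  const data = await response.json();
  // The assistant's reply is what the Chat component displays as a received message.
  return data.choices[0].message.content;
}
```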

Chatting with model records

The Chat component can render model records as if they were real-time messages. This is intended for objects that have a message field, sender name and ID fields, and a date/time field available; otherwise, the component cannot parse the information needed to render messages.
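As a rough sketch of that mapping, each record's configured fields supply the message body, sender, and timestamp of a rendered message. The record shape and field names below (Body, SenderName, SenderId, CreatedDate) are hypothetical stand-ins for whichever fields you select in the component's properties.

```typescript
// Hypothetical record shape; substitute the fields configured in the Message field,
// Sender display name field, Sender Id field, and Date/Time field properties.
interface MessageRecord {
  Body: string;        // Message field
  SenderName: string;  // Sender display name field
  SenderId: string;    // Sender Id field
  CreatedDate: string; // Date/Time field (ISO 8601 timestamp)
}

interface ChatMessage {
  text: string;
  senderName: string;
  isOwnMessage: boolean; // aligns the bubble with the current end user
  sentAt: Date;
}

// Sketch of how model records could be turned into ordered chat messages.
function toChatMessages(records: MessageRecord[], currentUserId: string): ChatMessage[] {
  return records
    .map((r) => ({
      text: r.Body,
      senderName: r.SenderName,
      isOwnMessage: r.SenderId === currentUserId,
      sentAt: new Date(r.CreatedDate),
    }))
    .sort((a, b) => a.sentAt.getTime() - b.sentAt.getTime());
}
```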

Configuring the component's appearance

In addition to the basic interface to send and receive messages, the Chat component has several configurable elements that can appear at runtime:

  • Welcome message : An initial message displayed within the component that appears to come from the AI service. This message is neither sent from nor sent to the service.
  • Custom error messages : Messages that appear in place of any errors returned by the connection or model network request.
  • AI display name : The name that appears beside the AI chatbot.
  • Loading indicator : An ellipsis-style icon that indicates a message is in transit.
  • Avatars : Small images representing the speakers in the conversation; these are determined by the end user's profile photo. Avatars can be shown by activating the Show avatar property and then configured within Default avatar (for model-sourced components) or AI avatar (for connection response-sourced components).

Properties

General

  • Unique ID : Nintex Apps automatically generates an alphanumeric ID for the component; if preferred, give it a practical name.
  • Style variant: Style variants are created and set in the Design System Studio. Some components have pre-defined variants for a specific aspect of a component's style. Nintex Apps builders can style and customize elements to create their own themes within the DSS. These themes will dynamically populate as selectable values in the Style variant drop-down menu.

    Note: 
    • To refresh available style variant options, click Refresh style variants.

    • This is useful when changes to the design system (such as style variants or variable options) have been made in another browser window or by another user.

  • Chat source : Determines where messages are sent to and received from.
    • Connection response : Messages are exchanged directly with a connection, and the connection's responses are displayed as replies.
    • Model : Messages are queried from and stored as model records.
  • If Chat source is set to Connection response, the following properties appear:

    • Connection : The connection to send messages to and return responses from.
    • Language model : If multiple language models are available from the connection, the one to send messages to.
    • System prompt : A prompt that determines how the LLM should behave in each response, typically used to designate desired behaviors, domain expertise, and tone. The end user is typically unaware of the system prompt's contents (see the example after this properties list).
    • AI display name : The name that appears beside the responses from the AI service.

  • If Chat source is set to Model, the following properties appear:

    • Model: Determines which model to use.
    • Message field: Determines which model field to use as the message body.
    • Sender display name field: Determines which field contains the name to display as the message's sender.
    • Sender Id field: Determines which field contains the message sender's Id.
    • Date/Time field: Determines which model field stores sent/received timestamps.
  • Welcome message : Determines the initial message displayed within the component, which appears to the end user as a received message.
  • Custom error message : Determines the message to display to the user in lieu of errors returned by the connection or model.
  • Disclaimer footnote : An optional label that appears beneath the user input, providing usage disclaimers. Typically notes a language model's limitations and the potential for mistakes in its responses.

    For example: <Service> can make mistakes. Check important info.

  • Show avatar : Determines whether or not avatars appear beside messages at runtime. When enabled, avatar properties are available in the AI avatar or Avatars tab.
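As an example of the System prompt property mentioned above, a system-role message is typically sent ahead of the conversation so the LLM adopts the desired behavior. The prompt wording, model name, and user message below are illustrative only; end users never see the system prompt.

```typescript
// Example only: how a system prompt typically frames an OpenAI-style request.
// The prompt text, model, and user message are placeholders.
const systemPrompt =
  "You are a concise support assistant for Acme Co. Answer only questions " +
  "about Acme products, and reply in a friendly, professional tone.";

const payload = {
  model: "gpt-4o-mini", // the Language model selected in the Composer
  messages: [
    { role: "system", content: systemPrompt },                 // System prompt property
    { role: "user", content: "How do I reset my password?" },  // end user's message
  ],
};
```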