Agents have many settings that affect behavior. Each console provides different settings.
Dialogflow CX console agent settings
To access agent settings:
Console
1. Open the Dialogflow CX console.
2. Choose your Google Cloud project.
3. Select your agent.
4. Click Agent Settings.
5. Update the settings as desired.
6. Click Save.
API
See the `get` and `patch`/`update` methods for the `Agent` type.
Select a protocol and version for the Agent reference:
Protocol | V3 | V3beta1 |
---|---|---|
REST | Agent resource | Agent resource |
RPC | Agent interface | Agent interface |
C++ | AgentsClient | Not available |
C# | AgentsClient | Not available |
Go | AgentsClient | Not available |
Java | AgentsClient | AgentsClient |
Node.js | AgentsClient | AgentsClient |
PHP | Not available | Not available |
Python | AgentsClient | AgentsClient |
Ruby | Not available | Not available |
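As a rough illustration of the `get` and `patch` methods, the sketch below builds the corresponding REST requests against the v3 endpoint. It only constructs the URL and payload (authentication is omitted); the project, location, and agent IDs are placeholders.

```python
# Sketch: building REST requests for the Agent get and patch methods.
# The resource path and updateMask query parameter follow the v3 REST
# reference; the IDs below are placeholders, not real resources.
import json
from urllib.parse import urlencode

BASE = "https://dialogflow.googleapis.com/v3"

def agent_name(project: str, location: str, agent: str) -> str:
    """Build the fully qualified Agent resource name."""
    return f"projects/{project}/locations/{location}/agents/{agent}"

def get_agent_request(name: str) -> dict:
    """GET request for agents.get."""
    return {"method": "GET", "url": f"{BASE}/{name}"}

def patch_agent_request(name: str, fields: dict) -> dict:
    """PATCH request for agents.patch; updateMask lists only changed fields."""
    query = urlencode({"updateMask": ",".join(sorted(fields))})
    return {
        "method": "PATCH",
        "url": f"{BASE}/{name}?{query}",
        "body": json.dumps(fields),
    }

name = agent_name("my-project", "us-central1", "my-agent-id")
req = patch_agent_request(name, {"displayName": "Support Agent"})
```

In practice the published client libraries listed above (for example the Python `AgentsClient`) wrap these requests for you.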
The following subsections describe the different categories of agent settings.
General settings
The following general settings are available for agents:
- A human-readable name for your agent.
- The default time zone for your agent.
- The default language supported by your agent. Once an agent is created, the default language cannot be changed. However, you can perform the following steps:
  1. Export your agent to the JSON format.
  2. Unzip the downloaded file.
  3. Find the `agent.json` file.
  4. Update the `defaultLanguageCode` and `supportedLanguageCodes` fields to the desired values.
  5. Restore the edited agent to the same agent from step 1 or to a different agent.
  6. Update language-specific training phrases and entity values as needed.
- Lock the agent: Indicates whether the agent is locked. A locked agent cannot be edited.
- Enable Cloud Logging: Indicates whether Cloud Logging is enabled for the agent.

  Enable interaction logging: Indicates whether you would like Google to collect and store redacted end-user queries for quality improvement.

  Enable consent-based end-user input redaction: If this setting is enabled, it lets you use a special session parameter to control whether end-user input and parameters are redacted from conversation history and Cloud Logging. By default, the session parameter is `true`. If this setting is disabled, no redaction occurs.

  User consent is collected using a boolean session parameter: `$session.params.conversation-redaction`. If this setting is enabled and the session parameter is set to `false`, no redaction occurs (other redaction strategies still apply). If this setting is enabled and the session parameter is set to `true`, redaction occurs.

  An example consent-requesting flow could be: first, ask the user whether they would like to keep end-user input, and match the response with two intents, a "yes intent" and a "no intent". Then set the session parameter to `false` (no redaction) in the parameter presets of the "yes intent" route in fulfillment, and to `true` (redaction occurs) in the parameter presets of the "no intent" route.
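The redaction rules above can be summarized in a small decision function. This is an illustrative sketch, not part of the Dialogflow API; a real agent reads `$session.params.conversation-redaction` from session state.

```python
# Sketch of the consent-based redaction rules described above:
# - setting disabled: never redact
# - setting enabled, parameter absent: defaults to true (redact)
# - setting enabled, parameter false: no redaction (other strategies still apply)
def should_redact(setting_enabled: bool, session_params: dict) -> bool:
    """Decide whether end-user input is redacted for this session."""
    if not setting_enabled:
        return False
    # Missing parameter defaults to true (redaction occurs).
    return bool(session_params.get("conversation-redaction", True))
```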
- Enable BigQuery export: Indicates whether BigQuery export is enabled.

  BigQuery dataset: The BigQuery dataset name.

  BigQuery table: The BigQuery table name.
- You can enable intent suggestions.
- In this section, you can create descriptions and payloads for custom payload templates.
ML settings
Conversational Agents (Dialogflow CX) uses machine learning (ML) algorithms to understand end-user inputs, match them to intents, and extract structured data. Conversational Agents (Dialogflow CX) learns from training phrases that you provide and the language models built into Conversational Agents (Dialogflow CX). Based on this data, it builds a model for making decisions about which intent should be matched to an end-user input. You can apply unique ML settings for each flow of an agent, and the model created by Conversational Agents (Dialogflow CX) is unique for each flow.
The following agent-wide ML settings are available:
- If this is enabled and end-user input has a spelling or grammar mistake, an intent will be matched as though the input were written correctly. The detect intent response will contain the corrected end-user input. For example, if an end-user enters "I want an applle", it will be processed as though the end-user entered "I want an apple". This also applies to matches involving both system and custom entities.

  Spell correction is available in English, French, German, Spanish, and Italian, and in all Conversational Agents (Dialogflow CX) regions.
Warnings and best practices:
- Spell correction can't correct ASR (automatic speech recognition) errors, so we don't recommend enabling it for agents using ASR inputs.
- It is possible for corrected input to match the wrong intent. You can fix this by adding commonly mismatched phrases to negative examples.
- Spell correction increases the agent's response time slightly.
- If an agent is defined using domain-specific jargon, the corrections may be undesired.
The following flow-specific ML settings are available:
- This can be one of:
  - Advanced NLU (default): Advanced NLU technology. This NLU type performs better than standard, especially for large agents and flows.
  - Standard NLU: Standard NLU technology. This type will no longer receive quality improvements or new features.
- If enabled, the flow is trained whenever it is updated with the console. For large flows, this may cause console UI delays, so you should disable this setting and train manually as needed.
- To filter out false positive results while still getting variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. This setting controls the minimum intent detection confidence required for an intent match.

  If the confidence score for an intent match is less than the threshold value, a no-match event is invoked.

  You can set a separate classification threshold value for every flow in each language enabled for the agent, to accommodate different languages performing best at different thresholds. For more information about creating a multilingual agent, see the multilingual agents documentation.
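The threshold check reduces to a simple comparison. The sketch below is illustrative; the event name returned on a failed match is an assumption, not a documented constant.

```python
# Sketch of the classification threshold rule described above: an intent
# match below the flow's threshold triggers a no-match event instead.
def resolve_match(intent: str, confidence: float, threshold: float) -> str:
    """Return the matched intent, or a no-match event below threshold."""
    if confidence < threshold:
        return "sys.no-match-default"  # assumed event name, for illustration
    return intent
```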
- Indicates whether the flow has been trained since the latest update to the flow data.
- Use this button to manually train the flow.
Generative AI settings
The following generative AI settings are available:
General
- List of phrases that are banned for generative AI. If a banned phrase appears in the prompt or the generated response, generation will fail.
- Configure sensitivity levels of safety filters with respect to different Responsible AI (RAI) categories. Content will be assessed against the following four categories:

  Category | Description
  ---|---
  Hate speech | Negative or harmful comments targeting identity and/or protected attributes.
  Dangerous content | Promotes or enables access to harmful goods, services, and activities.
  Sexually explicit content | Contains references to sexual acts or other lewd content.
  Harassment | Malicious, intimidating, bullying, or abusive comments targeting another individual.

  Content is blocked based on the probability that it's harmful. The sensitivity level can be customized by choosing one of Block few, Block some, and Block most for each category. You can also get access to the restricted Block none option, which disables RAI checks for the category, after submitting a risk acknowledgment request for your project and receiving approval.

  For more information, see configure safety attributes.
- You can check the enable prompt security check setting to enable prompt security checks. When enabled, the agent will attempt to prevent prompt injection attacks. These attacks may be used to reveal parts of the agent prompt or to provide responses the agent is not supposed to supply. This is accomplished by sending an additional LLM prompt that checks whether the user query is possibly malicious.
Generative Agent
- Select the model used by generative features. For more information, see model versions.
Playbook context truncation
Playbook context truncation culls some past turns from the playbook prompt to keep the prompt from growing with every sequential turn handled by the playbook, mitigating unwanted prompt size growth.
Normally, without truncation, each subsequent turn will be appended into the "conversation history" of the LLM prompt regardless of whether it is relevant to the current turn. This can ultimately lead to the prompt increasing in size with every turn. As more of the prompt is taken up by conversation history, less of the prompt can be used for few-shot examples (so these might get dropped). Eventually, the prompt might also breach current token limits. You can increase token sizes to accommodate this, but keep in mind that increased prompt sizes also add to the LLM response latency.
Playbook context truncation lets you set a maximum percentage of the token budget to be reserved for conversation history. Conversation turns are preserved in most recent to least recent order. This setting can help you prevent token limits from being exceeded. Regardless of which setting you choose, a minimum of two conversation turns is preserved.
You must first set a token limit before you can modify this setting.
Important: Truncating context might cause some parameters to be inadvertently lost if they are part of culled turns. Evaluate your playbook interactions carefully after enabling this option.
Token input budget is also used by the following:
- System instructions and examples: Automatically added to the prompt. This behavior cannot be modified.
- Playbook instructions and goals: Any instructions and goals that you write will be added to the prompt in their entirety.
- Playbook few-shot examples: Added either in order (by default) or by an algorithm that you choose (such as regular expression best match ordering). Examples are culled to fit within the input token budget after all other items are included.
- Conversation history: Made up of user and agent utterances, flow and playbook transition context, and tool calls and outputs in the same session from all previous turns sequentially handled by the current playbook.
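The truncation behavior described above can be sketched as follows. The token counter is a naive word-count stand-in, and the function name is illustrative; the real implementation is internal to the product.

```python
# Sketch of playbook context truncation: reserve a percentage of the
# input token budget for conversation history, drop the oldest turns
# first, and always keep at least two turns.
def truncate_history(turns: list, token_limit: int,
                     history_pct: float) -> list:
    """Keep the most recent turns that fit the reserved history budget."""
    budget = int(token_limit * history_pct)
    kept = []
    used = 0
    for turn in reversed(turns):       # walk most recent to least recent
        cost = len(turn.split())       # naive token count for illustration
        if used + cost > budget and len(kept) >= 2:
            break                      # budget exhausted, minimum met
        kept.append(turn)
        used += cost
    return list(reversed(kept))        # restore chronological order
```

Note that even when the budget is tiny, the two most recent turns survive, matching the minimum stated above.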
Generative Fallback
Data Store
Speech and IVR settings
The following speech and IVR settings are available:
- You can select the language and voice used for speech synthesis.
  You can enable a custom voice for your agent by selecting the custom voice option from the voice selection dropdown and specifying the custom voice name in the corresponding field. The custom voice name must follow the pattern `projects/PROJECT_ID/locations/LOCATION/models/MODEL_NAME`.

  - If you are using the telephony gateway, make sure the Dialogflow Service Agent service account `service-PROJECT_NUMBER@gcp-sa-dialogflow.iam.gserviceaccount.com` is granted the "AutoML Predictor" role in your custom voice project.
  - For regular API calls, make sure the service account used to call Conversational Agents (Dialogflow CX) is granted the "AutoML Predictor" role in your custom voice project.
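A quick way to catch malformed custom voice names before submitting them is to validate against the pattern above. The regular expression is a sketch; the exact characters allowed in each ID segment are an assumption.

```python
# Sketch: validate a custom voice name against the documented pattern
# projects/PROJECT_ID/locations/LOCATION/models/MODEL_NAME.
import re

VOICE_NAME = re.compile(r"^projects/[^/]+/locations/[^/]+/models/[^/]+$")

def is_valid_voice_name(name: str) -> bool:
    """Check that a name has exactly the four expected path segments."""
    return VOICE_NAME.fullmatch(name) is not None
```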
For details about advanced speech options, see the Advanced speech settings guide.
DTMF
See DTMF for telephony integrations for more information.
Multimodal
See Call companion.
Share settings
See Access control.
Languages settings
Add additional language support to your agent. For the full list of languages, see the language reference.
Language auto detection
When you configure language auto detection, your chat agent will automatically detect the end-user's language and switch to that language. See the language auto detection documentation for details.
Security settings
See Security settings.
Advanced settings
Currently, the only advanced setting is for sentiment analysis.
Agent Builder console settings
This section describes the settings available for agent apps.
General
The following general settings are available for agent apps:
- A human-readable name for your agent app.
- The agent app region.
- If enabled, changes to the agent app are not permitted.
Logging
The following logging settings are available for agent apps:
- If enabled, logs will be sent to Cloud Logging.
- If enabled, conversation history will be available. Indicates whether you would like Google to collect and store redacted end-user queries for quality improvement. This setting does not affect whether conversation history is used to generate agent responses.
- If enabled, conversation history is exported to BigQuery. The Enable Conversation History setting must also be enabled.
GenAI
The following generative AI settings are available for agent apps:
- Select the generative model that agents should use by default.
- Select the input token limit for the generative model. This is the maximum token size for input sent to the model. Depending on the model, a token can be somewhere between one character and one word. Smaller token limits have lower latency, but the model input size is limited. Larger token limits have higher latency, but the model input size can be larger.
- Select the output token limit for the generative model. This is the maximum token size for output received from the model. Depending on the model, a token can be somewhere between one character and one word. Smaller token limits have lower latency, but the model output size is limited. Larger token limits have higher latency, but the model output size can be larger.
- The temperature for an LLM lets you control how creative the responses are. A low value provides more predictable responses. A high value provides more creative or random responses.
- List of phrases that are banned for generative AI. If a banned phrase appears in the prompt or the generated response, the agent will return a fallback response instead.
- Configure sensitivity levels of safety filters with respect to different Responsible AI (RAI) categories. Content will be assessed against the following four categories:

  Category | Description
  ---|---
  Hate speech | Negative or harmful comments targeting identity and/or protected attributes.
  Dangerous content | Promotes or enables access to harmful goods, services, and activities.
  Sexually explicit content | Contains references to sexual acts or other lewd content.
  Harassment | Malicious, intimidating, bullying, or abusive comments targeting another individual.

  Content is blocked based on the probability that it's harmful. The sensitivity level can be customized by choosing one of Block few (blocking only high-probability instances of harmful content), Block some (medium- and high-probability instances), and Block most (low-, medium-, and high-probability instances) for each category. You can also get access to the restricted Block none option, which disables RAI checks for the category, after submitting a risk acknowledgment request for your project and receiving approval.

  For more information, see configure safety attributes.
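The mapping from blocking level to blocked probability bands, as described above, can be tabulated directly. The enum-style level names below are illustrative, not the API's actual identifiers.

```python
# Sketch of the safety blocking levels described above: each level maps
# to the harm-probability bands it blocks.
BLOCKED_BANDS = {
    "BLOCK_FEW": {"high"},
    "BLOCK_SOME": {"medium", "high"},
    "BLOCK_MOST": {"low", "medium", "high"},
    "BLOCK_NONE": set(),   # restricted option; requires approval
}

def is_blocked(level: str, harm_probability: str) -> bool:
    """Return whether content at a harm probability is blocked at a level."""
    return harm_probability in BLOCKED_BANDS[level]
```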
- You can check the enable prompt security check setting to enable prompt security checks. When enabled, the agent will attempt to prevent prompt injection attacks. These attacks may be used to reveal parts of the agent prompt or to provide responses the agent is not supposed to supply. This is accomplished by sending an additional LLM prompt that checks whether the user query is possibly malicious.
Git
These settings provide a Git integration. Follow the instructions to configure the integration.