This page describes the various settings you can apply to an agent. To access these settings:
- Go to the Dialogflow ES Console
- Select your agent near the top of the left sidebar menu
- Click the settings ⚙ button next to the agent name

To view or change your agent's edition:
- Scroll to the bottom of the left sidebar menu, where the agent edition is shown
- Click the Edit or Upgrade link
- Select a plan

If you use multiple projects, the consumer project determines the edition.
To access the general settings, click the General tab.
The following settings are available:
- Description: Description of the agent.
- Default Time Zone: Default time zone for the agent. Date and time requests are resolved using this time zone when the API request does not provide one.
- Agent Avatar URI: A URI for your agent's avatar used by some integrations.
- Google Project: GCP project linked to the agent.
- Agent Webhook Protocol Version: Visible only for legacy agents created with the V1 API. Allows you to switch to the V2 API webhook format.
- Beta Features: Toggle to enable beta features for the agent.
- Log Settings:
Both of the following log settings are only visible to agent owners:
- Log interactions to Dialogflow: Indicates whether Interaction logging is enabled for the agent.
- Log interactions to Google Cloud: Indicates whether Cloud Logging is enabled for the agent. This option is only available if Log interactions to Dialogflow is enabled; disabling Dialogflow logging also disables this setting. You can also click the Open logs link to open your agent's logs in Cloud Logging.
- Delete Agent: Completely deletes the agent and cannot be undone. If the agent is shared with other users, those users must be removed from the agent before you can delete it. See Agent management.
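The Agent Webhook Protocol Version setting above controls whether fulfillment uses the legacy V1 or the V2 webhook format. As a minimal sketch, a V2-format webhook reply is a JSON object keyed by fields such as `fulfillmentText` and `fulfillmentMessages` (the helper name and reply text below are illustrative, not part of any Dialogflow library):

```python
import json

def build_v2_webhook_response(reply_text):
    """Build a minimal Dialogflow V2-format webhook response.

    The V2 webhook format replaces legacy V1 fields (e.g. "speech")
    with "fulfillmentText" and "fulfillmentMessages".
    """
    response = {
        "fulfillmentText": reply_text,
        "fulfillmentMessages": [
            {"text": {"text": [reply_text]}}
        ],
    }
    return json.dumps(response)

# Example: serialize a simple text reply for a webhook HTTP response body.
payload = build_v2_webhook_response("Your order has shipped.")
```

Your webhook endpoint would return this JSON string as the HTTP response body after receiving a detect-intent request.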
To access the language settings, click the Language tab.
You can set the default language and multiple additional languages. For some root languages, you can also add one or more locales. For more information, see Multilingual agents.
ML settings (machine learning)
To access the machine learning settings, click the ML Settings tab.
Dialogflow agents use machine learning algorithms to understand end-user expressions, match them to intents, and extract structured data. An agent learns from training phrases that you provide and the language models built into Dialogflow. Based on this data, it builds a model for making decisions about which intent should be matched to an end-user expression. This model is unique to your agent.
By default, Dialogflow updates your agent's machine learning model every time you make changes to intents and entities, import or restore an agent, or train your agent.
The following settings are available:
ML Classification Threshold: To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. This setting controls the minimum intent detection confidence required for an intent match.
Automatic Spell Correction:
If this is enabled and user input has a spelling or grammar mistake, an intent will be matched as though it was written correctly. The detect intent response will contain the corrected user input. For example, if a user enters "I want an aple", it will be processed as though the user entered "I want an apple". This also applies to matches involving both system and custom entities.
Spell correction is available for all languages supported by Dialogflow.
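As a sketch of the behavior described above, the corrected input appears in the result's query text (the field names below follow the V2 `queryResult` shape; the intent name and confidence are made up):

```python
# Illustrative V2-style detect-intent result for the misspelled input
# "I want an aple" with Automatic Spell Correction enabled.
query_result = {
    "queryText": "I want an apple",   # corrected user input
    "intent": {"displayName": "order.fruit"},
    "intentDetectionConfidence": 0.87,
}

def corrected_text(result):
    """Return the (possibly spell-corrected) user input from a result."""
    return result["queryText"]
```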
Warnings and best practices:
- Spell correction can't correct ASR (automatic speech recognition) errors, so we don't recommend enabling it for agents using ASR inputs.
- It is possible for corrected input to match the wrong intent. You can fix this by adding commonly mismatched phrases to negative examples.
- Spell correction increases the agent's response time slightly.
- Spell correction should not be used with Actions on Google.
- Spell correction is trained on general user queries. If an agent is defined using domain-specific jargon, the corrections may be undesired.
Automatic Training: Enable or disable automatic agent training each time the agent is modified.
Agent Validation: See the agent validation guide.
Export and import
To access the export and import settings, click the Export and Import tab.
This feature allows you to export an agent to a zip file and import it from one, for backing up agents or transferring them from one account to another. Although you can edit the exported JSON files directly and re-import them, you should make changes using the Dialogflow Console or API instead. This ensures that changes are validated by the system and keeps troubleshooting to a minimum.
The following options are available:
- Export as ZIP: Exports the agent as a zip file.
- Restore from ZIP: Overwrites the current agent with the supplied zip file.
- Import from ZIP: Adds intents and entities to the current agent from the supplied zip file. If any existing intents or entities have the same name as those in the zip file, they will be replaced.
The following are not included in the export of an agent and are not overwritten when importing or restoring:
- Agent name
- Inline editor
- Integration settings
- Knowledge bases and knowledge documents
- Speech settings
The maximum agent size (unzipped content) for import or restore is 50 MB.
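Before importing or restoring, you can check an exported zip against the 50 MB unzipped limit by summing the uncompressed sizes of its entries. The sketch below builds a tiny stand-in archive in memory; the file names are illustrative, not the exact layout of a real agent export:

```python
import io
import zipfile

MAX_UNZIPPED_BYTES = 50 * 1024 * 1024  # 50 MB import/restore limit

def unzipped_size(zip_bytes):
    """Sum the uncompressed sizes of all entries in a zip archive."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return sum(info.file_size for info in zf.infolist())

# Build a tiny stand-in for an exported agent zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("agent.json", '{"language": "en"}')
    zf.writestr("intents/greeting.json", '{"name": "greeting"}')
agent_zip = buf.getvalue()

fits = unzipped_size(agent_zip) <= MAX_UNZIPPED_BYTES
```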
To access the environments settings, click the Environments tab.
Versions and environments allow you to deploy multiple versions of your agent to separate, customizable environments. For more information, see Versions and Environments.
To access the speech settings, click the Speech tab.
These are the settings for speech recognition and speech synthesis. The following settings are available:
- Improve Speech Recognition Quality
- Text to Speech
- Enable Automatic Text to Speech: Automatically convert default text responses to speech in all conversations. See Detect intent with audio output.
- Voice Configuration:
- Agent Language: Choose the default language for voice synthesis.
- Voice: Choose a voice synthesis model.
- Speaking Rate: Adjusts the voice speaking rate.
- Pitch: Adjusts the voice pitch.
- Volume Gain: Adjusts the audio volume gain.
- Audio Effects Profile: Select audio effects profiles you want applied to the synthesized voice. Speech audio is optimized for the devices associated with the selected profiles (for example, headphones, large speaker, phone call). For a list of available profiles, see Using device profiles for generated audio in Text to Speech documentation.
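The voice settings above map onto the `synthesizeSpeechConfig` portion of a detect-intent request's output audio configuration in the V2 REST API. The fragment below shows that shape; the field names follow the V2 API, but the specific values are examples, not recommendations:

```python
# Illustrative outputAudioConfig for a detect-intent request
# (V2 REST field names; values are examples only).
output_audio_config = {
    "audioEncoding": "OUTPUT_AUDIO_ENCODING_MP3",
    "synthesizeSpeechConfig": {
        "speakingRate": 1.1,    # Speaking Rate (1.0 is normal speed)
        "pitch": -2.0,          # Pitch, in semitones
        "volumeGainDb": 3.0,    # Volume Gain, in dB
        "effectsProfileId": ["headphone-class-device"],  # Audio Effects Profile
        "voice": {"name": "en-US-Wavenet-D"},            # Voice model
    },
}
```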
To access the share settings, click the Share tab.
These settings are used to share agent access with other developers. See Access control for more information.
To access the advanced settings, click the Advanced tab.
Currently, these settings control a single feature: sentiment analysis. For more information, see Sentiment Analysis.
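When sentiment analysis is enabled, each detect-intent response carries a sentiment score (roughly -1.0 for negative through 1.0 for positive) and a non-negative magnitude. The sketch below shows the V2-style result shape with made-up values; the helper and its cutoff are an arbitrary illustration, not a Dialogflow default:

```python
# Illustrative queryResult fragment with sentiment analysis enabled
# (V2 field names; the values are made up).
sentiment_result = {
    "sentimentAnalysisResult": {
        "queryTextSentiment": {"score": -0.6, "magnitude": 0.8}
    }
}

def sentiment_label(result, cutoff=0.25):
    """Label the query sentiment as positive, negative, or neutral.

    The cutoff is an arbitrary assumption for this example.
    """
    score = result["sentimentAnalysisResult"]["queryTextSentiment"]["score"]
    if score >= cutoff:
        return "positive"
    if score <= -cutoff:
        return "negative"
    return "neutral"
```

An agent might use such a label to route strongly negative conversations to a human agent.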