Agent settings

This page describes the various settings you can apply to an agent. To access these settings:

  1. Go to the Dialogflow Console
  2. Select your agent near the top of the left sidebar menu
  3. Click the settings button next to the agent name

Change an agent edition and pricing plan

Agents default to Dialogflow Standard Edition. To change the edition or pricing plan for an agent:

  1. Scroll to the bottom of the left sidebar menu, where the agent pricing plan is shown
  2. Click the Edit or Upgrade link
  3. Select a plan

Dialogflow Pricing Plan

If you use multiple projects, the consumer project is used to determine the edition and pricing plan.

General

  • Description: A description of your agent. Displayed in the Web Demo for your agent.
  • Default Time Zone: The default time zone for the agent.
  • Google Project:
    • Project ID: The GCP project linked to the agent.
    • Service Account: The service account used for authentication.
  • API Version: The API version for the agent. Select the V2 API for all new agents.
  • Beta Features: Toggle to enable beta features for your agent.
  • Log Settings:
  • Delete Agent: Completely deletes the agent; this cannot be undone. If the agent is shared with other users, those users must be removed from the agent before you can delete it.

Languages

Add multiple languages and their respective locales to make your agent multilingual.

Choose a language from the list and click the Save button. To add a locale, if available, hover over the listed language and click + Add locale.

For more information, see Multi-language Agents.

ML settings (machine learning)

Dialogflow agents use machine learning algorithms to understand end-user expressions, match them to intents, and extract structured data.

An agent learns both from training phrases that you provide and the language models built into Dialogflow. Based on this data, it builds an algorithm for making decisions about which intent should be matched to an end-user expression. This algorithm is unique to your agent.

By default, Dialogflow updates your agent's machine learning algorithm every time you make changes to intents and entities, import or restore an agent, or train your agent.

Select the ML Settings tab to change the following settings:

  • Match Mode: This setting defines which matching algorithm is used for all intents that have machine learning enabled.
    • There are two types of algorithms used to match intents:
      • Rule-based grammar matching:
        • Pros:
          • Accurate with a small or large number of training phrase examples
          • Models are updated quickly
        • Cons:
          • Matching is slow if there are many training phrase examples
          • May produce incorrect results if the agent uses @sys.any or "Allow automated extension" frequently
      • ML matching:
        • Pros:
          • Accurate with a large number of training phrase examples
          • Matching is fast
        • Cons:
          • Inaccurate with a small number of training phrase examples
          • Models are updated slowly
          • Less accurate than grammar matching for agents with many/complex composite entities or training phrase templates
    • You can select one of two modes that use these algorithms:
      • Hybrid: This mode first attempts a rule-based grammar match. If a match is not made, it switches to ML matching. This mode is best for most cases.
      • ML Only: This mode only performs ML matching.
  • ML Classification Threshold: To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. Intent matches have an intent detection confidence value in a range from 0.0 (completely uncertain) to 1.0 (completely certain). If the confidence value is less than the ML classification threshold, a fallback intent is triggered. If there are no fallback intents defined, no intent will be triggered.

  • Automatic Spell Correction:

    If this is enabled and user input has a spelling or grammar mistake, an intent will be matched as though it was written correctly. The detect intent response will contain the corrected user input. For example, if a user enters "I want an aple", it will be processed as though the user entered "I want an apple". This also applies to matches involving both system and developer entities.

    Spell correction is available for all languages supported by Dialogflow.

    Warnings and best practices:

    • Spell correction can't correct ASR (automatic speech recognition) errors, so we don't recommend enabling it for agents using ASR inputs.
    • It is possible for corrected input to match the wrong intent. You can fix this by adding commonly mismatched phrases to negative examples.
    • Spell correction increases the agent's response time slightly.
    • Spell correction should not be used with Actions on Google (AoG).
    • Spell correction is trained on general user queries. If an agent is defined using domain-specific jargon, the corrections may be undesired.
  • Automatic Training: Enable or disable automatic agent training each time the agent is modified.

  • Agent Validation: See the agent validation guide.
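The Hybrid match mode and the ML classification threshold described above can be sketched together. This is an illustrative model, not Dialogflow's implementation; the function names, the stub matchers, and the default threshold of 0.3 are assumptions for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Match:
    intent: str
    confidence: float  # 0.0 (completely uncertain) .. 1.0 (completely certain)

def hybrid_match(text: str,
                 grammar_match: Callable[[str], Optional[Match]],
                 ml_match: Callable[[str], Optional[Match]],
                 threshold: float = 0.3,
                 fallback_intent: Optional[str] = "Default Fallback Intent") -> Optional[str]:
    """Hybrid mode: try rule-based grammar matching first; if no match,
    fall back to ML matching. A match whose confidence is below the ML
    classification threshold triggers the fallback intent, or no intent
    at all if no fallback intent is defined."""
    match = grammar_match(text) or ml_match(text)
    if match is not None and match.confidence >= threshold:
        return match.intent
    return fallback_intent
```

For example, a confident ML match wins when the grammar matcher finds nothing, while a low-confidence match drops through to the fallback intent.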

Export and import

  • Export as ZIP: Exports the agent as a zip file.
  • Restore from ZIP: Overwrites the current agent with the supplied zip file.
  • Import from ZIP: Adds intents and entities to the current agent from the supplied zip file. If any existing intents or entities have the same name as those in the zip file, they are replaced.
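The "Import from ZIP" merge semantics above can be sketched as follows. The file layout (`intents/<name>.json`) mirrors what an exported agent ZIP roughly looks like, but treat it as an assumption for illustration, not a format specification.

```python
import io
import json
import zipfile

def import_intents(agent_intents: dict, zip_bytes: bytes) -> dict:
    """Model of Import from ZIP: intents in the archive are added to the
    agent, and any existing intent with the same name is replaced."""
    merged = dict(agent_intents)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.startswith("intents/") and name.endswith(".json"):
                intent = json.loads(zf.read(name))
                merged[intent["name"]] = intent  # same name -> replaced
    return merged
```

Note that, unlike Restore from ZIP, intents absent from the archive are left untouched rather than deleted.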

Environments BETA

This feature lets you publish past or current versions of your agent to specific environments. You can set up multiple environments to suit your needs, such as "Development", "Testing", and "Production".

Clicking the PUBLISH A VERSION button takes a snapshot of your agent's intents and entities, and lets you optionally deploy that version to one of your environments.

To learn more about this feature, see Versions and Environments.
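The version/environment relationship above can be modeled in a few lines. This is a mental model with invented names; the real feature is managed through the Dialogflow Console and API.

```python
import copy

class AgentModel:
    """Toy model: publishing a version snapshots the draft agent, and an
    environment serves a fixed version until redeployed."""

    def __init__(self):
        self.intents = {}          # the editable "draft" agent
        self.versions = []         # immutable snapshots
        self.environments = {}     # environment name -> version number

    def publish_version(self) -> int:
        self.versions.append(copy.deepcopy(self.intents))
        return len(self.versions)  # versions are numbered from 1

    def deploy(self, environment: str, version: int) -> None:
        self.environments[environment] = version

    def serving_intents(self, environment: str) -> dict:
        return self.versions[self.environments[environment] - 1]
```

The key property is the deep copy at publish time: later edits to the draft agent do not affect what an environment such as "Production" is serving.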

Share

These settings are used to share agent access with other developers. See Access control for more information.

Speech

These are the settings for speech recognition and speech synthesis.

  • Improve Speech Recognition Quality
  • Text to Speech
    • Enable Automatic Text to Speech: Automatically convert default text responses to speech in all conversations. See Detect intent with audio output.
    • Voice Configuration:
      • Agent Language: Choose the default language for voice synthesis.
      • Voice: Choose a voice synthesis model.
      • Speaking Rate: Adjusts the voice speaking rate.
      • Pitch: Adjusts the voice pitch.
      • Volume Gain: Adjusts the audio volume gain.
      • Audio Effects Profile: Select audio effects profiles you want applied to the synthesized voice. Speech audio is optimized for the devices associated with the selected profiles (for example, headphones, large speaker, phone call). For a list of available profiles, see Using device profiles for generated audio in the Text-to-Speech documentation.
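A small validation sketch for the voice settings above. The numeric ranges used here (speaking rate 0.25 to 4.0, pitch -20 to +20 semitones, volume gain -96 to +16 dB) are the ones documented for the Cloud Text-to-Speech API; treat them as assumptions and verify against the current documentation.

```python
def validate_voice_config(speaking_rate: float = 1.0,
                          pitch: float = 0.0,
                          volume_gain_db: float = 0.0) -> dict:
    """Check voice synthesis parameters against the assumed
    Cloud Text-to-Speech ranges before sending a request."""
    if not 0.25 <= speaking_rate <= 4.0:
        raise ValueError("speaking_rate must be in [0.25, 4.0]")
    if not -20.0 <= pitch <= 20.0:
        raise ValueError("pitch must be in [-20.0, 20.0] semitones")
    if not -96.0 <= volume_gain_db <= 16.0:
        raise ValueError("volume_gain_db must be in [-96.0, 16.0] dB")
    return {"speaking_rate": speaking_rate,
            "pitch": pitch,
            "volume_gain_db": volume_gain_db}
```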

Advanced

Currently, only one feature is controlled from these settings. For more information, see Sentiment Analysis.
