Agent design

This page provides guidance for designing high-quality agents that meet your business objectives.

Before building an agent

This section provides information that you should consider before starting to build an agent.

Objective

Consider the overall objective of your agent:

  • What is your business trying to achieve?
  • What will your users expect from your agent?
  • How often will users interact with your agent?

Platform

Consider how users will access your agent, and review the platforms supported by Dialogflow before creating content. When you choose which platforms to support, prepare your content accordingly: some of Dialogflow's platform integrations support rich messages that can include elements like images, links, and suggestion chips.
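
If you use webhook fulfillment, rich messages are typically returned alongside a plain-text fallback so that platforms without rich support still get a response. The following is a minimal sketch in Python, assuming the Dialogflow ES webhook response format; the platforms, text, and URLs are placeholder examples, not prescribed values.

    # Sketch of a webhook (fulfillment) response body that pairs a plain-text
    # reply with richer messages for integrations that support them.
    # Field names follow the Dialogflow ES WebhookResponse / Intent.Message
    # JSON format; adjust for the platforms you actually target.
    response_body = {
        "fulfillmentText": "Here are today's specials.",  # plain-text fallback
        "fulfillmentMessages": [
            {
                # Suggestion chips for an integration that supports quick replies.
                "platform": "TELEGRAM",
                "quickReplies": {
                    "title": "Here are today's specials.",
                    "quickReplies": ["Soup", "Salad", "Sandwich"],
                },
            },
            {
                # A card with an image and a link button.
                "platform": "FACEBOOK",
                "card": {
                    "title": "Daily specials",
                    "imageUri": "https://example.com/specials.png",
                    "buttons": [
                        {"text": "View menu", "postback": "https://example.com/menu"}
                    ],
                },
            },
        ],
    }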

Build agents iteratively

If your agent will be large or complex, start by building a dialog that addresses only the top-level requests. Once the basic structure is established, iterate on the conversation paths to ensure you're covering all of the possible routes a user may take.

Pre-built agents

Dialogflow offers pre-built agents to help you get started. They cover common use cases like hotel booking, navigation, and online shopping, and come with intents and entities that cover the most common user queries. Add responses specific to your business, and you'll quickly have a functioning agent.

System entities

When a user makes a request, there is often important information to extract from what they said; in Dialogflow, these pieces of information are called entities. System entities are pre-built entities provided by Dialogflow that handle the most popular types of information, such as dates, times, and numbers.
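
For example, when a user's text is sent to your agent in a detect intent request, any system entities that match are returned as structured parameters on the result. The following is a minimal sketch, assuming the google-cloud-dialogflow Python client for Dialogflow ES; the project and session IDs are placeholders.

    # Sketch: send a text query and read back the matched intent and parameters.
    from google.cloud import dialogflow

    def detect(project_id: str, session_id: str, text: str) -> None:
        session_client = dialogflow.SessionsClient()
        session = session_client.session_path(project_id, session_id)

        query_input = dialogflow.QueryInput(
            text=dialogflow.TextInput(text=text, language_code="en-US")
        )
        response = session_client.detect_intent(
            request={"session": session, "query_input": query_input}
        )

        result = response.query_result
        print("Matched intent:", result.intent.display_name)
        # System entities such as @sys.time or @sys.date arrive here as
        # structured values rather than raw text.
        print("Extracted parameters:", result.parameters)

    # detect("my-project", "some-session-id", "Wake me up at 7 a.m. tomorrow")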

Small talk

When developing your dialog, you may have considered handling requests that are off-topic. Dialogflow provides an optional feature called small talk. With this feature enabled, your agent will respond to general conversation, emotional responses, and questions about the agent itself. All of the small talk responses can be customized to make sure the experience, whether casual, businesslike, or somewhere in between, is representative of your brand.

Agent design best practices

This section provides a list of best practices for a robust, accurate, performant, and usable agent.

Greetings and goodbyes

  • Welcome intents should let users know about the agent's capabilities, with branding in mind. Your agent's welcome intent should inform the user of 2-3 tasks the agent can help with, along with brief descriptions (as needed) of how to use those features.
  • Agents should have a suitable exit message when a successful interaction has ended. When a user completes a task, your agent should summarize the transaction and close with something like "Until next time".

Machine learning and training

  • Intents should have at least 10 training phrases. The complexity of your agent determines the actual number of training phrases each intent should have, but 10 is a good minimum. The more parameters your intents have, the more phrases you should provide to train the machine learning model. (For one way to add training phrases and annotations programmatically, see the sketch after this list.)
  • Training phrases should be varied. Include variations of questions, commands, verbs, and synonyms for common nouns to ensure your phrases cover a broad spectrum of possible requests.
  • Annotations should be consistent:
      • Review your training phrases and ensure that highlighted annotations point to the correct entities.
      • Do not leave text annotated in some training phrases but unannotated in others.
      • The span of text selected for an annotation should include all of, and no more than, the text that is necessary to match an entity.
      • Make sure the annotated text in different training phrases covers similar portions of the phrase. For example, consider the training phrase "Set alarm at 6 a.m.", where "6 a.m." is annotated as @sys.time. If you have another training phrase "wake me up at 7 a.m.", annotate "7 a.m.", but do not annotate "up at 7 a.m.".
  • Custom developer entities should cover a broad range of examples. Entities are lists of items. Machine learning takes care of grammatical forms, but you have to include all possible items. Also, check the Define synonyms option and include some variations.
  • Disable ML for as few intents as possible. Training phrases for intents with ML disabled are not used when training your agent. A user query that is very similar to a training phrase in an intent with ML disabled may be matched to the wrong intent if other intents with ML enabled bear a slight resemblance to the query. If you are having problems with false positives, raise the ML classification threshold instead of disabling ML.
  • Do not set a high ML classification threshold for an agent with little training data. If the threshold is high and there is not a lot of training data, only user queries that are near-exact matches of training phrases will result in intent matching. If you want a high threshold, you need to provide a lot of training data.
  • Agents should have a fallback intent. Without fallback intents, unmatched user queries result in empty responses.
  • Agents should provide negative examples. Negative examples prevent user queries that are only slightly similar to training phrases from unintentionally matching intents.
  • Do not define entities that match virtually anything. This degrades the performance and quality of ML, because nearly everything in every training phrase will be evaluated as a possible match. Consider using @sys.any instead. Similarly, composite entities should not contain a single @sys.any as a synonym.
  • Do not define entities that are composed of filler words or meaningless text, such as "hmmm", "let's see", "please", or "could you please". If you are attempting to use entities like this to introduce variety, you are only degrading the performance of ML; Dialogflow already augments data to handle this kind of variety. Add phrases like this to your training phrases, not your entities.
  • Annotated text in training phrases should have variety. For example, if you provide time values to be parsed as @sys.time system entities, do not use the same time in every training phrase. Use a variety of time examples like "7 a.m.", "8 p.m.", and "9 o'clock".
  • Intents with many parameters should also have many training phrases. As a rule, have at least three times as many training phrases as parameters, and at least 10 training phrases.
  • Each parameter should be used in many training phrases. As a rule, each parameter should appear in at least 5 training phrases.
  • Training phrases in example mode should not contain entity references. Entity references are reserved for the deprecated template mode.
  • Avoid using multiple @sys.any entities in a training phrase. A single training phrase shouldn't contain two consecutive @sys.any entities, or a total of three @sys.any entities; Dialogflow may not be able to distinguish them.
  • Do not use similar training phrases in different intents. Different intents shouldn't contain similar training phrases, because this prevents Dialogflow from learning which intent such a phrase should match.
  • Enable automatic spell correction. If your agent accepts text input, you should enable automatic spell correction.
  • Do not nest composite entities. Do not use more than one level of nesting in composite entities; each level of nesting significantly degrades quality.
  • Avoid special characters in training phrases. Special characters such as {, _, #, and [ are ignored in training phrases. Emojis are an exception; they work as expected.
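
Most of this guidance is applied in the Dialogflow console, but intents, training phrases, and annotations can also be created through the API. The following is a minimal sketch, assuming the google-cloud-dialogflow Python client for Dialogflow ES; the intent name, phrases, and parameter are placeholder examples, and a real intent should follow the guidance above (at least 10 varied phrases with consistent annotations).

    # Sketch: create an intent whose training phrases consistently annotate the
    # time span ("7 a.m.", not "up at 7 a.m.") as @sys.time.
    from google.cloud import dialogflow

    def create_alarm_intent(project_id: str) -> None:
        intents_client = dialogflow.IntentsClient()
        parent = dialogflow.AgentsClient.agent_path(project_id)

        def phrase(prefix: str, time_text: str) -> dialogflow.Intent.TrainingPhrase:
            # Annotate only the text needed to match the entity.
            return dialogflow.Intent.TrainingPhrase(
                parts=[
                    dialogflow.Intent.TrainingPhrase.Part(text=prefix),
                    dialogflow.Intent.TrainingPhrase.Part(
                        text=time_text, entity_type="@sys.time", alias="time"
                    ),
                ]
            )

        training_phrases = [
            phrase("wake me up at ", "7 a.m."),
            phrase("set an alarm for ", "8 p.m."),
            phrase("alarm at ", "9 o'clock"),
            # ...aim for at least 10 varied phrases in a real agent.
        ]

        intent = dialogflow.Intent(
            display_name="set.alarm",
            training_phrases=training_phrases,
            parameters=[
                dialogflow.Intent.Parameter(
                    display_name="time",
                    entity_type_display_name="@sys.time",
                    value="$time",
                    mandatory=True,
                    prompts=["What time should I set the alarm for?"],
                )
            ],
            messages=[
                dialogflow.Intent.Message(
                    text=dialogflow.Intent.Message.Text(
                        text=["OK, your alarm is set for $time."]
                    )
                )
            ],
        )

        intents_client.create_intent(request={"parent": parent, "intent": intent})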

Helpful intent features

  • Agents should support contextual requests. For example, if your agent handles requests for the weather and a user asks "Weather in San Francisco", make sure to add contexts that support follow-up requests like "How about tomorrow?" (A webhook sketch follows this list.)
  • Agents should have follow-ups for yes, no, cancel, next, back, and so on. Follow-up intents are used to reply to common responses. To add a follow-up intent, hover over an intent and click Add follow-up.
  • Intents should have at least one text response. The response section is at the bottom of the intent's page. If you add multiple response variations, one is chosen at random, making for a less repetitive experience.
  • Agents should collect all of the information necessary to fulfill a user's request. Consider making necessary parameters required; your agent will keep prompting the user until it gets the information it needs. This is called slot filling.
  • Responses should repeat information as needed, for example to confirm an order. When a user makes a request like placing an order or changing information, your agent should repeat what's happening for confirmation purposes. When creating these confirmation responses, make sure to include all possible combinations of repeated entities and parameters.
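
As an example of supporting contextual requests with webhook fulfillment, matched parameters can be saved into an output context so that a follow-up intent can reuse them. The following is a minimal sketch using Flask, assuming the Dialogflow ES webhook request and response JSON format; the context and parameter names are placeholders.

    # Sketch: store the matched city in an output context so a follow-up intent
    # for "How about tomorrow?" can reuse it without asking again.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        body = request.get_json(force=True)
        session = body["session"]
        query_result = body["queryResult"]
        city = query_result.get("parameters", {}).get("geo-city", "")

        # Keep the city around for a few conversational turns.
        weather_context = {
            "name": f"{session}/contexts/weather-followup",  # placeholder name
            "lifespanCount": 3,
            "parameters": {"geo-city": city},
        }

        return jsonify(
            {
                "fulfillmentText": f"Here is the weather for {city}.",
                "outputContexts": [weather_context],
            }
        )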

Conversation repair

  • Agents should have helpful recovery prompts for each step of the dialog. For example, if the initial prompt is "What color do you want?" and the user replies with "jungle parrot", a fallback or follow-up intent should rephrase the question, like "Sorry, what color was that?"
  • Agents should have customized, brand-specific responses in the default fallback intent. When a user says something that isn't matched to an intent, the default fallback intent is matched. Customize it to reflect your brand and to guide the user toward making a valid request.
  • For customized fulfillment, agents should have an intent that allows users to repeat information. One intent can handle requests like "say that again", "repeat that", and "play that again"; this can be a follow-up intent. (A fulfillment sketch follows this list.)
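
One way to let users repeat information with webhook fulfillment is to stash each reply in a long-lived output context and play it back when the repeat intent matches. The following sketch uses Flask and assumes the Dialogflow ES webhook JSON format; the intent and context names are placeholders, and this is only one possible design.

    # Sketch: remember the last reply in an output context so it can be repeated.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    LAST_RESPONSE_CONTEXT = "last-response"  # placeholder context name

    def find_context(session, name, contexts):
        full_name = f"{session}/contexts/{name}"
        return next((c for c in contexts if c.get("name") == full_name), {})

    @app.route("/webhook", methods=["POST"])
    def webhook():
        body = request.get_json(force=True)
        session = body["session"]
        query_result = body["queryResult"]
        intent_name = query_result["intent"]["displayName"]
        active_contexts = query_result.get("outputContexts", [])

        if intent_name == "repeat":  # placeholder intent name
            last = find_context(session, LAST_RESPONSE_CONTEXT, active_contexts)
            text = last.get("parameters", {}).get(
                "text", "Sorry, there is nothing to repeat yet."
            )
        else:
            text = "Your order has been placed."  # normal fulfillment goes here

        return jsonify(
            {
                "fulfillmentText": text,
                # Re-emit the context so the latest reply can always be repeated.
                "outputContexts": [
                    {
                        "name": f"{session}/contexts/{LAST_RESPONSE_CONTEXT}",
                        "lifespanCount": 50,
                        "parameters": {"text": text},
                    }
                ],
            }
        )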

Persona

  • Agent responses should have a style and tone that fit your brand and remain consistent throughout the agent. As your users converse with your agent, it should feel like they're speaking to a single persona. Make sure the qualities and personality you've chosen are represented in all of your responses.
  • Agents should be sensitive about cultures, genders, religious beliefs, abilities, and ages. Stereotyping may offend users, even in jokes, and they might not return to your agent.

Testing

  • Test your agent thoroughly with someone who was not involved in its development. Having someone unfamiliar with the agent use it will give you insight into how naturally the conversation flows. Have them look out for accuracy, long pauses, missing conversational paths, pacing, awkward transitions, and so on.
  • Test your agent on all platforms you plan to support. If your agent will be available on one or more platforms, make sure rich messages and responses show up as expected on each platform.

Additional conversation design guides

See the conversation design guide provided by the Actions on Google team.
