When your agent is trained, Dialogflow uses your training data to build machine learning models specifically for your agent. This training data primarily consists of intents, intent training phrases, and entities referenced in the agent, which are effectively used as machine learning data labels. In addition, agent models are built using parameter prompt responses, agent settings, and many other pieces of data associated with your agent.
Whenever you change your agent, you should ensure that the agent is trained before attempting to use it. Depending on your agent settings, training may occur automatically or manually.
You can also use the Training Tool to analyze, import, and export actual conversation data, and to improve your training data.
Draft agent automatic training
By default, agent training for a draft agent is executed automatically every time you update and save the agent from the console. Popup dialogs will display the status of this training.
However, updating your agent with the API does not trigger automatic training.
Draft agent manual training
You can update agent ML settings to disable automatic training for a draft agent.
If your agent has more than 780 intents, or if you have disabled the automatic training setting, you must manually execute training.
To manually train an agent from the console, click the Train button in the ML settings.
To manually train an agent with the API, call the train method on the Agent type.
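As a minimal sketch, the API call above can be made with the `google-cloud-dialogflow` Python client. The project ID below is a placeholder, and the call assumes application-default credentials are configured:

```python
# Sketch: manually training a Dialogflow ES agent via the API.
# Assumes the google-cloud-dialogflow package and application-default
# credentials; "my-project" is a placeholder project ID.

def project_path(project_id: str) -> str:
    """Build the parent path expected by TrainAgentRequest."""
    return f"projects/{project_id}"

def train_agent(project_id: str, timeout: float = 180.0) -> None:
    # Deferred import: the client needs credentials only at call time.
    from google.cloud import dialogflow_v2

    client = dialogflow_v2.AgentsClient()
    # train_agent returns a long-running operation; block until it completes.
    operation = client.train_agent(request={"parent": project_path(project_id)})
    operation.result(timeout=timeout)

# Usage: train_agent("my-project")
```

Training is asynchronous on the server, which is why the client returns a long-running operation rather than an immediate result.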
Agent version automatic training
Whenever a new agent version is created, the new agent version is automatically trained.
To create a new agent version from the console, click the Publish a version button on the Environments tab.
To create a new agent version with the API, call the create method on the Version type.
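A minimal sketch of that API call with the `google-cloud-dialogflow` Python client follows; the project ID and version description are placeholders:

```python
# Sketch: creating a new agent version with the API, which also triggers
# automatic training of that version. Assumes the google-cloud-dialogflow
# package and application-default credentials.

def agent_path(project_id: str) -> str:
    """Build the parent path expected by CreateVersionRequest."""
    return f"projects/{project_id}/agent"

def create_version(project_id: str, description: str):
    # Deferred import: the client needs credentials only at call time.
    from google.cloud import dialogflow_v2

    client = dialogflow_v2.VersionsClient()
    return client.create_version(
        request={
            "parent": agent_path(project_id),
            # version_number is assigned by the server; only the
            # description is set here.
            "version": {"description": description},
        }
    )

# Usage: create_version("my-project", "release candidate 1")
```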
The Training Tool is used to review end-user inputs sent to your agent and to improve your training data. Using the tool, you can:
- Review actual end-user inputs and the intents that are matched for each conversational turn with the current agent model.
- Add the end-user expressions from these conversations to the training phrases of the matched intents, different intents, or fallback intents.
- Import end-user expressions you have prepared or captured from actual conversations.
The tool uses agent history data to load conversations, so interaction logging must be enabled to use the tool. The Training Tool only shows end-user expressions. To view both agent and end-user conversation data, see the more complete agent history.
To open the Training Tool:
- Go to the Dialogflow ES Console.
- Select your agent near the top of the left sidebar menu.
- Click Training in the left sidebar menu.
When you open the tool, it shows the conversation list. This is a list of recent conversations in reverse chronological order. Each row in the list provides a summary of a conversation. The following table describes each of the UI elements:
|UI element|Description|
|---|---|
|Conversation|The first end-user expression in the conversation.|
|Date|The date that the conversation occurred or was imported.|
|refresh|When a conversation is used to update training data (as described below), the status indicator for the row shows a green checkmark.|
When you click a row in the conversation list, it opens the conversation in training view. The training view shows a list of conversational turns and provides controls to add this data to your training data.
When you edit the displayed data or click a task button on the right, you create training data update tasks that get queued for saving. Once you are done creating tasks, click the Approve button to execute all queued tasks. Once approved, you should manually train your agent.
The following table describes each of the UI elements:
|UI element|Description|
|---|---|
|Date|The date that the conversation occurred or was imported.|
|Requests|The number of rows for the conversation.|
|No Match|The number of rows for which no intent is matched.|
|User Says|The end-user expression for the row.|
|Intent|The intent for this row matched with the current agent model. You can click the link to change the associated intent to a new or existing intent.|
|check|Queues a task to add the end-user expression for the row as a training phrase to the currently selected intent. The icon turns green when a task is queued.|
|block|Queues a task to add the end-user expression for the row as a training phrase to the default fallback intent. This creates a negative example. The icon turns orange when a task is queued.|
|delete|Queues a task to delete the row. The icon turns red when a task is queued.|
|Approve|Executes queued tasks for all rows.|
When looking at a conversation in training view, end-user expressions show matched entities as highlighted annotations. To add or edit an annotation:
- Click an annotation or select the words you want to annotate.
- Choose an existing entity from the menu.
You can import conversation data files you have prepared or captured to the Training Tool. Importing conversations can be used to improve an existing agent. To upload a conversation, click the Upload button at the top of the page. Then, you can analyze this data for adding to training data as described above.
The following describes the file content format, its limitations, and the results:
- Each uploaded file results in a single conversation in the Training Tool.
- Requests are not sent to the detect intent API, so no contexts are activated and no intents are matched.
- A single upload can be either a text file or a zip archive containing up to 10 text files.
- One upload cannot exceed 3 MB.
- The files should only contain end-user expressions delimited by newlines.
- Ideally, files should only include data that is useful as training phrases.
- The order of the end-user expressions is not important.
Here is an example file:
I want information about my account.
What is my checking account balance?
How do I transfer money to my savings account?
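The constraints above can be enforced when packaging prepared expressions for upload. The following sketch uses only the Python standard library; the file name is illustrative:

```python
# Sketch: packaging newline-delimited end-user expressions into a zip
# archive that meets the Training Tool upload limits (at most 10 text
# files, 3 MB total). The file name "account.txt" is an assumption.
import io
import zipfile

MAX_FILES = 10
MAX_BYTES = 3 * 1024 * 1024  # one upload cannot exceed 3 MB

def build_upload(files: dict[str, list[str]]) -> bytes:
    """files maps a .txt file name to its list of end-user expressions."""
    if len(files) > MAX_FILES:
        raise ValueError(f"a zip archive may contain at most {MAX_FILES} files")
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, expressions in files.items():
            # One expression per line, delimited by newlines.
            zf.writestr(name, "\n".join(expressions) + "\n")
    data = buf.getvalue()
    if len(data) > MAX_BYTES:
        raise ValueError("upload exceeds the 3 MB limit")
    return data

archive = build_upload({
    "account.txt": [
        "I want information about my account.",
        "What is my checking account balance?",
        "How do I transfer money to my savings account?",
    ],
})
```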
- The Training Tool is only available for the
- The Training Tool does not take the ML Classification Threshold setting into consideration for intent matching. You may see different intents matched at runtime and in the Training Tool, even if the agent model has not changed.
- End-user inputs containing required parameter values may not match the expected intents in the Training Tool, while matching correctly at runtime. This may happen in the following cases:
- There are no annotated training phrases in that intent.
- The input significantly differs from the training phrases.
Use the Training Tool at various stages of development
Use the Training Tool at various stages of agent development, and refine your training data at each stage:
- Before your agent is released to production, test it with a small group of users.
- Shortly after your agent is released to production, examine if real conversations are behaving as expected.
- Whenever significant changes are made to your agent, check that the new changes are behaving as expected.
- Run the tool periodically for production agents, to perform regular analysis.
Import quality data
The following can often be useful sources of data:
- Logs of conversations with human customer service agents.
- Online customer support conversations (email, forums, FAQs).
- Customer questions on social media.
You should avoid the following types of data:
- Long-form, non-conversational end-user expressions.
- End-user expressions that are not relevant to any of the intents in your agent.
- Logs of things not said by end-users (for example, responses from customer service agents).
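A simple pre-filter can apply this guidance before importing conversation logs. In the sketch below, the word-count cutoff and the speaker labels are illustrative assumptions, not part of the Training Tool:

```python
# Sketch: pre-filtering candidate training expressions from conversation
# logs. The 30-word cutoff and the "user"/"agent" speaker labels are
# assumptions for illustration.

MAX_WORDS = 30  # drop long-form, non-conversational inputs

def filter_expressions(turns: list[tuple[str, str]]) -> list[str]:
    """turns: (speaker, text) pairs; keep only end-user utterances
    short enough to serve as training phrases."""
    kept = []
    for speaker, text in turns:
        if speaker != "user":
            continue  # drop agent/CSR responses: not end-user speech
        if len(text.split()) > MAX_WORDS:
            continue  # drop long-form text unsuitable as a phrase
        kept.append(text.strip())
    return kept

sample = [
    ("user", "What is my checking account balance?"),
    ("agent", "Your balance is $100."),
    ("user", " ".join(["word"] * 40)),  # too long to be useful
]
```

Filtering by relevance to your agent's intents is harder to automate; reviewing the remaining candidates in the Training Tool covers that step.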