Experiments are used to compare the performance of multiple flow versions (variant versions) to a control version (normally a production version) while handling live traffic. You can allocate a portion of live traffic to each flow version and monitor the following metrics:
- Contained: Count of sessions that reached END_SESSION without triggering other metrics below. Only available to agents using a telephony integration.
- Live agent handoff rate: Count of sessions handed off to a live agent.
- Callback rate: Count of sessions that were restarted by an end-user. Only available to agents using a telephony integration.
- Abandoned rate: Count of sessions that were abandoned by an end-user. Only available to agents using a telephony integration.
- Session end rate: Count of sessions that reached END_SESSION.
- Total no-match count: Total count of occurrences of a no-match event.
- Total turn count: Total number of conversational turns (one end-user input and one agent response is considered a turn).
- Average turn count: Average number of turns.
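The turn-count metrics above can be illustrated with a small sketch. The session records below are hypothetical, purely to show how total and average turn counts relate; this is not part of the Dialogflow CX API.

```python
# Hypothetical session records: each entry is the number of
# conversational turns observed in one session (one end-user
# input plus one agent response = one turn).
sessions = [4, 7, 3, 10]

# Total turn count: sum of turns across all sessions.
total_turn_count = sum(sessions)

# Average turn count: total turns divided by the number of sessions.
average_turn_count = total_turn_count / len(sessions)

print(total_turn_count, average_turn_count)  # 24 6.0
```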
Preparation
To prepare for an experiment:
- Decide which flow will be used for the experiment. You cannot run multiple experiments on a single flow, so ensure that you have partitioned your agent into multiple flows.
- Create multiple versions for your flow. The differences between each version could be small or large, depending on what you want to compare.
- Decide on the amount of traffic that will be allocated to your experiment. If you are testing minor changes, you might start with a higher amount of traffic. For large changes that may be disruptive, consider allocating a small amount of traffic to your experiment.
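To reason about the traffic decision above, it can help to check that the control and variant percentages leave room for regular production traffic. The helper below is an illustrative sketch, not a Dialogflow CX API; the numbers are hypothetical.

```python
def validate_split(control_pct: float, variant_pcts: list) -> float:
    """Check an experiment's traffic split and return the share of
    live traffic left on the regular production path.

    Illustrative helper, not part of Dialogflow CX: the control and
    variant percentages together must not exceed 100% of traffic,
    and an experiment takes one to four variant versions.
    """
    if not 1 <= len(variant_pcts) <= 4:
        raise ValueError("an experiment takes one to four variant versions")
    experiment_total = control_pct + sum(variant_pcts)
    if experiment_total > 100:
        raise ValueError("experiment traffic exceeds 100%")
    return 100 - experiment_total

# A cautious split for a potentially disruptive change: 5% control,
# 5% variant, leaving 90% of traffic on the production version.
print(validate_split(5, [5]))  # 90
```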
Create an experiment
To create an experiment:
- Open the Dialogflow CX console.
- Select your project to open the agent selector.
- Select your agent to open the agent builder.
- Select the Manage tab.
- Click Experiments to open the Experiments panel.
- Select the Status tab.
- Click Create.
- Enter a description.
- Select the environment that you want to run the experiment from.
- Select the flow for the experiment.
- Optionally, enter the number of days after which the experiment will automatically stop.
- Enter the control flow version and the percentage of traffic that will go to the control version.
- Enter one to four variant flow versions and the percentage of traffic that will go to each variant version.
- Optionally, click Enable auto rollout and steps for a gradual rollout of traffic to the variant flow. An automated experiment is based on steps, which are time durations during which the percentage of traffic sent to the variant flow is increased. Auto rollout supports only one variant flow.
- Under Rollout rules, you can set one or more conditional rules that determine how the experiment proceeds through the steps.
- If you select Match at least one rule, the experiment proceeds to the next step if at least one rule and the time duration for the current step are met.
- If you select Match all rules, the experiment proceeds to the next step if all rules and the time duration for the current step are met.
- If you select Steps only, the experiment proceeds according to the time durations for each step.
- Under Increase steps, define a percentage of traffic to allocate to the variant flow and a time duration for each step. The default time duration for each step is 6 hours.
- Select Stop conditions to set one or more conditions under which to stop sending traffic to the variant flow. Note that you cannot restart a stopped experiment.
- Click Save.
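The auto rollout stepping described above can be sketched as a small decision function. The mode names and rule representation below are hypothetical illustrations of the three Rollout rules options, not an actual Dialogflow CX interface; the service evaluates these conditions internally.

```python
def should_advance(mode, rules_met, step_duration_elapsed):
    """Decide whether an auto rollout proceeds to its next step.

    mode: one of "steps_only", "match_at_least_one", "match_all"
          (hypothetical names for the three Rollout rules options).
    rules_met: list of booleans, one per conditional rule.
    step_duration_elapsed: whether the current step's time duration
          has passed (the default duration for each step is 6 hours).
    """
    if not step_duration_elapsed:
        return False  # every mode waits out the current step's duration
    if mode == "steps_only":
        return True  # advance on time alone
    if mode == "match_at_least_one":
        return any(rules_met)
    if mode == "match_all":
        return all(rules_met)
    raise ValueError("unknown mode: %s" % mode)

# With one of two rules met and the step's time duration elapsed:
print(should_advance("match_at_least_one", [True, False], True))  # True
print(should_advance("match_all", [True, False], True))           # False
print(should_advance("steps_only", [], True))                     # True
```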
Start and stop an experiment
You can start a saved experiment or manually stop a running experiment at any time. Stopping an experiment will cancel the traffic allocation and will revert traffic to its original state.
To start or stop an experiment:
- Open the Experiments panel.
- Select the Status tab.
- Click Start or Stop for an experiment in the list.
Manage experiments
You can edit or delete experiments at any time:
- Open the Experiments panel.
- Select the Status tab.
- Click the more options ⋮ menu for an experiment in the list.
- Click Edit or Delete.
Monitor status of experiments
All experiments, regardless of their status, are listed in the Experiments panel. An experiment can have one of four statuses:
- Draft: Experiment has been created, but it has never run.
- Pending: Experiment has started recently, but results are not available yet.
- Running: Experiment is running and interim results are available.
- Completed: Experiment has been stopped, either automatically or manually.
View experiment results
To see experiment results:
- Open the Dialogflow CX console.
- Select your project to open the agent selector.
- Select your agent to open the agent builder.
- Select the Manage tab.
- Click Experiments to open the Experiments panel.
- Select the Results tab.
- Select an environment and experiment to see the results.
Green results indicate a favorable outcome, while red indicates a less favorable one. Note that a higher number is not always better: for example, a low abandoned rate is favorable, while a high abandoned rate is not.
Limitations
The following limitations apply:
- The Enable interaction logging agent setting must be enabled.