You can use the built-in test feature to uncover bugs and prevent regressions. To test your agent, you use the simulator to define golden test cases, then you execute those test cases as needed. A test execution verifies that agent responses have not changed for the end-user inputs defined in the test case.
The instructions below show you how to use the console, but you can also find the same functionality in the API.
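For example, the test case operations are available through the TestCases API. The following is a minimal sketch using the Python client library (google-cloud-dialogflow-cx) that lists an agent's test cases; the project, location, and agent IDs are placeholders:

```python
from google.cloud import dialogflowcx_v3

# Placeholder resource IDs. Regional agents also need a regional API endpoint
# passed via client_options (for example, "us-central1-dialogflow.googleapis.com").
PROJECT_ID = "my-project"
LOCATION = "global"
AGENT_ID = "my-agent-id"

client = dialogflowcx_v3.TestCasesClient()
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"

# List the agent's existing test cases with their display names and tags.
for test_case in client.list_test_cases(parent=parent):
    print(test_case.name, test_case.display_name, list(test_case.tags))
```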
Simulator settings
When you first open the simulator, you need to select an agent environment or flow versions, as well as an active flow. In most cases, you should use the draft environment and the default start flow.
You can also enable or disable webhook calls at any time with the webhook toggle button. Disabling webhooks is useful when defining test cases.
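If you drive a test conversation through the API instead of the console, the webhook toggle corresponds to the disable_webhook query parameter. The following is a rough sketch using the Python client library, with placeholder IDs and an example utterance:

```python
import uuid
from google.cloud import dialogflowcx_v3

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
session = (
    f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"
    f"/sessions/{uuid.uuid4()}"
)

sessions_client = dialogflowcx_v3.SessionsClient()
response = sessions_client.detect_intent(
    request=dialogflowcx_v3.DetectIntentRequest(
        session=session,
        query_input=dialogflowcx_v3.QueryInput(
            text=dialogflowcx_v3.TextInput(text="I want to book a flight"),  # example utterance
            language_code="en",
        ),
        # Equivalent of the simulator's webhook toggle: skip webhook calls for this turn.
        query_params=dialogflowcx_v3.QueryParameters(disable_webhook=True),
    )
)
print([" ".join(msg.text.text) for msg in response.query_result.response_messages])
```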
Simulator input
When interacting with the simulator, you provide end-user input as text, then press enter or click the send button. In addition to plain text, you can choose alternate input types with the input selector:
- Parameter: Inject a parameter value. You can provide new parameters or set preset values for existing parameters.
- Event: Invoke an event.
- DTMF: Send dual-tone multi-frequency signaling (Touch-Tone) input for telephony interactions.
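When driving the agent through the API, these input types map onto the QueryInput and QueryParameters messages. The following is a rough sketch using the Python client library; the event name, parameter name, and DTMF digits are illustrative only:

```python
from google.cloud import dialogflowcx_v3
from google.protobuf import struct_pb2

# Event input: invoke an event instead of sending text.
event_input = dialogflowcx_v3.QueryInput(
    event=dialogflowcx_v3.EventInput(event="my-custom-event"),  # hypothetical event name
    language_code="en",
)

# DTMF input: Touch-Tone digits for telephony interactions.
dtmf_input = dialogflowcx_v3.QueryInput(
    dtmf=dialogflowcx_v3.DtmfInput(digits="1234", finish_digit="#"),
    language_code="en",
)

# Parameter injection: preset a session parameter alongside the query.
preset = struct_pb2.Struct()
preset.update({"passenger-count": 2})  # hypothetical parameter
query_params = dialogflowcx_v3.QueryParameters(parameters=preset)
```

Any of these can be passed to detect_intent in place of, or alongside, plain text input.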
Create a test case
To create a conversation:
- Open the Dialogflow CX console.
- Choose your project.
- Select your agent.
- Click Test Agent to open the simulator.
- Chat with the agent to create a conversation that covers the functionality you want to test. For each turn, verify correct values for the triggered intent, the agent response, the active page, and the session parameters.
To save a conversation as a test case:
- Click the save button.
- Enter a test case display name. Every test case must have a unique display name.
- Optionally provide a tag name. Tags help you organize your test cases. All tags must start with a "#".
- Optionally provide a note that describes the purpose of the test case.
- Optionally select parameters you want to track in the test case. A list of suggested parameters is provided, and you can also enter other parameters to track. If you select tracking parameters, the parameter assertion is checked when the test case runs; see the Run test cases section for details on the parameter assertion.
- Click Save to save the test case.
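The API equivalent is the create_test_case method in the Python client library, which takes the golden conversation turns, tags, notes, and tracking parameters directly. The following is a rough sketch; the display name, tag, note, tracking parameter, and single conversation turn are placeholders:

```python
from google.cloud import dialogflowcx_v3

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"

client = dialogflowcx_v3.TestCasesClient()
test_case = dialogflowcx_v3.TestCase(
    display_name="Book a flight - happy path",   # must be unique per agent
    tags=["#booking"],                           # tags must start with "#"
    notes="Covers the basic flight booking flow.",
    test_config=dialogflowcx_v3.TestConfig(
        tracking_parameters=["departure-city"],  # hypothetical tracking parameter
    ),
    test_case_conversation_turns=[
        dialogflowcx_v3.ConversationTurn(
            user_input=dialogflowcx_v3.ConversationTurn.UserInput(
                input=dialogflowcx_v3.QueryInput(
                    text=dialogflowcx_v3.TextInput(text="I want to book a flight"),
                    language_code="en",
                ),
            ),
        ),
    ],
)

created = client.create_test_case(parent=parent, test_case=test_case)
print("Created:", created.name)
```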
Run test cases
To view all test cases for an agent, click Test Cases in the Manage tab. The test cases table shows the test name, the tags, the latest test time and environment, and the latest test result.
To run test cases:
- Select the test cases you want to run, and click Run. Alternatively, you can click Run all test cases.
- Select the environment you want to run the test cases against.
- The tests start running, and you can view their status in the task queue. The test result is updated when each run completes.
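Through the API, the batch_run_test_cases method in the Python client library does the same; it returns a long-running operation whose result contains one entry per test case. A rough sketch with placeholder IDs:

```python
from google.cloud import dialogflowcx_v3

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"

client = dialogflowcx_v3.TestCasesClient()
test_case_names = [tc.name for tc in client.list_test_cases(parent=parent)]

operation = client.batch_run_test_cases(
    request=dialogflowcx_v3.BatchRunTestCasesRequest(
        parent=parent,
        test_cases=test_case_names,
        # environment=f"{parent}/environments/ENVIRONMENT_ID",  # draft is used if unset
    )
)
response = operation.result()  # blocks until the batch run completes
for result in response.results:
    print(result.name, result.test_result)
```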
To view the detailed result for a test, click the test case. The golden test case and the conversation from the latest run are shown side by side.
You can click any of the agent's conversational turns to see the details for that turn. The test engine checks the following types of data, turn by turn, to evaluate the test result:
Agent dialogue:
For each conversational turn, the agent dialogue from the golden test case is compared with the dialogue from the latest run. If there is any difference, a warning is displayed. These differences do not prevent a test from passing, because agent dialogue often varies for the same agent state.
Matched intent:
The matched intent must be the same for each turn for a test to pass.
Current page:
The active page must be the same for each turn for a test to pass.
Session parameters:
If you added tracking parameters when you created the test case, the test engine checks the corresponding session parameters and fails the test if any parameters are missing or unexpected, or if a parameter value does not match.
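You can inspect the same turn-by-turn data programmatically. The following is a rough sketch using the Python client library that reads a test case's most recent result from its last_test_result field, with a placeholder test case ID:

```python
from google.cloud import dialogflowcx_v3

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
TEST_CASE_ID = "my-test-case-id"                                        # placeholder

client = dialogflowcx_v3.TestCasesClient()
test_case = client.get_test_case(
    name=(
        f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"
        f"/testCases/{TEST_CASE_ID}"
    )
)

result = test_case.last_test_result  # most recent run, assuming the test case has been run
print("Overall result:", result.test_result)
for i, turn in enumerate(result.conversation_turns):
    output = turn.virtual_agent_output
    print(
        f"Turn {i}: intent={output.triggered_intent.display_name}, "
        f"page={output.current_page.display_name}"
    )
    for diff in output.differences:
        print("  difference:", diff.description)
```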
In some situations, a test case may fail as expected because you have updated the agent. If the conversation in the latest run reflects the expected changes, you can click Save as golden to overwrite the golden test case.
Edit test cases
To edit a test case, select the test case from the Test cases table, then click the edit icon next to the name of the test case. The Update Test Cases dialog appears.
To edit the metadata and settings for the test case:
- Click the Settings tab.
- Edit the Test case name, Tags, and Note fields, or add new tracking parameters.
- Click Save.
To edit the user input for the test case:
- Click the User Input tab.
- Add, remove, or edit the user inputs in JSON format.
- Click Confirm. An automatic test run begins, and the updated conversation is displayed after the test run completes.
- Click Save to overwrite the original golden test case, or click Save as to create a new test case with the changes.
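Programmatic edits go through the update_test_case method in the Python client library, with a field mask so that only the named fields are overwritten. A rough sketch; the note, tag, and IDs are placeholders:

```python
from google.cloud import dialogflowcx_v3
from google.protobuf import field_mask_pb2

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
TEST_CASE_ID = "my-test-case-id"                                        # placeholder

client = dialogflowcx_v3.TestCasesClient()
test_case = client.get_test_case(
    name=(
        f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"
        f"/testCases/{TEST_CASE_ID}"
    )
)

test_case.notes = "Updated note describing the scenario."  # example edit
test_case.tags.append("#regression")                       # example edit

updated = client.update_test_case(
    test_case=test_case,
    update_mask=field_mask_pb2.FieldMask(paths=["notes", "tags"]),
)
print("Updated:", updated.display_name)
```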
View test coverage
To view a test coverage report for all test cases, click Coverage.
The Coverage page includes the following tabs:
- Transitions: Coverage is determined for all state handlers (not including route groups) with a transition target exercised by the test cases. The source flow/page and transition target flow/page are listed in the table.
- Intents: Coverage is determined for all intents matched by the test cases.
- Route groups: Coverage is determined for all route groups matched by the test cases.
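The API counterpart is the calculate_coverage method in the Python client library, which returns one report per coverage type. The following is a rough sketch with placeholder IDs; note that in the generated Python client the coverage type field is exposed as type_ (with a trailing underscore):

```python
from google.cloud import dialogflowcx_v3

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
agent = f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"

client = dialogflowcx_v3.TestCasesClient()
response = client.calculate_coverage(
    request=dialogflowcx_v3.CalculateCoverageRequest(
        agent=agent,
        # Other coverage types: PAGE_TRANSITION, TRANSITION_ROUTE_GROUP.
        type_=dialogflowcx_v3.CalculateCoverageRequest.CoverageType.INTENT,
    )
)
print("Intent coverage score:", response.intent_coverage.coverage_score)
```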
Import and export test cases
To export test cases:
- Select the test cases you want to export and click Export, or click Export all test cases.
- Click Download to local file, or provide a Cloud Storage bucket URI and click Export to Google Cloud Storage.
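The export_test_cases method in the Python client library provides the same options; writing to Cloud Storage is a long-running operation. A rough sketch; the bucket URI and IDs are placeholders:

```python
from google.cloud import dialogflowcx_v3

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"

client = dialogflowcx_v3.TestCasesClient()
operation = client.export_test_cases(
    request=dialogflowcx_v3.ExportTestCasesRequest(
        parent=parent,
        gcs_uri="gs://my-bucket/test-cases.blob",  # placeholder bucket and object
    )
)
response = operation.result()  # blocks until the export completes
print("Exported to:", response.gcs_uri)
```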
When importing test cases, Conversational Agents (Dialogflow CX) always creates new test cases for the target agent and does not overwrite any existing test cases. To import test cases:
- Click Import.
- Choose a local file or provide a Cloud Storage bucket URI.
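The import_test_cases method in the Python client library behaves the same way: it always creates new test cases and never overwrites existing ones. A rough sketch; the bucket URI and IDs are placeholders:

```python
from google.cloud import dialogflowcx_v3

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"  # placeholders
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"

client = dialogflowcx_v3.TestCasesClient()
operation = client.import_test_cases(
    request=dialogflowcx_v3.ImportTestCasesRequest(
        parent=parent,
        gcs_uri="gs://my-bucket/test-cases.blob",  # placeholder bucket and object
    )
)
response = operation.result()  # blocks until the import completes
print("Imported test cases:", list(response.names))
```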