Test cases

You can use the built-in test feature to uncover bugs and prevent regressions. To test your agent, you use the simulator to define golden test cases, then you execute those test cases as needed. A test execution verifies that agent responses have not changed for the end-user inputs defined in the test case.

The instructions below show you how to use the console, but you can also find the same functionality in the API.
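
For example, the test case operations described below are also available through the TestCases API. The following is a minimal Python setup sketch, assuming the google-cloud-dialogflowcx client library; the project, location, and agent IDs are placeholders, and the later sketches on this page assume the same kind of setup.

    from google.api_core.client_options import ClientOptions
    from google.cloud import dialogflowcx_v3

    # Placeholder identifiers; substitute your own project, location, and agent ID.
    AGENT = "projects/my-project/locations/us-central1/agents/my-agent-id"

    # Agents in regional locations require the matching regional endpoint,
    # for example "us-central1-dialogflow.googleapis.com".
    client = dialogflowcx_v3.TestCasesClient(
        client_options=ClientOptions(api_endpoint="us-central1-dialogflow.googleapis.com")
    )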

Simulator settings

When you first open the simulator, you need to select either an agent environment or specific flow versions, plus an active flow. In most cases, you should use the draft environment and the Default Start Flow.

You can also enable or disable webhook calls at any time with the webhook toggle button. Disabling webhooks is useful when defining test cases.

Simulator input

When interacting with the simulator, you provide end-user input as text, then press enter or click the send button. In addition to plain text, you can choose alternate input types with the input selector:

  • Parameter: Inject a parameter value. You can provide new parameters or preset values for existing parameters.
  • Event: Invoke an event.
  • DTMF: Send dual-tone multi-frequency signaling (Touch-Tone) input for telephony interactions.
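
In the API, these input types map to the variants of the QueryInput message, and the webhook toggle maps to a query parameter. The following is a rough sketch, assuming the google-cloud-dialogflowcx client library; the utterance, event, digits, and parameter values are illustrative only.

    from google.cloud import dialogflowcx_v3

    # Plain text input.
    text_input = dialogflowcx_v3.QueryInput(
        text=dialogflowcx_v3.TextInput(text="I want to order a large pizza"),
        language_code="en",
    )

    # Event input, for example a built-in no-input event.
    event_input = dialogflowcx_v3.QueryInput(
        event=dialogflowcx_v3.EventInput(event="sys.no-input-default"),
        language_code="en",
    )

    # DTMF (Touch-Tone) input for telephony interactions.
    dtmf_input = dialogflowcx_v3.QueryInput(
        dtmf=dialogflowcx_v3.DtmfInput(digits="1234", finish_digit="#"),
        language_code="en",
    )

    # Parameter injection and the webhook toggle are query parameters.
    query_params = dialogflowcx_v3.QueryParameters(
        parameters={"size": "large"},  # new or preset session parameters
        disable_webhook=True,          # same effect as switching webhooks off
    )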

Create a test case

To create a conversation:

  1. Open the Dialogflow CX Console.
  2. Choose your GCP project.
  3. Select your agent.
  4. Click Test Agent to open the simulator.
  5. Chat with the agent to create a conversation that covers the functionality you want to test. For each turn, verify correct values for the triggered intent, the agent response, the active page, and the session parameters.
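
The same kind of conversation can also be driven programmatically with the Sessions API, checking the matched intent, agent response, current page, and session parameters on each turn. The following is a rough sketch, assuming the google-cloud-dialogflowcx client library; the agent path and utterance are placeholders.

    import uuid

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder
    SESSION = f"{AGENT}/sessions/{uuid.uuid4()}"

    sessions_client = dialogflowcx_v3.SessionsClient()

    response = sessions_client.detect_intent(
        request=dialogflowcx_v3.DetectIntentRequest(
            session=SESSION,
            query_input=dialogflowcx_v3.QueryInput(
                text=dialogflowcx_v3.TextInput(text="I want to order a large pizza"),
                language_code="en",
            ),
        )
    )

    # Values to verify for this turn.
    result = response.query_result
    print("Matched intent:", result.intent.display_name)
    print("Current page:", result.current_page.display_name)
    print("Session parameters:", result.parameters)
    for message in result.response_messages:
        if message.text.text:
            print("Agent response:", " ".join(message.text.text))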


To save a conversation as a test case:

  1. Click the save button.
  2. Enter a test case display name. Every test case must have a unique display name.
  3. Optionally provide a tag name. Tags help you organize your test cases. All tags must start with a "#".
  4. Optionally provide a note that describes the purpose of the test case.
  5. Optionally select parameters you want to track in the test case. A list of suggested parameters is provided, and you can also enter other parameters to track. If you select tracking parameters, the corresponding parameter assertions are checked when the test case runs; see the Run test cases section for details.
  6. Click Save to save the test case.
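
The save steps above have an API equivalent. The following sketch creates a test case with a display name, tag, note, tracking parameter, and a single conversation turn, assuming the google-cloud-dialogflowcx client library; all names and values are placeholders.

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder

    client = dialogflowcx_v3.TestCasesClient()

    test_case = dialogflowcx_v3.TestCase(
        display_name="Order a large pizza",  # must be unique within the agent
        tags=["#ordering"],                  # tags must start with "#"
        notes="Covers the happy path for placing an order.",
        test_config=dialogflowcx_v3.TestConfig(
            tracking_parameters=["size"],    # session parameters to assert on each run
        ),
        test_case_conversation_turns=[
            dialogflowcx_v3.ConversationTurn(
                user_input=dialogflowcx_v3.ConversationTurn.UserInput(
                    input=dialogflowcx_v3.QueryInput(
                        text=dialogflowcx_v3.TextInput(text="I want to order a large pizza"),
                        language_code="en",
                    ),
                    is_webhook_enabled=False,  # keep webhooks disabled for the golden run
                ),
            ),
        ],
    )

    created = client.create_test_case(parent=AGENT, test_case=test_case)
    print("Created test case:", created.name)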

Run test cases

To view all test cases for an agent, click Test Cases in the Manage tab. The test cases table shows the test name, the tags, the latest test time and environment, and the latest test result.
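
The same listing is available through the API. A minimal sketch, assuming the google-cloud-dialogflowcx client library and a placeholder agent path:

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder

    client = dialogflowcx_v3.TestCasesClient()

    # The response is paginated; the client iterates the pages for you.
    for test_case in client.list_test_cases(parent=AGENT):
        print(test_case.display_name, list(test_case.tags))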

To run test cases:

  1. Select the test cases you want to run, and click Run. Alternatively, you can click Run all test cases.
  2. Select the environment you want to run the test cases against.
  3. The tests start running, and you can view their status in the task queue. The test results are updated when the run completes. The same run can be triggered through the API, as sketched below.
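
The following is a rough sketch, assuming the google-cloud-dialogflowcx client library. A batch run is a long-running operation, so the sketch waits for it to finish; the agent, environment, and test case paths are placeholders.

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder
    ENVIRONMENT = f"{AGENT}/environments/my-environment-id"            # placeholder
    TEST_CASES = [f"{AGENT}/testCases/my-test-case-id"]                # placeholder

    client = dialogflowcx_v3.TestCasesClient()

    operation = client.batch_run_test_cases(
        request=dialogflowcx_v3.BatchRunTestCasesRequest(
            parent=AGENT,
            environment=ENVIRONMENT,  # omit to run against the draft environment
            test_cases=TEST_CASES,
        )
    )

    response = operation.result()  # blocks until the batch run completes
    for result in response.results:
        print(result.name, result.test_result.name)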

To view the detailed result for a test, click the test case. The golden conversation and the conversation from the latest run are shown side by side.


You can click any agent turn in the conversation to see the details for that turn. The test engine checks the following types of data, turn by turn, to evaluate the test result:

  • Agent dialogue:

    For each conversational turn, the agent dialogue from the latest run is compared with the golden conversation. Any difference is flagged with a warning, but it does not prevent the test from passing, because agent dialogue often varies for the same agent state.

  • Matched intent:

    The matched intent must be the same for each turn for a test to pass.

  • Current page:

    The active page must be the same for each turn for a test to pass.

  • Session parameters:

    If you added tracking parameters when you created the test case, the test engine checks the corresponding session parameters and fails the test if any parameter is missing or unexpected, or if a parameter value does not match.

In some situations, a test case may fail as expected because the agent has been updated. If the conversation in the latest run reflects the intended changes, you can click Save as golden to overwrite the golden test case.
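
These per-turn checks are also visible through the API: each run produces a test case result whose conversation turns record the differences found. The following sketch prints recent results and their differences, assuming the google-cloud-dialogflowcx client library; the test case path is a placeholder.

    from google.cloud import dialogflowcx_v3

    TEST_CASE = (  # placeholder
        "projects/my-project/locations/global/agents/my-agent-id/testCases/my-test-case-id"
    )

    client = dialogflowcx_v3.TestCasesClient()

    for result in client.list_test_case_results(parent=TEST_CASE):
        print(result.name, result.test_result.name, result.test_time)
        for turn in result.conversation_turns:
            for difference in turn.virtual_agent_output.differences:
                print("  difference:", difference.description)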

Edit test cases

To edit a test case, select the test case from the Test cases table, then click the edit icon next to the name of the test case. The Update Test Cases dialog appears.

To edit the metadata and settings for the test case, click the Settings tab.

  1. You can edit the Test case name, Tags, and Note fields, or add new tracking parameters.

  2. Click Save.

To edit the user input for the test case, click the User Input tab.

  1. Add, remove, or edit the user inputs in JSON format.

  2. Click Confirm. An automatic test run begins, and the updated conversation is displayed after the test run completes.

  3. Click Save to overwrite the original golden test case, or click Save as to create a new test case with the changes.
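
Programmatically, both the settings and the user input of a test case are changed with the same update call and a field mask. The following is a sketch, assuming the google-cloud-dialogflowcx client library; the test case path and new values are placeholders.

    from google.cloud import dialogflowcx_v3
    from google.protobuf import field_mask_pb2

    TEST_CASE = (  # placeholder
        "projects/my-project/locations/global/agents/my-agent-id/testCases/my-test-case-id"
    )

    client = dialogflowcx_v3.TestCasesClient()

    test_case = client.get_test_case(name=TEST_CASE)
    test_case.display_name = "Order a large pizza (v2)"
    test_case.notes = "Updated after the ordering flow was redesigned."

    updated = client.update_test_case(
        test_case=test_case,
        update_mask=field_mask_pb2.FieldMask(paths=["display_name", "notes"]),
    )
    print("Updated test case:", updated.display_name)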

View test coverage

To view a test coverage report for all test cases, click Coverage.

The Coverage page includes the following tabs:

  • Transitions: Coverage is determined for all state handlers (not including those in route groups) whose transition targets are exercised by the test cases. The source flow/page and the transition target flow/page are listed in the table.

  • Intents: Coverage is determined for all intents matched by the test cases.

  • Route groups: Coverage is determined for all route groups matched by the test cases.
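
The same coverage figures can be computed through the API with the coverage calculation call. The following sketch computes transition coverage, assuming the google-cloud-dialogflowcx client library; the agent path is a placeholder, and the exact field spelling in the Python client (type_) is an assumption.

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder

    client = dialogflowcx_v3.TestCasesClient()

    coverage = client.calculate_coverage(
        request=dialogflowcx_v3.CalculateCoverageRequest(
            agent=AGENT,
            type_=dialogflowcx_v3.CalculateCoverageRequest.CoverageType.PAGE_TRANSITION,
        )
    )
    print(f"Transition coverage: {coverage.transition_coverage.coverage_score:.0%}")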

Import and export test cases

To export test cases:

  1. Select the test cases you want to export and click Export, or click Export all test cases.
  2. Click Download to local file, or provide a Cloud Storage bucket URI and click Export to Google Cloud Storage.
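
An export to Cloud Storage can also be scripted; export is a long-running operation. The following is a sketch, assuming the google-cloud-dialogflowcx client library; the agent path and bucket URI are placeholders.

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder

    client = dialogflowcx_v3.TestCasesClient()

    operation = client.export_test_cases(
        request=dialogflowcx_v3.ExportTestCasesRequest(
            parent=AGENT,
            gcs_uri="gs://my-bucket/exported-test-cases",  # placeholder bucket and object
        )
    )
    response = operation.result()  # blocks until the export completes
    print("Exported to:", response.gcs_uri)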

When importing test cases, Dialogflow always creates new test cases for the target agent and does not overwrite any existing test cases. To import test cases:

  1. Click Import.
  2. Choose a local file or provide a Cloud Storage bucket URI.
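
The corresponding import call is also a long-running operation and returns the names of the test cases it created. The following is a sketch, assuming the google-cloud-dialogflowcx client library; the agent path and bucket URI are placeholders.

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/my-project/locations/global/agents/my-agent-id"  # placeholder

    client = dialogflowcx_v3.TestCasesClient()

    operation = client.import_test_cases(
        request=dialogflowcx_v3.ImportTestCasesRequest(
            parent=AGENT,
            gcs_uri="gs://my-bucket/exported-test-cases",  # placeholder
        )
    )
    response = operation.result()  # blocks until the import completes
    print("Imported test cases:", list(response.names))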