Generative fallback

The generative fallback feature uses Google's latest generative large language models (LLMs) to generate virtual agent responses when end-user input does not match an intent or parameter for form filling.

The feature can be configured with a text prompt that instructs the LLM how to respond. You can use a predefined text prompt or add your own prompts. With the predefined prompt, the virtual agent can handle basic conversational situations. For example, it can:

  • Greet and say goodbye to the user.
  • Repeat what the agent said in case the user didn't understand.
  • Hold the line when the user asks for it.
  • Summarize the conversation.

You can enable generative fallback on no-match event handlers used in flows, pages, or during parameter filling. When generative fallback is enabled for a no-match event, Dialogflow attempts to produce a generated response whenever that event triggers and returns it to the end-user. If response generation is unsuccessful, the regular prescribed agent response is issued instead.

Limitations

The feature is currently available in the languages supported by the Vertex AI PaLM API.

Enable generative fallback

You can enable generative fallback in your agent on no-match event handlers, which can be used in flow, page, or parameter fulfillment.

Enable generative fallback for an entire flow's no-match events:

  1. Go to the Dialogflow CX Console.
  2. Select a project.
  3. Select an agent and a flow.
  4. Expand the Start page of the flow.
  5. Click sys.no-match-default under Event handlers.
  6. Check Enable generative fallback under Agent responses.
  7. Click Save.

Enable generative fallback on specific no-match events:

  1. Navigate to the target No-match event handler (any event starting with No-match, such as No-match default, No-match 1, and so on).
  2. Check Enable generative fallback under Agent responses.
  3. Click Save.
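
The preceding steps use the console. If you manage agents through the Dialogflow CX API, the same switch corresponds to a generative fallback flag on the event handler's fulfillment. The following Python sketch, based on the google-cloud-dialogflowcx client library, illustrates the idea for a flow-level no-match handler; the project, location, agent, and flow IDs are placeholders, and the enable_generative_fallback field name should be verified against your client library version.

  from google.cloud import dialogflowcx_v3 as cx
  from google.protobuf import field_mask_pb2

  # Placeholder resource name of the flow whose no-match handler you want to change.
  flow_name = (
      "projects/my-project/locations/us-central1/agents/my-agent-id"
      "/flows/00000000-0000-0000-0000-000000000000"
  )

  # Regional agents require the matching regional API endpoint.
  client = cx.FlowsClient(
      client_options={"api_endpoint": "us-central1-dialogflow.googleapis.com"}
  )
  flow = client.get_flow(name=flow_name)

  # Enable generative fallback on the flow's default no-match event handler.
  for handler in flow.event_handlers:
      if handler.event == "sys.no-match-default":
          handler.trigger_fulfillment.enable_generative_fallback = True

  client.update_flow(
      flow=flow,
      update_mask=field_mask_pb2.FieldMask(paths=["event_handlers"]),
  )

Page-level and parameter-level no-match handlers can be updated in the same way through the corresponding Pages API calls.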

Configure generative fallback

As mentioned above, the generative fallback feature passes a request to a large language model to produce the generated response. The request takes the form of a text prompt that mixes natural language with information about the current state of the agent and of the conversation. The prompt and the generated response are checked against a list of banned phrases. If either contains a banned phrase, or is otherwise deemed unsafe, generation fails and the regular prescribed response (under Agent says in the same fulfillment) is issued instead.

The feature can be configured in multiple ways:

  • Select a predefined prompt.
  • Define a custom prompt.
  • Add or remove phrases from the list of banned phrases.

When creating a prompt, you can include the following placeholders in addition to a natural language description of the kind of responses that should be generated:

Term                   Definition
$conversation          The conversation between the agent and the user, excluding the last user utterance.
$last-user-utterance   The last user utterance.
$flow-description      The flow description of the active flow.
$route-descriptions    The intent descriptions of the active intents.

For these placeholders to be effective, make sure your flows and intents have good descriptions.
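
For illustration, a custom prompt that uses these placeholders could look like the following. This is a made-up example, not the built-in Example template; the company name and instructions are placeholders.

  You are a friendly virtual agent for Example Corp. Your job is to help the
  human politely and briefly.

  This is what the agent can do: $flow-description
  These are the tasks the agent can currently handle: $route-descriptions

  This is the conversation so far: $conversation
  The human just said: $last-user-utterance

  Respond to the human in one or two sentences. If you cannot help with the
  request, say so politely.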

Choose a predefined prompt

  1. In Agent Settings, navigate to the ML tab, and then the Generative AI sub-tab.
  2. Select one of the options in the Template dropdown.
  3. Click Save.

The feature provides two prompt templates: the Default template (whose text is not visible) and the Example template, which can serve as a guide for writing your own prompts. If you choose the Default template and the Data store prompt appears on the Generative AI sub-tab, you can add information about the agent that influences the agent's responses.

Define your own prompt

  1. In Agent Settings, navigate to the ML tab, and then the Generative AI sub-tab.
  2. Select + new template in the Template dropdown.
  3. Add a Template name.
  4. Add a Text prompt.
  5. Click Save.

You can also start by editing the Example template and saving it as a new template:

  1. Select Example in the Template dropdown.
  2. Click Edit.
  3. Edit the Template name.
  4. Edit the Text prompt.
  5. Click Save.
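
Prompt templates can also be managed programmatically through the agent's generative settings. The sketch below assumes the google-cloud-dialogflowcx client library's GenerativeSettings resource and its fallback_settings fields (verify the exact names against your library version); it adds a hypothetical template named my-template and selects it.

  from google.cloud import dialogflowcx_v3 as cx
  from google.protobuf import field_mask_pb2

  # Placeholder generative settings resource name for the agent.
  settings_name = (
      "projects/my-project/locations/global/agents/my-agent-id/generativeSettings"
  )

  client = cx.AgentsClient()
  settings = client.get_generative_settings(name=settings_name, language_code="en")

  # Add a custom prompt template; prompt_text is shortened for the example.
  settings.fallback_settings.prompt_templates.append(
      cx.GenerativeSettings.FallbackSettings.PromptTemplate(
          display_name="my-template",  # hypothetical template name
          prompt_text=(
              "You are a friendly virtual agent. "
              "Conversation so far: $conversation "
              "The human just said: $last-user-utterance"
          ),
      )
  )
  # selected_prompt refers to the template's display name.
  settings.fallback_settings.selected_prompt = "my-template"

  client.update_generative_settings(
      generative_settings=settings,
      update_mask=field_mask_pb2.FieldMask(paths=["fallback_settings"]),
  )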

Modify the list of banned phrases

  1. In Agent Settings, navigate to the ML tab, and then the Generative AI sub-tab.
  2. In the Banned phrases section, review the list and add or remove phrases as needed.
  3. Click Save.
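
Banned phrases live on the same generative settings resource. Continuing the previous sketch, and again assuming the field and type names used by the client library (verify SafetySettings and its Phrase fields against your library version), you could append a phrase like this:

  # Add a banned phrase for English and save only the safety settings.
  settings.generative_safety_settings.banned_phrases.append(
      cx.SafetySettings.Phrase(text="example banned phrase", language_code="en")
  )
  client.update_generative_settings(
      generative_settings=settings,
      update_mask=field_mask_pb2.FieldMask(paths=["generative_safety_settings"]),
  )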

Test generative fallback

The generative fallback feature can be tested directly in the simulator. Whenever a user utterance leads to a no-match on a flow or page where the no-match event is configured to produce a generative response, and generation succeeds, the agent outputs the generated response.
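
You can also exercise the same behavior outside the console with the detect_intent API. The following minimal Python sketch (project, location, and agent IDs are placeholders) sends an utterance and prints whatever the agent answers, whether a generated response or the regular prescribed one.

  import uuid

  from google.cloud import dialogflowcx_v3 as cx

  # Placeholder agent resource name; a fresh session ID starts a new conversation.
  agent = "projects/my-project/locations/global/agents/my-agent-id"
  session = f"{agent}/sessions/{uuid.uuid4()}"

  client = cx.SessionsClient()
  response = client.detect_intent(
      request=cx.DetectIntentRequest(
          session=session,
          query_input=cx.QueryInput(
              text=cx.TextInput(text="Could you summarize our conversation?"),
              language_code="en",
          ),
      )
  )

  # Print the agent's text responses.
  for message in response.query_result.response_messages:
      if message.text.text:
          print(" ".join(message.text.text))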

Codelab

Also see the Generative fallback Codelab.