Generative versus deterministic

During a conversation, Conversational Agents (Dialogflow CX) agents always use language models to understand end-user intention, but you can choose whether and how language models are used to generate agent responses. When designing your agent, you can choose between fully generative, partly generative, and deterministic features.

This guide provides an overview of these features. It helps you decide which of these features to use, so you know which documentation is relevant to you.

Fully generative

The fully generative features are built on Vertex AI large language models (LLMs), both for understanding end-user intention and for generating agent responses. These features are easy to use and provide very natural conversations. The following is an overview of the fully generative features:

Playbooks: Playbooks provide a new way to create virtual agents using LLMs. You only need to provide natural language instructions and structured data. This can significantly reduce virtual agent creation and maintenance time, and it enables brand-new types of conversational experiences for your business.

Data stores: Data stores parse and comprehend your public or private content (websites, internal documents, and so on). Once this information is indexed, your agent can answer questions and have conversations about the content. You just need to provide the content.
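
For example, once an agent has a data store attached, you can send end-user questions to it with the Sessions API of the Dialogflow CX client library. The following Python sketch assumes an existing agent with an indexed data store; the project, location, agent ID, and question text are placeholders.

import uuid

from google.cloud import dialogflowcx_v3

# Placeholders: replace with your own project, location, and agent ID.
PROJECT_ID = "my-project"
LOCATION = "global"
AGENT_ID = "my-agent-id"

# Regional agents need a regional endpoint such as
# "us-central1-dialogflow.googleapis.com", set via client_options.
client = dialogflowcx_v3.SessionsClient()

# A session represents one conversation with one end-user.
session_path = (
    f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"
    f"/sessions/{uuid.uuid4()}"
)

request = dialogflowcx_v3.DetectIntentRequest(
    session=session_path,
    query_input=dialogflowcx_v3.QueryInput(
        text=dialogflowcx_v3.TextInput(text="What is your return policy?"),
        language_code="en",
    ),
)

response = client.detect_intent(request=request)

# Collect the agent's text responses, generated from the indexed content.
print([" ".join(msg.text.text) for msg in response.query_result.response_messages])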

Deterministic flows

If you require more deterministic control over the conversation and all responses generated by the agent, you can design your agent with flows.

Flows: Flows use language models to understand end-user intention during a conversation, so intent matching is not completely deterministic. However, once intention is established, you have complete control over the conversation flow and agent responses. Designing an agent with deterministic flows typically takes more design time, but it is a good option for agents that require explicit control over agent responses.
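
With flows, responses are authored explicitly. As a minimal sketch of that deterministic control, the following Python snippet adds a page with a fixed, human-written entry response to an existing flow. The project, location, agent, and flow IDs, the page name, and the response text are placeholders, and routes into the page still need to be configured separately.

from google.cloud import dialogflowcx_v3

# Placeholders: replace with your own project, location, agent, and flow IDs.
PROJECT_ID = "my-project"
LOCATION = "global"
AGENT_ID = "my-agent-id"
FLOW_ID = "00000000-0000-0000-0000-000000000000"  # for example, the Default Start Flow

client = dialogflowcx_v3.PagesClient()

flow_path = (
    f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}/flows/{FLOW_ID}"
)

# The entry fulfillment is a static response: the agent says exactly this text
# whenever the conversation reaches the page.
page = dialogflowcx_v3.Page(
    display_name="Order Status",
    entry_fulfillment=dialogflowcx_v3.Fulfillment(
        messages=[
            dialogflowcx_v3.ResponseMessage(
                text=dialogflowcx_v3.ResponseMessage.Text(
                    text=["Let me look up your order. What is your order number?"]
                )
            )
        ]
    ),
)

created_page = client.create_page(request={"parent": flow_path, "page": page})
print(created_page.name)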

Partly generative flows

Flows have some optional generative features that you can use when you don't need deterministic control over agent responses in certain conversation scenarios. The following is an overview of these features:

Generators: Generators are used to generate agent responses. Rather than providing the agent response explicitly, you provide an LLM prompt, which can handle many scenarios, including conversation summarization, question answering, customer information retrieval, and escalation to a human; a sketch of defining a generator follows this list.

Generative fallback: Generative fallback generates agent responses when end-user input does not match an expected intention. You can enable generative fallback for specific scenarios by providing an LLM prompt that generates the response.
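
As an illustration of providing a prompt instead of a fixed response, the following Python sketch defines a conversation-summarization generator with the client library's Generators API. The project, location, and agent IDs, the display name, and the prompt text are placeholders, and the exact type and field names may differ by client library version, so treat this as an outline rather than a definitive implementation.

from google.cloud import dialogflowcx_v3

# Placeholders: replace with your own project, location, and agent ID.
PROJECT_ID = "my-project"
LOCATION = "global"
AGENT_ID = "my-agent-id"

client = dialogflowcx_v3.GeneratorsClient()

agent_path = f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"

# The prompt replaces an explicitly authored response; $conversation is a
# built-in placeholder that expands to the conversation transcript.
generator = dialogflowcx_v3.Generator(
    display_name="Conversation summary",
    prompt_text=dialogflowcx_v3.Phrase(
        text=(
            "Summarize the following conversation between a customer and an "
            "agent in two sentences: $conversation"
        )
    ),
)

created = client.create_generator(
    request={"parent": agent_path, "generator": generator}
)
print(created.name)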