Generative versus deterministic features

Conversational Agents (Dialogflow CX) agents can use generative AI features, deterministic conversation control features, or a combination of both when conversing with end-users.

Fully generative features

The fully generative features are built on Vertex AI large language models (LLMs), which are used both to understand end-user intention and to generate agent responses. These features are easy to use and produce very natural conversations. The following is an overview of the fully generative features:

Vertex AI Builder agent apps
Agent apps provide a new way to create virtual agents using LLMs. You only need to provide natural language instructions and structured data. This can significantly reduce virtual agent creation and maintenance time and enable brand new types of conversational experiences for your business.

Vertex AI Builder chat apps (also known as data store agents)
Data store agents parse and comprehend your public or private content (your website, internal documents, and so on). Once this information is indexed, your agent can answer questions and hold conversations about the content. You only need to provide the content.
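The retrieval idea behind data store agents can be illustrated with a toy sketch: index a small set of passages, then answer a question by returning the most relevant one. This is only an illustration under simplified assumptions; real data store agents use Vertex AI indexing and LLM-grounded answer generation, not the keyword-overlap scorer shown here, and all names below are hypothetical.

```python
# Illustrative only: a keyword-overlap retriever standing in for the
# indexing and retrieval that a data store agent performs on your content.

DOCUMENTS = [
    "Our support line is open Monday to Friday, 9am to 5pm.",
    "Returns are accepted within 30 days with a receipt.",
]

def score(question: str, doc: str) -> int:
    """Count how many words the question shares with a passage."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def answer(question: str) -> str:
    """Return the passage that best matches the question."""
    return max(DOCUMENTS, key=lambda d: score(question, d))

print(answer("When is support open?"))
```

A production agent would additionally pass the retrieved passage to an LLM so the response is phrased conversationally rather than quoted verbatim.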

Deterministic flow features

Using basic flow features, you have more deterministic control over the conversation and all responses generated by the agent. Flows use language models to understand end-user intention during a conversation, and this classification step is not completely deterministic. However, once intention is established, you have complete control over the conversation flow and agent responses. Designing an agent with deterministic flows typically takes more design time, but it is a good option for agents that require explicit control over agent responses.
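The split described above can be sketched in a few lines: intent classification is the one model-driven (and therefore probabilistic) step, while every matched intent maps to a fixed, author-controlled response. The function and route names below are hypothetical illustrations, not the Dialogflow CX API.

```python
# Author-defined routes: each intent maps to an exact, fixed agent response.
ROUTES = {
    "check_balance": "You can view your balance in the app under Accounts.",
    "reset_password": "A password reset link has been sent to your email.",
}

DEFAULT_RESPONSE = "Sorry, I didn't understand. Could you rephrase?"

def classify_intent(user_text):
    """Stand-in for the model-based intent classifier -- the only
    non-deterministic step in a flow-based agent."""
    text = user_text.lower()
    if "balance" in text:
        return "check_balance"
    if "password" in text:
        return "reset_password"
    return None

def respond(user_text):
    """Once an intent is matched, the response is fully deterministic."""
    return ROUTES.get(classify_intent(user_text), DEFAULT_RESPONSE)

print(respond("What's my account balance?"))
print(respond("tell me a joke"))
```

The design point is that no generated text ever reaches the end-user: every string the agent can say appears verbatim in the route table.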

Partly generative flow features

Flows have some optional generative features that you can use when you don't need deterministic control over agent responses in certain conversation scenarios. The following is an overview of these features:

Generators
Generators produce agent responses from an LLM prompt rather than from explicitly authored text. A single prompt can handle many scenarios, including conversation summarization, question answering, customer information retrieval, and escalation to a human.

Generative fallback
Generative fallback generates an agent response when end-user input does not match an expected intention. You can enable generative fallback in specific scenarios by providing an LLM prompt that is used to generate the response.
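Generative fallback can be sketched as a two-path response function: the deterministic route table is tried first, and only on a no-match does the agent fill an author-provided prompt template and hand it to an LLM. The LLM call below is a stub and all names are illustrative assumptions, not the Dialogflow CX API.

```python
# Stub for an LLM call; a real agent would invoke a Vertex AI model here.
def generate_with_llm(prompt):
    return "[LLM response to: " + prompt + "]"

# Deterministic routes, tried first.
ROUTES = {"greeting": "Hello! How can I help you today?"}

# Author-provided fallback prompt; the agent fills in the user's input.
FALLBACK_PROMPT = (
    "You are a helpful support agent. The user said: {input}\n"
    "Reply politely and ask a clarifying question."
)

def classify_intent(user_text):
    """Stand-in for model-based intent matching."""
    return "greeting" if "hello" in user_text.lower() else None

def respond(user_text):
    intent = classify_intent(user_text)
    if intent in ROUTES:
        return ROUTES[intent]  # deterministic path: fixed response
    # No intent matched: generative fallback builds the prompt and
    # lets the LLM produce the response.
    return generate_with_llm(FALLBACK_PROMPT.format(input=user_text))
```

This keeps authored responses in control of the common paths while the LLM handles only the unmatched remainder.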