Package google.cloud.dialogflow.v2beta1

Agents

Service for managing Agents.

DeleteAgent

rpc DeleteAgent(DeleteAgentRequest) returns (Empty)

Deletes the specified agent.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ExportAgent

rpc ExportAgent(ExportAgentRequest) returns (Operation)

Exports the specified agent to a ZIP file.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetAgent

rpc GetAgent(GetAgentRequest) returns (Agent)

Retrieves the specified agent.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetValidationResult

rpc GetValidationResult(GetValidationResultRequest) returns (ValidationResult)

Gets agent validation result. Agent validation is performed during training time and is updated automatically when training is completed.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ImportAgent

rpc ImportAgent(ImportAgentRequest) returns (Operation)

Imports the specified agent from a ZIP file.

Uploads new intents and entity types without deleting the existing ones. Intents and entity types with the same name are replaced with the new versions from ImportAgentRequest. After the import, the imported draft agent will be trained automatically (unless disabled in agent settings). However, once the import is done, training may not be completed yet. Please call TrainAgent and wait for the operation it returns in order to train explicitly.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

The operation only tracks when importing is complete, not when it is done training.

Note: You should always train an agent prior to sending it queries. See the training documentation.
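
A minimal sketch of this import-then-train flow, assuming the google-cloud-dialogflow Python client library; the project ID and Cloud Storage URI below are placeholders, not values from this reference.

from google.cloud import dialogflow_v2beta1 as dialogflow

agents_client = dialogflow.AgentsClient()
parent = "projects/my-project"  # hypothetical project

# ImportAgent returns a long-running Operation; result() blocks until the
# import itself finishes (training is tracked by a separate operation).
import_op = agents_client.import_agent(
    request={"parent": parent, "agent_uri": "gs://my-bucket/agent.zip"}
)
import_op.result(timeout=300)

# Train explicitly and wait for the training operation to complete.
train_op = agents_client.train_agent(request={"parent": parent})
train_op.result(timeout=600)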

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

RestoreAgent

rpc RestoreAgent(RestoreAgentRequest) returns (Operation)

Restores the specified agent from a ZIP file.

Replaces the current agent version with a new one. All the intents and entity types in the older version are deleted. After the restore, the restored draft agent will be trained automatically (unless disabled in agent settings). However, once the restore is done, training may not be completed yet. Please call TrainAgent and wait for the operation it returns in order to train explicitly.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

The operation only tracks when restoring is complete, not when it is done training.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SearchAgents

rpc SearchAgents(SearchAgentsRequest) returns (SearchAgentsResponse)

Returns the list of agents. Since there is at most one conversational agent per project, this method is useful primarily for listing all agents across the projects the caller has access to. You can achieve that by using the wildcard project collection ID "-". Refer to List Sub-Collections.
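
A minimal sketch of listing agents across projects with the wildcard collection ID "-", assuming the google-cloud-dialogflow Python client library.

from google.cloud import dialogflow_v2beta1 as dialogflow

agents_client = dialogflow.AgentsClient()

# The returned pager transparently follows next_page_token.
for agent in agents_client.search_agents(request={"parent": "projects/-"}):
    print(agent.parent, agent.display_name)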

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SetAgent

rpc SetAgent(SetAgentRequest) returns (Agent)

Creates/updates the specified agent.

Note: You should always train an agent prior to sending it queries. See the training documentation.
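
A minimal sketch of creating or updating an agent with SetAgent, assuming the google-cloud-dialogflow Python client library; the project ID and agent settings are placeholders.

from google.cloud import dialogflow_v2beta1 as dialogflow

agents_client = dialogflow.AgentsClient()

agent = dialogflow.Agent(
    parent="projects/my-project",   # hypothetical project
    display_name="Support Agent",
    default_language_code="en",
    time_zone="America/New_York",
)

# SetAgent returns the created or updated Agent resource.
agent = agents_client.set_agent(request={"agent": agent})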

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

TrainAgent

rpc TrainAgent(TrainAgentRequest) returns (Operation)

Trains the specified agent.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

AnswerRecords

Service for managing AnswerRecords.

GetAnswerRecord

rpc GetAnswerRecord(GetAnswerRecordRequest) returns (AnswerRecord)

Deprecated. Retrieves a specific answer record.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListAnswerRecords

rpc ListAnswerRecords(ListAnswerRecordsRequest) returns (ListAnswerRecordsResponse)

Returns the list of all answer records in the specified project in reverse chronological order.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateAnswerRecord

rpc UpdateAnswerRecord(UpdateAnswerRecordRequest) returns (AnswerRecord)

Updates the specified answer record.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Contexts

Service for managing Contexts.

CreateContext

rpc CreateContext(CreateContextRequest) returns (Context)

Creates a context.

If the specified context already exists, overrides the context.
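
A minimal sketch of creating (or overwriting) a context in a session, assuming the google-cloud-dialogflow Python client library; the project, session, and context IDs are placeholders.

from google.cloud import dialogflow_v2beta1 as dialogflow

contexts_client = dialogflow.ContextsClient()
session = "projects/my-project/agent/sessions/my-session-id"

context = dialogflow.Context(
    name=f"{session}/contexts/order-followup",  # hypothetical context ID
    lifespan_count=5,
)
context = contexts_client.create_context(
    request={"parent": session, "context": context}
)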

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteAllContexts

rpc DeleteAllContexts(DeleteAllContextsRequest) returns (Empty)

Deletes all active contexts in the specified session.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteContext

rpc DeleteContext(DeleteContextRequest) returns (Empty)

Deletes the specified context.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetContext

rpc GetContext(GetContextRequest) returns (Context)

Retrieves the specified context.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListContexts

rpc ListContexts(ListContextsRequest) returns (ListContextsResponse)

Returns the list of all contexts in the specified session.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateContext

rpc UpdateContext(UpdateContextRequest) returns (Context)

Updates the specified context.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ConversationProfiles

Service for managing ConversationProfiles.

ClearSuggestionFeatureConfig

rpc ClearSuggestionFeatureConfig(ClearSuggestionFeatureConfigRequest) returns (Operation)

Clears a suggestion feature from a conversation profile for the given participant role.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

CreateConversationProfile

rpc CreateConversationProfile(CreateConversationProfileRequest) returns (ConversationProfile)

Creates a conversation profile in the specified project.

ConversationProfile.create_time and ConversationProfile.update_time aren't populated in the response. You can retrieve them via the GetConversationProfile API.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteConversationProfile

rpc DeleteConversationProfile(DeleteConversationProfileRequest) returns (Empty)

Deletes the specified conversation profile.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetConversationProfile

rpc GetConversationProfile(GetConversationProfileRequest) returns (ConversationProfile)

Retrieves the specified conversation profile.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListConversationProfiles

rpc ListConversationProfiles(ListConversationProfilesRequest) returns (ListConversationProfilesResponse)

Returns the list of all conversation profiles in the specified project.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SetSuggestionFeatureConfig

rpc SetSuggestionFeatureConfig(SetSuggestionFeatureConfigRequest) returns (Operation)

Adds or updates a suggestion feature in a conversation profile. If the conversation profile already contains a suggestion feature of the given type for the participant role, this method updates it; otherwise, it inserts the new suggestion feature.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

If a long-running operation to add or update a suggestion feature config already exists for the same conversation profile, participant role, and suggestion feature type, cancel the existing operation before sending a new request; otherwise, the new request will be rejected.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateConversationProfile

rpc UpdateConversationProfile(UpdateConversationProfileRequest) returns (ConversationProfile)

Updates the specified conversation profile.

ConversationProfile.create_time and ConversationProfile.update_time aren't populated in the response. You can retrieve them via the GetConversationProfile API.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Conversations

Service for managing Conversations.

BatchCreateMessages

rpc BatchCreateMessages(BatchCreateMessagesRequest) returns (BatchCreateMessagesResponse)

Batch ingests messages into a conversation. Customers can use this RPC to ingest historical messages into a conversation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

CompleteConversation

rpc CompleteConversation(CompleteConversationRequest) returns (Conversation)

Completes the specified conversation. Finished conversations are purged from the database after 30 days.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

CreateConversation

rpc CreateConversation(CreateConversationRequest) returns (Conversation)

Creates a new conversation. Conversations are auto-completed after 24 hours.

Conversation Lifecycle: There are two stages during a conversation: Automated Agent Stage and Assist Stage.

During the Automated Agent Stage, a Dialogflow agent responds to user queries.

During the Assist Stage, no Dialogflow agent responds to user queries; instead, suggestions generated from the conversation are provided.

If Conversation.conversation_profile is configured for a Dialogflow agent, the conversation starts in the Automated Agent Stage; otherwise, it starts in the Assist Stage. During the Automated Agent Stage, once an intent with Intent.live_agent_handoff is triggered, the conversation transfers to the Assist Stage.
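
A minimal sketch of creating a conversation bound to an existing conversation profile, assuming the google-cloud-dialogflow Python client library; the project and conversation profile names are placeholders.

from google.cloud import dialogflow_v2beta1 as dialogflow

conversations_client = dialogflow.ConversationsClient()

conversation = conversations_client.create_conversation(
    request={
        "parent": "projects/my-project",
        "conversation": {
            "conversation_profile": (
                "projects/my-project/conversationProfiles/my-profile-id"
            ),
        },
    }
)
# Whether the conversation starts in the Automated Agent Stage or the
# Assist Stage depends on the conversation profile's configuration.
print(conversation.name, conversation.lifecycle_state)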

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GenerateStatelessSummary

rpc GenerateStatelessSummary(GenerateStatelessSummaryRequest) returns (GenerateStatelessSummaryResponse)

Generates and returns a summary for a conversation that does not have a resource created for it.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetConversation

rpc GetConversation(GetConversationRequest) returns (Conversation)

Retrieves the specified conversation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListConversations

rpc ListConversations(ListConversationsRequest) returns (ListConversationsResponse)

Returns the list of all conversations in the specified project.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListMessages

rpc ListMessages(ListMessagesRequest) returns (ListMessagesResponse)

Lists messages that belong to a given conversation. Messages are ordered by create_time in descending order. To fetch updates without duplication, send a request with the filter create_time_epoch_microseconds > [first item's create_time of previous request] and an empty page_token.
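
A minimal sketch of fetching only new messages with the create_time filter described above, assuming the google-cloud-dialogflow Python client library; the conversation name and cursor value are placeholders.

from google.cloud import dialogflow_v2beta1 as dialogflow

conversations_client = dialogflow.ConversationsClient()
conversation = "projects/my-project/conversations/my-conversation-id"
# create_time (in epoch microseconds) of the newest message already fetched.
last_seen_micros = 1700000000000000

pager = conversations_client.list_messages(
    request={
        "parent": conversation,
        "filter": f"create_time_epoch_microseconds > {last_seen_micros}",
    }
)
for message in pager:
    print(message.create_time, message.content)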

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SearchKnowledge

rpc SearchKnowledge(SearchKnowledgeRequest) returns (SearchKnowledgeResponse)

Gets answers for the given query based on knowledge documents.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SuggestConversationSummary

rpc SuggestConversationSummary(SuggestConversationSummaryRequest) returns (SuggestConversationSummaryResponse)

Suggests a summary for a conversation based on specific historical messages. The range of the messages to be used for the summary can be specified in the request.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Documents

Service for managing knowledge Documents.

CreateDocument

rpc CreateDocument(CreateDocumentRequest) returns (Operation)

Creates a new document.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.
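
A minimal sketch of creating a knowledge document from a public URL, assuming the google-cloud-dialogflow Python client library; the knowledge base name and content URI are placeholders.

from google.cloud import dialogflow_v2beta1 as dialogflow

documents_client = dialogflow.DocumentsClient()
knowledge_base = "projects/my-project/knowledgeBases/my-kb-id"

operation = documents_client.create_document(
    request={
        "parent": knowledge_base,
        "document": {
            "display_name": "Returns FAQ",
            "mime_type": "text/html",
            "knowledge_types": [dialogflow.Document.KnowledgeType.FAQ],
            "content_uri": "https://example.com/faq.html",
        },
    }
)
document = operation.result(timeout=180)  # wait for the long-running operation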

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteDocument

rpc DeleteDocument(DeleteDocumentRequest) returns (Operation)

Deletes the specified document.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetDocument

rpc GetDocument(GetDocumentRequest) returns (Document)

Retrieves the specified document.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ImportDocuments

rpc ImportDocuments(ImportDocumentsRequest) returns (Operation)

Creates documents by importing data from external sources. Dialogflow supports up to 350 documents in each request. If you try to import more, Dialogflow returns an error.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListDocuments

rpc ListDocuments(ListDocumentsRequest) returns (ListDocumentsResponse)

Returns the list of all documents of the knowledge base.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ReloadDocument

rpc ReloadDocument(ReloadDocumentRequest) returns (Operation)

Reloads the specified document from its specified source, content_uri or content. The previously loaded content of the document will be deleted. Note: Even when the content of the document has not changed, there still may be side effects because of internal implementation changes. Note: If the document source is a Google Cloud Storage URI, its metadata will be replaced with the custom metadata from Google Cloud Storage if the import_gcs_custom_metadata field is set to true in the request.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateDocument

rpc UpdateDocument(UpdateDocumentRequest) returns (Operation)

Updates the specified document.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

EntityTypes

Service for managing EntityTypes.

BatchCreateEntities

rpc BatchCreateEntities(BatchCreateEntitiesRequest) returns (Operation)

Creates multiple new entities in the specified entity type.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

BatchDeleteEntities

rpc BatchDeleteEntities(BatchDeleteEntitiesRequest) returns (Operation)

Deletes entities in the specified entity type.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

BatchDeleteEntityTypes

rpc BatchDeleteEntityTypes(BatchDeleteEntityTypesRequest) returns (Operation)

Deletes entity types in the specified agent.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

BatchUpdateEntities

rpc BatchUpdateEntities(BatchUpdateEntitiesRequest) returns (Operation)

Updates or creates multiple entities in the specified entity type. This method does not affect entities in the entity type that aren't explicitly specified in the request.

Note: You should always train an agent prior to sending it queries. See the training documentation.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

BatchUpdateEntityTypes

rpc BatchUpdateEntityTypes(BatchUpdateEntityTypesRequest) returns (Operation)

Updates/Creates multiple entity types in the specified agent.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

CreateEntityType

rpc CreateEntityType(CreateEntityTypeRequest) returns (EntityType)

Creates an entity type in the specified agent.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteEntityType

rpc DeleteEntityType(DeleteEntityTypeRequest) returns (Empty)

Deletes the specified entity type.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetEntityType

rpc GetEntityType(GetEntityTypeRequest) returns (EntityType)

Retrieves the specified entity type.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListEntityTypes

rpc ListEntityTypes(ListEntityTypesRequest) returns (ListEntityTypesResponse)

Returns the list of all entity types in the specified agent.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateEntityType

rpc UpdateEntityType(UpdateEntityTypeRequest) returns (EntityType)

Updates the specified entity type.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Environments

Service for managing Environments.

CreateEnvironment

rpc CreateEnvironment(CreateEnvironmentRequest) returns (Environment)

Creates an agent environment.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteEnvironment

rpc DeleteEnvironment(DeleteEnvironmentRequest) returns (Empty)

Deletes the specified agent environment.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetEnvironment

rpc GetEnvironment(GetEnvironmentRequest) returns (Environment)

Retrieves the specified agent environment.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetEnvironmentHistory

rpc GetEnvironmentHistory(GetEnvironmentHistoryRequest) returns (EnvironmentHistory)

Gets the history of the specified environment.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListEnvironments

rpc ListEnvironments(ListEnvironmentsRequest) returns (ListEnvironmentsResponse)

Returns the list of all non-draft environments of the specified agent.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateEnvironment

rpc UpdateEnvironment(UpdateEnvironmentRequest) returns (Environment)

Updates the specified agent environment.

This method allows you to deploy new agent versions into the environment. When an environment is pointed to a new agent version by setting environment.agent_version, the environment is temporarily set to the LOADING state. During that time, the environment continues serving the previous version of the agent. After the new agent version is done loading, the environment is set back to the RUNNING state. You can use "-" as the Environment ID in the environment name to update the version in the "draft" environment. WARNING: this will negate all recent changes to the draft agent and can't be undone. You may want to save the draft agent to a version before calling this method.
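
A minimal sketch of pointing an environment at a new agent version, assuming the google-cloud-dialogflow Python client library; the environment and version names are placeholders.

from google.cloud import dialogflow_v2beta1 as dialogflow
from google.protobuf import field_mask_pb2

environments_client = dialogflow.EnvironmentsClient()

environment = dialogflow.Environment(
    name="projects/my-project/agent/environments/staging",
    agent_version="projects/my-project/agent/versions/12",
)
environment = environments_client.update_environment(
    request={
        "environment": environment,
        "update_mask": field_mask_pb2.FieldMask(paths=["agent_version"]),
    }
)
# The environment transitions through LOADING and back to RUNNING.
print(environment.state)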

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Fulfillments

Service for managing Fulfillments.

GetFulfillment

rpc GetFulfillment(GetFulfillmentRequest) returns (Fulfillment)

Retrieves the fulfillment.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateFulfillment

rpc UpdateFulfillment(UpdateFulfillmentRequest) returns (Fulfillment)

Updates the fulfillment.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Intents

Service for managing Intents.

BatchDeleteIntents

rpc BatchDeleteIntents(BatchDeleteIntentsRequest) returns (Operation)

Deletes intents in the specified agent.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

BatchUpdateIntents

rpc BatchUpdateIntents(BatchUpdateIntentsRequest) returns (Operation)

Updates/Creates multiple intents in the specified agent.

This method is a long-running operation. The returned Operation type has method-specific metadata and response fields.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

CreateIntent

rpc CreateIntent(CreateIntentRequest) returns (Intent)

Creates an intent in the specified agent.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteIntent

rpc DeleteIntent(DeleteIntentRequest) returns (Empty)

Deletes the specified intent and its direct or indirect followup intents.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetIntent

rpc GetIntent(GetIntentRequest) returns (Intent)

Retrieves the specified intent.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListIntents

rpc ListIntents(ListIntentsRequest) returns (ListIntentsResponse)

Returns the list of all intents in the specified agent.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateIntent

rpc UpdateIntent(UpdateIntentRequest) returns (Intent)

Updates the specified intent.

Note: You should always train an agent prior to sending it queries. See the training documentation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

KnowledgeBases

Service for managing KnowledgeBases.

CreateKnowledgeBase

rpc CreateKnowledgeBase(CreateKnowledgeBaseRequest) returns (KnowledgeBase)

Creates a knowledge base.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteKnowledgeBase

rpc DeleteKnowledgeBase(DeleteKnowledgeBaseRequest) returns (Empty)

Deletes the specified knowledge base.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetKnowledgeBase

rpc GetKnowledgeBase(GetKnowledgeBaseRequest) returns (KnowledgeBase)

Retrieves the specified knowledge base.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListKnowledgeBases

rpc ListKnowledgeBases(ListKnowledgeBasesRequest) returns (ListKnowledgeBasesResponse)

Returns the list of all knowledge bases of the specified agent.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateKnowledgeBase

rpc UpdateKnowledgeBase(UpdateKnowledgeBaseRequest) returns (KnowledgeBase)

Updates the specified knowledge base.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Participants

Service for managing Participants.

AnalyzeContent

rpc AnalyzeContent(AnalyzeContentRequest) returns (AnalyzeContentResponse)

Adds a text (chat, for example) or audio (phone recording, for example) message from a participant into the conversation.

Note: Always use agent versions for production traffic sent to virtual agents. See Versions and environments.
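
A minimal sketch of sending one end-user chat message into a conversation with AnalyzeContent, assuming the google-cloud-dialogflow Python client library; the participant name is a placeholder.

from google.cloud import dialogflow_v2beta1 as dialogflow

participants_client = dialogflow.ParticipantsClient()
participant = (
    "projects/my-project/conversations/my-conversation-id/"
    "participants/my-participant-id"
)

response = participants_client.analyze_content(
    request={
        "participant": participant,
        "text_input": dialogflow.TextInput(
            text="I need help with my order", language_code="en-US"
        ),
    }
)
print(response.reply_text)
for result in response.human_agent_suggestion_results:
    print(result)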

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

CompileSuggestion

rpc CompileSuggestion(CompileSuggestionRequest) returns (CompileSuggestionResponse)

Deprecated. Use SuggestArticles and SuggestFaqAnswers instead.

Gets suggestions for a participant based on specific historical messages.

Note that ListSuggestions will only list the auto-generated suggestions, while CompileSuggestion will try to compile suggestions based on the provided conversation context in real time.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

CreateParticipant

rpc CreateParticipant(CreateParticipantRequest) returns (Participant)

Creates a new participant in a conversation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetParticipant

rpc GetParticipant(GetParticipantRequest) returns (Participant)

Retrieves a conversation participant.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListParticipants

rpc ListParticipants(ListParticipantsRequest) returns (ListParticipantsResponse)

Returns the list of all participants in the specified conversation.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListSuggestions

rpc ListSuggestions(ListSuggestionsRequest) returns (ListSuggestionsResponse)

Deprecated: Use inline suggestion, event-based suggestion, or the Suggestion* APIs instead. See HumanAgentAssistantConfig.name for more details. Removal Date: 2020-09-01.

Retrieves suggestions for live agents.

This method should be used by human agent client software to fetch auto-generated suggestions in real time, while the conversation with an end user is in progress. The functionality is implemented in terms of the list pagination design pattern. The client app should use the next_page_token field to fetch the next batch of suggestions. Suggestions are sorted by create_time in descending order. To fetch the latest suggestion, set page_size to 1. To fetch new suggestions without duplication, send a request with the filter create_time_epoch_microseconds > [first item's create_time of previous request] and an empty page_token.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

StreamingAnalyzeContent

rpc StreamingAnalyzeContent(StreamingAnalyzeContentRequest) returns (StreamingAnalyzeContentResponse)

Adds a text (e.g., chat) or audio (e.g., phone recording) message from a participant into the conversation. Note: This method is only available through the gRPC API (not REST).

The top-level message sent to the client by the server is StreamingAnalyzeContentResponse. Multiple response messages can be returned in order. The first one or more messages contain the recognition_result field. Each result represents a more complete transcript of what the user said. The next message contains the reply_text field, and potentially the reply_audio and/or the automated_agent_reply fields.

Note: Always use agent versions for production traffic sent to virtual agents. See Versions and environments.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SuggestArticles

rpc SuggestArticles(SuggestArticlesRequest) returns (SuggestArticlesResponse)

Gets suggested articles for a participant based on specific historical messages.

Note that ListSuggestions will only list the auto-generated suggestions, while CompileSuggestion will try to compile suggestions based on the provided conversation context in real time.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SuggestFaqAnswers

rpc SuggestFaqAnswers(SuggestFaqAnswersRequest) returns (SuggestFaqAnswersResponse)

Gets suggested FAQ answers for a participant based on specific historical messages.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SuggestSmartReplies

rpc SuggestSmartReplies(SuggestSmartRepliesRequest) returns (SuggestSmartRepliesResponse)

Gets smart replies for a participant based on specific historical messages.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateParticipant

rpc UpdateParticipant(UpdateParticipantRequest) returns (Participant)

Updates the specified participant.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

SessionEntityTypes

Service for managing SessionEntityTypes.

CreateSessionEntityType

rpc CreateSessionEntityType(CreateSessionEntityTypeRequest) returns (SessionEntityType)

Creates a session entity type.

If the specified session entity type already exists, overrides the session entity type.

This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteSessionEntityType

rpc DeleteSessionEntityType(DeleteSessionEntityTypeRequest) returns (Empty)

Deletes the specified session entity type.

This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetSessionEntityType

rpc GetSessionEntityType(GetSessionEntityTypeRequest) returns (SessionEntityType)

Retrieves the specified session entity type.

This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListSessionEntityTypes

rpc ListSessionEntityTypes(ListSessionEntityTypesRequest) returns (ListSessionEntityTypesResponse)

Returns the list of all session entity types in the specified session.

This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateSessionEntityType

rpc UpdateSessionEntityType(UpdateSessionEntityTypeRequest) returns (SessionEntityType)

Updates the specified session entity type.

This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Sessions

A service used for session interactions.

For more information, see the API interactions guide.

DetectIntent

rpc DetectIntent(DetectIntentRequest) returns (DetectIntentResponse)

Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect results of future queries.

If you might use Agent Assist or other CCAI products now or in the future, consider using AnalyzeContent instead of DetectIntent. AnalyzeContent has additional functionality for Agent Assist and other CCAI products.

Note: Always use agent versions for production traffic. See Versions and environments.
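
A minimal sketch of detecting intent for a single text query, assuming the google-cloud-dialogflow Python client library; the project and session IDs are placeholders. For production traffic, the session path would reference an environment so that queries hit a published agent version.

from google.cloud import dialogflow_v2beta1 as dialogflow

sessions_client = dialogflow.SessionsClient()
session = "projects/my-project/agent/sessions/my-session-id"

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(
        text="What are your opening hours?", language_code="en-US"
    )
)
response = sessions_client.detect_intent(
    request={"session": session, "query_input": query_input}
)
print(response.query_result.intent.display_name)
print(response.query_result.fulfillment_text)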

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

StreamingDetectIntent

rpc StreamingDetectIntent(StreamingDetectIntentRequest) returns (StreamingDetectIntentResponse)

Processes a natural language query in audio format in a streaming fashion and returns structured, actionable data as a result. This method is only available via the gRPC API (not REST).

If you might use Agent Assist or other CCAI products now or in the future, consider using StreamingAnalyzeContent instead of StreamingDetectIntent. StreamingAnalyzeContent has additional functionality for Agent Assist and other CCAI products.

Note: Always use agent versions for production traffic. See Versions and environments.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Versions

Service for managing Versions.

CreateVersion

rpc CreateVersion(CreateVersionRequest) returns (Version)

Creates an agent version.

The new version points to the agent instance in the "default" environment.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

DeleteVersion

rpc DeleteVersion(DeleteVersionRequest) returns (Empty)

Delete the specified agent version.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

GetVersion

rpc GetVersion(GetVersionRequest) returns (Version)

Retrieves the specified agent version.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

ListVersions

rpc ListVersions(ListVersionsRequest) returns (ListVersionsResponse)

Returns the list of all versions of the specified agent.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

UpdateVersion

rpc UpdateVersion(UpdateVersionRequest) returns (Version)

Updates the specified agent version.

Note that this method does not allow you to update the state of the agent the given version points to. It allows you to update only mutable properties of the version resource.

Authorization scopes

Requires one of the following OAuth scopes:

  • https://www.googleapis.com/auth/cloud-platform
  • https://www.googleapis.com/auth/dialogflow

For more information, see the Authentication Overview.

Agent

A Dialogflow agent is a virtual agent that handles conversations with your end-users. It is a natural language understanding module that understands the nuances of human language. Dialogflow translates end-user text or audio during a conversation to structured data that your apps and services can understand. You design and build a Dialogflow agent to handle the types of conversations required for your system.

For more information about agents, see the Agent guide.

Fields
parent

string

Required. The project of this agent. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>

display_name

string

Required. The name of this agent.

default_language_code

string

Required. The default language of the agent as a language tag. See Language Support for a list of the currently supported language codes. This field cannot be set by the Update method.

supported_language_codes[]

string

Optional. The list of all languages supported by this agent (except for the default_language_code).

time_zone

string

Required. The time zone of this agent from the time zone database, e.g., America/New_York, Europe/Paris.

description

string

Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.

avatar_uri

string

Optional. The URI of the agent's avatar. Avatars are used throughout the Dialogflow console and in the self-hosted Web Demo integration.

enable_logging

bool

Optional. Determines whether this agent should log conversation queries.

match_mode
(deprecated)

MatchMode

Optional. Determines how intents are detected from user queries.

classification_threshold

float

Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.

api_version

ApiVersion

Optional. API version displayed in the Dialogflow console. If not specified, the V2 API is assumed. Clients are free to query different service endpoints for different API versions. However, bot connectors and webhook calls will follow the specified API version.

tier

Tier

Optional. The agent tier. If not specified, TIER_STANDARD is assumed.

ApiVersion

API version for the agent.

Enums
API_VERSION_UNSPECIFIED Not specified.
API_VERSION_V1 Legacy V1 API.
API_VERSION_V2 V2 API.
API_VERSION_V2_BETA_1 V2beta1 API.

MatchMode

Match mode determines how intents are detected from user queries.

Enums
MATCH_MODE_UNSPECIFIED Not specified.
MATCH_MODE_HYBRID Best for agents with a small number of examples in intents and/or wide use of templates syntax and composite entities.
MATCH_MODE_ML_ONLY Can be used for agents with a large number of examples in intents, especially the ones using @sys.any or very large custom entities.

Tier

Represents the agent tier.

Enums
TIER_UNSPECIFIED Not specified. This value should never be used.
TIER_STANDARD Trial Edition, previously known as Standard Edition.
TIER_ENTERPRISE Essentials Edition, previously known as Enterprise Essential Edition.
TIER_ENTERPRISE_PLUS Essentials Edition (same as TIER_ENTERPRISE), previously known as Enterprise Plus Edition.

AgentAssistantFeedback

Detail feedback of Agent Assistant result.

Fields
answer_relevance

AnswerRelevance

Optional. Whether or not the suggested answer is relevant.

document_correctness

DocumentCorrectness

Optional. Whether or not the information in the document is correct.

For example:

  • Query: "Can I return the package in 2 days once received?"
  • Suggested document says: "Items must be returned/exchanged within 60 days of the purchase date."
  • Ground truth: "No return or exchange is allowed."
document_efficiency

DocumentEfficiency

Optional. Whether or not the suggested document is efficient. For example, if the document is poorly written, hard to understand, hard to use or too long to find useful information, document_efficiency is DocumentEfficiency.INEFFICIENT.

summarization_feedback

SummarizationFeedback

Feedback for conversation summarization.

knowledge_search_feedback

KnowledgeSearchFeedback

Optional. Feedback for knowledge search.

AnswerRelevance

Relevance of an answer.

Enums
ANSWER_RELEVANCE_UNSPECIFIED Answer relevance unspecified.
IRRELEVANT Answer is irrelevant to query.
RELEVANT Answer is relevant to query.

DocumentCorrectness

Correctness of document.

Enums
DOCUMENT_CORRECTNESS_UNSPECIFIED Document correctness unspecified.
INCORRECT Information in document is incorrect.
CORRECT Information in document is correct.

DocumentEfficiency

Efficiency of document.

Enums
DOCUMENT_EFFICIENCY_UNSPECIFIED Document efficiency unspecified.
INEFFICIENT Document is inefficient.
EFFICIENT Document is efficient.

KnowledgeSearchFeedback

Feedback for knowledge search.

Fields
answer_copied

bool

Whether or not the answer was copied by the human agent. If the value is set to true, AnswerFeedback.clicked will be updated to true.

clicked_uris[]

string

The URIs clicked by the human agent. The value is appended for each UpdateAnswerRecordRequest. If the value is not empty, AnswerFeedback.clicked will be updated to true.

SummarizationFeedback

Feedback for conversation summarization.

Fields
start_timestamp

Timestamp

Timestamp when composing of the summary starts.

submit_timestamp

Timestamp

Timestamp when the summary was submitted.

summary_text

string

Text of actual submitted summary.

text_sections

map<string, string>

Optional. Actual text sections of submitted summary.

AgentAssistantRecord

Represents a record of a human agent assistant answer.

Fields
Union field answer. Output only. The agent assistant answer. answer can be only one of the following:
article_suggestion_answer

ArticleAnswer

Output only. The article suggestion answer.

faq_answer

FaqAnswer

Output only. The FAQ answer.

dialogflow_assist_answer

DialogflowAssistAnswer

Output only. The Dialogflow assist answer.

AnalyzeContentRequest

The request message for Participants.AnalyzeContent.

Fields
participant

string

Required. The name of the participant this text comes from. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource participant:

  • dialogflow.participants.analyzeContent
reply_audio_config

OutputAudioConfig

Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling AnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.

query_params

QueryParameters

Parameters for a Dialogflow virtual-agent query.

assist_query_params

AssistQueryParameters

Parameters for a human assist query.

cx_parameters

Struct

Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null.

Note: this field should only be used if you are connecting to a Dialogflow CX agent.

cx_current_page

string

The unique identifier of the CX page to override the current_page in the session. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/pages/<Page ID>.

If cx_current_page is specified, the previous state of the session will be ignored by Dialogflow CX, including the previous page (QueryResult.current_page) and the previous session parameters (QueryResult.parameters). In most cases, cx_current_page and cx_parameters should be configured together to direct a session to a specific state.

Note: this field should only be used if you are connecting to a Dialogflow CX agent.

message_send_time

Timestamp

Optional. The send time of the message from end user or human agent's perspective. It is used for identifying the same message under one participant.

Given two messages under the same participant:

  • If the send times are different, the conversation regards them as two distinct messages from that participant, regardless of whether their content is identical.
  • If the send times are the same, the conversation regards them as the same message and ignores the message received later, regardless of whether their content is identical.

If the value is not provided, a new request will always be regarded as a new message without any de-duplication.

request_id

string

A unique identifier for this request. Restricted to 36 ASCII characters. A random UUID is recommended. This request is only idempotent if a request_id is provided.

Union field input. Required. The input content. input can be only one of the following:
text_input

TextInput

The natural language text to be processed.

audio_input

AudioInput

The natural language speech audio to be processed.

event_input

EventInput

An input event to send to Dialogflow.

suggestion_input

SuggestionInput

An input representing the selection of a suggestion.

intent_input

IntentInput

The intent to be triggered on V3 agent.
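
For illustration only, a minimal sketch of sending one text message through Participants.AnalyzeContent, assuming the google-cloud-dialogflow Python client library (google.cloud.dialogflow_v2beta1); the project, conversation, and participant IDs are placeholders:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.ParticipantsClient()
participant = (
    "projects/my-project/locations/global/conversations/my-conversation"
    "/participants/my-participant"
)
response = client.analyze_content(
    request={
        "participant": participant,
        # Exactly one input field may be set; here it is natural language text.
        "text_input": {"text": "Where is my order?", "language_code": "en-US"},
    }
)
print(response.reply_text)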

AnalyzeContentResponse

The response message for Participants.AnalyzeContent.

Fields
reply_text

string

Output only. The output text content. This field is set if the automated agent responded with text to show to the user.

reply_audio

OutputAudio

Optional. The audio data bytes encoded as specified in the request. This field is set if:

  • reply_audio_config was specified in the request, or
  • The automated agent responded with audio to play to the user. In such case, reply_audio.config contains settings used to synthesize the speech.

In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.

automated_agent_reply

AutomatedAgentReply

Optional. Only set if a Dialogflow automated agent has responded. Note that AutomatedAgentReply.detect_intent_response.output_audio and AutomatedAgentReply.detect_intent_response.output_audio_config are always empty; use reply_audio instead.

message

Message

Output only. Message analyzed by CCAI.

human_agent_suggestion_results[]

SuggestionResult

The suggestions for the most recent human agent. The order is the same as HumanAgentAssistantConfig.SuggestionConfig.feature_configs of HumanAgentAssistantConfig.human_agent_suggestion_config.

Note that any failure of Agent Assist features will not lead to the overall failure of an AnalyzeContent API call. Instead, the features will fail silently with the error field set in the corresponding SuggestionResult.

end_user_suggestion_results[]

SuggestionResult

The suggestions for the end user. The order is the same as HumanAgentAssistantConfig.SuggestionConfig.feature_configs of HumanAgentAssistantConfig.end_user_suggestion_config.

As with human_agent_suggestion_results, any failure of Agent Assist features will not lead to the overall failure of an AnalyzeContent API call. Instead, the features fail silently with the error field set in the corresponding SuggestionResult.

dtmf_parameters

DtmfParameters

Indicates the parameters of DTMF.

AnnotatedMessagePart

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

Fields
text

string

Required. A part of a message possibly annotated with an entity.

entity_type

string

Optional. The Dialogflow system entity type of this message part. If this is empty, Dialogflow could not annotate the phrase part with a system entity.

formatted_value

Value

Optional. The Dialogflow system entity formatted value of this message part. For example, for a system entity of type @sys.unit-currency, this may contain:

{
  "amount": 5,
  "currency": "USD"
}

AnswerFeedback

Represents feedback the customer has about the quality & correctness of a certain answer in a conversation.

Fields
correctness_level

CorrectnessLevel

The correctness level of the specific answer.

clicked

bool

Indicates whether the answer/item was clicked by the human agent. Defaults to false. For knowledge search, the answer record is considered to be clicked if the answer was copied or any URI was clicked.

click_time

Timestamp

Time when the answer/item was clicked.

displayed

bool

Indicates whether the answer/item was displayed to the human agent in the agent desktop UI. Defaults to false.

display_time

Timestamp

Time when the answer/item was displayed.

Union field detail_feedback. Normally, detail feedback is provided when the answer is not fully correct. detail_feedback can be only one of the following:
agent_assistant_detail_feedback

AgentAssistantFeedback

Optional. Detail feedback of agent assistant suggestions.

CorrectnessLevel

The correctness level of an answer.

Enums
CORRECTNESS_LEVEL_UNSPECIFIED Correctness level unspecified.
NOT_CORRECT Answer is totally wrong.
PARTIALLY_CORRECT Answer is partially correct.
FULLY_CORRECT Answer is fully correct.

AnswerRecord

Answer records are records to manage answer history and feedback for Dialogflow.

Currently, answer record includes:

  • human agent assistant article suggestion
  • human agent assistant faq article

It doesn't include:

  • DetectIntent intent matching
  • DetectIntent knowledge

Answer records are not related to the conversation history in the Dialogflow Console. A record is generated even when the end-user disables conversation history in the console. Records are created when a human agent assistant suggestion is generated.

A typical workflow for customers to provide feedback on an answer is:

  1. For human agent assistant, customers get suggestions via the ListSuggestions API. Together with the answers, AnswerRecord.name values are returned to the customers.
  2. The customer uses the AnswerRecord.name to call the UpdateAnswerRecord method to send feedback about a specific answer that they believe is wrong.
Fields
name

string

The unique identifier of this answer record. Required for AnswerRecords.UpdateAnswerRecord method. Format: projects/<Project ID>/locations/<Location ID>/answerRecords/<Answer Record ID>.

answer_feedback

AnswerFeedback

Optional. The AnswerFeedback for this record. You can set this with AnswerRecords.UpdateAnswerRecord in order to give us feedback about this answer.

Union field record. Output only. The record for this answer. record can be only one of the following:
agent_assistant_record

AgentAssistantRecord

Output only. The record for human agent assistant.
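
For illustration only, a minimal sketch of step 2 in the workflow above — sending correctness feedback with AnswerRecords.UpdateAnswerRecord — assuming the google-cloud-dialogflow Python client library; the answer record name is a placeholder:

from google.cloud import dialogflow_v2beta1 as dialogflow
from google.protobuf import field_mask_pb2

client = dialogflow.AnswerRecordsClient()
client.update_answer_record(
    request={
        "answer_record": {
            "name": "projects/my-project/locations/global/answerRecords/my-record",
            "answer_feedback": {
                "correctness_level": dialogflow.AnswerFeedback.CorrectnessLevel.PARTIALLY_CORRECT,
                "clicked": True,
            },
        },
        # Only the feedback portion of the record is updated.
        "update_mask": field_mask_pb2.FieldMask(paths=["answer_feedback"]),
    }
)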

ArticleAnswer

Represents article answer.

Fields
title

string

The article title.

uri

string

The article URI.

snippets[]

string

Output only. Article snippets.

metadata

map<string, string>

A map that contains metadata about the answer and the document from which it originates.

answer_record

string

The name of the answer record, in the format of "projects/<Project ID>/locations/<Location ID>/answerRecords/<Answer Record ID>".

AssistQueryParameters

Represents the parameters of a human assist query.

Fields
documents_metadata_filters

map<string, string>

Key-value filters on the metadata of documents returned by article suggestion. If specified, article suggestion only returns suggested documents that match all filters in their Document.metadata. Multiple values for a metadata key should be concatenated by commas. For example, filters to match all documents that have 'US' or 'CA' in their market metadata values and 'agent' in their user metadata values would be:

documents_metadata_filters {
  key: "market"
  value: "US,CA"
}
documents_metadata_filters {
  key: "user"
  value: "agent"
}

AudioEncoding

Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.

Enums
AUDIO_ENCODING_UNSPECIFIED Not specified.
AUDIO_ENCODING_LINEAR_16 Uncompressed 16-bit signed little-endian samples (Linear PCM).
AUDIO_ENCODING_FLAC FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples, however, not all fields in STREAMINFO are supported.
AUDIO_ENCODING_MULAW 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
AUDIO_ENCODING_AMR Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000.
AUDIO_ENCODING_AMR_WB Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000.
AUDIO_ENCODING_OGG_OPUS Opus encoded audio frames in Ogg container (OggOpus). sample_rate_hertz must be 16000.
AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sample_rate_hertz must be 16000.

AudioInput

Represents the natural language speech audio to be processed.

Fields
config

InputAudioConfig

Required. Instructs the speech recognizer how to process the speech audio.

audio

bytes

Required. The natural language speech audio to be processed. A single request can contain up to 1 minute of speech audio data. The transcribed text cannot contain more than 256 bytes for virtual agent interactions.
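
For illustration only, a minimal sketch of building an AudioInput from a local 16 kHz LINEAR16 recording, assuming the google-cloud-dialogflow Python client library; the file name is a placeholder:

from google.cloud import dialogflow_v2beta1 as dialogflow

with open("utterance.wav", "rb") as audio_file:
    audio_bytes = audio_file.read()  # at most 1 minute of speech audio per request

audio_input = dialogflow.AudioInput(
    config=dialogflow.InputAudioConfig(
        audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
    ),
    audio=audio_bytes,
)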

AutomatedAgentConfig

Defines the Automated Agent to connect to a conversation.

Fields
agent

string

Required. ID of the Dialogflow agent environment to use.

This project needs to either be the same project as the conversation or you need to grant service-<Conversation Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com the Dialogflow API Service Agent role in this project.

  • For ES agents, use format: projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID or '-'>. If environment is not specified, the default draft environment is used. Refer to DetectIntentRequest for more details.

  • For CX agents, use format projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/environments/<Environment ID or '-'>. If environment is not specified, the default draft environment is used.

session_ttl

Duration

Optional. Configure lifetime of the Dialogflow session. By default, a Dialogflow CX session remains active and its data is stored for 30 minutes after the last request is sent for the session. This value should be no longer than 1 day.
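
For illustration only, a minimal sketch of an AutomatedAgentConfig payload for an ES agent's draft environment, written as a plain request dict; the project ID is a placeholder:

automated_agent_config = {
    # '-' selects the default draft environment, as described above.
    "agent": "projects/my-project/locations/global/agent/environments/-",
    # Optional: keep session data for 30 minutes (must be no longer than 1 day).
    "session_ttl": {"seconds": 1800},
}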

AutomatedAgentReply

Represents a response from an automated agent.

Fields
response_messages[]

ResponseMessage

Response messages from the automated agent.

match_confidence

float

The confidence of the match. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purpose only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation.

parameters

Struct

The collection of current parameters at the time of this response.

cx_session_parameters
(deprecated)

Struct

The collection of current Dialogflow CX agent session parameters at the time of this response. Deprecated: Use parameters instead.

automated_agent_reply_type

AutomatedAgentReplyType

AutomatedAgentReply type.

allow_cancellation

bool

Indicates whether the partial automated agent reply is interruptible when a later reply message arrives. For example, if the agent specified some music as a partial response, it can be canceled.

cx_current_page

string

The unique identifier of the current Dialogflow CX conversation page. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/pages/<Page ID>.

Union field response. Required. response can be only one of the following:
detect_intent_response

DetectIntentResponse

Response of the Dialogflow Sessions.DetectIntent call.

Union field match. Info on the query match for the automated agent response. match can be only one of the following:
intent

string

Name of the intent if an intent is matched for the query. For a V2 query, the value format is projects/<Project ID>/locations/<Location ID>/agent/intents/<Intent ID>. For a V3 query, the value format is projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/intents/<Intent ID>.

event

string

Event name if an event is triggered for the query.

AutomatedAgentReplyType

Represents different automated agent reply types.

Enums
AUTOMATED_AGENT_REPLY_TYPE_UNSPECIFIED Not specified. This should never happen.
PARTIAL Partial reply. For example, aggregated responses in a Fulfillment that enables return_partial_response can be returned as a partial reply. WARNING: a partial reply is not eligible for barge-in.
FINAL Final reply.

BargeInConfig

Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request.

The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases:

  • No barge-in phase: which goes first and during which speech detection should not be carried out.

  • Barge-in phase: which follows the no barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that no-speech event is not expected in this phase.

The client provides this configuration in terms of the durations of those two phases. The durations are measured in terms of the audio length from the start of the input audio.

The flow goes like below:

--> Time

without speech detection  | utterance only | utterance or no-speech event
                          |                |
          +-------------+ | +------------+ | +---------------+
----------+ no barge-in +-|-+  barge-in  +-|-+ normal period +-----------
          +-------------+ | +------------+ | +---------------+

No-speech event is a response with END_OF_UTTERANCE without any transcript following up.

Fields
no_barge_in_duration

Duration

Duration that is not eligible for barge-in at the beginning of the input audio.

total_duration

Duration

Total duration for the playback at the beginning of the input audio.
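
For illustration only, a minimal BargeInConfig payload written as a plain dict; the durations are arbitrary examples. Here barge-in is suppressed for the first 4 seconds of input audio, and the playback-modeled period ends after 15 seconds:

barge_in_config = {
    "no_barge_in_duration": {"seconds": 4},  # no speech detection in this phase
    "total_duration": {"seconds": 15},       # end of the barge-in phase
}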

BatchCreateEntitiesRequest

The request message for EntityTypes.BatchCreateEntities.

Fields
parent

string

Required. The name of the entity type to create entities in. Supported formats: - projects/<Project ID>/agent/entityTypes/<Entity Type ID> - projects/<Project ID>/locations/<Location ID>/agent/entityTypes/<Entity Type ID>

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.entityTypes.batchCreateEntities
entities[]

Entity

Required. The entities to create.

language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.
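
For illustration only, a minimal sketch of calling EntityTypes.BatchCreateEntities, assuming the google-cloud-dialogflow Python client library; the project and entity type IDs are placeholders:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.EntityTypesClient()
operation = client.batch_create_entities(
    request={
        "parent": "projects/my-project/agent/entityTypes/my-entity-type-id",
        "entities": [
            {"value": "carrot", "synonyms": ["carrot", "carrots"]},
            {"value": "potato", "synonyms": ["potato", "spud"]},
        ],
        "language_code": "en",
    }
)
operation.result()  # BatchCreateEntities is a long-running operation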

BatchCreateMessagesRequest

The request message for Conversations.BatchCreateMessages.

Fields
parent

string

Required. Resource identifier of the conversation to create messages in. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.messages.batchCreate
requests[]

CreateMessageRequest

Required. A maximum of 300 messages can be created in a batch. CreateMessageRequest.message.send_time is required. All created messages will have identical Message.create_time.

BatchCreateMessagesResponse

The response message for Conversations.BatchCreateMessages.

Fields
messages[]

Message

Messages created.

BatchDeleteEntitiesRequest

The request message for EntityTypes.BatchDeleteEntities.

Fields
parent

string

Required. The name of the entity type to delete entries for. Supported formats: - projects/<Project ID>/agent/entityTypes/<Entity Type ID> - projects/<Project ID>/locations/<Location ID>/agent/entityTypes/<Entity Type ID>

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.entityTypes.batchDeleteEntities
entity_values[]

string

Required. The reference values of the entities to delete. Note that these are not fully-qualified names, i.e. they don't start with projects/<Project ID>.

language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

BatchDeleteEntityTypesRequest

The request message for EntityTypes.BatchDeleteEntityTypes.

Fields
parent

string

Required. The name of the agent to delete all entity types from. Supported formats: - projects/<Project ID>/agent, - projects/<Project ID>/locations/<Location ID>/agent.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.entityTypes.batchDelete
entity_type_names[]

string

Required. The names of the entity types to delete. All names must point to the same agent as parent.

BatchDeleteIntentsRequest

The request message for Intents.BatchDeleteIntents.

Fields
parent

string

Required. The name of the agent to delete intents from. Supported formats:

  • projects/<Project ID>/agent
  • projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.intents.batchDelete
intents[]

Intent

Required. The collection of intents to delete. Only the intent name must be filled in.

BatchUpdateEntitiesRequest

The request message for EntityTypes.BatchUpdateEntities.

Fields
parent

string

Required. The name of the entity type to update or create entities in. Supported formats: - projects/<Project ID>/agent/entityTypes/<Entity Type ID> - projects/<Project ID>/locations/<Location ID>/agent/entityTypes/<Entity Type ID>

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.entityTypes.batchUpdateEntities
entities[]

Entity

Required. The entities to update or create.

language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

update_mask

FieldMask

Optional. The mask to control which fields get updated.

BatchUpdateEntityTypesRequest

The request message for EntityTypes.BatchUpdateEntityTypes.

Fields
parent

string

Required. The name of the agent to update or create entity types in. Supported formats: - projects/<Project ID>/agent - projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.entityTypes.batchUpdate
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

update_mask

FieldMask

Optional. The mask to control which fields get updated.

Union field entity_type_batch. The source of the entity type batch.

For each entity type in the batch:

  • If name is specified, we update an existing entity type.
  • If name is not specified, we create a new entity type.

entity_type_batch can be only one of the following:
entity_type_batch_uri

string

The URI to a Google Cloud Storage file containing entity types to update or create. The file format can either be a serialized proto (of EntityBatch type) or a JSON object. Note: The URI must start with "gs://".

entity_type_batch_inline

EntityTypeBatch

The collection of entity types to update or create.
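
For illustration only, a minimal sketch of calling EntityTypes.BatchUpdateEntityTypes with a Cloud Storage batch file, assuming the google-cloud-dialogflow Python client library; the bucket and object names are placeholders:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.EntityTypesClient()
operation = client.batch_update_entity_types(
    request={
        "parent": "projects/my-project/agent",
        # Exactly one of entity_type_batch_uri or entity_type_batch_inline may be set.
        "entity_type_batch_uri": "gs://my-bucket/entity_types.json",
        "language_code": "en",
    }
)
response = operation.result()  # BatchUpdateEntityTypesResponse
print(len(response.entity_types))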

BatchUpdateEntityTypesResponse

The response message for EntityTypes.BatchUpdateEntityTypes.

Fields
entity_types[]

EntityType

The collection of updated or created entity types.

BatchUpdateIntentsRequest

The request message for Intents.BatchUpdateIntents.

Fields
parent

string

Required. The name of the agent to update or create intents in. Supported formats:

  • projects/<Project ID>/agent
  • projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.intents.batchUpdate
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

update_mask

FieldMask

Optional. The mask to control which fields get updated.

intent_view

IntentView

Optional. The resource view to apply to the returned intent.

Union field intent_batch. Required. The source of the intent batch.

For each intent in the batch:

  • If name is specified, we update an existing intent.
  • If name is not specified, we create a new intent.

intent_batch can be only one of the following:
intent_batch_uri

string

The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://".

intent_batch_inline

IntentBatch

The collection of intents to update or create.
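
For illustration only, a minimal sketch of calling Intents.BatchUpdateIntents with a Cloud Storage batch file, assuming the google-cloud-dialogflow Python client library; the bucket and object names are placeholders:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.IntentsClient()
operation = client.batch_update_intents(
    request={
        "parent": "projects/my-project/agent",
        # Exactly one of intent_batch_uri or intent_batch_inline may be set.
        "intent_batch_uri": "gs://my-bucket/intents.json",
        "intent_view": dialogflow.IntentView.INTENT_VIEW_FULL,
    }
)
response = operation.result()  # BatchUpdateIntentsResponse
print(len(response.intents))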

BatchUpdateIntentsResponse

The response message for Intents.BatchUpdateIntents.

Fields
intents[]

Intent

The collection of updated or created intents.

ClearSuggestionFeatureConfigOperationMetadata

Metadata for a ConversationProfiles.ClearSuggestionFeatureConfig operation.

Fields
conversation_profile

string

The resource name of the conversation profile. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>

participant_role

Role

Required. The participant role to remove the suggestion feature config for. Only HUMAN_AGENT or END_USER can be used.

suggestion_feature_type

Type

Required. The type of the suggestion feature to remove.

create_time

Timestamp

Timestamp when the request was created. The time is measured on the server side.

ClearSuggestionFeatureConfigRequest

The request message for ConversationProfiles.ClearSuggestionFeatureConfig.

Fields
conversation_profile

string

Required. The Conversation Profile to remove the suggestion feature config from. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

participant_role

Role

Required. The participant role to remove the suggestion feature config for. Only HUMAN_AGENT or END_USER can be used.

suggestion_feature_type

Type

Required. The type of the suggestion feature to remove.

CloudConversationDebuggingInfo

Cloud conversation info for easier debugging. It will get populated in StreamingDetectIntentResponse or StreamingAnalyzeContentResponse when the flag enable_debugging_info is set to true in corresponding requests.

Fields
audio_data_chunks

int32

Number of input audio data chunks in streaming requests.

result_end_time_offset

Duration

Time offset of the end of speech utterance relative to the beginning of the first audio chunk.

first_audio_duration

Duration

Duration of first audio chunk.

single_utterance

bool

Whether client used single utterance mode.

speech_partial_results_end_times[]

Duration

Time offsets of the speech partial results relative to the beginning of the stream.

speech_final_results_end_times[]

Duration

Time offsets of the speech final results (is_final=true) relative to the beginning of the stream.

partial_responses

int32

Total number of partial responses.

speaker_id_passive_latency_ms_offset

int32

Time offset of Speaker ID stream close time relative to the Speech stream close time in milliseconds. Only meaningful for conversations involving passive verification.

bargein_event_triggered

bool

Whether a barge-in event is triggered in this request.

speech_single_utterance

bool

Whether speech uses single utterance mode.

dtmf_partial_results_times[]

Duration

Time offsets of the DTMF partial results relative to the beginning of the stream.

dtmf_final_results_times[]

Duration

Time offsets of the DTMF final results relative to the beginning of the stream.

single_utterance_end_time_offset

Duration

Time offset of the end-of-single-utterance signal relative to the beginning of the stream.

no_speech_timeout

Duration

No speech timeout settings for the stream.

endpointing_timeout

Duration

Speech endpointing timeout settings for the stream.

is_input_text

bool

Whether the streaming terminates with an injected text query.

client_half_close_time_offset

Duration

Client half close time in terms of input audio duration.

client_half_close_streaming_time_offset

Duration

Client half close time in terms of API streaming duration.

CompileSuggestionRequest

The request message for Participants.CompileSuggestion.

Fields
parent

string

Required. The name of the participant to fetch suggestion for. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.suggestions.list
latest_message

string

Optional. The name of the latest conversation message to compile suggestion for. If empty, it will be the latest message of the conversation.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Optional. Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. If zero or less than zero, 20 is used.

CompileSuggestionResponse

The response message for Participants.CompileSuggestion.

Fields
suggestion

Suggestion

The compiled suggestion.

latest_message

string

The name of the latest conversation message used to compile the suggestion.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Number of messages prior to and including latest_message to compile the suggestion. It may be smaller than the CompileSuggestionRequest.context_size field in the request if there aren't that many messages in the conversation.

CompleteConversationRequest

The request message for Conversations.CompleteConversation.

Fields
name

string

Required. Resource identifier of the conversation to close. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.conversations.complete

Context

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent.

Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts.

For more information about context, see the Contexts guide.

Fields
name

string

Required. The unique identifier of the context. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>,

The Context ID is always converted to lowercase, may only contain characters in a-zA-Z0-9_-% and may be at most 250 bytes long.

If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

The following context names are reserved for internal use by Dialogflow. You should not use these contexts or create contexts with these names:

  • __system_counters__
  • *_id_dialog_context
  • *_dialog_params_size
lifespan_count

int32

Optional. The number of conversational query requests after which the context expires. The default is 0. If set to 0, the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.

parameters

Struct

Optional. The collection of parameters associated with this context.

Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: If parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
  • MapValue value: If parameter's entity type is a composite entity then use map from composite entity property names to property values, otherwise, use parameter value.
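
For illustration only, a minimal sketch of creating a context with a lifespan and parameters, assuming the google-cloud-dialogflow Python client library; the project, session, and context IDs are placeholders:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.ContextsClient()
session = "projects/my-project/agent/sessions/my-session"
context = client.create_context(
    request={
        "parent": session,
        "context": {
            "name": f"{session}/contexts/order-followup",
            "lifespan_count": 5,  # expires after 5 queries or 20 minutes of inactivity
            "parameters": {"order_id": "12345"},  # plain dict marshaled to a Struct
        },
    }
)
print(context.name)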

Conversation

Represents a conversation. A conversation is an interaction between an agent, including live agents and Dialogflow agents, and a support customer. Conversations can include phone calls and text-based chat sessions.

Fields
name

string

Output only. The unique identifier of this conversation. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

lifecycle_state

LifecycleState

Output only. The current state of the Conversation.

conversation_profile

string

Required. The Conversation Profile to be used to configure this Conversation. This field cannot be updated. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

phone_number

ConversationPhoneNumber

Output only. Required if the conversation is to be connected over telephony.

conversation_stage

ConversationStage

The stage of a conversation. It indicates whether the virtual agent or a human agent is handling the conversation.

If the conversation is created with a conversation profile that has a Dialogflow config set, defaults to ConversationStage.VIRTUAL_AGENT_STAGE; otherwise, defaults to ConversationStage.HUMAN_ASSIST_STAGE.

If the conversation is created with a conversation profile that has a Dialogflow config set but explicitly sets conversation_stage to ConversationStage.HUMAN_ASSIST_STAGE, it skips the ConversationStage.VIRTUAL_AGENT_STAGE stage and goes directly to ConversationStage.HUMAN_ASSIST_STAGE.

start_time

Timestamp

Output only. The time the conversation was started.

end_time

Timestamp

Output only. The time the conversation was finished.

ConversationStage

Enumeration of the different conversation stages a conversation can be in. Reference: https://cloud.google.com/dialogflow/priv/docs/contact-center/basics#stages

Enums
CONVERSATION_STAGE_UNSPECIFIED Unknown. Should never be used after a conversation is successfully created.
VIRTUAL_AGENT_STAGE The conversation should return virtual agent responses into the conversation.
HUMAN_ASSIST_STAGE The conversation should not provide responses, just listen and provide suggestions.

LifecycleState

Enumeration of the completion status of the conversation.

Enums
LIFECYCLE_STATE_UNSPECIFIED Unknown.
IN_PROGRESS Conversation is currently open for media analysis.
COMPLETED Conversation has been completed.

ConversationEvent

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

Fields
conversation

string

Required. The unique identifier of the conversation this notification refers to. Format: projects/<Project ID>/conversations/<Conversation ID>.

type

Type

Required. The type of the event that this notification refers to.

error_status

Status

Optional. More detailed information about an error. Only set for type UNRECOVERABLE_ERROR_IN_PHONE_CALL.

Union field payload. Payload of conversation event. payload can be only one of the following:
new_message_payload

Message

Payload of NEW_MESSAGE event.

Type

Enumeration of the types of events available.

Enums
TYPE_UNSPECIFIED Type not set.
CONVERSATION_STARTED A new conversation has been opened. This is fired when a telephone call is answered, or a conversation is created via the API.
CONVERSATION_FINISHED An existing conversation has closed. This is fired when a telephone call is terminated, or a conversation is closed via the API.
HUMAN_INTERVENTION_NEEDED An existing conversation has received notification from Dialogflow that human intervention is required.
NEW_MESSAGE An existing conversation has received a new message, either from the API or telephony. It is configured in ConversationProfile.new_message_event_notification_config.
UNRECOVERABLE_ERROR

Unrecoverable error during a telephone call.

In general non-recoverable errors only occur if something was misconfigured in the ConversationProfile corresponding to the call. After a non-recoverable error, Dialogflow may stop responding.

We don't fire this event:

  • in an API call because we can directly return the error, or,
  • when we can recover from an error.

ConversationPhoneNumber

Represents a phone number for telephony integration. It allows for connecting a particular conversation over telephony.

Fields
phone_number

string

Output only. The phone number to connect to this conversation.

ConversationProfile

Defines the services to connect to incoming Dialogflow conversations.

Fields
name

string

The unique identifier of this conversation profile. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

display_name

string

Required. Human readable name for this profile. Max length 1024 bytes.

create_time

Timestamp

Output only. Create time of the conversation profile.

update_time

Timestamp

Output only. Update time of the conversation profile.

automated_agent_config

AutomatedAgentConfig

Configuration for an automated agent to use with this profile.

human_agent_assistant_config

HumanAgentAssistantConfig

Configuration for agent assistance to use with this profile.

human_agent_handoff_config

HumanAgentHandoffConfig

Configuration for connecting to a live agent.

Currently, this feature is not generally available; please contact Google to get access.

notification_config

NotificationConfig

Configuration for publishing conversation lifecycle events.

logging_config

LoggingConfig

Configuration for logging conversation lifecycle events.

new_message_event_notification_config

NotificationConfig

Configuration for publishing new message events. Events will be sent in the format of ConversationEvent.

stt_config

SpeechToTextConfig

Settings for speech transcription.

language_code

string

Language code for the conversation profile. If not specified, the language is en-US. The language at the ConversationProfile level should be set for all non-en-US languages. This should be a BCP-47 language tag. Example: "en-US".

time_zone

string

The time zone of this conversational profile from the time zone database, e.g., America/New_York, Europe/Paris. Defaults to America/New_York.

security_settings

string

Name of the CX SecuritySettings reference for the agent. Format: projects/<Project ID>/locations/<Location ID>/securitySettings/<Security Settings ID>.

tts_config

SynthesizeSpeechConfig

Configuration for Text-to-Speech synthesis.

Used by Phone Gateway to specify synthesis options. If the agent also defines synthesis options, the agent settings override the options here.
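
For illustration only, a minimal sketch of a conversation profile that routes incoming conversations to an ES virtual agent, assuming the google-cloud-dialogflow Python client library and the CreateConversationProfile method described below; the project ID is a placeholder:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.ConversationProfilesClient()
profile = client.create_conversation_profile(
    request={
        "parent": "projects/my-project/locations/global",
        "conversation_profile": {
            "display_name": "support-profile",
            "automated_agent_config": {
                # '-' selects the default draft environment of the ES agent.
                "agent": "projects/my-project/locations/global/agent/environments/-",
            },
            "language_code": "en-US",
        },
    }
)
print(profile.name)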

CreateContextRequest

The request message for Contexts.CreateContext.

Fields
parent

string

Required. The session to create a context for. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>.

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.contexts.create
context

Context

Required. The context to create.

CreateConversationProfileRequest

The request message for ConversationProfiles.CreateConversationProfile.

Fields
parent

string

Required. The project to create a conversation profile for. Format: projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.conversationProfiles.create
conversation_profile

ConversationProfile

Required. The conversation profile to create.

CreateConversationRequest

The request message for Conversations.CreateConversation.

Fields
parent

string

Required. Resource identifier of the project creating the conversation. Format: projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.conversations.create
conversation

Conversation

Required. The conversation to create.

conversation_id

string

Optional. Identifier of the conversation. Generally it's auto-generated by Google. Only set it if you cannot wait for the response to return an auto-generated one to you.

The conversation ID must match the regular expression [a-zA-Z][a-zA-Z0-9_-]* and be 3 to 64 characters long. If the field is provided, the caller is responsible for (1) the uniqueness of the ID, otherwise the request will be rejected, and (2) consistently using either custom or auto-generated IDs within a project to better ensure uniqueness.
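
For illustration only, a minimal sketch of calling Conversations.CreateConversation with an auto-generated conversation ID, assuming the google-cloud-dialogflow Python client library; the project and profile IDs are placeholders:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.ConversationsClient()
conversation = client.create_conversation(
    request={
        "parent": "projects/my-project/locations/global",
        "conversation": {
            "conversation_profile": (
                "projects/my-project/locations/global"
                "/conversationProfiles/my-profile"
            ),
        },
        # conversation_id is omitted, so an identifier is auto-generated.
    }
)
print(conversation.name)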

CreateDocumentRequest

Request message for Documents.CreateDocument.

Fields
parent

string

Required. The knowledge base to create a document for. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.documents.create
document

Document

Required. The document to create.

import_gcs_custom_metadata

bool

Whether to import custom metadata from Google Cloud Storage. Only valid when the document source is a Google Cloud Storage URI.

CreateEntityTypeRequest

The request message for EntityTypes.CreateEntityType.

Fields
parent

string

Required. The agent to create an entity type for. Supported formats: - projects/<Project ID>/agent - projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.entityTypes.create
entity_type

EntityType

Required. The entity type to create.

language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

CreateEnvironmentRequest

The request message for Environments.CreateEnvironment.

Fields
parent

string

Required. The agent to create an environment for. Supported formats: - projects/<Project ID>/agent - projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.environments.create
environment

Environment

Required. The environment to create.

environment_id

string

Required. The unique id of the new environment.

CreateIntentRequest

The request message for Intents.CreateIntent.

Fields
parent

string

Required. The agent to create an intent for. Supported formats:

  • projects/<Project ID>/agent
  • projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.intents.create
intent

Intent

Required. The intent to create.

language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

intent_view

IntentView

Optional. The resource view to apply to the returned intent.

CreateKnowledgeBaseRequest

Request message for KnowledgeBases.CreateKnowledgeBase.

Fields
parent

string

Required. The project to create a knowledge base for. Format: projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.knowledgeBases.create
knowledge_base

KnowledgeBase

Required. The knowledge base to create.

CreateMessageRequest

The request message to create one Message. Currently it is only used in BatchCreateMessagesRequest.

Fields
parent

string

Required. Resource identifier of the conversation to create the message in. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.messages.create
message

Message

Required. The message to create. Message.participant is required.

CreateParticipantRequest

The request message for Participants.CreateParticipant.

Fields
parent

string

Required. Resource identifier of the conversation to add the participant to. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.participants.create
participant

Participant

Required. The participant to create.

CreateSessionEntityTypeRequest

The request message for SessionEntityTypes.CreateSessionEntityType.

Fields
parent

string

Required. The session to create a session entity type for. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>.

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.sessionEntityTypes.create
session_entity_type

SessionEntityType

Required. The session entity type to create.

CreateVersionRequest

The request message for Versions.CreateVersion.

Fields
parent

string

Required. The agent to create a version for. Supported formats: - projects/<Project ID>/agent - projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.versions.create
version

Version

Required. The version to create.

DeleteAgentRequest

The request message for Agents.DeleteAgent.

Fields
parent

string

Required. The project that the agent to delete is associated with. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.delete

DeleteAllContextsRequest

The request message for Contexts.DeleteAllContexts.

Fields
parent

string

Required. The name of the session to delete all contexts from. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>.

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.contexts.deleteAll

DeleteContextRequest

The request message for Contexts.DeleteContext.

Fields
name

string

Required. The name of the context to delete. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>,

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.contexts.delete

DeleteConversationProfileRequest

The request message for ConversationProfiles.DeleteConversationProfile.

This operation fails if the conversation profile is still referenced from a phone number.

Fields
name

string

Required. The name of the conversation profile to delete. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.conversationProfiles.delete

DeleteDocumentRequest

Request message for Documents.DeleteDocument.

Fields
name

string

Required. The name of the document to delete. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.documents.delete

DeleteEntityTypeRequest

The request message for EntityTypes.DeleteEntityType.

Fields
name

string

Required. The name of the entity type to delete. Supported formats: - projects/<Project ID>/agent/entityTypes/<Entity Type ID> - projects/<Project ID>/locations/<Location ID>/agent/entityTypes/<Entity Type ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.entityTypes.delete

DeleteEnvironmentRequest

The request message for Environments.DeleteEnvironment.

Fields
name

string

Required. The name of the environment to delete. Format: - projects/<Project ID>/agent/environments/<Environment ID> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.environments.delete

DeleteIntentRequest

The request message for Intents.DeleteIntent.

Fields
name

string

Required. The name of the intent to delete. If this intent has direct or indirect followup intents, we also delete them.

Supported formats:

  • projects/<Project ID>/agent/intents/<Intent ID>
  • projects/<Project ID>/locations/<Location ID>/agent/intents/<Intent ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.intents.delete

DeleteKnowledgeBaseRequest

Request message for KnowledgeBases.DeleteKnowledgeBase.

Fields
name

string

Required. The name of the knowledge base to delete. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.knowledgeBases.delete
force

bool

Optional. Force deletes the knowledge base. When set to true, any documents in the knowledge base are also deleted.

DeleteSessionEntityTypeRequest

The request message for SessionEntityTypes.DeleteSessionEntityType.

Fields
name

string

Required. The name of the entity type to delete. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.sessionEntityTypes.delete

DeleteVersionRequest

The request message for Versions.DeleteVersion.

Fields
name

string

Required. The name of the version to delete. Supported formats: - projects/<Project ID>/agent/versions/<Version ID> - projects/<Project ID>/locations/<Location ID>/agent/versions/<Version ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.versions.delete

DetectIntentRequest

The request to detect user's intent.

Fields
session

string

Required. The name of the session this query is sent to. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>.

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment (Environment ID might be referred to as environment name in some places). If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be random numbers or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters. For more information, see the API interactions guide.

Note: Always use agent versions for production traffic. See Versions and environments.

Authorization requires the following IAM permission on the specified resource session:

  • dialogflow.sessions.detectIntent
query_params

QueryParameters

The parameters of this query.

query_input

QueryInput

Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.

output_audio_config

OutputAudioConfig

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.

output_audio_config_mask

FieldMask

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.

If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.

input_audio

bytes

The natural language speech audio to be processed. This field should be populated iff query_input is set to an input audio config. A single request can contain up to 1 minute of speech audio data.
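
For illustration only, a minimal sketch of a text-only DetectIntent call, assuming the google-cloud-dialogflow Python client library; the project and session IDs are placeholders:

from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.SessionsClient()
response = client.detect_intent(
    request={
        "session": "projects/my-project/agent/sessions/my-session",
        "query_input": {
            # Exactly one of audio_config, text, or event may be set.
            "text": {"text": "book a table for two", "language_code": "en-US"},
        },
    }
)
print(response.query_result.fulfillment_text)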

DetectIntentResponse

The message returned from the DetectIntent method.

Fields
response_id

string

The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.

query_result

QueryResult

The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.

alternative_query_results[]

QueryResult

If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing QueryResult.intent_detection_confidence. If Knowledge Connectors are disabled, this field will be empty until multiple responses for regular intents are supported, at which point those additional results will be surfaced here.

webhook_status

Status

Specifies the status of the webhook request.

output_audio

bytes

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.

output_audio_config

OutputAudioConfig

The config used by the speech synthesizer to generate the output audio.

DialogflowAssistAnswer

Represents a Dialogflow assist answer.

Fields
answer_record

string

The name of the answer record, in the format of "projects/<Project ID>/locations/<Location ID>/answerRecords/<Answer Record ID>".

Union field result. Result from DetectIntent for one matched intent. result can be only one of the following:
query_result

QueryResult

Result from v2 agent.

intent_suggestion

IntentSuggestion

An intent suggestion generated from conversation.

Document

A knowledge document to be used by a KnowledgeBase.

For more information, see the knowledge base guide.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

Fields
name

string

Optional. The document resource name. The name must be empty when creating a document. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

display_name

string

Required. The display name of the document. The name must be 1024 bytes or less; otherwise, the creation request fails.

mime_type

string

Required. The MIME type of this document.

knowledge_types[]

KnowledgeType

Required. The knowledge type of document content.

enable_auto_reload

bool

Optional. If true, we try to automatically reload the document every day (at a time picked by the system). If false or unspecified, we don't try to automatically reload the document.

Currently you can only enable automatic reload for documents sourced from a public URL; see the source field for the source types.

Reload status can be tracked in latest_reload_status. If a reload fails, we will keep the document unchanged.

If a reload fails with internal errors, the system will try to reload the document on the next day. If a reload fails with non-retriable errors (e.g. PERMISSION_DENIED), the system will not try to reload the document anymore. You need to manually reload the document by calling ReloadDocument successfully in order to clear the errors.

latest_reload_status

ReloadStatus

Output only. The time and status of the latest reload. This reload may have been triggered automatically or manually and may not have succeeded.

metadata

map<string, string>

Optional. Metadata for the document. The metadata supports arbitrary key-value pairs. Suggested use cases include storing a document's title, an external URL distinct from the document's content_uri, etc. The max size of a key or a value of the metadata is 1024 bytes.

state

State

Output only. The current state of the document.

Union field source. The source of this document. source can be only one of the following:
content_uri

string

The URI where the file content is located.

For documents stored in Google Cloud Storage, these URIs must have the form gs://<bucket-name>/<object-name>.

NOTE: External URLs must correspond to public webpages, i.e., they must be indexed by Google Search. In particular, URLs for showing documents in Google Cloud Storage (i.e. the URL in your browser) are not supported. Instead use the gs:// format URI described above.

content
(deprecated)

string

The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types. Note: This field is in the process of being deprecated; please use raw_content instead.

raw_content

bytes

The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types.
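
The sketch below is a minimal, illustrative Document payload for projects.knowledgeBases.documents.create, written as a Python dict using the field names listed above; the display name, MIME type choice, and URL are placeholders rather than values from this reference.

    # Hypothetical FAQ document sourced from a public web page (placeholders throughout).
    faq_document = {
        "display_name": "Store FAQ",                    # required, at most 1024 bytes
        "mime_type": "text/html",                       # required; FAQ accepts HTML or CSV
        "knowledge_types": ["FAQ"],                     # required
        "content_uri": "https://example.com/faq.html",  # one member of the `source` union
        "enable_auto_reload": True,                     # only supported for public-URL sources
    }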

KnowledgeType

The knowledge type of document content.

Enums
KNOWLEDGE_TYPE_UNSPECIFIED The type is unspecified or arbitrary.
FAQ

The document content contains question and answer pairs as either HTML or CSV. Typical FAQ HTML formats are parsed accurately, but unusual formats may fail to be parsed.

CSV must have questions in the first column and answers in the second, with no header. Because of this explicit format, they are always parsed accurately.

EXTRACTIVE_QA Documents for which unstructured text is extracted and used for question answering.
ARTICLE_SUGGESTION The entire document content as a whole can be used for query results. Only for Contact Center Solutions on Dialogflow.
AGENT_FACING_SMART_REPLY The document contains agent-facing Smart Reply entries.
SMART_REPLY The legacy enum for agent-facing smart reply feature.

ReloadStatus

The status of a reload attempt.

Fields
time

Timestamp

Output only. The time of a reload attempt. This reload may have been triggered automatically or manually and may not have succeeded.

status

Status

Output only. The status of a reload attempt or the initial load.

State

Possible states of the document

Enums
STATE_UNSPECIFIED The document state is unspecified.
CREATING The document creation is in progress.
ACTIVE The document is active and ready to use.
UPDATING The document update is in progress.
RELOADING The document is reloading.
DELETING The document deletion is in progress.

DtmfParameters

The message in the response that indicates the parameters of DTMF.

Fields
accepts_dtmf_input

bool

Indicates whether DTMF input can be handled in the next request.

EntityType

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted.

Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent.

For more information, see the Entity guide.

Fields
name

string

The unique identifier of the entity type. Required for EntityTypes.UpdateEntityType and EntityTypes.BatchUpdateEntityTypes methods. Supported formats: - projects/<Project ID>/agent/entityTypes/<Entity Type ID> - projects/<Project ID>/locations/<Location ID>/agent/entityTypes/<Entity Type ID>

display_name

string

Required. The name of the entity type.

kind

Kind

Required. Indicates the kind of entity type.

auto_expansion_mode

AutoExpansionMode

Optional. Indicates whether the entity type can be automatically expanded.

entities[]

Entity

Optional. The collection of entity entries associated with the entity type.

enable_fuzzy_extraction

bool

Optional. Enables fuzzy entity extraction during classification.

AutoExpansionMode

Represents different entity type expansion modes. Automated expansion allows an agent to recognize values that have not been explicitly listed in the entity (for example, new kinds of shopping list items).

Enums
AUTO_EXPANSION_MODE_UNSPECIFIED Auto expansion disabled for the entity.
AUTO_EXPANSION_MODE_DEFAULT Allows an agent to recognize values that have not been explicitly listed in the entity.

Entity

An entity entry for an associated entity type.

Fields
value

string

Required. The primary value associated with this entity entry. For example, if the entity type is vegetable, the value could be scallions.

For KIND_MAP entity types:

  • A reference value to be used in place of synonyms.

For KIND_LIST entity types:

  • A string that can contain references to other entity types (with or without aliases).
synonyms[]

string

Required. A collection of value synonyms. For example, if the entity type is vegetable, and value is scallions, a synonym could be green onions.

For KIND_LIST entity types:

  • This collection must contain exactly one synonym equal to value.
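
As a quick illustration of the value/synonyms semantics above, here is a sketch of a KIND_MAP entity type as a Python dict; the display name and entries simply extend the vegetable example used in this section.

    # Illustrative KIND_MAP entity type: each entry maps synonyms to a reference value.
    vegetable_entity_type = {
        "display_name": "vegetable",
        "kind": "KIND_MAP",
        "auto_expansion_mode": "AUTO_EXPANSION_MODE_DEFAULT",
        "entities": [
            {"value": "scallions", "synonyms": ["scallions", "green onions"]},
            {"value": "bell pepper", "synonyms": ["bell pepper", "sweet pepper"]},
        ],
    }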

Kind

Represents kinds of entities.

Enums
KIND_UNSPECIFIED Not specified. This value should never be used.
KIND_MAP Map entity types allow mapping of a group of synonyms to a reference value.
KIND_LIST List entity types contain a set of entries that do not map to reference values. However, list entity types can contain references to other entity types (with or without aliases).
KIND_REGEXP Regexp entity types allow you to specify regular expressions in entry values.

EntityTypeBatch

This message is a wrapper around a collection of entity types.

Fields
entity_types[]

EntityType

A collection of entity types.

Environment

You can create multiple versions of your agent and publish them to separate environments.

When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent.

When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for:

  • testing
  • development
  • production
  • etc.

For more information, see the versions and environments guide.

Fields
name

string

Output only. The unique identifier of this agent environment. Supported formats: - projects/<Project ID>/agent/environments/<Environment ID> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>

description

string

Optional. The developer-provided description for this environment. The maximum length is 500 characters. If exceeded, the request is rejected.

agent_version

string

Optional. The agent version loaded into this environment. Supported formats: - projects/<Project ID>/agent/versions/<Version ID> - projects/<Project ID>/locations/<Location ID>/agent/versions/<Version ID>

state

State

Output only. The state of this environment. This field is read-only, i.e., it cannot be set by create and update methods.

update_time

Timestamp

Output only. The last update time of this environment. This field is read-only, i.e., it cannot be set by create and update methods.

text_to_speech_settings

TextToSpeechSettings

Optional. Text to speech settings for this environment.

fulfillment

Fulfillment

Optional. The fulfillment settings to use for this environment.
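
The following is a rough sketch of an Environment resource built from the fields above; the project, environment, and version IDs are placeholders.

    # Hypothetical custom environment serving a pinned agent version.
    staging_environment = {
        "name": "projects/my-project/agent/environments/staging",
        "description": "Pre-production testing environment",
        "agent_version": "projects/my-project/agent/versions/12",
        # state and update_time are output only and set by the service.
    }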

State

Represents an environment state. When an environment is pointed to a new agent version, the environment is temporarily set to the LOADING state. During that time, the environment keeps on serving the previous version of the agent. After the new agent version is done loading, the environment is set back to the RUNNING state.

Enums
STATE_UNSPECIFIED Not specified. This value is not used.
STOPPED Stopped.
LOADING Loading.
RUNNING Running.

EnvironmentHistory

The response message for Environments.GetEnvironmentHistory.

Fields
parent

string

Output only. The name of the environment this history is for. Supported formats: - projects/<Project ID>/agent/environments/<Environment ID> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>

entries[]

Entry

Output only. The list of agent environments. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Output only. Token to retrieve the next page of results, or empty if there are no more results in the list.

Entry

Represents an environment history entry.

Fields
agent_version

string

The agent version loaded into this environment history entry.

description

string

The developer-provided description for this environment history entry.

create_time

Timestamp

The creation time of this environment history entry.

EventInput

Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".

Fields
name

string

Required. The unique identifier of the event.

parameters

Struct

The collection of parameters associated with the event.

Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: If parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
  • MapValue value: If parameter's entity type is a composite entity then use map from composite entity property names to property values, otherwise, use parameter value.
language_code

string

Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

This field is ignored when used in the context of a WebhookResponse.followup_event_input field, because the language was already defined in the originating detect intent request.
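
Tying the pieces together, a minimal EventInput matching the welcome_event example above could look like the following Python dict; the language code is an illustrative choice.

    # EventInput for the personalized welcome example described above.
    event_input = {
        "name": "welcome_event",
        "parameters": {"name": "Sam"},   # Struct expressed as a plain mapping
        "language_code": "en-US",
    }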

ExportAgentRequest

The request message for Agents.ExportAgent.

Fields
parent

string

Required. The project that the agent to export is associated with. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.export
agent_uri

string

Optional. The Google Cloud Storage URI to export the agent to. The format of this URI must be gs://<bucket-name>/<object-name>. If left unspecified, the serialized agent is returned inline.

Dialogflow performs a write operation for the Cloud Storage object on the caller's behalf, so your request authentication must have write permissions for the object. For more information, see Dialogflow access control.

ExportAgentResponse

The response message for Agents.ExportAgent.

Fields
Union field agent. The exported agent. agent can be only one of the following:
agent_uri

string

The URI to a file containing the exported agent. This field is populated only if agent_uri is specified in ExportAgentRequest.

agent_content

bytes

Zip compressed raw byte content for agent.
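
To make the two export modes concrete, here is a sketch of ExportAgentRequest bodies as Python dicts; the project and bucket names are placeholders. With agent_uri set, the ZIP is written to Cloud Storage; without it, the ZIP bytes come back inline in ExportAgentResponse.agent_content.

    # Export to a Cloud Storage object (the caller needs write access to the object).
    export_to_gcs = {
        "parent": "projects/my-project",
        "agent_uri": "gs://my-bucket/exported-agent.zip",
    }

    # Export inline: the response carries the ZIP in agent_content.
    export_inline = {
        "parent": "projects/my-project",
    }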

ExportOperationMetadata

Metadata related to the Export Data Operations (e.g. ExportDocument).

Fields
exported_gcs_destination

GcsDestination

Cloud Storage file path of the exported data.

FaqAnswer

Represents answer from "frequently asked questions".

Fields
answer

string

The piece of text from the source knowledge base document.

confidence

float

The system's confidence score that this Knowledge answer is a good match for this conversational query, ranging from 0.0 (completely uncertain) to 1.0 (completely certain).

question

string

The corresponding FAQ question.

source

string

Indicates which Knowledge Document this answer was extracted from. Format: projects/<Project ID>/locations/<Location ID>/agent/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

metadata

map<string, string>

A map that contains metadata about the answer and the document from which it originates.

answer_record

string

The name of the answer record, in the format of "projects/<Project ID>/locations/<Location ID>/answerRecords/<Answer Record ID>"

Fulfillment

By default, your agent responds to a matched intent with a static response. As an alternative, you can provide a more dynamic response by using fulfillment. When you enable fulfillment for an intent, Dialogflow responds to that intent by calling a service that you define. For example, if an end-user wants to schedule a haircut on Friday, your service can check your database and respond to the end-user with availability information for Friday.

For more information, see the fulfillment guide.

Fields
name

string

Required. The unique identifier of the fulfillment. Supported formats:

  • projects/<Project ID>/agent/fulfillment
  • projects/<Project ID>/locations/<Location ID>/agent/fulfillment

This field is not used for Fulfillment in an Environment.

display_name

string

The human-readable name of the fulfillment, unique within the agent.

This field is not used for Fulfillment in an Environment.

enabled

bool

Whether fulfillment is enabled.

features[]

Feature

The field defines whether the fulfillment is enabled for certain features.

Union field fulfillment. Required. The fulfillment configuration. fulfillment can be only one of the following:
generic_web_service

GenericWebService

Configuration for a generic web service.

Feature

Whether fulfillment is enabled for the specific feature.

Fields
type

Type

The type of the feature that is enabled for fulfillment.

Type

The type of the feature.

Enums
TYPE_UNSPECIFIED Feature type not specified.
SMALLTALK Fulfillment is enabled for SmallTalk.

GenericWebService

Represents configuration for a generic web service. Dialogflow supports two mechanisms for authentication:

  • Basic authentication with username and password.
  • Authentication with additional authentication headers.

More information can be found at: https://cloud.google.com/dialogflow/docs/fulfillment-configure.

Fields
uri

string

Required. The fulfillment URI for receiving POST requests. It must use https protocol.

username

string

The user name for HTTP Basic authentication.

password

string

The password for HTTP Basic authentication.

request_headers

map<string, string>

The HTTP request headers to send together with fulfillment requests.

is_cloud_function
(deprecated)

bool

Optional. Indicates if generic web service is created through Cloud Functions integration. Defaults to false.

is_cloud_function is deprecated. Cloud Functions can now be configured as a regular web service by specifying its uri.
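
The sketch below shows a Fulfillment with a GenericWebService combining both authentication mechanisms described above; the URI, credentials, and custom header are placeholders.

    # Hypothetical fulfillment configuration (placeholders throughout).
    fulfillment = {
        "name": "projects/my-project/agent/fulfillment",
        "enabled": True,
        "generic_web_service": {
            "uri": "https://example.com/webhook",    # must use the https protocol
            "username": "webhook-user",              # HTTP Basic authentication
            "password": "webhook-password",
            "request_headers": {"X-Custom-Token": "placeholder-token"},
        },
    }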

GcsDestination

Google Cloud Storage location for the output.

Fields
uri

string

Required. The Google Cloud Storage URIs for the output. A URI is of the form: gs://bucket/object-prefix-or-name Whether a prefix or name is used depends on the use case. The requesting user must have "write-permission" to the bucket.

GcsSource

Google Cloud Storage location for single input.

Fields
uri

string

Required. The Google Cloud Storage URIs for the inputs. A URI is of the form: gs://bucket/object-prefix-or-name Whether a prefix or name is used depends on the use case.

GcsSources

Google Cloud Storage locations for the inputs.

Fields
uris[]

string

Required. Google Cloud Storage URIs for the inputs. A URI is of the form: gs://bucket/object-prefix-or-name Whether a prefix or name is used depends on the use case.

GenerateStatelessSummaryRequest

The request message for Conversations.GenerateStatelessSummary.

Fields
stateless_conversation

MinimalConversation

Required. The conversation to suggest a summary for.

conversation_profile

ConversationProfile

Required. A ConversationProfile containing information required for Summary generation. Required fields: {language_code, security_settings} Optional fields: {agent_assistant_config}

latest_message

string

The name of the latest conversation message used as context for generating a Summary. If empty, the latest message of the conversation will be used. The format is specific to the user and the names of the messages provided.

max_context_size

int32

Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. By default 500 and at most 1000.

MinimalConversation

The minimum amount of information required to generate a Summary without having a Conversation resource created.

Fields
messages[]

Message

Required. The messages that the Summary will be generated from. It is expected that this message content is already redacted and does not contain any PII. Required fields: {content, language_code, participant, participant_role} Optional fields: {send_time} If send_time is not provided, then the messages must be provided in chronological order.

parent

string

Required. The parent resource to charge for the Summary's generation. Format: projects/<Project ID>/locations/<Location ID>.
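
As an illustration, a GenerateStatelessSummaryRequest body could be assembled as below; the resource names, message contents, and participant identifiers are placeholders, and the Message fields are limited to those listed as required above.

    # Sketch of a stateless summary request (security_settings omitted as a placeholder).
    summary_request = {
        "stateless_conversation": {
            "parent": "projects/my-project/locations/global",
            "messages": [
                # Without send_time, messages must be provided in chronological order.
                {
                    "content": "Hi, my order never arrived.",
                    "language_code": "en-US",
                    "participant": "end-user-1",
                    "participant_role": "END_USER",
                },
                {
                    "content": "Sorry to hear that, let me check the shipment status.",
                    "language_code": "en-US",
                    "participant": "agent-1",
                    "participant_role": "HUMAN_AGENT",
                },
            ],
        },
        "conversation_profile": {
            "language_code": "en-US",
            # security_settings is also required per the field description above.
        },
        "max_context_size": 100,
    }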

GenerateStatelessSummaryResponse

The response message for Conversations.GenerateStatelessSummary.

Fields
summary

Summary

Generated summary.

latest_message

string

The name of the latest conversation message used as context for compiling suggestion. The format is specific to the user and the names of the messages provided.

context_size

int32

Number of messages prior to and including latest_message used to compile the suggestion. It may be smaller than the GenerateStatelessSummaryRequest.max_context_size field in the request if there weren't that many messages in the conversation.

Summary

Generated summary for a conversation.

Fields
text

string

The summary content that is concatenated into one string.

text_sections

map<string, string>

The summary content that is divided into sections. The key is the section's name and the value is the section's content. There is no specific format for the key or value.

baseline_model_version

string

The baseline model version used to generate this summary. It is empty if a baseline model was not used to generate this summary.

GetAgentRequest

The request message for Agents.GetAgent.

Fields
parent

string

Required. The project that the agent to fetch is associated with. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.get

GetAnswerRecordRequest

Request message for AnswerRecords.GetAnswerRecord.

Fields
name

string

Required. The name of the answer record to retrieve. Format: projects/<Project ID>/locations/<Location ID>/answerRecords/<Answer Record Id>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.answerrecords.get

GetContextRequest

The request message for Contexts.GetContext.

Fields
name

string

Required. The name of the context. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>,

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.contexts.get

GetConversationProfileRequest

The request message for ConversationProfiles.GetConversationProfile.

Fields
name

string

Required. The resource name of the conversation profile. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.conversationProfiles.get

GetConversationRequest

The request message for Conversations.GetConversation.

Fields
name

string

Required. The name of the conversation. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.conversations.get

GetDocumentRequest

Request message for Documents.GetDocument.

Fields
name

string

Required. The name of the document to retrieve. Format projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.documents.get

GetEntityTypeRequest

The request message for EntityTypes.GetEntityType.

Fields
name

string

Required. The name of the entity type. Supported formats: - projects/<Project ID>/agent/entityTypes/<Entity Type ID> - projects/<Project ID>/locations/<Location ID>/agent/entityTypes/<Entity Type ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.entityTypes.get
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

GetEnvironmentHistoryRequest

The request message for Environments.GetEnvironmentHistory.

Fields
parent

string

Required. The name of the environment to retrieve history for. Supported formats: - projects/<Project ID>/agent/environments/<Environment ID> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.environments.getHistory
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

GetEnvironmentRequest

The request message for Environments.GetEnvironment.

Fields
name

string

Required. The name of the environment. Supported formats: - projects/<Project ID>/agent/environments/<Environment ID> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.environments.get

GetFulfillmentRequest

The request message for Fulfillments.GetFulfillment.

Fields
name

string

Required. The name of the fulfillment. Supported formats:

  • projects/<Project ID>/agent/fulfillment
  • projects/<Project ID>/locations/<Location ID>/agent/fulfillment

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.fulfillments.get

GetIntentRequest

The request message for Intents.GetIntent.

Fields
name

string

Required. The name of the intent. Supported formats:

  • projects/<Project ID>/agent/intents/<Intent ID>
  • projects/<Project ID>/locations/<Location ID>/agent/intents/<Intent ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.intents.get
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

intent_view

IntentView

Optional. The resource view to apply to the returned intent.

GetKnowledgeBaseRequest

Request message for KnowledgeBases.GetKnowledgeBase.

Fields
name

string

Required. The name of the knowledge base to retrieve. Format projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.knowledgeBases.get

GetParticipantRequest

The request message for Participants.GetParticipant.

Fields
name

string

Required. The name of the participant. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.participants.get

GetSessionEntityTypeRequest

The request message for SessionEntityTypes.GetSessionEntityType.

Fields
name

string

Required. The name of the session entity type. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.sessionEntityTypes.get

GetValidationResultRequest

The request message for Agents.GetValidationResult.

Fields
parent

string

Required. The project that the agent is associated with. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.get
language_code

string

Optional. The language for which you want a validation result. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used.

GetVersionRequest

The request message for Versions.GetVersion.

Fields
name

string

Required. The name of the version. Supported formats: - projects/<Project ID>/agent/versions/<Version ID> - projects/<Project ID>/locations/<Location ID>/agent/versions/<Version ID>

Authorization requires the following IAM permission on the specified resource name:

  • dialogflow.versions.get

HumanAgentAssistantConfig

Defines the Human Agent Assistant to connect to a conversation.

Fields
notification_config

NotificationConfig

Pub/Sub topic on which to publish new agent assistant events.

human_agent_suggestion_config

SuggestionConfig

Configuration for agent assistance of human agent participant.

end_user_suggestion_config

SuggestionConfig

Configuration for agent assistance of end user participant.

Currently, this feature is not generally available. Please contact Google to get access.

message_analysis_config

MessageAnalysisConfig

Configuration for message analysis.

ConversationModelConfig

Custom conversation models used in agent assist feature.

Supported feature: ARTICLE_SUGGESTION, SMART_COMPOSE, SMART_REPLY, CONVERSATION_SUMMARIZATION.

Fields
model

string

Conversation model resource name. Format: projects/<Project ID>/conversationModels/<Model ID>.

baseline_model_version

string

Version of the current baseline model. It will be ignored if model is set. Valid versions are:

  • Article Suggestion baseline model: 0.9, 1.0 (default)
  • Summarization baseline model: 1.0

ConversationProcessConfig

Config to process conversation.

Fields
recent_sentences_count

int32

Number of recent non-small-talk sentences to use as context for article and FAQ suggestions.

MessageAnalysisConfig

Configuration for analyses to run on each conversation message.

Fields
enable_entity_extraction

bool

Enable entity extraction in conversation messages during the agent assist stage. If unspecified, defaults to false.

Currently, this feature is not generally available. Please contact Google to get access.

enable_sentiment_analysis

bool

Enable sentiment analysis in conversation messages during the agent assist stage. If unspecified, defaults to false. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral: https://cloud.google.com/natural-language/docs/basics#sentiment_analysis

  • For the Participants.StreamingAnalyzeContent method, the result will be in StreamingAnalyzeContentResponse.message.SentimentAnalysisResult.
  • For the Participants.AnalyzeContent method, the result will be in AnalyzeContentResponse.message.SentimentAnalysisResult.
  • For the Conversations.ListMessages method, the result will be in ListMessagesResponse.messages.SentimentAnalysisResult.
  • If Pub/Sub notification is configured, the result will be in ConversationEvent.new_message_payload.SentimentAnalysisResult.

SuggestionConfig

Detail human agent assistant config.

Fields
feature_configs[]

SuggestionFeatureConfig

Configuration of different suggestion features. One feature can have only one config.

group_suggestion_responses

bool

If group_suggestion_responses is false and there are multiple feature_configs in event-based suggestion or StreamingAnalyzeContent, we will try to deliver suggestions to customers as soon as we get a new suggestion. Different types of suggestions based on the same context will be in separate Pub/Sub events or StreamingAnalyzeContentResponses.

If group_suggestion_responses is set to true, all the suggestions to the same participant based on the same context will be grouped into a single Pub/Sub event or StreamingAnalyzeContentResponse.

SuggestionFeatureConfig

Config for suggestion features.

Fields
suggestion_feature

SuggestionFeature

The suggestion feature.

enable_event_based_suggestion

bool

Automatically iterates all participants and tries to compile suggestions.

Supported features: ARTICLE_SUGGESTION, FAQ, DIALOGFLOW_ASSIST, ENTITY_EXTRACTION, KNOWLEDGE_ASSIST.

disable_agent_query_logging

bool

Optional. Disable the logging of search queries sent by human agents. It can prevent those queries from being stored in answer records.

Supported features: KNOWLEDGE_SEARCH.

enable_conversation_augmented_query

bool

Optional. Enable including conversation context during query answer generation. Supported features: KNOWLEDGE_SEARCH.

suggestion_trigger_settings

SuggestionTriggerSettings

Settings of suggestion trigger.

Currently, only ARTICLE_SUGGESTION, FAQ, and DIALOGFLOW_ASSIST will use this field.

query_config

SuggestionQueryConfig

Configs of query.

conversation_model_config

ConversationModelConfig

Configs of custom conversation model.

conversation_process_config

ConversationProcessConfig

Configs for processing conversation.

SuggestionQueryConfig

Config for suggestion query.

Fields
max_results

int32

Maximum number of results to return. Currently, if unset, this defaults to 10, and the maximum allowed value is 20.

confidence_threshold

float

Confidence threshold of query result.

Agent Assist gives each suggestion a score in the range [0.0, 1.0], based on the relevance between the suggestion and the current conversation context. A score of 0.0 has no relevance, while a score of 1.0 has high relevance. Only suggestions with a score greater than or equal to the value of this field are included in the results.

For a baseline model (the default), the recommended value is in the range [0.05, 0.1].

For a custom model, there is no recommended value. Tune this value by starting from a very low value and slowly increasing until you have desired results.

If this field is not set, it defaults to 0.0, which means that all suggestions are returned.

Supported features: ARTICLE_SUGGESTION, FAQ, SMART_REPLY, SMART_COMPOSE, KNOWLEDGE_SEARCH, KNOWLEDGE_ASSIST, ENTITY_EXTRACTION.

context_filter_settings

ContextFilterSettings

Determines how recent conversation context is filtered when generating suggestions. If unspecified, no messages will be dropped.

sections

Sections

Optional. The customized sections chosen to return when requesting a summary of a conversation.

Union field query_source. Source of query. query_source can be only one of the following:
knowledge_base_query_source

KnowledgeBaseQuerySource

Query from knowledgebase. It is used by: ARTICLE_SUGGESTION, FAQ.

document_query_source

DocumentQuerySource

Query from knowledge base document. It is used by: SMART_REPLY, SMART_COMPOSE.

dialogflow_query_source

DialogflowQuerySource

Query from Dialogflow agent. It is used by DIALOGFLOW_ASSIST, ENTITY_EXTRACTION.
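
Putting the query pieces together, the sketch below shows a SuggestionQueryConfig for FAQ suggestions that uses the knowledge_base_query_source union member and follows the baseline-model confidence_threshold guidance above; the knowledge base name is a placeholder.

    # Hypothetical query config for FAQ suggestions.
    query_config = {
        "knowledge_base_query_source": {
            "knowledge_bases": [
                "projects/my-project/locations/global/knowledgeBases/my-kb",
            ],
        },
        "max_results": 3,
        "confidence_threshold": 0.05,   # recommended baseline range is roughly [0.05, 0.1]
        "context_filter_settings": {
            "drop_ivr_messages": True,
        },
    }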

ContextFilterSettings

Settings that determine how to filter recent conversation context when generating suggestions.

Fields
drop_handoff_messages

bool

If set to true, the last message from virtual agent (hand off message) and the message before it (trigger message of hand off) are dropped.

drop_virtual_agent_messages

bool

If set to true, all messages from virtual agent are dropped.

drop_ivr_messages

bool

If set to true, all messages from the IVR stage are dropped.

DialogflowQuerySource

Dialogflow source setting.

Supported feature: DIALOGFLOW_ASSIST, ENTITY_EXTRACTION.

Fields
agent

string

Required. The name of a Dialogflow virtual agent used for end-user-side intent detection and suggestion, when multiple agents are allowed in the same Dialogflow project. Format: projects/<Project ID>/locations/<Location ID>/agent.

human_agent_side_config

HumanAgentSideConfig

The Dialogflow assist configuration for human agent.

HumanAgentSideConfig

The configuration used for human agent side Dialogflow assist suggestion.

Fields
agent

string

Optional. The name of a dialogflow virtual agent used for intent detection and suggestion triggered by human agent. Format: projects/<Project ID>/locations/<Location ID>/agent.

DocumentQuerySource

Document source settings.

Supported features: SMART_REPLY, SMART_COMPOSE.

Fields
documents[]

string

Required. Knowledge documents to query from. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<KnowledgeBase ID>/documents/<Document ID>. Currently, only one document is supported.

KnowledgeBaseQuerySource

Knowledge base source settings.

Supported features: ARTICLE_SUGGESTION, FAQ.

Fields
knowledge_bases[]

string

Required. Knowledge bases to query. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>. Currently, only one knowledge base is supported.

Sections

Custom sections to return when requesting a summary of a conversation. This is only supported when baseline_model_version == '2.0'.

Supported features: CONVERSATION_SUMMARIZATION, CONVERSATION_SUMMARIZATION_VOICE.

Fields
section_types[]

SectionType

The selected sections chosen to return when requesting a summary of a conversation. A duplicate selected section will be treated as a single selected section. If section types are not provided, the default will be {SITUATION, ACTION, RESULT}.

SectionType

Selectable sections to return when requesting a summary of a conversation.

Enums
SECTION_TYPE_UNSPECIFIED Undefined section type, does not return anything.
SITUATION What the customer needs help with or has question about. Section name: "situation".
ACTION What the agent does to help the customer. Section name: "action".
RESOLUTION Result of the customer service. A single word describing the result of the conversation. Section name: "resolution".
REASON_FOR_CANCELLATION Reason for cancellation if the customer requests for a cancellation. "N/A" otherwise. Section name: "reason_for_cancellation".
CUSTOMER_SATISFACTION "Unsatisfied" or "Satisfied" depending on the customer's feelings at the end of the conversation. Section name: "customer_satisfaction".
ENTITIES Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by "entities/".
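
For example, a Sections message selecting a subset of the section types above could be sketched as follows (only meaningful when baseline_model_version is '2.0', per the note above):

    # Request situation, action, resolution, and extracted entities in the summary.
    sections = {
        "section_types": ["SITUATION", "ACTION", "RESOLUTION", "ENTITIES"],
    }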

SuggestionTriggerSettings

Settings of suggestion trigger.

Fields
no_small_talk

bool

Do not trigger if last utterance is small talk.

only_end_user

bool

Only trigger suggestion if participant role of last utterance is END_USER.

HumanAgentAssistantEvent

Output only. Represents a notification sent to Pub/Sub subscribers for agent assistant events in a specific conversation.

Fields
conversation

string

The conversation this notification refers to. Format: projects/<Project ID>/conversations/<Conversation ID>.

participant

string

The participant that the suggestion is compiled for, and which is used to call the Participants.ListSuggestions API. Format: projects/<Project ID>/conversations/<Conversation ID>/participants/<Participant ID>. It will not be set in legacy workflows. See HumanAgentAssistantConfig.name for more information.

suggestion_results[]

SuggestionResult

The suggestion results payload that this notification refers to. It will only be set when HumanAgentAssistantConfig.SuggestionConfig.group_suggestion_responses is set to true.

HumanAgentHandoffConfig

Defines the hand off to a live agent, typically specifying which external agent service provider to connect to a conversation.

Currently, this feature is not generally available. Please contact Google to get access.

Fields
Union field agent_service. Required. Specifies which agent service to connect for human agent handoff. agent_service can be only one of the following:
live_person_config

LivePersonConfig

Uses LivePerson (https://www.liveperson.com).

salesforce_live_agent_config

SalesforceLiveAgentConfig

Uses Salesforce Live Agent.

LivePersonConfig

Configuration specific to LivePerson (https://www.liveperson.com).

Fields
account_number

string

Required. Account number of the LivePerson account to connect. This is the account number you input at the login page.

SalesforceLiveAgentConfig

Configuration specific to Salesforce Live Agent.

Fields
organization_id

string

Required. The organization ID of the Salesforce account.

deployment_id

string

Required. Live Agent deployment ID.

button_id

string

Required. Live Agent chat button ID.

endpoint_domain

string

Required. Domain of the Live Agent endpoint for this agent. You can find the endpoint URL in the Live Agent settings page. For example, if the URL has the form https://d.la4-c2-phx.salesforceliveagent.com/..., you should fill in d.la4-c2-phx.salesforceliveagent.com.

ImportAgentRequest

The request message for Agents.ImportAgent.

Fields
parent

string

Required. The project that the agent to import is associated with. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.import
Union field agent. Required. The agent to import. agent can be only one of the following:
agent_uri

string

The URI to a Google Cloud Storage file containing the agent to import. Note: The URI must start with "gs://".

Dialogflow performs a read operation for the Cloud Storage object on the caller's behalf, so your request authentication must have read permissions for the object. For more information, see Dialogflow access control.

agent_content

bytes

Zip compressed raw byte content for agent.

ImportDocumentTemplate

The template used for importing documents.

Fields
mime_type

string

Required. The MIME type of the document.

knowledge_types[]

KnowledgeType

Required. The knowledge type of document content.

metadata

map<string, string>

Metadata for the document. The metadata supports arbitrary key-value pairs. Suggested use cases include storing a document's title, an external URL distinct from the document's content_uri, etc. The max size of a key or a value of the metadata is 1024 bytes.

ImportDocumentsRequest

Request message for Documents.ImportDocuments.

Fields
parent

string

Required. The knowledge base to import documents into. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.documents.create
document_template

ImportDocumentTemplate

Required. Document template used for importing all the documents.

import_gcs_custom_metadata

bool

Whether to import custom metadata from Google Cloud Storage. Only valid when the document source is Google Cloud Storage URI.

Union field source. Required. The source to use for importing documents.

If the source captures multiple objects, then multiple documents will be created, one corresponding to each object, and all of these documents will be created using the same document template.

Dialogflow supports up to 350 documents in each request. If you try to import more, Dialogflow will return an error. source can be only one of the following:

gcs_source

GcsSources

Optional. The Google Cloud Storage location for the documents. The path can include a wildcard.

These URIs may have the forms gs://<bucket-name>/<object-name> or gs://<bucket-name>/<object-path>/*.<extension>.
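
For instance, an ImportDocumentsRequest that imports every CSV object under a Cloud Storage prefix as FAQ documents could be sketched as follows; the knowledge base, bucket, and metadata values are placeholders.

    # All matched objects are created with the same document template.
    import_request = {
        "parent": "projects/my-project/locations/global/knowledgeBases/my-kb",
        "document_template": {
            "mime_type": "text/csv",
            "knowledge_types": ["FAQ"],
            "metadata": {"source_batch": "2024-q1"},
        },
        "gcs_source": {
            "uris": ["gs://my-bucket/faqs/*.csv"],
        },
    }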

ImportDocumentsResponse

Response message for Documents.ImportDocuments.

Fields
warnings[]

Status

Includes details about skipped documents or any other warnings.

InputAudioConfig

Instructs the speech recognizer on how to process the audio content.

Fields
audio_encoding

AudioEncoding

Required. Audio encoding of the audio content to process.

sample_rate_hertz

int32

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

language_code

string

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

enable_word_info

bool

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

phrase_hints[]
(deprecated)

string

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

speech_contexts[]

SpeechContext

Context information to assist speech recognition.

See the Cloud Speech documentation for more details.

model

string

Optional. Which Speech model to select for the given request. For more information, see Speech models.

model_variant

SpeechModelVariant

Which variant of the Speech model to use.

single_utterance

bool

If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.

disable_no_speech_recognized_event

bool

Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger the NO_SPEECH_RECOGNIZED event to the Dialogflow agent.

barge_in_config

BargeInConfig

Configuration of barge-in behavior during the streaming of input audio.

enable_automatic_punctuation

bool

Enable automatic punctuation option at the speech backend.

opt_out_conformer_model_migration

bool

If true, the request will opt out of the STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow ES Speech model migration.
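
A representative InputAudioConfig for a streaming query might look like the sketch below; the encoding value and SpeechContext shape are not defined in this excerpt and are assumed from the v2beta1 reference, and the phrase list is illustrative.

    # Hypothetical audio config for 16 kHz linear PCM input.
    input_audio_config = {
        "audio_encoding": "AUDIO_ENCODING_LINEAR_16",
        "sample_rate_hertz": 16000,
        "language_code": "en-US",
        "enable_word_info": True,
        "single_utterance": True,   # relevant only for streaming methods
        "speech_contexts": [
            {"phrases": ["scallions", "green onions"], "boost": 10.0},
        ],
    }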

InputTextConfig

Defines the language used in the input text.

Fields
language_code

string

Required. The language of this conversational query. See Language Support for a list of the currently supported language codes.

Intent

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification.

For more information, see the intent guide.

Fields
name

string

Optional. The unique identifier of this intent. Required for Intents.UpdateIntent and Intents.BatchUpdateIntents methods. Supported formats:

  • projects/<Project ID>/agent/intents/<Intent ID>
  • projects/<Project ID>/locations/<Location ID>/agent/intents/<Intent ID>
display_name

string

Required. The name of this intent.

webhook_state

WebhookState

Optional. Indicates whether webhooks are enabled for the intent.

priority

int32

Optional. The priority of this intent. Higher numbers represent higher priorities.

  • If the supplied value is unspecified or 0, the service translates the value to 500,000, which corresponds to the Normal priority in the console.
  • If the supplied value is negative, the intent is ignored in runtime detect intent requests.
is_fallback

bool

Optional. Indicates whether this is a fallback intent.

ml_enabled
(deprecated)

bool

Optional. Indicates whether Machine Learning is enabled for the intent. Note: If ml_enabled setting is set to false, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off. DEPRECATED! Please use ml_disabled field instead. NOTE: If both ml_enabled and ml_disabled are either not set or false, then the default value is determined as follows:

  • Before April 15th, 2018 the default is: ml_enabled = false / ml_disabled = true.
  • After April 15th, 2018 the default is: ml_enabled = true / ml_disabled = false.
ml_disabled

bool

Optional. Indicates whether Machine Learning is disabled for the intent. Note: If ml_disabled setting is set to true, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off.

live_agent_handoff

bool

Optional. Indicates that a live agent should be brought in to handle the interaction with the user. In most cases, when you set this flag to true, you would also want to set end_interaction to true as well. Default is false.

end_interaction

bool

Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false.

input_context_names[]

string

Optional. The list of context names required for this intent to be triggered. Formats:

  • projects/<Project ID>/agent/sessions/-/contexts/<Context ID>
  • projects/<Project ID>/locations/<Location ID>/agent/sessions/-/contexts/<Context ID>
events[]

string

Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent. Event names are limited to 150 characters.

training_phrases[]

TrainingPhrase

Optional. The collection of examples that the agent is trained on.

action

string

Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.

output_contexts[]

Context

Optional. The collection of contexts that are activated when the intent is matched. Context messages in this collection should not set the parameters field. Setting the lifespan_count to 0 will reset the context when the intent is matched. Format: projects/<Project ID>/agent/sessions/-/contexts/<Context ID>.

reset_contexts

bool

Optional. Indicates whether to delete all contexts in the current session when this intent is matched.

parameters[]

Parameter

Optional. The collection of parameters associated with the intent.

messages[]

Message

Optional. The collection of rich messages corresponding to the Response field in the Dialogflow console.

default_response_platforms[]

Platform

Optional. The list of platforms for which the first responses will be copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).

root_followup_intent_name

string

Output only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent.

Format: projects/<Project ID>/agent/intents/<Intent ID>.

parent_followup_intent_name

string

Optional. The unique identifier of the parent intent in the chain of followup intents. You can set this field when creating an intent, for example with CreateIntent or BatchUpdateIntents, in order to make this intent a followup intent.

It identifies the parent followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

followup_intent_info[]

FollowupIntentInfo

Output only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.
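
A minimal Intent could be sketched as below; the TrainingPhrase and Text message shapes are not defined in this excerpt and are assumed from the v2beta1 API, and all names and phrases are placeholders.

    # Hypothetical intent with one training phrase and one text response.
    order_status_intent = {
        "display_name": "order.status",
        "priority": 500000,   # corresponds to Normal priority in the console
        "training_phrases": [
            {"type": "EXAMPLE", "parts": [{"text": "where is my order"}]},
        ],
        "messages": [
            {"text": {"text": ["Let me look up your order."]}},
        ],
    }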

FollowupIntentInfo

Represents a single followup intent in the chain.

Fields
followup_intent_name

string

The unique identifier of the followup intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

parent_followup_intent_name

string

The unique identifier of the followup intent's parent. Format: projects/<Project ID>/agent/intents/<Intent ID>.

Message

Corresponds to the Response field in the Dialogflow console.

Fields
platform

Platform

Optional. The platform that this message is intended for.

Union field message. Required. The rich response message. message can be only one of the following:
text

Text

Returns a text response.

image

Image

Displays an image.

quick_replies

QuickReplies

Displays quick replies.

card

Card

Displays a card.

payload

Struct

A custom platform-specific response.

simple_responses

SimpleResponses

Returns a voice or text-only response for Actions on Google.

basic_card

BasicCard

Displays a basic card for Actions on Google.

suggestions

Suggestions

Displays suggestion chips for Actions on Google.

list_select

ListSelect

Displays a list card for Actions on Google.

carousel_select

CarouselSelect

Displays a carousel card for Actions on Google.

telephony_play_audio

TelephonyPlayAudio

Plays audio from a file in Telephony Gateway.

telephony_synthesize_speech

TelephonySynthesizeSpeech

Synthesizes speech in Telephony Gateway.

telephony_transfer_call

TelephonyTransferCall

Transfers the call in Telephony Gateway.

rbm_text

RbmText

Rich Business Messaging (RBM) text response.

RBM allows businesses to send enriched and branded versions of SMS. See https://jibe.google.com/business-messaging.

rbm_standalone_rich_card

RbmStandaloneCard

Standalone Rich Business Messaging (RBM) rich card response.

table_card

TableCard

Table card for Actions on Google.

media_content

MediaContent

The media content card for Actions on Google.

BasicCard

The basic card message. Useful for displaying information.

Fields
title

string

Optional. The title of the card.

subtitle

string

Optional. The subtitle of the card.

formatted_text

string

Required, unless image is present. The body text of the card.

image

Image

Optional. The image for the card.

buttons[]

Button

Optional. The collection of card buttons.

Button

The button object that appears at the bottom of a card.

Fields
title

string

Required. The title of the button.

open_uri_action

OpenUriAction

Required. Action to take when a user taps on the button.

OpenUriAction

Opens the given URI.

Fields
uri

string

Required. The HTTP or HTTPS scheme URI.

BrowseCarouselCard

Browse Carousel Card for Actions on Google. https://developers.google.com/actions/assistant/responses#browsing_carousel

Fields
items[]

BrowseCarouselCardItem

Required. List of items in the Browse Carousel Card. Minimum of two items, maximum of ten.

image_display_options

ImageDisplayOptions

Optional. Settings for displaying the image. Applies to every image in items.

BrowseCarouselCardItem

Browsing carousel tile

Fields
open_uri_action

OpenUrlAction

Required. Action to present to the user.

title

string

Required. Title of the carousel item. Maximum of two lines of text.

description

string

Optional. Description of the carousel item. Maximum of four lines of text.

image

Image

Optional. Hero image for the carousel item.

footer

string

Optional. Text that appears at the bottom of the Browse Carousel Card. Maximum of one line of text.

OpenUrlAction

Actions on Google action to open a given url.

Fields
url

string

Required. URL

url_type_hint

UrlTypeHint

Optional. Specifies the type of viewer that is used when opening the URL. Defaults to opening via web browser.

UrlTypeHint

Type of the URI.

Enums
URL_TYPE_HINT_UNSPECIFIED Unspecified
AMP_ACTION URL would be an AMP action.
AMP_CONTENT URL that points directly to AMP content, or to a canonical URL which refers to AMP content via <link rel="amphtml">.

ImageDisplayOptions

Image display options for Actions on Google. This should be used for when the image's aspect ratio does not match the image container's aspect ratio.

Enums
IMAGE_DISPLAY_OPTIONS_UNSPECIFIED Fill the gaps between the image and the image container with gray bars.
GRAY Fill the gaps between the image and the image container with gray bars.
WHITE Fill the gaps between the image and the image container with white bars.
CROPPED Image is scaled such that the image width and height match or exceed the container dimensions. This may crop the top and bottom of the image if the scaled image height is greater than the container height, or crop the left and right of the image if the scaled image width is greater than the container width. This is similar to "Zoom Mode" on a widescreen TV when playing a 4:3 video.
BLURRED_BACKGROUND Pad the gaps between image and image frame with a blurred copy of the same image.

Card

The card response message.

Fields
title

string

Optional. The title of the card.

subtitle

string

Optional. The subtitle of the card.

image_uri

string

Optional. The public URI to an image file for the card.

buttons[]

Button

Optional. The collection of card buttons.

Button

Optional. Contains information about a button.

Fields
text

string

Optional. The text to show on the button.

postback

string

Optional. The text to send back to the Dialogflow API or a URI to open.

CarouselSelect

The card for presenting a carousel of options to select from.

Fields
items[]

Item

Required. Carousel items.

Item

An item in the carousel.

Fields
info

SelectItemInfo

Required. Additional info about the option item.

title

string

Required. Title of the carousel item.

description

string

Optional. The body text of the card.

image

Image

Optional. The image to display.

ColumnProperties

Column properties for TableCard.

Fields
header

string

Required. Column heading.

horizontal_alignment

HorizontalAlignment

Optional. Defines text alignment for all cells in this column.

HorizontalAlignment

Text alignments within a cell.

Enums
HORIZONTAL_ALIGNMENT_UNSPECIFIED Text is aligned to the leading edge of the column.
LEADING Text is aligned to the leading edge of the column.
CENTER Text is centered in the column.
TRAILING Text is aligned to the trailing edge of the column.

Image

The image response message.

Fields
image_uri

string

Optional. The public URI to an image file.

accessibility_text

string

A text description of the image to be used for accessibility, e.g., screen readers. Required if image_uri is set for CarouselSelect.

LinkOutSuggestion

The suggestion chip message that allows the user to jump out to the app or website associated with this agent.

Fields
destination_name

string

Required. The name of the app or site this chip is linking to.

uri

string

Required. The URI of the app or site to open when the user taps the suggestion chip.

ListSelect

The card for presenting a list of options to select from.

Fields
title

string

Optional. The overall title of the list.

items[]

Item

Required. List items.

subtitle

string

Optional. Subtitle of the list.

Item

An item in the list.

Fields
info

SelectItemInfo

Required. Additional information about this option.

title

string

Required. The title of the list item.

description

string

Optional. The main text describing the item.

image

Image

Optional. The image to display.

MediaContent

The media content card for Actions on Google.

Fields
media_type

ResponseMediaType

Optional. What type of media the content is (i.e., "audio").

media_objects[]

ResponseMediaObject

Required. List of media objects.

ResponseMediaObject

Response media object for media content card.

Fields
name

string

Required. Name of media card.

description

string

Optional. Description of media card.

content_url

string

Required. Url where the media is stored.

Union field image. Image to show with the media card. image can be only one of the following:
large_image

Image

Optional. Image to display above media content.

icon

Image

Optional. Icon to display above media content.

ResponseMediaType

Format of response media type.

Enums
RESPONSE_MEDIA_TYPE_UNSPECIFIED Unspecified.
AUDIO Response media type is audio.
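
For illustration, a MediaContent message carrying a single audio object might be sketched as follows. The names and URLs are hypothetical.

{
  "media_type": "AUDIO",
  "media_objects": [
    {
      "name": "Episode 12",
      "description": "Weekly product update",
      "content_url": "https://example.com/audio/episode12.mp3",
      "icon": {
        "image_uri": "https://example.com/icon.png",
        "accessibility_text": "Podcast icon"
      }
    }
  ]
}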

Platform

Represents different platforms that a rich message can be intended for.

Enums
PLATFORM_UNSPECIFIED Not specified.
FACEBOOK Facebook.
SLACK Slack.
TELEGRAM Telegram.
KIK Kik.
SKYPE Skype.
LINE Line.
VIBER Viber.
ACTIONS_ON_GOOGLE Google Assistant See Dialogflow webhook format
TELEPHONY Telephony Gateway.
GOOGLE_HANGOUTS Google Hangouts.

QuickReplies

The quick replies response message.

Fields
title

string

Optional. The title of the collection of quick replies.

quick_replies[]

string

Optional. The collection of quick replies.

RbmCardContent

Rich Business Messaging (RBM) Card content

Fields
title

string

Optional. Title of the card (at most 200 bytes).

At least one of the title, description or media must be set.

description

string

Optional. Description of the card (at most 2000 bytes).

At least one of the title, description or media must be set.

media

RbmMedia

Optional. Media (image, GIF, or video) to include in the card. However, at least one of the title, description, or media must be set.

suggestions[]

RbmSuggestion

Optional. List of suggestions to include in the card.

RbmMedia

Rich Business Messaging (RBM) Media displayed in Cards. The following media-types are currently supported:

Image Types

  • image/jpeg
  • image/jpg
  • image/gif
  • image/png

Video Types

  • video/h263
  • video/m4v
  • video/mp4
  • video/mpeg
  • video/mpeg4
  • video/webm
Fields
file_uri

string

Required. Publicly reachable URI of the file. The RBM platform determines the MIME type of the file from the content-type field in the HTTP headers when the platform fetches the file. The content-type field must be present and accurate in the HTTP response from the URL.

thumbnail_uri

string

Optional. Publicly reachable URI of the thumbnail. If you don't provide a thumbnail URI, the RBM platform displays a blank placeholder thumbnail until the user's device downloads the file. Depending on the user's setting, the file may not download automatically and may require the user to tap a download button.

height

Height

Required for cards with vertical orientation. The height of the media within a rich card with a vertical layout. For a standalone card with horizontal layout, height is not customizable, and this field is ignored.

Height

Media height

Enums
HEIGHT_UNSPECIFIED Not specified.
SHORT 112 DP.
MEDIUM 168 DP.
TALL 264 DP. Not available for rich card carousels when the card width is set to small.

RbmCarouselCard

Carousel Rich Business Messaging (RBM) rich card.

Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions.

If you want to show a single card with more control over the layout, please use RbmStandaloneCard instead.

Fields
card_width

CardWidth

Required. The width of the cards in the carousel.

card_contents[]

RbmCardContent

Required. The cards in the carousel. A carousel must have at least 2 cards and at most 10.

CardWidth

The width of the cards in the carousel.

Enums
CARD_WIDTH_UNSPECIFIED Not specified.
SMALL 120 DP. Note that tall media cannot be used.
MEDIUM 232 DP.

RbmStandaloneCard

Standalone Rich Business Messaging (RBM) rich card.

Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions.

You can group multiple rich cards into one using RbmCarouselCard but carousel cards will give you less control over the card layout.

Fields
card_orientation

CardOrientation

Required. Orientation of the card.

thumbnail_image_alignment

ThumbnailImageAlignment

Required if orientation is horizontal. Image preview alignment for standalone cards with horizontal layout.

card_content

RbmCardContent

Required. Card content.

CardOrientation

Orientation of the card.

Enums
CARD_ORIENTATION_UNSPECIFIED Not specified.
HORIZONTAL Horizontal layout.
VERTICAL Vertical layout.

ThumbnailImageAlignment

Thumbnail preview alignment for standalone cards with horizontal layout.

Enums
THUMBNAIL_IMAGE_ALIGNMENT_UNSPECIFIED Not specified.
LEFT Thumbnail preview is left-aligned.
RIGHT Thumbnail preview is right-aligned.
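
To illustrate how RbmStandaloneCard, RbmCardContent, RbmMedia and RbmSuggestion fit together, a vertical standalone card might be sketched as follows. All values are hypothetical; thumbnail_image_alignment is omitted because it is only required for horizontal cards.

{
  "card_orientation": "VERTICAL",
  "card_content": {
    "title": "Boots",
    "description": "Waterproof hiking boots",
    "media": {
      "file_uri": "https://example.com/boots.jpg",
      "height": "MEDIUM"
    },
    "suggestions": [
      { "reply": { "text": "Tell me more", "postback_data": "boots_details" } }
    ]
  }
}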

RbmSuggestedAction

Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.

Fields
text

string

Text to display alongside the action.

postback_data

string

Opaque payload that Dialogflow receives in a user event when the user taps the suggested action. This data is also forwarded to the webhook to allow performing custom business logic.

Union field action. Action that needs to be triggered. action can be only one of the following:
dial

RbmSuggestedActionDial

Suggested client side action: Dial a phone number

open_url

RbmSuggestedActionOpenUri

Suggested client side action: Open a URI on device

share_location

RbmSuggestedActionShareLocation

Suggested client side action: Share user location

RbmSuggestedActionDial

Opens the user's default dialer app with the specified phone number but does not dial automatically.

Fields
phone_number

string

Required. The phone number to fill in the default dialer app. This field should be in E.164 format. An example of a correctly formatted phone number: +15556767888.

RbmSuggestedActionOpenUri

Opens the user's default web browser app to the specified URI. If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.

Fields
uri

string

Required. The URI to open on the user's device.

RbmSuggestedActionShareLocation

This type has no fields.

Opens the device's location chooser so the user can pick a location to send back to the agent.

RbmSuggestedReply

Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.

Fields
text

string

Suggested reply text.

postback_data

string

Opaque payload that Dialogflow receives in a user event when the user taps the suggested reply. This data is also forwarded to the webhook to allow performing custom business logic.

RbmSuggestion

Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select/click a predefined response or perform an action (like opening a web URI).

Fields
Union field suggestion. Predefined suggested response or action for the user to choose. suggestion can be only one of the following:
reply

RbmSuggestedReply

Predefined replies for the user to select instead of typing.

action

RbmSuggestedAction

Predefined client-side actions that the user can choose.

RbmText

Rich Business Messaging (RBM) text response with suggestions.

Fields
text

string

Required. Text sent and displayed to the user.

rbm_suggestion[]

RbmSuggestion

Optional. One or more suggestions to show to the user.
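
For example, an RbmText message that pairs text with one suggested reply and one suggested dial action could be sketched like this. The text and postback values are hypothetical.

{
  "text": "Would you like to book an appointment?",
  "rbm_suggestion": [
    { "reply": { "text": "Yes, book it", "postback_data": "book_yes" } },
    {
      "action": {
        "text": "Call the salon",
        "postback_data": "call_salon",
        "dial": { "phone_number": "+15556767888" }
      }
    }
  ]
}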

SelectItemInfo

Additional info about the select item for when it is triggered in a dialog.

Fields
key

string

Required. A unique key that will be sent back to the agent if this response is given.

synonyms[]

string

Optional. A list of synonyms that can also be used to trigger this item in dialog.
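
Putting ListSelect and SelectItemInfo together, a two-item list might be sketched as follows. The keys, titles, and synonyms are hypothetical.

{
  "title": "Choose a size",
  "items": [
    {
      "info": { "key": "SIZE_SMALL", "synonyms": ["small", "S"] },
      "title": "Small",
      "description": "8 oz"
    },
    {
      "info": { "key": "SIZE_LARGE", "synonyms": ["large", "L"] },
      "title": "Large",
      "description": "16 oz"
    }
  ]
}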

SimpleResponse

The simple response message containing speech or text.

Fields
text_to_speech

string

One of text_to_speech or ssml must be provided. The plain text of the speech output. Mutually exclusive with ssml.

ssml

string

One of text_to_speech or ssml must be provided. Structured spoken response to the user in the SSML format. Mutually exclusive with text_to_speech.

display_text

string

Optional. The text to display.
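
For example, a SimpleResponse that provides plain speech (and therefore omits ssml) might be sketched as follows; the wording is illustrative.

{
  "text_to_speech": "Your order will arrive tomorrow.",
  "display_text": "Order arrives tomorrow."
}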

SimpleResponses

The collection of simple response candidates. This message in QueryResult.fulfillment_messages and WebhookResponse.fulfillment_messages should contain only one SimpleResponse.

Fields
simple_responses[]

SimpleResponse

Required. The list of simple responses.

Suggestion

The suggestion chip message that the user can tap to quickly post a reply to the conversation.

Fields
title

string

Required. The text shown in the suggestion chip.

Suggestions

The collection of suggestions.

Fields
suggestions[]

Suggestion

Required. The list of suggested replies.

TableCard

Table card for Actions on Google.

Fields
title

string

Required. Title of the card.

subtitle

string

Optional. Subtitle to the title.

image

Image

Optional. Image which should be displayed on the card.

column_properties[]

ColumnProperties

Optional. Display properties for the columns in this table.

rows[]

TableCardRow

Optional. Rows in this table of data.

buttons[]

Button

Optional. List of buttons for the card.

TableCardCell

Cell of TableCardRow.

Fields
text

string

Required. Text in this cell.

TableCardRow

Row of TableCard.

Fields
cells[]

TableCardCell

Optional. List of cells that make up this row.

divider_after

bool

Optional. Whether to add a visual divider after this row.
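
As a sketch of how TableCard, ColumnProperties, TableCardRow, and TableCardCell relate, a two-column table might look like the following. All contents are hypothetical.

{
  "title": "Store hours",
  "column_properties": [
    { "header": "Day", "horizontal_alignment": "LEADING" },
    { "header": "Hours", "horizontal_alignment": "TRAILING" }
  ],
  "rows": [
    { "cells": [ { "text": "Mon-Fri" }, { "text": "9:00-18:00" } ], "divider_after": true },
    { "cells": [ { "text": "Sat" }, { "text": "10:00-16:00" } ], "divider_after": false }
  ]
}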

TelephonyPlayAudio

Plays audio from a file in Telephony Gateway.

Fields
audio_uri

string

Required. URI to a Google Cloud Storage object containing the audio to play, e.g., "gs://bucket/object". The object must contain a single channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz.

This object must be readable by the service-<Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com service account, where <Project Number> is the number of the Telephony Gateway project (usually the same as the Dialogflow agent project). If the Google Cloud Storage bucket is in the Telephony Gateway project, this permission is added by default when enabling the Dialogflow V2 API.

For audio from other sources, consider using the TelephonySynthesizeSpeech message with SSML.

TelephonySynthesizeSpeech

Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway.

Telephony Gateway takes the synthesizer settings from DetectIntentResponse.output_audio_config which can either be set at request-level or can come from the agent-level synthesizer config.

Fields
Union field source. Required. The source to be synthesized. source can be only one of the following:
text

string

The raw text to be synthesized.

ssml

string

The SSML to be synthesized. For more information, see SSML.
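
For example, the ssml source could carry a small SSML document such as the following; the content is illustrative only.

{
  "ssml": "<speak>Your appointment is confirmed.<break time='500ms'/> See you tomorrow.</speak>"
}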

TelephonyTransferCall

Transfers the call in Telephony Gateway.

Fields
phone_number

string

Required. The phone number to transfer the call to in E.164 format.

We currently only allow transferring to US numbers (+1xxxyyyzzzz).

Text

The text response message.

Fields
text[]

string

Optional. The collection of the agent's responses.

Parameter

Represents intent parameters.

Fields
name

string

The unique identifier of this parameter.

display_name

string

Required. The name of the parameter.

value

string

Optional. The definition of the parameter value. It can be:

  • a constant string,
  • a parameter value defined as $parameter_name,
  • an original parameter value defined as $parameter_name.original,
  • a parameter value from some context defined as #context_name.parameter_name.
default_value

string

Optional. The default value to use when the value yields an empty result. Default values can be extracted from contexts by using the following syntax: #context_name.parameter_name.

entity_type_display_name

string

Optional. The name of the entity type, prefixed with @, that describes values of the parameter. If the parameter is required, this must be provided.

mandatory

bool

Optional. Indicates whether the parameter is required. That is, whether the intent cannot be completed without collecting the parameter value.

prompts[]

string

Optional. The collection of prompts that the agent can present to the user in order to collect a value for the parameter.

is_list

bool

Optional. Indicates whether the parameter represents a list of values.
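
As an illustration of the value syntax described above, a required date parameter collected from the user might be defined as follows. The display name and prompt are hypothetical; @sys.date is a system entity type.

{
  "display_name": "appointment-date",
  "value": "$appointment-date",
  "entity_type_display_name": "@sys.date",
  "mandatory": true,
  "prompts": ["What date would you like to come in?"],
  "is_list": false
}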

TrainingPhrase

Represents an example that the agent is trained on.

Fields
name

string

Output only. The unique identifier of this training phrase.

type

Type

Required. The type of the training phrase.

parts[]

Part

Required. The ordered list of training phrase parts. The parts are concatenated in order to form the training phrase.

Note: The API does not automatically annotate training phrases like the Dialogflow Console does.

Note: Do not forget to include whitespace at part boundaries, so the training phrase is well formatted when the parts are concatenated.

If the training phrase does not need to be annotated with parameters, you just need a single part with only the Part.text field set.

If you want to annotate the training phrase, you must create multiple parts, where the fields of each part are populated in one of two ways:

  • Part.text is set to a part of the phrase that has no parameters.
  • Part.text is set to a part of the phrase that you want to annotate, and the entity_type, alias, and user_defined fields are all set.
times_added_count

int32

Optional. Indicates how many times this example was added to the intent. Each time a developer adds an existing sample by editing an intent or training, this counter is increased.
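
For example, an annotated training phrase for "Book a flight to Paris" could be expressed with two parts, only the second of which is annotated. The parameter alias is hypothetical; note the trailing whitespace in the first part so the concatenated phrase stays well formatted.

{
  "type": "EXAMPLE",
  "parts": [
    { "text": "Book a flight to " },
    {
      "text": "Paris",
      "entity_type": "@sys.geo-city",
      "alias": "destination",
      "user_defined": true
    }
  ]
}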

Part

Represents a part of a training phrase.

Fields
text

string

Required. The text for this part.

entity_type

string

Optional. The entity type name prefixed with @. This field is required for annotated parts of the training phrase.

alias

string

Optional. The parameter name for the value extracted from the annotated part of the example. This field is required for annotated parts of the training phrase.

user_defined

bool

Optional. Indicates whether the text was manually annotated. This field is set to true when the Dialogflow Console is used to manually annotate the part. When creating an annotated part with the API, you must set this to true.

Type

Represents different types of training phrases.

Enums
TYPE_UNSPECIFIED Not specified. This value should never be used.
EXAMPLE Examples do not contain @-prefixed entity type names, but example parts can be annotated with entity types.
TEMPLATE

Templates are not annotated with entity types, but they can contain @-prefixed entity type names as substrings. Note: Template mode has been deprecated. Example mode is the only supported way to create new training phrases. If you have existing training phrases in template mode, they will be removed during training and it can cause a drop in agent performance.

WebhookState

Represents the different states that webhooks can be in.

Enums
WEBHOOK_STATE_UNSPECIFIED Webhook is disabled in the agent and in the intent.
WEBHOOK_STATE_ENABLED Webhook is enabled in the agent and in the intent.
WEBHOOK_STATE_ENABLED_FOR_SLOT_FILLING Webhook is enabled in the agent and in the intent. Also, each slot filling prompt is forwarded to the webhook.

IntentBatch

This message is a wrapper around a collection of intents.

Fields
intents[]

Intent

A collection of intents.

IntentInput

Represents the intent to trigger programmatically rather than as a result of natural language processing. The intent input is only used for V3 agents.

Fields
intent

string

Required. The unique identifier of the intent in V3 agent. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/intents/<Intent ID>.

language_code

string

Required. The language of this conversational query. See Language Support for a list of the currently supported language codes.

IntentSuggestion

Represents an intent suggestion.

Fields
display_name

string

The display name of the intent.

description

string

Human-readable description for better understanding of an intent, such as its scope, content, or result. Maximum character limit: 140 characters.

Union field intent. The name of the intent. intent can be only one of the following:
intent_v2

string

The unique identifier of this intent. Format: projects/<Project ID>/locations/<Location ID>/agent/intents/<Intent ID>.

IntentView

Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.

Enums
INTENT_VIEW_UNSPECIFIED Training phrases field is not populated in the response.
INTENT_VIEW_FULL All fields are populated.

KnowledgeAnswers

Represents the result of querying a Knowledge base.

Fields
answers[]

Answer

A list of answers from Knowledge Connector.

Answer

An answer from Knowledge Connector.

Fields
source

string

Indicates which Knowledge Document this answer was extracted from. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

faq_question

string

The corresponding FAQ question if the answer was extracted from a FAQ Document, empty otherwise.

answer

string

The piece of text from the source knowledge base document that answers this conversational query.

match_confidence_level

MatchConfidenceLevel

The system's confidence level that this knowledge answer is a good match for this conversational query. NOTE: The confidence level for a given <query, answer> pair may change without notice, as it depends on models that are constantly being improved. However, it will change less frequently than the confidence score below, and should be preferred for referencing the quality of an answer.

match_confidence

float

The system's confidence score that this Knowledge answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). Note: The confidence score is likely to vary somewhat (possibly even for identical requests), as the underlying model is under constant improvement. It may be deprecated in the future. We recommend using match_confidence_level which should be generally more stable.

MatchConfidenceLevel

Represents the system's confidence that this knowledge answer is a good match for this conversational query.

Enums
MATCH_CONFIDENCE_LEVEL_UNSPECIFIED Not specified.
LOW Indicates that the confidence is low.
MEDIUM Indicates our confidence is medium.
HIGH Indicates our confidence is high.

KnowledgeBase

A knowledge base represents a collection of knowledge documents that you provide to Dialogflow. Your knowledge documents contain information that may be useful during conversations with end-users. Some Dialogflow features use knowledge bases when looking for a response to an end-user input.

For more information, see the knowledge base guide.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

Fields
name

string

The knowledge base resource name. The name must be empty when creating a knowledge base. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>.

display_name

string

Required. The display name of the knowledge base. The name must be 1024 bytes or less; otherwise, the creation request fails.

language_code

string

Language which represents the KnowledgeBase. When the KnowledgeBase is created/updated, this is populated for all non en-us languages. If not populated, the default language en-us applies.

KnowledgeOperationMetadata

Metadata in google::longrunning::Operation for Knowledge operations.

Fields
state

State

Required. Output only. The current state of this operation.

knowledge_base

string

The name of the knowledge base interacted with during the operation.

Union field operation_metadata. Additional metadata for the Knowledge operation. operation_metadata can be only one of the following:
export_operation_metadata

ExportOperationMetadata

Metadata for the Export Data Operation such as the destination of export.

State

States of the operation.

Enums
STATE_UNSPECIFIED State unspecified.
PENDING The operation has been created.
RUNNING The operation is currently running.
DONE The operation is done, either cancelled or completed.

ListAnswerRecordsRequest

Request message for AnswerRecords.ListAnswerRecords.

Fields
parent

string

Required. The project to list all answer records for in reverse chronological order. Format: projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.answerrecords.list
filter
(deprecated)

string

Optional. Filters to restrict results to specific answer records.

For more information about filtering, see API Filtering.

page_size

int32

Optional. The maximum number of records to return in a single page. The server may return fewer records than this. If unspecified, we use 10. The maximum is 100.

page_token

string

Optional. The ListAnswerRecordsResponse.next_page_token value returned from a previous list request used to continue listing on the next page.

ListAnswerRecordsResponse

Response message for AnswerRecords.ListAnswerRecords.

Fields
answer_records[]

AnswerRecord

The list of answer records.

next_page_token

string

A token to retrieve the next page of results, or empty if there are no more results. Pass this value in the ListAnswerRecordsRequest.page_token field in the subsequent call to the ListAnswerRecords method to retrieve the next page of results.

ListContextsRequest

The request message for Contexts.ListContexts.

Fields
parent

string

Required. The session to list all contexts from. Supported formats:

  • projects/<Project ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.contexts.list
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListContextsResponse

The response message for Contexts.ListContexts.

Fields
contexts[]

Context

The list of contexts. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListConversationProfilesRequest

The request message for ConversationProfiles.ListConversationProfiles.

Fields
parent

string

Required. The project to list all conversation profiles from. Format: projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.conversationProfiles.list
page_size

int32

The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

The next_page_token value returned from a previous list request.

ListConversationProfilesResponse

The response message for ConversationProfiles.ListConversationProfiles.

Fields
conversation_profiles[]

ConversationProfile

The list of project conversation profiles. There is a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListConversationsRequest

The request message for Conversations.ListConversations.

Fields
parent

string

Required. The project from which to list all conversations. Format: projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.conversations.list
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

filter

string

A filter expression that filters conversations listed in the response. In general, the expression must specify the field name, a comparison operator, and the value to use for filtering:

  • The value must be a string, a number, or a boolean.
  • The comparison operator must be either =, !=, >, or <.
  • To filter on multiple expressions, separate the expressions with AND or OR (omitting both implies AND).
  • For clarity, expressions can be enclosed in parentheses.
Only lifecycle_state can be filtered on in this way. For example, the following expression only returns COMPLETED conversations:

lifecycle_state = "COMPLETED"

For more information about filtering, see API Filtering.

ListConversationsResponse

The response message for Conversations.ListConversations.

Fields
conversations[]

Conversation

The list of conversations. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListDocumentsRequest

Request message for Documents.ListDocuments.

Fields
parent

string

Required. The knowledge base to list all documents for. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.documents.list
page_size

int32

The maximum number of items to return in a single page. By default 10 and at most 100.

page_token

string

The next_page_token value returned from a previous list request.

filter

string

The filter expression used to filter documents returned by the list method. The expression has the following syntax:

<field> <operator> <value> [AND <field> <operator> <value>] ...

The following fields and operators are supported:

  • knowledge_types with has(:) operator
  • display_name with has(:) operator
  • state with equals(=) operator

Examples:

  • "knowledge_types:FAQ" matches documents with FAQ knowledge type.
  • "display_name:customer" matches documents whose display name contains "customer".
  • "state=ACTIVE" matches documents with ACTIVE state.
  • "knowledge_types:FAQ AND state=ACTIVE" matches all active FAQ documents.

For more information about filtering, see API Filtering.

ListDocumentsResponse

Response message for Documents.ListDocuments.

Fields
documents[]

Document

The list of documents.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListEntityTypesRequest

The request message for EntityTypes.ListEntityTypes.

Fields
parent

string

Required. The agent to list all entity types from. Supported formats: - projects/<Project ID>/agent - projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.entityTypes.list
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListEntityTypesResponse

The response message for EntityTypes.ListEntityTypes.

Fields
entity_types[]

EntityType

The list of agent entity types. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListEnvironmentsRequest

The request message for Environments.ListEnvironments.

Fields
parent

string

Required. The agent to list all environments from. Format: - projects/<Project ID>/agent - projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.environments.list
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListEnvironmentsResponse

The response message for Environments.ListEnvironments.

Fields
environments[]

Environment

The list of agent environments. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListIntentsRequest

The request message for Intents.ListIntents.

Fields
parent

string

Required. The agent to list all intents from. Format: projects/<Project ID>/agent or projects/<Project ID>/locations/<Location ID>/agent.

Alternatively, you can specify the environment to list intents for. Format: projects/<Project ID>/agent/environments/<Environment ID> or projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>. Note: training phrases of the intents will not be returned for non-draft environment.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.intents.list
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

intent_view

IntentView

Optional. The resource view to apply to the returned intent.

page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListIntentsResponse

The response message for Intents.ListIntents.

Fields
intents[]

Intent

The list of agent intents. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListKnowledgeBasesRequest

Request message for KnowledgeBases.ListKnowledgeBases.

Fields
parent

string

Required. The project to list all knowledge bases for. Format: projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.knowledgeBases.list
page_size

int32

The maximum number of items to return in a single page. By default 10 and at most 100.

page_token

string

The next_page_token value returned from a previous list request.

filter

string

The filter expression used to filter knowledge bases returned by the list method. The expression has the following syntax:

<field> <operator> <value> [AND <field> <operator> <value>] ...

The following fields and operators are supported:

  • display_name with has(:) operator
  • language_code with equals(=) operator

Examples:

  • 'language_code=en-us' matches knowledge bases with en-us language code.
  • 'display_name:articles' matches knowledge bases whose display name contains "articles".
  • 'display_name:"Best Articles"' matches knowledge bases whose display name contains "Best Articles".
  • 'language_code=en-gb AND display_name=articles' matches all knowledge bases whose display name contains "articles" and whose language code is "en-gb".

Note: An empty filter string (i.e. "") is a no-op and will result in no filtering.

For more information about filtering, see API Filtering.

ListKnowledgeBasesResponse

Response message for KnowledgeBases.ListKnowledgeBases.

Fields
knowledge_bases[]

KnowledgeBase

The list of knowledge bases.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListMessagesRequest

The request message for Conversations.ListMessages.

Fields
parent

string

Required. The name of the conversation to list messages for. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.messages.list
filter

string

Optional. Filter on message fields. Currently predicates on create_time and create_time_epoch_microseconds are supported. create_time only supports milliseconds accuracy. E.g., create_time_epoch_microseconds > 1551790877964485 or create_time > "2017-01-15T01:30:15.01Z".

For more information about filtering, see API Filtering.

page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListMessagesResponse

The response message for Conversations.ListMessages.

Fields
messages[]

Message

Required. The list of messages. There will be a maximum number of items returned based on the page_size field in the request. messages is sorted by create_time in descending order.

next_page_token

string

Optional. Token to retrieve the next page of results, or empty if there are no more results in the list.

ListParticipantsRequest

The request message for Participants.ListParticipants.

Fields
parent

string

Required. The conversation to list all participants from. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.participants.list
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListParticipantsResponse

The response message for Participants.ListParticipants.

Fields
participants[]

Participant

The list of participants. There is a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results or empty if there are no more results in the list.

ListSessionEntityTypesRequest

The request message for SessionEntityTypes.ListSessionEntityTypes.

Fields
parent

string

Required. The session to list all session entity types from. Supported formats:

  • projects/<Project ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.sessionEntityTypes.list
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListSessionEntityTypesResponse

The response message for SessionEntityTypes.ListSessionEntityTypes.

Fields
session_entity_types[]

SessionEntityType

The list of session entity types. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListSuggestionsRequest

The request message for Participants.ListSuggestions.

Fields
parent

string

Required. The name of the participant to fetch suggestions for. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.suggestions.list
page_size

int32

Optional. The maximum number of items to return in a single page. The default value is 100; the maximum value is 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

filter

string

Optional. Filter on suggestions fields. Currently predicates on create_time and create_time_epoch_microseconds are supported. create_time only supports milliseconds accuracy. E.g., create_time_epoch_microseconds > 1551790877964485 or create_time > "2017-01-15T01:30:15.01Z".

For more information about filtering, see API Filtering.

ListSuggestionsResponse

The response message for Participants.ListSuggestions.

Fields
suggestions[]

Suggestion

Required. The list of suggestions. There will be a maximum number of items returned based on the page_size field in the request. suggestions is sorted by create_time in descending order.

next_page_token

string

Optional. Token to retrieve the next page of results or empty if there are no more results in the list.

ListVersionsRequest

The request message for Versions.ListVersions.

Fields
parent

string

Required. The agent to list all versions from. Supported formats: - projects/<Project ID>/agent - projects/<Project ID>/locations/<Location ID>/agent

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.versions.list
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

ListVersionsResponse

The response message for Versions.ListVersions.

Fields
versions[]

Version

The list of agent versions. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.

LoggingConfig

Defines logging behavior for conversation lifecycle events.

Fields
enable_stackdriver_logging

bool

Whether to log conversation events like CONVERSATION_STARTED to Stackdriver in the conversation project as JSON format ConversationEvent protos.

Message

Represents a message posted into a conversation.

Fields
name

string

Optional. The unique identifier of the message. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

content

string

Required. The message content.

language_code

string

Optional. The message language. This should be a BCP-47 language tag. Example: "en-US".

participant

string

Output only. The participant that sends this message.

participant_role

Role

Output only. The role of the participant.

create_time

Timestamp

Output only. The time when the message was created in Contact Center AI.

send_time

Timestamp

Optional. The time when the message was sent.

message_annotation

MessageAnnotation

Output only. The annotation for the message.

sentiment_analysis

SentimentAnalysisResult

Output only. The sentiment analysis result for the message.

MessageAnnotation

Represents the result of annotation for the message.

Fields
parts[]

AnnotatedMessagePart

Optional. The collection of annotated message parts ordered by their position in the message. You can recover the annotated message by concatenating [AnnotatedMessagePart.text].

contain_entities

bool

Required. Indicates whether the text message contains entities.

NotificationConfig

Defines notification behavior.

Fields
topic

string

Name of the Pub/Sub topic to publish conversation events like CONVERSATION_STARTED as serialized ConversationEvent protos.

For telephony integration to receive notification, make sure either this topic is in the same project as the conversation or you grant service-<Conversation Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com the Dialogflow Service Agent role in the topic project.

For chat integration to receive notification, make sure API caller has been granted the Dialogflow Service Agent role for the topic.

Format: projects/<Project ID>/locations/<Location ID>/topics/<Topic ID>.

message_format

MessageFormat

Format of message.

MessageFormat

Format of cloud pub/sub message.

Enums
MESSAGE_FORMAT_UNSPECIFIED If it is unspecified, PROTO will be used.
PROTO Pub/Sub message will be serialized proto.
JSON Pub/Sub message will be json.

OriginalDetectIntentRequest

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Fields
source

string

The source of this request, e.g., google, facebook, slack. It is set by Dialogflow-owned servers.

version

string

Optional. The version of the protocol used for this request. This field is AoG-specific.

payload

Struct

Optional. This field is set to the value of the QueryParameters.payload field passed in the request. Some integrations that query a Dialogflow agent may provide additional information in the payload.

In particular, for the Dialogflow Phone Gateway integration, this field has the form:

{
 "telephony": {
   "caller_id": "+18558363987"
 }
}

Note: The caller ID field (caller_id) will be redacted for Trial Edition agents and populated with the caller ID in E.164 format for Essentials Edition agents.

OutputAudio

Represents the natural language speech audio to be played to the end user.

Fields
config

OutputAudioConfig

Required. Instructs the speech synthesizer how to generate the speech audio.

audio

bytes

Required. The natural language speech audio.

OutputAudioConfig

Instructs the speech synthesizer how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.

Fields
audio_encoding

OutputAudioEncoding

Required. Audio encoding of the synthesized audio content.

sample_rate_hertz

int32

The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).

synthesize_speech_config

SynthesizeSpeechConfig

Configuration of how speech should be synthesized.
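
For illustration, an OutputAudioConfig that requests 16 kHz linear PCM could be sketched as follows; the sample rate is an arbitrary example value.

{
  "audio_encoding": "OUTPUT_AUDIO_ENCODING_LINEAR_16",
  "sample_rate_hertz": 16000
}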

OutputAudioEncoding

Audio encoding of the output audio format in Text-To-Speech.

Enums
OUTPUT_AUDIO_ENCODING_UNSPECIFIED Not specified.
OUTPUT_AUDIO_ENCODING_LINEAR_16 Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header.
OUTPUT_AUDIO_ENCODING_MP3 MP3 audio at 32kbps.
OUTPUT_AUDIO_ENCODING_MP3_64_KBPS MP3 audio at 64kbps.
OUTPUT_AUDIO_ENCODING_OGG_OPUS Opus encoded audio wrapped in an ogg container. The result will be a file which can be played natively on Android, and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate.
OUTPUT_AUDIO_ENCODING_MULAW 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.

Participant

Represents a conversation participant (human agent, virtual agent, end-user).

Fields
name

string

Optional. The unique identifier of this participant. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

role

Role

Immutable. The role this participant plays in the conversation. This field must be set during participant creation and is then immutable.

obfuscated_external_user_id

string

Optional. Obfuscated user id that should be associated with the created participant.

You can specify a user id as follows:

  1. If you set this field in CreateParticipantRequest or UpdateParticipantRequest, Dialogflow adds the obfuscated user id with the participant.

  2. If you set this field in AnalyzeContent or StreamingAnalyzeContent, Dialogflow will update Participant.obfuscated_external_user_id.

Dialogflow uses this user id for billing and measurement. If a user with the same obfuscated_external_user_id is created in a later conversation, Dialogflow will know it's the same user.

Dialogflow also uses this user id for Agent Assist suggestion personalization. For example, Dialogflow can use it to provide personalized smart reply suggestions for this user.

Note:

  • Please never pass raw user ids to Dialogflow. Always obfuscate your user id first.
  • Dialogflow only accepts a UTF-8 encoded string, e.g., a hex digest of a hash function like SHA-512.
  • The length of the user id must be <= 256 characters.
documents_metadata_filters

map<string, string>

Optional. Key-value filters on the metadata of documents returned by article suggestion. If specified, article suggestion only returns suggested documents that match all filters in their Document.metadata. Multiple values for a metadata key should be concatenated by comma. For example, filters to match all documents that have 'US' or 'CA' in their market metadata values and 'agent' in their user metadata values will be

documents_metadata_filters {
  key: "market"
  value: "US,CA"
}
documents_metadata_filters {
  key: "user"
  value: "agent"
}

Role

Enumeration of the roles a participant can play in a conversation.

Enums
ROLE_UNSPECIFIED Participant role not set.
HUMAN_AGENT Participant is a human agent.
AUTOMATED_AGENT Participant is an automated agent, such as a Dialogflow agent.
END_USER Participant is an end user that has called or chatted with Dialogflow services.

QueryInput

Represents the query input. It can contain either:

  1. An audio config which instructs the speech recognizer how to process the speech audio.

  2. A conversational query in the form of text.

  3. An event that specifies which intent to trigger.

Fields
Union field input. Required. The input specification. input can be only one of the following:
audio_config

InputAudioConfig

Instructs the speech recognizer how to process the speech audio.

text

TextInput

The natural language text to be processed.

event

EventInput

The event to be processed.

dtmf

TelephonyDtmfEvents

The DTMF digits used to invoke intent and fill in parameter value.
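
For example, a QueryInput carrying a plain text query might be sketched as follows; the query text is hypothetical.

{
  "text": {
    "text": "I want to order a large pizza",
    "language_code": "en-US"
  }
}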

QueryParameters

Represents the parameters of the conversational query.

Fields
time_zone

string

The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used.

geo_location

LatLng

The geo location of this conversational query.

contexts[]

Context

The collection of contexts to be activated before this query is executed.

reset_contexts

bool

Specifies whether to delete all contexts in the current session before the new ones are activated.

session_entity_types[]

SessionEntityType

Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query.

payload

Struct

This field can be used to pass custom data to your webhook. Arbitrary JSON objects are supported. If supplied, the value is used to populate the WebhookRequest.original_detect_intent_request.payload field sent to your webhook.

knowledge_base_names[]

string

KnowledgeBases to get alternative results from. If not set, the KnowledgeBases enabled in the agent (through UI) will be used. Format: projects/<Project ID>/knowledgeBases/<Knowledge Base ID>.

sentiment_analysis_request_config

SentimentAnalysisRequestConfig

Configures the type of sentiment analysis to perform. If not provided, sentiment analysis is not performed. Note: Sentiment Analysis is only currently available for Essentials Edition agents.

sub_agents[]

SubAgent

For mega agent query, directly specify which sub agents to query. If any specified sub agent is not linked to the mega agent, an error will be returned. If empty, Dialogflow will decide which sub agents to query. If specified for a non-mega-agent query, will be silently ignored.

webhook_headers

map<string, string>

This field can be used to pass HTTP headers for a webhook call. These headers will be sent to webhook along with the headers that have been configured through Dialogflow web console. The headers defined within this field will overwrite the headers configured through Dialogflow console if there is a conflict. Header names are case-insensitive. Google's specified headers are not allowed. Including: "Host", "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc.

platform

string

The platform of the virtual agent response messages.

If not empty, only emits messages from this platform in the response. Valid values are the enum names of platform.
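
As a sketch, query parameters that set a time zone and pass a custom webhook payload might look like this. The payload contents are hypothetical.

{
  "time_zone": "America/New_York",
  "reset_contexts": false,
  "payload": {
    "store_id": "store-42"
  }
}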

QueryResult

Represents the result of conversational query or event processing.

Fields
query_text

string

The original conversational query text:

  • If natural language text was provided as input, query_text contains a copy of the input.
  • If natural language speech audio was provided as input, query_text contains the speech recognition result. If speech recognizer produced multiple alternatives, a particular one is picked.
  • If automatic spell correction is enabled, query_text will contain the corrected user input.
language_code

string

The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes.

speech_recognition_confidence

float

The Speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is not guaranteed to be accurate or set. In particular this field isn't set for StreamingDetectIntent since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult.

action

string

The action name from the matched intent.

parameters

Struct

The collection of extracted parameters.

Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: If parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
  • MapValue value: If parameter's entity type is a composite entity then use map from composite entity property names to property values, otherwise, use parameter value.
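
For example, with a simple date parameter and a composite address parameter, the parameters struct might look like the following. The parameter names and values are hypothetical.

{
  "appointment-date": "2017-01-15",
  "delivery-address": {
    "street-address": "1600 Amphitheatre Pkwy",
    "city": "Mountain View"
  }
}
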
all_required_params_present

bool

This field is set to:

  • false if the matched intent has required parameters and not all of the required parameter values have been collected.
  • true if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
cancels_slot_filling

bool

Indicates whether the conversational query triggers a cancellation for slot filling. For more information, see the cancel slot filling documentation.

fulfillment_text

string

The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, fulfillment_messages should be preferred.

fulfillment_messages[]

Message

The collection of rich messages to present to the user.

webhook_source

string

If the query was fulfilled by a webhook call, this field is set to the value of the source field returned in the webhook response.

webhook_payload

Struct

If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response.

output_contexts[]

Context

The collection of output contexts. If applicable, output_contexts.parameters contains entries with name <parameter name>.original containing the original parameter values before the query.

intent

Intent

The intent that matched the conversational query. Some, not all fields are filled in this message, including but not limited to: name, display_name, end_interaction and is_fallback.

intent_detection_confidence

float

The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purpose only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are multiple knowledge_answers messages, this value is set to the greatest knowledgeAnswers.match_confidence value in the list.

diagnostic_info

Struct

Free-form diagnostic information for the associated detect intent request. The fields of this data can change without notice, so you should not write code that depends on its structure. The data may contain:

  • webhook call latency
  • webhook errors
sentiment_analysis_result

SentimentAnalysisResult

The sentiment analysis result, which depends on the sentiment_analysis_request_config specified in the request.

knowledge_answers

KnowledgeAnswers

The result from Knowledge Connector (if any), ordered by decreasing KnowledgeAnswers.match_confidence.

ReloadDocumentRequest

Request message for Documents.ReloadDocument.

Fields
name

string

Required. The name of the document to reload. Format: projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>

import_gcs_custom_metadata

bool

Whether to import custom metadata from Google Cloud Storage. Only valid when the document source is Google Cloud Storage URI.

Union field source. The source for document reloading.

Optional. If provided, the service will load the contents from the source and update document in the knowledge base.

Reloading from a new document source is allowed for smart messaging documents only. If you want to update the source for other document types, please delete the existing document and create a new one instead. source can be only one of the following:

gcs_source

GcsSource

The path for a Cloud Storage source file for reloading document content. If not provided, the Document's existing source will be reloaded.

ResponseMessage

Response messages from an automated agent.

Fields
Union field message. Required. The rich response message. message can be only one of the following:
text

Text

Returns a text response.

payload

Struct

Returns a response containing a custom, platform-specific payload.

live_agent_handoff

LiveAgentHandoff

Hands off conversation to a live agent.

end_interaction

EndInteraction

A signal that indicates the interaction with the Dialogflow agent has ended.

mixed_audio

MixedAudio

An audio response message composed of both the synthesized Dialogflow agent responses and the audios hosted in places known to the client.

telephony_transfer_call

TelephonyTransferCall

A signal that the client should transfer the phone call connected to this agent to a third-party endpoint.

EndInteraction

This type has no fields.

Indicates that interaction with the Dialogflow agent has ended.

LiveAgentHandoff

Indicates that the conversation should be handed off to a human agent.

Dialogflow only uses this to determine which conversations were handed off to a human agent for measurement purposes. What else to do with this signal is up to you and your handoff procedures.

You may set this, for example:

  • In the entry fulfillment of a CX Page if entering the page indicates something went extremely wrong in the conversation.
  • In a webhook response when you determine that the customer issue can only be handled by a human.
Fields
metadata

Struct

Custom metadata for your handoff procedure. Dialogflow doesn't impose any structure on this.

MixedAudio

Represents an audio message that is composed of both segments synthesized from the Dialogflow agent prompts and ones hosted externally at the specified URIs.

Fields
segments[]

Segment

Segments this audio response is composed of.

Segment

Represents one segment of audio.

Fields
allow_playback_interruption

bool

Whether the playback of this segment can be interrupted by the end user's speech and the client should then start the next Dialogflow request.

Union field content. Content of the segment. content can be only one of the following:
audio

bytes

Raw audio synthesized from the Dialogflow agent's response using the output config specified in the request.

uri

string

Client-specific URI that points to an audio clip accessible to the client.

TelephonyTransferCall

Represents the signal that tells the client to transfer the phone call connected to the agent to a third-party endpoint.

Fields
Union field endpoint. Endpoint to transfer the call to. endpoint can be only one of the following:
phone_number

string

Transfer the call to a phone number in E.164 format.

sip_uri

string

Transfer the call to a SIP endpoint.

Text

The text response message.

Fields
text[]

string

A collection of text responses.

RestoreAgentRequest

The request message for Agents.RestoreAgent.

Fields
parent

string

Required. The project that the agent to restore is associated with. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.restore
Union field agent. Required. The agent to restore. agent can be only one of the following:
agent_uri

string

The URI to a Google Cloud Storage file containing the agent to restore. Note: The URI must start with "gs://".

Dialogflow performs a read operation for the Cloud Storage object on the caller's behalf, so your request authentication must have read permissions for the object. For more information, see Dialogflow access control.

agent_content

bytes

Zip compressed raw byte content for agent.
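
A minimal sketch of issuing this request via the agent_uri branch with the Python client library; the project ID and bucket URI are placeholders, and the returned long-running operation only tracks the restore, not the subsequent training:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def restore_agent_from_gcs(project_id: str, gcs_uri: str) -> None:
    """Restores the draft agent from a ZIP file in Cloud Storage."""
    client = dialogflow.AgentsClient()
    request = dialogflow.RestoreAgentRequest(
        parent=f"projects/{project_id}",
        agent_uri=gcs_uri,  # must start with "gs://"
    )
    # The operation tracks only the restore; call TrainAgent afterwards if needed.
    client.restore_agent(request=request).result()
```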

SearchAgentsRequest

The request message for Agents.SearchAgents.

Fields
parent

string

Required. The project to list agents from. Format: projects/<Project ID or '-'> or projects/<Project ID or '-'>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.search
page_size

int32

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

page_token

string

Optional. The next_page_token value returned from a previous list request.

SearchAgentsResponse

The response message for Agents.SearchAgents.

Fields
agents[]

Agent

The list of agents. There will be a maximum number of items returned based on the page_size field in the request.

next_page_token

string

Token to retrieve the next page of results, or empty if there are no more results in the list.
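
The page_size/page_token pair works like any other Dialogflow list method. As a hedged illustration, the generated Python client wraps next_page_token handling in a pager, so iterating the result is usually enough:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def list_all_agents(parent: str) -> None:
    """Iterates over every agent under the parent, e.g. "projects/-" for all projects."""
    client = dialogflow.AgentsClient()
    request = dialogflow.SearchAgentsRequest(parent=parent, page_size=100)
    # The pager fetches subsequent pages via next_page_token behind the scenes.
    for agent in client.search_agents(request=request):
        print(agent.parent, agent.display_name)
```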

SearchKnowledgeAnswer

Represents a SearchKnowledge answer.

Fields
answer

string

The piece of text from the knowledge base documents that answers the search query.

answer_type

AnswerType

The type of the answer.

answer_sources[]

AnswerSource

All sources used to generate the answer.

answer_record

string

The name of the answer record. Format: projects/<Project ID>/locations/<Location ID>/answerRecords/<Answer Record ID>

AnswerSource

The sources of the answers.

Fields
title

string

The title of the article.

uri

string

The URI of the article.

snippet

string

The relevant snippet of the article.

AnswerType

The type of the answer.

Enums
ANSWER_TYPE_UNSPECIFIED The answer has an unspecified type.
FAQ The answer is from FAQ documents.
GENERATIVE The answer is from a generative model.
INTENT The answer is from intent matching.

SearchKnowledgeRequest

The request message for Conversations.SearchKnowledge.

Fields
parent

string

The parent resource that contains the conversation profile. Format: 'projects/' or projects/<Project ID>/locations/<Location ID>.

query

TextInput

Required. The natural language text query for knowledge search.

conversation_profile

string

Required. The conversation profile used to configure the search. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

session_id

string

The ID of the search session. The session_id can be combined with the Dialogflow V3 agent ID retrieved from the conversation profile, or used on its own, to identify a search session. The search history of the same session will impact the search result. It's up to the API caller to choose an appropriate Session ID. It can be a random number or some type of session identifier (preferably hashed). The length must not exceed 36 characters.

conversation

string

The conversation (between human agent and end user) where the search request is triggered. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

latest_message

string

The name of the latest conversation message when the request is triggered. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.
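
A rough sketch of building this request with the Python client library, assuming a recent google-cloud-dialogflow release that exposes Conversations.SearchKnowledge; all resource IDs below are placeholders:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def search_knowledge(project_id: str, location: str, profile_id: str,
                     question: str, session_id: str) -> list[str]:
    """Sends a knowledge search query against a conversation profile (placeholders throughout)."""
    client = dialogflow.ConversationsClient()
    parent = f"projects/{project_id}/locations/{location}"
    request = dialogflow.SearchKnowledgeRequest(
        parent=parent,
        query=dialogflow.TextInput(text=question, language_code="en-US"),
        conversation_profile=f"{parent}/conversationProfiles/{profile_id}",
        session_id=session_id,  # caller-chosen; at most 36 characters
    )
    response = client.search_knowledge(request=request)
    return [answer.answer for answer in response.answers]
```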

SearchKnowledgeResponse

The response message for Conversations.SearchKnowledge.

Fields
answers[]

SearchKnowledgeAnswer

Most relevant snippets extracted from articles in the given knowledge base, ordered by confidence.

rewritten_query

string

The rewritten query used to search knowledge.

Sentiment

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result.

Fields
score

float

Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).

magnitude

float

A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).

SentimentAnalysisRequestConfig

Configures the types of sentiment analysis to perform.

Fields
analyze_query_text_sentiment

bool

Instructs the service to perform sentiment analysis on query_text. If not provided, sentiment analysis is not performed on query_text.

SentimentAnalysisResult

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For [Participants.DetectIntent][], it needs to be configured in DetectIntentRequest.query_params. For [Participants.StreamingDetectIntent][], it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config

Fields
query_text_sentiment

Sentiment

The sentiment analysis result for query_text.
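
For DetectIntent-style queries, the config above is passed through QueryParameters. A minimal, hedged sketch with the Python client library (the session path and text are placeholders):

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def detect_intent_with_sentiment(session: str, text: str, language_code: str = "en-US"):
    """Runs DetectIntent and requests sentiment analysis on the query text."""
    client = dialogflow.SessionsClient()
    response = client.detect_intent(
        request=dialogflow.DetectIntentRequest(
            session=session,  # placeholder session resource name
            query_input=dialogflow.QueryInput(
                text=dialogflow.TextInput(text=text, language_code=language_code)
            ),
            query_params=dialogflow.QueryParameters(
                sentiment_analysis_request_config=dialogflow.SentimentAnalysisRequestConfig(
                    analyze_query_text_sentiment=True
                )
            ),
        )
    )
    sentiment = response.query_result.sentiment_analysis_result.query_text_sentiment
    return sentiment.score, sentiment.magnitude
```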

SessionEntityType

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes.

For more information, see the session entity guide.

Fields
name

string

Required. The unique identifier of this session entity type. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name> - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we assume default '-' user. <Entity Type Display Name> must be the display name of an existing entity type in the same agent that will be overridden or supplemented.

entity_override_mode

EntityOverrideMode

Required. Indicates whether the additional data should override or supplement the custom entity type definition.

entities[]

Entity

Required. The collection of entities associated with this session entity type.

EntityOverrideMode

The types of modifications for a session entity type.

Enums
ENTITY_OVERRIDE_MODE_UNSPECIFIED Not specified. This value should never be used.
ENTITY_OVERRIDE_MODE_OVERRIDE The collection of session entities overrides the collection of entities in the corresponding custom entity type.
ENTITY_OVERRIDE_MODE_SUPPLEMENT

The collection of session entities extends the collection of entities in the corresponding custom entity type.

Note: Even in this override mode calls to ListSessionEntityTypes, GetSessionEntityType, CreateSessionEntityType and UpdateSessionEntityType only return the additional entities added in this session entity type. If you want to get the supplemented list, please call EntityTypes.GetEntityType on the custom entity type and merge.
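
As a hedged sketch, creating a session entity type that supplements an existing custom entity type might look like this with the Python client library; the session path, display name, and entity values are placeholders:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def supplement_session_entities(session: str, entity_type_display_name: str):
    """Adds session-scoped entities on top of an existing custom entity type."""
    client = dialogflow.SessionEntityTypesClient()
    session_entity_type = dialogflow.SessionEntityType(
        name=f"{session}/entityTypes/{entity_type_display_name}",
        entity_override_mode=(
            dialogflow.SessionEntityType.EntityOverrideMode.ENTITY_OVERRIDE_MODE_SUPPLEMENT
        ),
        entities=[
            # Hypothetical entity value/synonyms for illustration only.
            dialogflow.EntityType.Entity(value="platinum", synonyms=["platinum", "vip"]),
        ],
    )
    return client.create_session_entity_type(
        request=dialogflow.CreateSessionEntityTypeRequest(
            parent=session, session_entity_type=session_entity_type
        )
    )
```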

SetAgentRequest

The request message for Agents.SetAgent.

Fields
agent

Agent

Required. The agent to update.

Authorization requires the following IAM permission on the specified resource agent:

  • dialogflow.agents.create
update_mask

FieldMask

Optional. The mask to control which fields get updated.

SetSuggestionFeatureConfigOperationMetadata

Metadata for a [ConversationProfile.SetSuggestionFeatureConfig][] operation.

Fields
conversation_profile

string

The resource name of the conversation profile. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>

participant_role

Role

Required. The participant role to add or update the suggestion feature config. Only HUMAN_AGENT or END_USER can be used.

suggestion_feature_type

Type

Required. The type of the suggestion feature to add or update.

create_time

Timestamp

Timestamp when the request was created. The time is measured on the server side.

SetSuggestionFeatureConfigRequest

The request message for [ConversationProfiles.SetSuggestionFeature][].

Fields
conversation_profile

string

Required. The Conversation Profile to add or update the suggestion feature config. Format: projects/<Project ID>/locations/<Location ID>/conversationProfiles/<Conversation Profile ID>.

participant_role

Role

Required. The participant role to add or update the suggestion feature config. Only HUMAN_AGENT or END_USER can be used.

suggestion_feature_config

SuggestionFeatureConfig

Required. The suggestion feature config to add or update.

SmartReplyAnswer

Represents a smart reply answer.

Fields
reply

string

The content of the reply.

confidence

float

Smart reply confidence. The system's confidence score that this reply is a good match for this conversation, as a value from 0.0 (completely uncertain) to 1.0 (completely certain).

answer_record

string

The name of the answer record, in the format of "projects//locations//answerRecords/"

SpeechContext

Hints for the speech recognizer to help with recognition in a specific conversation state.

Fields
phrases[]

string

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

This list can be used to:

  • improve accuracy for words and phrases you expect the user to say, e.g. typical commands for your Dialogflow agent
  • add additional words to the speech recognizer vocabulary
  • ...

See the Cloud Speech documentation for usage limits.

boost

float

Optional. Boost for this context compared to other contexts:

  • If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar sounding phrases.
  • If the boost is unspecified or non-positive, Dialogflow will not apply any boost.

Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search.
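
A small illustrative sketch of supplying these hints through InputAudioConfig.speech_contexts with the Python client library; the phrases and boost value are examples only, not recommendations:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

# Bias recognition toward domain phrases; phrases and boost are examples only.
audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
    speech_contexts=[
        dialogflow.SpeechContext(
            phrases=["check my order status", "SKU", "refund"],
            boost=10.0,  # recommended range is (0, 20]
        )
    ],
)
```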

SpeechModelVariant

Variant of the specified Speech model to use.

See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.

Enums
SPEECH_MODEL_VARIANT_UNSPECIFIED No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE.
USE_BEST_AVAILABLE

Use the best available variant of the [Speech model][InputAudioConfig.model] that the caller is eligible for.

Please see the Dialogflow docs for how to make your project eligible for enhanced models.

USE_STANDARD Use standard model variant even if an enhanced model is available. See the Cloud Speech documentation for details about enhanced models.
USE_ENHANCED

Use an enhanced model variant:

  • If an enhanced variant does not exist for the given model and request language, Dialogflow falls back to the standard variant. The Cloud Speech documentation describes which models have enhanced variants.

  • If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the Dialogflow docs for how to make your project eligible.

SpeechToTextConfig

Configures speech transcription for ConversationProfile.

Fields
speech_model_variant

SpeechModelVariant

The speech model used in speech-to-text. SPEECH_MODEL_VARIANT_UNSPECIFIED and USE_BEST_AVAILABLE will be treated as USE_ENHANCED. This setting can be overridden in AnalyzeContentRequest and StreamingAnalyzeContentRequest requests. If an enhanced model variant is specified and an enhanced version of the specified model for the language does not exist, an error is returned.

model

string

Which Speech model to select. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then Dialogflow auto-selects a model based on other parameters in the SpeechToTextConfig and Agent settings. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search

Leave this field unspecified to use Agent Speech settings for model selection.

use_timeout_based_endpointing

bool

Use timeout-based endpointing, interpreting endpointer sensitivity as a timeout value in seconds.
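
A hedged sketch of building this config with the Python client library. Attaching it to a conversation profile via an stt_config field is an assumption based on the description above; verify the field name against your client version:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

stt_config = dialogflow.SpeechToTextConfig(
    speech_model_variant=dialogflow.SpeechModelVariant.USE_ENHANCED,
    model="phone_call",  # suggested above for Agent Assist and telephony
)

# Assumption: the conversation profile exposes this config as stt_config.
profile = dialogflow.ConversationProfile(
    display_name="support-profile",
    stt_config=stt_config,
)
```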

SpeechWordInfo

Information for a word recognized by the speech recognizer.

Fields
word

string

The word this info is for.

start_offset

Duration

Time offset relative to the beginning of the audio that corresponds to the start of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.

end_offset

Duration

Time offset relative to the beginning of the audio that corresponds to the end of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.

confidence

float

The Speech confidence between 0.0 and 1.0 for this word. A higher number indicates an estimated greater likelihood that the recognized word is correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is not guaranteed to be fully stable over time for the same audio input. Users should also not rely on it to always be provided.

SsmlVoiceGender

Gender of the voice as described in SSML voice element.

Enums
SSML_VOICE_GENDER_UNSPECIFIED An unspecified gender, which means that the client doesn't care which gender the selected voice will have.
SSML_VOICE_GENDER_MALE A male voice.
SSML_VOICE_GENDER_FEMALE A female voice.
SSML_VOICE_GENDER_NEUTRAL A gender-neutral voice.

StreamingAnalyzeContentRequest

The top-level message sent by the client to the Participants.StreamingAnalyzeContent method.

Multiple request messages should be sent in order:

  1. The first message must contain participant, [config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.config] and optionally query_params. If you want to receive an audio response, it should also contain reply_audio_config. The message must not contain [input][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.input].

  2. If [config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.config] in the first message was set to audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to analyze text input instead after you have already started Speech recognition, please send a message with StreamingAnalyzeContentRequest.input_text.

However, note that:

  • Dialogflow will bill you for the audio so far.
  • Dialogflow discards all Speech recognition results in favor of the text input.

  3. If [StreamingAnalyzeContentRequest.config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.config] in the first message was set to StreamingAnalyzeContentRequest.text_config, then the second message must contain only input_text. Moreover, you must not send more than two messages.

After you have sent all input, you must half-close or abort the request stream.
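
A hedged sketch of the audio_config flow described above, using a Python request generator; the participant path and audio source are placeholders, and the stream is half-closed by exhausting the generator:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def stream_audio_to_participant(participant: str, audio_chunks) -> None:
    """Streams raw audio chunks to StreamingAnalyzeContent in the order described above."""
    client = dialogflow.ParticipantsClient()

    def request_generator():
        # 1. First message: participant and config only, no input.
        yield dialogflow.StreamingAnalyzeContentRequest(
            participant=participant,
            audio_config=dialogflow.InputAudioConfig(
                audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
                sample_rate_hertz=16000,
                language_code="en-US",
            ),
        )
        # 2. Subsequent messages: audio payload only.
        for chunk in audio_chunks:
            yield dialogflow.StreamingAnalyzeContentRequest(input_audio=chunk)
        # Exhausting the generator half-closes the request stream.

    for response in client.streaming_analyze_content(requests=request_generator()):
        if response.recognition_result.transcript:
            print(response.recognition_result.transcript)
```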

Fields
participant

string

Required. The name of the participant this text comes from. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource participant:

  • dialogflow.participants.analyzeContent
reply_audio_config

OutputAudioConfig

Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.

query_params

QueryParameters

Parameters for a Dialogflow virtual-agent query.

assist_query_params

AssistQueryParameters

Parameters for a human assist query.

cx_parameters

Struct

Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null.

Note: this field should only be used if you are connecting to a Dialogflow CX agent.

cx_current_page

string

The unique identifier of the CX page to override the current_page in the session. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/pages/<Page ID>.

If cx_current_page is specified, the previous state of the session will be ignored by Dialogflow CX, including the [previous page][QueryResult.current_page] and the [previous session parameters][QueryResult.parameters]. In most cases, cx_current_page and cx_parameters should be configured together to direct a session to a specific state.

Note: this field should only be used if you are connecting to a Dialogflow CX agent.

enable_extended_streaming

bool

Optional. Enable full bidirectional streaming. You can keep streaming the audio until timeout, and there's no need to half close the stream to get the response.

Restrictions:

An InvalidArgument error will be returned if one of the restriction checks fails.

You can find more details in https://cloud.google.com/agent-assist/docs/extended-streaming

enable_partial_automated_agent_reply

bool

Enable partial virtual agent responses. If this flag is not enabled, the response stream contains only one final response even if some fulfillments in the Dialogflow virtual agent have been configured to return partial responses.

enable_debugging_info

bool

If true, StreamingAnalyzeContentResponse.debugging_info will get populated.

Union field config. Required. The input config. config can be only one of the following:
audio_config

InputAudioConfig

Instructs the speech recognizer how to process the speech audio.

text_config

InputTextConfig

The natural language text to be processed.

Union field input. Required. The input. input can be only one of the following:
input_audio

bytes

The input audio content to be recognized. Must be sent if audio_config is set in the first message. The complete audio over all streaming messages must not exceed 1 minute.

input_text

string

The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once and cancels any ongoing speech recognition.

input_dtmf

TelephonyDtmfEvents

The DTMF digits used to invoke intent and fill in parameter value.

This input is ignored if the previous response indicated that DTMF input is not accepted.

input_intent

string

The intent to be triggered on the V3 agent. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/intents/<Intent ID>.

input_event

string

The input event name. This can only be sent once and cancels any ongoing speech recognition.

StreamingAnalyzeContentResponse

The top-level message returned from the StreamingAnalyzeContent method.

Multiple response messages can be returned in order:

  1. If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.

  2. In virtual agent stage: if enable_partial_automated_agent_reply is true, the following N (currently 1 <= N <= 4) messages contain automated_agent_reply and optionally reply_audio returned by the virtual agent. The first (N-1) automated_agent_reply messages will have automated_agent_reply_type set to PARTIAL. The last automated_agent_reply has automated_agent_reply_type set to FINAL. If enable_partial_automated_agent_reply is not enabled, the response stream contains only the final reply.

  In human assist stage: the following N (N >= 1) messages contain human_agent_suggestion_results, end_user_suggestion_results or message.
Fields
recognition_result

StreamingRecognitionResult

The result of speech recognition.

reply_text

string

Optional. The output text content. This field is set if an automated agent responded with a text for the user.

reply_audio

OutputAudio

Optional. The audio data bytes encoded as specified in the request. This field is set if:

  • The reply_audio_config field is specified in the request.
  • The automated agent, which this output comes from, responded with audio. In such cases, the reply_audio.config field contains settings used to synthesize the speech.

In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.

automated_agent_reply

AutomatedAgentReply

Optional. Only set if a Dialogflow automated agent has responded. Note that: [AutomatedAgentReply.detect_intent_response.output_audio][] and [AutomatedAgentReply.detect_intent_response.output_audio_config][] are always empty, use reply_audio instead.

message

Message

Output only. Message analyzed by CCAI.

human_agent_suggestion_results[]

SuggestionResult

The suggestions for most recent human agent. The order is the same as HumanAgentAssistantConfig.SuggestionConfig.feature_configs of HumanAgentAssistantConfig.human_agent_suggestion_config.

end_user_suggestion_results[]

SuggestionResult

The suggestions for end user. The order is the same as HumanAgentAssistantConfig.SuggestionConfig.feature_configs of HumanAgentAssistantConfig.end_user_suggestion_config.

dtmf_parameters

DtmfParameters

Indicates the parameters of DTMF.

debugging_info

CloudConversationDebuggingInfo

Debugging info that would get populated when StreamingAnalyzeContentRequest.enable_debugging_info is set to true.

StreamingDetectIntentRequest

The top-level message sent by the client to the Sessions.StreamingDetectIntent method.

Multiple request messages should be sent in order:

  1. The first message must contain session, query_input plus optionally query_params. If the client wants to receive an audio response, it should also contain output_audio_config. The message must not contain input_audio.
  2. If query_input was set to query_input.audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to detect an intent from text input instead after you have already started Speech recognition, please send a message with query_input.text.
However, note that:

* Dialogflow will bill you for the audio duration so far.
* Dialogflow discards all Speech recognition results in favor of the
  input text.
* Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
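
A hedged sketch of this ordering with the Python client library; the session path and audio chunks are placeholders:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def streaming_detect_intent(session: str, audio_chunks):
    """Sends the config message, then audio, then half-closes by exhausting the generator."""
    client = dialogflow.SessionsClient()

    def request_generator():
        # 1. First message: session and query_input.audio_config, no input_audio.
        yield dialogflow.StreamingDetectIntentRequest(
            session=session,
            query_input=dialogflow.QueryInput(
                audio_config=dialogflow.InputAudioConfig(
                    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
                    sample_rate_hertz=16000,
                    language_code="en-US",
                )
            ),
        )
        # 2. Subsequent messages: input_audio only (at most 1 minute in total).
        for chunk in audio_chunks:
            yield dialogflow.StreamingDetectIntentRequest(input_audio=chunk)

    for response in client.streaming_detect_intent(requests=request_generator()):
        if response.query_result.query_text:
            return response.query_result
```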

Fields
session

string

Required. The name of the session the query is sent to. Supported formats: - projects/<Project ID>/agent/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>, - projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>, - projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>.

If Location ID is not specified we assume default 'us' location. If Environment ID is not specified, we assume default 'draft' environment. If User ID is not specified, we are using "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters.

For more information, see the API interactions guide.

Note: Always use agent versions for production traffic. See Versions and environments.

Authorization requires the following IAM permission on the specified resource session:

  • dialogflow.sessions.streamingDetectIntent
query_params

QueryParameters

The parameters of this query.

query_input

QueryInput

Required. The input specification. It can be set to:

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.

single_utterance
(deprecated)

bool

DEPRECATED. Please use InputAudioConfig.single_utterance instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.

output_audio_config

OutputAudioConfig

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.

output_audio_config_mask

FieldMask

Mask for output_audio_config indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.

If unspecified or empty, output_audio_config replaces the agent-level config in its entirety.

input_audio

bytes

The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.

enable_debugging_info

bool

If true, StreamingDetectIntentResponse.debugging_info will get populated.

StreamingDetectIntentResponse

The top-level message returned from the StreamingDetectIntent method.

Multiple response messages can be returned in order:

  1. If the StreamingDetectIntentRequest.input_audio field was set, the recognition_result field is populated for one or more messages. See the StreamingRecognitionResult message for details about the result message sequence.

  2. The next message contains response_id, query_result, alternative_query_results and optionally webhook_status if a WebHook was called.

  3. If output_audio_config was specified in the request or agent-level speech synthesizer is configured, all subsequent messages contain output_audio and output_audio_config.

Fields
response_id

string

The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.

recognition_result

StreamingRecognitionResult

The result of speech recognition.

query_result

QueryResult

The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.

alternative_query_results[]

QueryResult

If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing QueryResult.intent_detection_confidence. If Knowledge Connectors are disabled, this field will be empty until multiple responses for regular intents are supported, at which point those additional results will be surfaced here.

webhook_status

Status

Specifies the status of the webhook request.

output_audio

bytes

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.

output_audio_config

OutputAudioConfig

The config used by the speech synthesizer to generate the output audio.

debugging_info

CloudConversationDebuggingInfo

Debugging info that would get populated when StreamingDetectIntentRequest.enable_debugging_info is set to true.

StreamingRecognitionResult

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.

While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a transcript value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the is_final value is set to true and processing continues for the next transcript.

If StreamingDetectIntentRequest.query_input.audio_config.single_utterance was true, and the recognizer has completed processing audio, the message_type value is set to END_OF_SINGLE_UTTERANCE and the following (last) result contains the last finalized transcript.

The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results.

In the following example, single utterance is enabled. In the case where single utterance is not enabled, result 7 would not occur.

Num | transcript              | message_type            | is_final
--- | ----------------------- | ----------------------- | --------
1   | "tube"                  | TRANSCRIPT              | false
2   | "to be a"               | TRANSCRIPT              | false
3   | "to be"                 | TRANSCRIPT              | false
4   | "to be or not to be"    | TRANSCRIPT              | true
5   | "that's"                | TRANSCRIPT              | false
6   | "that is                | TRANSCRIPT              | false
7   | unset                   | END_OF_SINGLE_UTTERANCE | unset
8   | " that is the question" | TRANSCRIPT              | true

Concatenating the finalized transcripts with is_final set to true, the complete utterance becomes "to be or not to be that is the question".
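
A small sketch of that concatenation step in Python, assuming an iterable of StreamingRecognitionResult messages collected from the stream:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def assemble_utterance(results) -> str:
    """Concatenates finalized transcripts from StreamingRecognitionResult messages."""
    transcript_type = dialogflow.StreamingRecognitionResult.MessageType.TRANSCRIPT
    finalized = [
        r.transcript for r in results if r.message_type == transcript_type and r.is_final
    ]
    # For the table above: "to be or not to be" + " that is the question"
    return "".join(finalized)
```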

Fields
message_type

MessageType

Type of the result message.

transcript

string

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

is_final

bool

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.

confidence

float

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.

stability

float

An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result:

  • If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for TRANSCRIPT results with is_final = false.
  • Otherwise, the value is in (0.0, 1.0] where 0.0 means completely unstable and 1.0 means completely stable.
speech_word_info[]

SpeechWordInfo

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.

speech_end_offset

Duration

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

language_code

string

Detected language code for the transcript.

dtmf_digits

TelephonyDtmfEvents

DTMF digits. Populated if and only if message_type = DTMF_DIGITS.

MessageType

Type of the response message.

Enums
MESSAGE_TYPE_UNSPECIFIED Not specified. Should never be used.
TRANSCRIPT Message contains a (possibly partial) transcript.
DTMF_DIGITS Message contains DTMF digits.
END_OF_SINGLE_UTTERANCE This event indicates that the server has detected the end of the user's speech utterance and expects no additional speech. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This message is only sent if single_utterance was set to true, and is not used otherwise.
PARTIAL_DTMF_DIGITS Message contains DTMF digits. Before a message with DTMF_DIGITS is sent, a message with PARTIAL_DTMF_DIGITS may be sent with DTMF digits collected up to the time of sending, which represents an intermediate result.

SubAgent

Contains basic configuration for a sub-agent.

Fields
project

string

Required. The project of this agent. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

environment

string

Optional. The unique identifier (environment name in dialogflow console) of this sub-agent environment. Assumes draft environment if environment is not set.

SuggestArticlesRequest

The request message for Participants.SuggestArticles.

Fields
parent

string

Required. The name of the participant to fetch suggestion for. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.suggestions.list
latest_message

string

Optional. The name of the latest conversation message to compile suggestion for. If empty, it will be the latest message of the conversation.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Optional. Max number of messages prior to and including latest_message to use as context when compiling the suggestion. By default 20 and at most 50.

assist_query_params

AssistQueryParameters

Optional. Parameters for a human assist query.
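
As a rough illustration with the Python client library (the participant resource name is a placeholder); SuggestFaqAnswers and SuggestSmartReplies follow the same request shape:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def suggest_articles(participant: str) -> None:
    """Requests article suggestions for the given participant resource name (placeholder)."""
    client = dialogflow.ParticipantsClient()
    response = client.suggest_articles(
        request=dialogflow.SuggestArticlesRequest(
            parent=participant,   # .../conversations/<Conversation ID>/participants/<Participant ID>
            context_size=20,      # default 20, at most 50
        )
    )
    for answer in response.article_answers:
        print(answer.title, answer.uri)
```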

SuggestArticlesResponse

The response message for Participants.SuggestArticles.

Fields
article_answers[]

ArticleAnswer

Output only. Articles ordered by score in descending order.

latest_message

string

The name of the latest conversation message used to compile suggestion for.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Number of messages prior to and including latest_message to compile the suggestion. It may be smaller than the SuggestArticlesRequest.context_size field in the request if there aren't that many messages in the conversation.

SuggestConversationSummaryRequest

The request message for Conversations.SuggestConversationSummary.

Fields
conversation

string

Required. The conversation to fetch suggestion for. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>.

latest_message

string

The name of the latest conversation message used as context for compiling suggestion. If empty, the latest message of the conversation will be used.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. By default 500 and at most 1000.

assist_query_params

AssistQueryParameters

Parameters for a human assist query. Only used for POC/demo purpose.

SuggestConversationSummaryResponse

The response message for Conversations.SuggestConversationSummary.

Fields
summary

Summary

Generated summary.

latest_message

string

The name of the latest conversation message used as context for compiling suggestion.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Number of messages prior to and including [last_conversation_message][] used to compile the suggestion. It may be smaller than the [SuggestSummaryRequest.context_size][] field in the request if there weren't that many messages in the conversation.

Summary

Generated summary for a conversation.

Fields
text

string

The summary content that is concatenated into one string.

text_sections

map<string, string>

The summary content that is divided into sections. The key is the section's name and the value is the section's content. There is no specific format for the key or value.

answer_record

string

The name of the answer record. Format: "projects//answerRecords/"

baseline_model_version

string

The baseline model version used to generate this summary. It is empty if a baseline model was not used to generate this summary.

SuggestDialogflowAssistsResponse

The response message for Participants.SuggestDialogflowAssists.

Fields
dialogflow_assist_answers[]

DialogflowAssistAnswer

Output only. Multiple reply options provided by Dialogflow assist service. The order is based on the rank of the model prediction.

latest_message

string

The name of the latest conversation message used to suggest answer.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Number of messages prior to and including latest_message to compile the suggestion. It may be smaller than the SuggestDialogflowAssistsRequest.context_size field in the request if there aren't that many messages in the conversation.

SuggestFaqAnswersRequest

The request message for Participants.SuggestFaqAnswers.

Fields
parent

string

Required. The name of the participant to fetch suggestion for. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.suggestions.list
latest_message

string

Optional. The name of the latest conversation message to compile suggestion for. If empty, it will be the latest message of the conversation.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Optional. Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. By default 20 and at most 50.

assist_query_params

AssistQueryParameters

Optional. Parameters for a human assist query.

SuggestFaqAnswersResponse

The response message for Participants.SuggestFaqAnswers.

Fields
faq_answers[]

FaqAnswer

Output only. Answers extracted from FAQ documents.

latest_message

string

The name of the latest conversation message used to compile suggestion for.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Number of messages prior to and including latest_message to compile the suggestion. It may be smaller than the SuggestFaqAnswersRequest.context_size field in the request if there aren't that many messages in the conversation.

SuggestSmartRepliesRequest

The request message for Participants.SuggestSmartReplies.

Fields
parent

string

Required. The name of the participant to fetch suggestion for. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.suggestions.list
current_text_input

TextInput

The current natural language text segment to compile suggestion for. This provides a way for the user to get follow-up smart reply suggestions after a smart reply selection, without sending a text message.

latest_message

string

The name of the latest conversation message to compile suggestion for. If empty, it will be the latest message of the conversation.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Optional. Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. By default 20 and at most 50.

SuggestSmartRepliesResponse

The response message for Participants.SuggestSmartReplies.

Fields
smart_reply_answers[]

SmartReplyAnswer

Output only. Multiple reply options provided by smart reply service. The order is based on the rank of the model prediction. The maximum number of the returned replies is set in SmartReplyConfig.

latest_message

string

The name of the latest conversation message used to compile suggestion for.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

context_size

int32

Number of messages prior to and including latest_message to compile the suggestion. It may be smaller than the SuggestSmartRepliesRequest.context_size field in the request if there aren't that many messages in the conversation.

Suggestion

Represents a suggestion for a human agent.

Fields
name

string

Output only. The name of this suggestion. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/*/suggestions/<Suggestion ID>.

articles[]

Article

Output only. Articles ordered by score in descending order.

faq_answers[]

FaqAnswer

Output only. Answers extracted from FAQ documents.

create_time

Timestamp

Output only. The time the suggestion was created.

latest_message

string

Output only. Latest message used as context to compile this suggestion.

Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/messages/<Message ID>.

Article

Represents a suggested article.

Fields
title

string

Output only. The article title.

uri

string

Output only. The article URI.

snippets[]

string

Output only. Article snippets.

metadata

map<string, string>

Output only. A map that contains metadata about the answer and the document from which it originates.

answer_record

string

Output only. The name of answer record, in the format of "projects//locations//answerRecords/"

FaqAnswer

Represents a suggested answer from "frequently asked questions".

Fields
answer

string

Output only. The piece of text from the source knowledge base document.

confidence

float

The system's confidence score that this Knowledge answer is a good match for this conversational query, ranging from 0.0 (completely uncertain) to 1.0 (completely certain).

question

string

Output only. The corresponding FAQ question.

source

string

Output only. Indicates which Knowledge Document this answer was extracted from. Format: projects/<Project ID>/locations/<Location ID>/agent/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>.

metadata

map<string, string>

Output only. A map that contains metadata about the answer and the document from which it originates.

answer_record

string

Output only. The name of answer record, in the format of "projects//locations//answerRecords/"

SuggestionFeature

The type of Human Agent Assistant API suggestion to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.

Fields
type

Type

Type of Human Agent Assistant API feature to request.

Type

Defines the type of Human Agent Assistant feature.

Enums
TYPE_UNSPECIFIED Unspecified feature type.
ARTICLE_SUGGESTION Run article suggestion model for chat.
FAQ Run FAQ model.
SMART_REPLY Run smart reply model for chat.
DIALOGFLOW_ASSIST Run Dialogflow assist model for chat, which will return automated agent response as suggestion.
CONVERSATION_SUMMARIZATION Run conversation summarization model for chat.

SuggestionInput

Represents the selection of a suggestion.

Fields
answer_record

string

Required. The ID of a suggestion selected by the human agent. The suggestion(s) were generated in a previous call to request Dialogflow assist. The format is: projects/<Project ID>/locations/<Location ID>/answerRecords/<Answer Record ID>, where <Answer Record ID> is an alphanumeric string.

text_override

TextInput

Optional. If the customer edited the suggestion before using it, include the revised text here.

parameters

Struct

In Dialogflow assist for v3, the user can submit a form by sending a SuggestionInput. The form is uniquely determined by the answer_record field, which identifies a v3 QueryResult containing the current page. The form parameters are specified via the parameters field.

Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: If parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
  • MapValue value: If parameter's entity type is a composite entity then use map from composite entity property names to property values, otherwise, use parameter value.
intent_input

IntentInput

The intent to be triggered on V3 agent.

SuggestionResult

A single suggestion response of one of several types, used in the responses of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as in HumanAgentAssistantEvent.

Fields
Union field suggestion_response. Different type of suggestion response. suggestion_response can be only one of the following:
error

Status

Error status if the request failed.

suggest_articles_response

SuggestArticlesResponse

SuggestArticlesResponse if request is for ARTICLE_SUGGESTION.

suggest_faq_answers_response

SuggestFaqAnswersResponse

SuggestFaqAnswersResponse if request is for FAQ_ANSWER.

suggest_smart_replies_response

SuggestSmartRepliesResponse

SuggestSmartRepliesResponse if request is for SMART_REPLY.

suggest_dialogflow_assists_response

SuggestDialogflowAssistsResponse

SuggestDialogflowAssistsResponse if request is for DIALOGFLOW_ASSIST.

suggest_entity_extraction_response

SuggestDialogflowAssistsResponse

SuggestDialogflowAssistsResponse if request is for ENTITY_EXTRACTION.

SynthesizeSpeechConfig

Configuration of how speech should be synthesized.

Fields
speaking_rate

double

Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other values < 0.25 or > 4.0 will return an error.

pitch

double

Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.

volume_gain_db

double

Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not to exceed +10 (dB) as there's usually no effective increase in loudness for any value greater than that.

effects_profile_id[]

string

Optional. An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given.

voice

VoiceSelectionParams

Optional. The desired voice of the synthesized audio.
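
A hedged sketch of an OutputAudioConfig that applies these settings with the Python client library; the numeric values are illustrative only and must stay within the ranges documented above:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

# Illustrative values only; keep them inside the documented ranges.
output_audio_config = dialogflow.OutputAudioConfig(
    audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=24000,
    synthesize_speech_config=dialogflow.SynthesizeSpeechConfig(
        speaking_rate=1.1,    # [0.25, 4.0]
        pitch=-2.0,           # [-20.0, 20.0] semitones
        volume_gain_db=0.0,   # [-96.0, 16.0]
        voice=dialogflow.VoiceSelectionParams(
            ssml_gender=dialogflow.SsmlVoiceGender.SSML_VOICE_GENDER_FEMALE,
        ),
    ),
)
```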

TelephonyDtmf

DTMF digit in Telephony Gateway.

Enums
TELEPHONY_DTMF_UNSPECIFIED Not specified. This value may be used to indicate an absent digit.
DTMF_ONE Number: '1'.
DTMF_TWO Number: '2'.
DTMF_THREE Number: '3'.
DTMF_FOUR Number: '4'.
DTMF_FIVE Number: '5'.
DTMF_SIX Number: '6'.
DTMF_SEVEN Number: '7'.
DTMF_EIGHT Number: '8'.
DTMF_NINE Number: '9'.
DTMF_ZERO Number: '0'.
DTMF_A Letter: 'A'.
DTMF_B Letter: 'B'.
DTMF_C Letter: 'C'.
DTMF_D Letter: 'D'.
DTMF_STAR Asterisk/star: '*'.
DTMF_POUND Pound/diamond/hash/square/gate/octothorpe: '#'.

TelephonyDtmfEvents

A wrapper of repeated TelephonyDtmf digits.

Fields
dtmf_events[]

TelephonyDtmf

A sequence of TelephonyDtmf digits.

TextInput

Represents the natural language text to be processed.

Fields
text

string

Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters for virtual agent interactions.

language_code

string

Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

TextToSpeechSettings

Instructs the speech synthesizer on how to generate the output audio content.

Fields
enable_text_to_speech

bool

Optional. Indicates whether text to speech is enabled. Even when this field is false, other settings in this proto are still retained.

output_audio_encoding

OutputAudioEncoding

Required. Audio encoding of the synthesized audio content.

sample_rate_hertz

int32

Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).

synthesize_speech_configs

map<string, SynthesizeSpeechConfig>

Optional. Configuration of how speech should be synthesized, mapping from language (https://cloud.google.com/dialogflow/docs/reference/language) to SynthesizeSpeechConfig.

TrainAgentRequest

The request message for Agents.TrainAgent.

Fields
parent

string

Required. The project that the agent to train is associated with. Format: projects/<Project ID> or projects/<Project ID>/locations/<Location ID>.

Authorization requires the following IAM permission on the specified resource parent:

  • dialogflow.agents.train
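
A minimal sketch of issuing this request and waiting for the long-running operation with the Python client library; the project ID is a placeholder:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow

def train_agent(project_id: str) -> None:
    """Explicitly trains the agent and blocks until training finishes."""
    client = dialogflow.AgentsClient()
    operation = client.train_agent(
        request=dialogflow.TrainAgentRequest(parent=f"projects/{project_id}")
    )
    operation.result()  # TrainAgent is a long-running operation
```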

UpdateAnswerRecordRequest

Request message for AnswerRecords.UpdateAnswerRecord.

Fields
answer_record

AnswerRecord

Required. Answer record to update.

Authorization requires the following IAM permission on the specified resource answerRecord:

  • dialogflow.answerrecords.update
update_mask

FieldMask

Required. The mask to control which fields get updated.

UpdateContextRequest

The request message for Contexts.UpdateContext.

Fields
context

Context

Required. The context to update.

Authorization requires the following IAM permission on the specified resource context:

  • dialogflow.contexts.update
update_mask

FieldMask

Optional. The mask to control which fields get updated.

UpdateConversationProfileRequest

The request message for ConversationProfiles.UpdateConversationProfile.

Fields
conversation_profile

ConversationProfile

Required. The conversation profile to update.

Authorization requires the following IAM permission on the specified resource conversationProfile:

  • dialogflow.conversationProfiles.update
update_mask

FieldMask

Required. The mask to control which fields to update.

UpdateDocumentRequest

Request message for Documents.UpdateDocument.

Fields
document

Document

Required. The document to update.

update_mask

FieldMask

Optional. If not specified, all fields are updated. Currently, only display_name can be updated; an InvalidArgument error will be returned when attempting to update other fields.
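
A hedged sketch of an update restricted to display_name via a FieldMask, using the Python client library; the document name and new name are placeholders:

```python
from google.cloud import dialogflow_v2beta1 as dialogflow
from google.protobuf import field_mask_pb2

def rename_document(document_name: str, new_display_name: str) -> None:
    """Updates only display_name, the sole field the mask currently allows."""
    client = dialogflow.DocumentsClient()
    document = dialogflow.Document(name=document_name, display_name=new_display_name)
    operation = client.update_document(
        request=dialogflow.UpdateDocumentRequest(
            document=document,
            update_mask=field_mask_pb2.FieldMask(paths=["display_name"]),
        )
    )
    operation.result()  # UpdateDocument is a long-running operation
```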

UpdateEntityTypeRequest

The request message for EntityTypes.UpdateEntityType.

Fields
entity_type

EntityType

Required. The entity type to update.

Authorization requires the following IAM permission on the specified resource entityType:

  • dialogflow.entityTypes.update
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

update_mask

FieldMask

Optional. The mask to control which fields get updated.

UpdateEnvironmentRequest

The request message for Environments.UpdateEnvironment.

Fields
environment

Environment

Required. The environment to update.

Authorization requires the following IAM permission on the specified resource environment:

  • dialogflow.environments.update
update_mask

FieldMask

Required. The mask to control which fields get updated.

allow_load_to_draft_and_discard_changes

bool

Optional. This field is used to prevent accidental overwrite of the draft environment, which is an operation that cannot be undone. To confirm that the caller desires this overwrite, this field must be explicitly set to true when updating the draft environment (environment ID = -).

UpdateFulfillmentRequest

The request message for Fulfillments.UpdateFulfillment.

Fields
fulfillment

Fulfillment

Required. The fulfillment to update.

Authorization requires the following IAM permission on the specified resource fulfillment:

  • dialogflow.fulfillments.update
update_mask

FieldMask

Required. The mask to control which fields get updated. If the mask is not present, all fields will be updated.

UpdateIntentRequest

The request message for Intents.UpdateIntent.

Fields
intent

Intent

Required. The intent to update.

Authorization requires the following IAM permission on the specified resource intent:

  • dialogflow.intents.update
language_code

string

Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data.

update_mask

FieldMask

Optional. The mask to control which fields get updated.

intent_view

IntentView

Optional. The resource view to apply to the returned intent.

UpdateKnowledgeBaseRequest

Request message for KnowledgeBases.UpdateKnowledgeBase.

Fields
knowledge_base

KnowledgeBase

Required. The knowledge base to update.

update_mask

FieldMask

Optional. If not specified, all fields are updated. Currently, only display_name can be updated; an InvalidArgument error will be returned when attempting to update other fields.

UpdateParticipantRequest

The request message for Participants.UpdateParticipant.

Fields
participant

Participant

Required. The participant to update.

Authorization requires the following IAM permission on the specified resource participant:

  • dialogflow.participants.create
update_mask

FieldMask

Required. The mask to specify which fields to update.

UpdateSessionEntityTypeRequest

The request message for SessionEntityTypes.UpdateSessionEntityType.

Fields
session_entity_type

SessionEntityType

Required. The session entity type to update.

Authorization requires the following IAM permission on the specified resource sessionEntityType:

  • dialogflow.sessionEntityTypes.update
update_mask

FieldMask

Optional. The mask to control which fields get updated.

UpdateVersionRequest

The request message for Versions.UpdateVersion.

Fields
version

Version

Required. The version to update. Supported formats: - projects/<Project ID>/agent/versions/<Version ID> - projects/<Project ID>/locations/<Location ID>/agent/versions/<Version ID>

Authorization requires the following IAM permission on the specified resource version:

  • dialogflow.versions.update
update_mask

FieldMask

Required. The mask to control which fields get updated.

ValidationError

Represents a single validation error.

Fields
severity

Severity

The severity of the error.

entries[]

string

The names of the entries that the error is associated with. Format:

  • projects/<Project ID>/agent, if the error is associated with the entire agent.
  • projects/<Project ID>/agent/intents/<Intent ID>, if the error is associated with certain intents.
  • projects/<Project ID>/agent/intents/<Intent Id>/trainingPhrases/<Training Phrase ID>, if the error is associated with certain intent training phrases.
  • projects/<Project ID>/agent/intents/<Intent Id>/parameters/<Parameter ID>, if the error is associated with certain intent parameters.
  • projects/<Project ID>/agent/entities/<Entity ID>, if the error is associated with certain entities.
error_message

string

The detailed error message.

Severity

Represents a level of severity.

Enums
SEVERITY_UNSPECIFIED Not specified. This value should never be used.
INFO The agent doesn't follow Dialogflow best practices.
WARNING The agent may not behave as expected.
ERROR The agent may experience partial failures.
CRITICAL The agent may completely fail.

ValidationResult

Represents the output of agent validation.

Fields
validation_errors[]

ValidationError

Contains all validation errors.
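
The sketch below (assuming the google-cloud-dialogflow Python client library; the project ID is a placeholder) retrieves a validation result and walks its errors:

    from google.cloud import dialogflow_v2beta1

    client = dialogflow_v2beta1.AgentsClient()

    result = client.get_validation_result(
        request={"parent": "projects/my-project", "language_code": "en"}
    )
    for error in result.validation_errors:
        # severity is an enum; ERROR and CRITICAL entries usually need
        # attention before publishing the agent.
        print(error.severity.name, error.error_message)
        for entry in error.entries:
            print("  affects:", entry)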

Version

You can create multiple versions of your agent and publish them to separate environments.

When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent.

When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for:

  • testing
  • development
  • production
  • etc.

For more information, see the versions and environments guide.

Fields
name

string

Output only. The unique identifier of this agent version. Supported formats:

  • projects/<Project ID>/agent/versions/<Version ID>
  • projects/<Project ID>/locations/<Location ID>/agent/versions/<Version ID>

description

string

Optional. The developer-provided description of this version.

version_number

int32

Output only. The sequential number of this version. This field is read-only, which means it cannot be set by create and update methods.

create_time

Timestamp

Output only. The creation time of this version. This field is read-only, i.e., it cannot be set by create and update methods.

status

VersionStatus

Output only. The status of this version. This field is read-only and cannot be set by create and update methods.

VersionStatus

The status of a version.

Enums
VERSION_STATUS_UNSPECIFIED Not specified. This value is not used.
IN_PROGRESS Version is not ready to serve (e.g. training is in progress).
READY Version is ready to serve.
FAILED Version training failed.

VoiceSelectionParams

Description of which voice to use for speech synthesis.

Fields
name

string

Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and ssml_gender.

For the list of available voices, please refer to Supported voices and languages.

ssml_gender

SsmlVoiceGender

Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.
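
For illustration (assuming the google-cloud-dialogflow Python client library; the session, voice name, and query text are placeholders), voice selection is supplied through the SynthesizeSpeechConfig of the OutputAudioConfig sent with DetectIntent:

    from google.cloud import dialogflow_v2beta1

    voice = dialogflow_v2beta1.VoiceSelectionParams(
        name="en-US-Standard-C",  # placeholder voice name
        ssml_gender=dialogflow_v2beta1.SsmlVoiceGender.SSML_VOICE_GENDER_FEMALE,
    )
    audio_config = dialogflow_v2beta1.OutputAudioConfig(
        audio_encoding=dialogflow_v2beta1.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16,
        synthesize_speech_config=dialogflow_v2beta1.SynthesizeSpeechConfig(voice=voice),
    )

    session_client = dialogflow_v2beta1.SessionsClient()
    response = session_client.detect_intent(
        request={
            "session": "projects/my-project/agent/sessions/SESSION_ID",
            "query_input": dialogflow_v2beta1.QueryInput(
                text=dialogflow_v2beta1.TextInput(text="hello", language_code="en")
            ),
            "output_audio_config": audio_config,
        }
    )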

WebhookRequest

The request message for a webhook call.

Fields
session

string

The unique identifier of the DetectIntent request session. Can be used to identify the end-user inside the webhook implementation. Supported formats:

  • projects/<Project ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/sessions/<Session ID>
  • projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>
  • projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>

response_id

string

The unique identifier of the response. Contains the same value as [Streaming]DetectIntentResponse.response_id.

query_result

QueryResult

The result of the conversational query or event processing. Contains the same value as [Streaming]DetectIntentResponse.query_result.

alternative_query_results[]

QueryResult

Alternative query results from KnowledgeService.

original_detect_intent_request

OriginalDetectIntentRequest

Optional. The contents of the original request that was passed to [Streaming]DetectIntent call.

WebhookResponse

The response message for a webhook call.

This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error:

  • Use "" for empty strings
  • Use {} or null for empty objects
  • Use [] or null for empty arrays

For more information, see the Protocol Buffers Language Guide.
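
As a hedged illustration of these rules, the snippet below builds the Python dictionary a webhook might serialize to JSON (field names follow the proto JSON mapping; the values are placeholders):

    import json

    response_body = {
        "fulfillmentText": "",   # empty string
        "payload": {},           # empty object (null is also accepted)
        "outputContexts": [],    # empty array (null is also accepted)
    }
    # Never mix types, e.g. do not send "" where an object or array is expected.
    print(json.dumps(response_body))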

Fields
fulfillment_text

string

Optional. The text response message intended for the end-user. It is recommended to use fulfillment_messages.text.text[0] instead. When provided, Dialogflow uses this field to populate QueryResult.fulfillment_text sent to the integration or API caller.

fulfillment_messages[]

Message

Optional. The rich response messages intended for the end-user. When provided, Dialogflow uses this field to populate QueryResult.fulfillment_messages sent to the integration or API caller.

source

string

Optional. A custom field used to identify the webhook source. Arbitrary strings are supported. When provided, Dialogflow uses this field to populate QueryResult.webhook_source sent to the integration or API caller.

payload

Struct

Optional. This field can be used to pass custom data from your webhook to the integration or API caller. Arbitrary JSON objects are supported. When provided, Dialogflow uses this field to populate QueryResult.webhook_payload sent to the integration or API caller. This field is also used by the Google Assistant integration for rich response messages. See the format definition at Google Assistant Dialogflow webhook format.

output_contexts[]

Context

Optional. The collection of output contexts that will overwrite currently active contexts for the session and reset their lifespans. When provided, Dialogflow uses this field to populate QueryResult.output_contexts sent to the integration or API caller.

followup_event_input

EventInput

Optional. Invokes the supplied events. When this field is set, Dialogflow ignores the fulfillment_text, fulfillment_messages, and payload fields.

live_agent_handoff

bool

Indicates that a live agent should be brought in to handle the interaction with the user. In most cases, when you set this flag to true, you also want to set end_interaction to true. Default is false.

end_interaction

bool

Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false.

session_entity_types[]

SessionEntityType

Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session. Setting this data from a webhook overwrites the session entity types that have been set using detectIntent, streamingDetectIntent or SessionEntityType management methods.
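
To tie these fields together, the following is a minimal sketch of a webhook handler assembling a response as a plain JSON object (field names follow the proto JSON mapping; the source, payload contents, context, and session entity values are invented for the example):

    import json

    def build_webhook_response(session: str) -> str:
        # `session` is the value received in WebhookRequest.session.
        body = {
            "fulfillmentText": "Your table is booked.",
            "source": "example-booking-backend",
            "payload": {"bookingId": "12345"},
            "outputContexts": [
                {
                    "name": f"{session}/contexts/booking-followup",
                    "lifespanCount": 2,
                    "parameters": {"partySize": 4},
                }
            ],
            "sessionEntityTypes": [
                {
                    "name": f"{session}/entityTypes/table-location",
                    "entityOverrideMode": "ENTITY_OVERRIDE_MODE_OVERRIDE",
                    "entities": [
                        {"value": "patio", "synonyms": ["patio", "outside"]}
                    ],
                }
            ],
            "endInteraction": False,
        }
        return json.dumps(body)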