Package types (2.28.3)

API documentation for dialogflow_v2.types package.

Classes

Agent

A Dialogflow agent is a virtual agent that handles conversations with your end-users. It is a natural language understanding module that understands the nuances of human language. Dialogflow translates end-user text or audio during a conversation to structured data that your apps and services can understand. You design and build a Dialogflow agent to handle the types of conversations required for your system.

For more information about agents, see the Agent guide <https://cloud.google.com/dialogflow/docs/agents-overview>__.

AgentAssistantFeedback

Detailed feedback on the Agent Assist result.

AgentAssistantRecord

Represents a record of a human agent assist answer.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

AnalyzeContentRequest

The request message for Participants.AnalyzeContent.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
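
As a rough illustration (the project, conversation, and participant IDs below are placeholders), a request can be sent through ParticipantsClient.analyze_content with a text_input; text_input and event_input belong to the same oneof, so only one of them can be set:

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.ParticipantsClient()
    # Placeholder resource name of an existing conversation participant.
    participant = (
        "projects/my-project/conversations/my-conversation-id/participants/my-participant-id"
    )
    response = client.analyze_content(
        request={
            "participant": participant,
            "text_input": dialogflow.TextInput(
                text="I'd like to check my order", language_code="en-US"
            ),
        }
    )
    print(response.reply_text)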

AnalyzeContentResponse

The response message for Participants.AnalyzeContent.

AnnotatedMessagePart

Represents a part of a message possibly annotated with an entity. The part can be an entity or purely a part of the message between two entities or message start/end.

AnswerFeedback

Represents feedback the customer has about the quality & correctness of a certain answer in a conversation.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

AnswerRecord

Answer records are records used to manage answer history and feedback for Dialogflow.

Currently, answer record includes:

  • human agent assistant article suggestion
  • human agent assistant faq article

It doesn't include:

  • DetectIntent intent matching
  • DetectIntent knowledge

Answer records are not related to the conversation history in the Dialogflow Console. A record is generated even when the end-user disables conversation history in the console. Records are created whenever a human agent assistant suggestion is generated.

A typical workflow for customers to provide feedback on an answer is:

  1. For human agent assistant, customers get suggestions via the ListSuggestions API. Together with the answers, AnswerRecord.name values are returned to the customers.
  2. The customer uses the AnswerRecord.name to call the UpdateAnswerRecord method to send feedback about a specific answer that they believe is wrong, as sketched below.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
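
A minimal sketch of step 2, assuming the answer record name below is one returned earlier together with a suggestion and that AnswerFeedback.CorrectnessLevel is the feedback field being set:

::

    from google.cloud import dialogflow_v2 as dialogflow
    from google.protobuf import field_mask_pb2

    # Placeholder answer record name returned together with a suggestion.
    answer_record_name = "projects/my-project/answerRecords/my-answer-record-id"

    client = dialogflow.AnswerRecordsClient()
    answer_record = dialogflow.AnswerRecord(
        name=answer_record_name,
        answer_feedback=dialogflow.AnswerFeedback(
            correctness_level=dialogflow.AnswerFeedback.CorrectnessLevel.NOT_CORRECT
        ),
    )
    client.update_answer_record(
        request={
            "answer_record": answer_record,
            "update_mask": field_mask_pb2.FieldMask(paths=["answer_feedback"]),
        }
    )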

ArticleAnswer

Represents an article answer.

ArticleSuggestionModelMetadata

Metadata for article suggestion models.

AssistQueryParameters

Represents the parameters of a human assist query.

AudioEncoding

Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation <https://cloud.google.com/speech-to-text/docs/basics>__ for more details.

Values:

  • AUDIO_ENCODING_UNSPECIFIED (0): Not specified.
  • AUDIO_ENCODING_LINEAR_16 (1): Uncompressed 16-bit signed little-endian samples (Linear PCM).
  • AUDIO_ENCODING_FLAC (2): FLAC <https://xiph.org/flac/documentation.html>__ (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in STREAMINFO are supported.
  • AUDIO_ENCODING_MULAW (3): 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
  • AUDIO_ENCODING_AMR (4): Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000.
  • AUDIO_ENCODING_AMR_WB (5): Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000.
  • AUDIO_ENCODING_OGG_OPUS (6): Opus encoded audio frames in Ogg container (OggOpus <https://wiki.xiph.org/OggOpus>__). sample_rate_hertz must be 16000.
  • AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE (7): Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex <https://speex.org/>__ encoding supported by Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574 <https://tools.ietf.org/html/rfc5574>__. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sample_rate_hertz must be 16000.
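
For example, a minimal sketch of selecting LINEAR16 audio at 16 kHz in an InputAudioConfig (project-specific values omitted):

::

    from google.cloud import dialogflow_v2 as dialogflow

    # Configure uncompressed 16-bit PCM audio sampled at 16 kHz.
    audio_config = dialogflow.InputAudioConfig(
        audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    query_input = dialogflow.QueryInput(audio_config=audio_config)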

AutomatedAgentConfig

Defines the Automated Agent to connect to a conversation.

AutomatedAgentReply

Represents a response from an automated agent.

BatchCreateEntitiesRequest

The request message for EntityTypes.BatchCreateEntities.

BatchDeleteEntitiesRequest

The request message for EntityTypes.BatchDeleteEntities.

BatchDeleteEntityTypesRequest

The request message for EntityTypes.BatchDeleteEntityTypes.

BatchDeleteIntentsRequest

The request message for Intents.BatchDeleteIntents.

BatchUpdateEntitiesRequest

The request message for EntityTypes.BatchUpdateEntities.

BatchUpdateEntityTypesRequest

The request message for EntityTypes.BatchUpdateEntityTypes.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

BatchUpdateEntityTypesResponse

The response message for EntityTypes.BatchUpdateEntityTypes.

BatchUpdateIntentsRequest

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

BatchUpdateIntentsResponse

The response message for Intents.BatchUpdateIntents.

ClearSuggestionFeatureConfigOperationMetadata

Metadata for a ConversationProfile.ClearSuggestionFeatureConfig operation.

ClearSuggestionFeatureConfigRequest

The request message for ConversationProfiles.ClearFeature.

CloudConversationDebuggingInfo

Cloud conversation info for easier debugging. It will get populated in StreamingDetectIntentResponse or StreamingAnalyzeContentResponse when the flag enable_debugging_info is set to true in corresponding requests.

CompleteConversationRequest

The request message for Conversations.CompleteConversation.

Context

Dialogflow contexts are similar to natural language context. If a person says to you "they are orange", you need context in order to understand what "they" is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent.

Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts.

For more information about context, see the Contexts guide <https://cloud.google.com/dialogflow/docs/contexts-overview>__.
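
A minimal sketch of activating a context for a session by creating it directly; the project, session, and context IDs are placeholders:

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.ContextsClient()
    session = dialogflow.SessionsClient.session_path("my-project", "my-session-id")
    context = dialogflow.Context(
        name=client.context_path("my-project", "my-session-id", "deals-followup"),
        lifespan_count=5,  # stay active for at most 5 conversational turns
    )
    created = client.create_context(request={"parent": session, "context": context})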

Conversation

Represents a conversation. A conversation is an interaction between an agent, including live agents and Dialogflow agents, and a support customer. Conversations can include phone calls and text-based chat sessions.

ConversationDataset

Represents a conversation dataset that a user imports raw data into. The data inside ConversationDataset cannot be changed after ImportConversationData finishes (and calling ImportConversationData on a dataset that already has data is not allowed).

ConversationEvent

Represents a notification sent to Pub/Sub subscribers for conversation lifecycle events.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ConversationInfo

Represents metadata of a conversation.

ConversationModel

Represents a conversation model.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ConversationModelEvaluation

Represents evaluation result of a conversation model.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ConversationPhoneNumber

Represents a phone number for telephony integration. It allows for connecting a particular conversation over telephony.

ConversationProfile

Defines the services to connect to incoming Dialogflow conversations.

CreateContextRequest

The request message for Contexts.CreateContext.

CreateConversationDatasetOperationMetadata

Metadata for ConversationDatasets.CreateConversationDataset.

CreateConversationDatasetRequest

The request message for ConversationDatasets.CreateConversationDataset.

CreateConversationModelEvaluationOperationMetadata

Metadata for a ConversationModels.CreateConversationModelEvaluation operation.

CreateConversationModelEvaluationRequest

The request message for ConversationModels.CreateConversationModelEvaluation

CreateConversationModelOperationMetadata

Metadata for a ConversationModels.CreateConversationModel operation.

CreateConversationModelRequest

The request message for ConversationModels.CreateConversationModel

CreateConversationProfileRequest

The request message for ConversationProfiles.CreateConversationProfile.

CreateConversationRequest

The request message for Conversations.CreateConversation.

CreateDocumentRequest

Request message for Documents.CreateDocument.

CreateEntityTypeRequest

The request message for EntityTypes.CreateEntityType.

CreateEnvironmentRequest

The request message for Environments.CreateEnvironment.

CreateIntentRequest

The request message for Intents.CreateIntent.

CreateKnowledgeBaseRequest

Request message for KnowledgeBases.CreateKnowledgeBase.

CreateParticipantRequest

The request message for Participants.CreateParticipant.

CreateSessionEntityTypeRequest

The request message for SessionEntityTypes.CreateSessionEntityType.

CreateVersionRequest

The request message for Versions.CreateVersion.

DeleteAgentRequest

The request message for Agents.DeleteAgent.

DeleteAllContextsRequest

The request message for Contexts.DeleteAllContexts.

DeleteContextRequest

The request message for Contexts.DeleteContext.

DeleteConversationDatasetOperationMetadata

Metadata for ConversationDatasets.DeleteConversationDataset.

DeleteConversationDatasetRequest

The request message for ConversationDatasets.DeleteConversationDataset.

DeleteConversationModelOperationMetadata

Metadata for a ConversationModels.DeleteConversationModel operation.

DeleteConversationModelRequest

The request message for ConversationModels.DeleteConversationModel

DeleteConversationProfileRequest

The request message for ConversationProfiles.DeleteConversationProfile.

This operation fails if the conversation profile is still referenced from a phone number.

DeleteDocumentRequest

Request message for Documents.DeleteDocument.

DeleteEntityTypeRequest

The request message for EntityTypes.DeleteEntityType.

DeleteEnvironmentRequest

The request message for Environments.DeleteEnvironment.

DeleteIntentRequest

The request message for Intents.DeleteIntent.

DeleteKnowledgeBaseRequest

Request message for KnowledgeBases.DeleteKnowledgeBase.

DeleteSessionEntityTypeRequest

The request message for SessionEntityTypes.DeleteSessionEntityType.

DeleteVersionRequest

The request message for Versions.DeleteVersion.

DeployConversationModelOperationMetadata

Metadata for a ConversationModels.DeployConversationModel operation.

DeployConversationModelRequest

The request message for ConversationModels.DeployConversationModel

DetectIntentRequest

The request to detect a user's intent.

DetectIntentResponse

The message returned from the DetectIntent method.

DialogflowAssistAnswer

Represents a Dialogflow assist answer.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

Document

A knowledge document to be used by a KnowledgeBase.

For more information, see the knowledge base guide <https://cloud.google.com/dialogflow/docs/how/knowledge-bases>__.

Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
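
A minimal sketch of adding a document to an existing knowledge base; the knowledge base name and content URL are placeholders, and CreateDocument is a long-running operation:

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.DocumentsClient()
    # Placeholder knowledge base resource name.
    parent = "projects/my-project/knowledgeBases/my-knowledge-base-id"
    document = dialogflow.Document(
        display_name="FAQ",
        mime_type="text/html",
        knowledge_types=[dialogflow.Document.KnowledgeType.FAQ],
        content_uri="https://example.com/faq.html",  # hypothetical public URL
    )
    operation = client.create_document(request={"parent": parent, "document": document})
    created = operation.result()  # wait for the long-running operation to finish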

DtmfParameters

The message in the response that indicates the parameters of DTMF.

EntityType

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted.

Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent.

For more information, see the Entity guide <https://cloud.google.com/dialogflow/docs/entities-overview>__.
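
A minimal sketch of the vegetable example above, with my-project standing in for a real project ID:

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.EntityTypesClient()
    parent = dialogflow.AgentsClient.agent_path("my-project")
    entity_type = dialogflow.EntityType(
        display_name="vegetable",
        kind=dialogflow.EntityType.Kind.KIND_MAP,
        entities=[
            dialogflow.EntityType.Entity(value="carrot", synonyms=["carrot", "carrots"]),
            dialogflow.EntityType.Entity(value="kale", synonyms=["kale", "leaf cabbage"]),
        ],
    )
    created = client.create_entity_type(
        request={"parent": parent, "entity_type": entity_type}
    )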

EntityTypeBatch

This message is a wrapper around a collection of entity types.

Environment

You can create multiple versions of your agent and publish them to separate environments.

When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent.

When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for:

  • testing
  • development
  • production
  • etc.

For more information, see the versions and environments guide <https://cloud.google.com/dialogflow/docs/agents-versions>__.
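
A minimal sketch of saving the draft agent as a version and publishing it to a custom environment; the project ID and environment ID are placeholders:

::

    from google.cloud import dialogflow_v2 as dialogflow

    agent = dialogflow.AgentsClient.agent_path("my-project")

    versions_client = dialogflow.VersionsClient()
    version = versions_client.create_version(
        request={
            "parent": agent,
            "version": dialogflow.Version(description="Snapshot of the draft agent"),
        }
    )

    environments_client = dialogflow.EnvironmentsClient()
    environment = environments_client.create_environment(
        request={
            "parent": agent,
            "environment": dialogflow.Environment(agent_version=version.name),
            "environment_id": "testing",
        }
    )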

EnvironmentHistory

The response message for Environments.GetEnvironmentHistory.

EvaluationConfig

The configuration for model evaluation.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

EventInput

Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".
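
A minimal sketch of triggering the welcome_event above through Sessions.DetectIntent (project and session IDs are placeholders; event parameters are omitted for brevity):

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.SessionsClient()
    session = client.session_path("my-project", "my-session-id")
    event_input = dialogflow.EventInput(name="welcome_event", language_code="en-US")
    response = client.detect_intent(
        request={
            "session": session,
            "query_input": dialogflow.QueryInput(event=event_input),
        }
    )
    print(response.query_result.fulfillment_text)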

ExportAgentRequest

The request message for Agents.ExportAgent.

ExportAgentResponse

The response message for Agents.ExportAgent.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ExportDocumentRequest

Request message for Documents.ExportDocument.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ExportOperationMetadata

Metadata related to the Export Data Operations (e.g. ExportDocument).

FaqAnswer

Represents an answer from "frequently asked questions".

Fulfillment

By default, your agent responds to a matched intent with a static response. As an alternative, you can provide a more dynamic response by using fulfillment. When you enable fulfillment for an intent, Dialogflow responds to that intent by calling a service that you define. For example, if an end-user wants to schedule a haircut on Friday, your service can check your database and respond to the end-user with availability information for Friday.

For more information, see the fulfillment guide <https://cloud.google.com/dialogflow/docs/fulfillment-overview>__.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
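
A rough sketch only, assuming the agent-level fulfillment resource name projects/<project>/agent/fulfillment and a hypothetical webhook endpoint; it enables fulfillment through Fulfillments.UpdateFulfillment:

::

    from google.cloud import dialogflow_v2 as dialogflow
    from google.protobuf import field_mask_pb2

    client = dialogflow.FulfillmentsClient()
    fulfillment = dialogflow.Fulfillment(
        name="projects/my-project/agent/fulfillment",  # placeholder project ID
        enabled=True,
        generic_web_service=dialogflow.Fulfillment.GenericWebService(
            uri="https://example.com/dialogflow-webhook",  # hypothetical endpoint
        ),
    )
    updated = client.update_fulfillment(
        request={
            "fulfillment": fulfillment,
            "update_mask": field_mask_pb2.FieldMask(
                paths=["enabled", "generic_web_service"]
            ),
        }
    )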

GcsDestination

Google Cloud Storage location for the output.

GcsSources

Google Cloud Storage location for the inputs.

GenerateStatelessSummaryRequest

The request message for Conversations.GenerateStatelessSummary.

GenerateStatelessSummaryResponse

The response message for Conversations.GenerateStatelessSummary.

GetAgentRequest

The request message for Agents.GetAgent.

GetContextRequest

The request message for Contexts.GetContext.

GetConversationDatasetRequest

The request message for ConversationDatasets.GetConversationDataset.

GetConversationModelEvaluationRequest

The request message for ConversationModels.GetConversationModelEvaluation

GetConversationModelRequest

The request message for ConversationModels.GetConversationModel

GetConversationProfileRequest

The request message for ConversationProfiles.GetConversationProfile.

GetConversationRequest

The request message for Conversations.GetConversation.

GetDocumentRequest

Request message for Documents.GetDocument.

GetEntityTypeRequest

The request message for EntityTypes.GetEntityType.

GetEnvironmentHistoryRequest

The request message for Environments.GetEnvironmentHistory.

GetEnvironmentRequest

The request message for Environments.GetEnvironment.

GetFulfillmentRequest

The request message for Fulfillments.GetFulfillment.

GetIntentRequest

The request message for Intents.GetIntent.

GetKnowledgeBaseRequest

Request message for KnowledgeBases.GetKnowledgeBase.

GetParticipantRequest

The request message for Participants.GetParticipant.

GetSessionEntityTypeRequest

The request message for SessionEntityTypes.GetSessionEntityType.

GetValidationResultRequest

The request message for Agents.GetValidationResult.

GetVersionRequest

The request message for Versions.GetVersion.

HumanAgentAssistantConfig

Defines the Human Agent Assist to connect to a conversation.

HumanAgentAssistantEvent

Represents a notification sent to Cloud Pub/Sub subscribers for human agent assistant events in a specific conversation.

HumanAgentHandoffConfig

Defines the handoff to a live agent, typically specifying which external agent service provider to connect to a conversation.

Currently, this feature is not generally available; please contact Google to get access.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ImportAgentRequest

The request message for Agents.ImportAgent.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ImportConversationDataOperationMetadata

Metadata for a ConversationDatasets.ImportConversationData operation.

ImportConversationDataOperationResponse

Response used for the ConversationDatasets.ImportConversationData long-running operation.

ImportConversationDataRequest

The request message for ConversationDatasets.ImportConversationData.

ImportDocumentTemplate

The template used for importing documents.

ImportDocumentsRequest

Request message for Documents.ImportDocuments.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ImportDocumentsResponse

Response message for Documents.ImportDocuments.

InputAudioConfig

Instructs the speech recognizer how to process the audio content.

InputConfig

Represents the configuration of importing a set of conversation files in Google Cloud Storage.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

InputDataset

InputDataset used to create a model or do evaluation.

InputTextConfig

Defines the language used in the input text.

Intent

An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification.

For more information, see the intent guide <https://cloud.google.com/dialogflow/docs/intents-overview>__.
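
A minimal sketch of creating a simple intent with one training phrase and one text response; the display name and phrases are illustrative, and my-project is a placeholder:

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.IntentsClient()
    parent = dialogflow.AgentsClient.agent_path("my-project")
    intent = dialogflow.Intent(
        display_name="book.haircut",
        training_phrases=[
            dialogflow.Intent.TrainingPhrase(
                parts=[
                    dialogflow.Intent.TrainingPhrase.Part(
                        text="I'd like a haircut on Friday"
                    )
                ]
            )
        ],
        messages=[
            dialogflow.Intent.Message(
                text=dialogflow.Intent.Message.Text(
                    text=["What time on Friday works for you?"]
                )
            )
        ],
    )
    created = client.create_intent(request={"parent": parent, "intent": intent})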

IntentBatch

This message is a wrapper around a collection of intents.

IntentSuggestion

Represents an intent suggestion.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

IntentView

Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.

Values: INTENT_VIEW_UNSPECIFIED (0): Training phrases field is not populated in the response. INTENT_VIEW_FULL (1): All fields are populated.
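
For example, a sketch of requesting the full view when listing intents so that training phrases are included (my-project is a placeholder):

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.IntentsClient()
    parent = dialogflow.AgentsClient.agent_path("my-project")
    for intent in client.list_intents(
        request={"parent": parent, "intent_view": dialogflow.IntentView.INTENT_VIEW_FULL}
    ):
        print(intent.display_name, len(intent.training_phrases))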

KnowledgeBase

A knowledge base represents a collection of knowledge documents that you provide to Dialogflow. Your knowledge documents contain information that may be useful during conversations with end-users. Some Dialogflow features use knowledge bases when looking for a response to an end-user input.

For more information, see the knowledge base guide <https://cloud.google.com/dialogflow/docs/how/knowledge-bases>__.

Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.

KnowledgeOperationMetadata

Metadata in google::longrunning::Operation for Knowledge operations.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ListAnswerRecordsRequest

Request message for AnswerRecords.ListAnswerRecords.

ListAnswerRecordsResponse

Response message for AnswerRecords.ListAnswerRecords.

ListContextsRequest

The request message for Contexts.ListContexts.

ListContextsResponse

The response message for Contexts.ListContexts.

ListConversationDatasetsRequest

The request message for ConversationDatasets.ListConversationDatasets.

ListConversationDatasetsResponse

The response message for ConversationDatasets.ListConversationDatasets.

ListConversationModelEvaluationsRequest

The request message for ConversationModels.ListConversationModelEvaluations

ListConversationModelEvaluationsResponse

The response message for ConversationModels.ListConversationModelEvaluations

ListConversationModelsRequest

The request message for ConversationModels.ListConversationModels

ListConversationModelsResponse

The response message for ConversationModels.ListConversationModels

ListConversationProfilesRequest

The request message for ConversationProfiles.ListConversationProfiles.

ListConversationProfilesResponse

The response message for ConversationProfiles.ListConversationProfiles.

ListConversationsRequest

The request message for Conversations.ListConversations.

ListConversationsResponse

The response message for Conversations.ListConversations.

ListDocumentsRequest

Request message for Documents.ListDocuments.

ListDocumentsResponse

Response message for Documents.ListDocuments.

ListEntityTypesRequest

The request message for EntityTypes.ListEntityTypes.

ListEntityTypesResponse

The response message for EntityTypes.ListEntityTypes.

ListEnvironmentsRequest

The request message for Environments.ListEnvironments.

ListEnvironmentsResponse

The response message for Environments.ListEnvironments.

ListIntentsRequest

The request message for Intents.ListIntents.

ListIntentsResponse

The response message for Intents.ListIntents.

ListKnowledgeBasesRequest

Request message for KnowledgeBases.ListKnowledgeBases.

ListKnowledgeBasesResponse

Response message for KnowledgeBases.ListKnowledgeBases.

ListMessagesRequest

The request message for Conversations.ListMessages.

ListMessagesResponse

The response message for Conversations.ListMessages.

ListParticipantsRequest

The request message for Participants.ListParticipants.

ListParticipantsResponse

The response message for Participants.ListParticipants.

ListSessionEntityTypesRequest

The request message for SessionEntityTypes.ListSessionEntityTypes.

ListSessionEntityTypesResponse

The response message for SessionEntityTypes.ListSessionEntityTypes.

ListVersionsRequest

The request message for Versions.ListVersions.

ListVersionsResponse

The response message for Versions.ListVersions.

LoggingConfig

Defines logging behavior for conversation lifecycle events.

Message

Represents a message posted into a conversation.

MessageAnnotation

Represents the result of annotation for the message.

NotificationConfig

Defines notification behavior.

OriginalDetectIntentRequest

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

OutputAudio

Represents the natural language speech audio to be played to the end user.

OutputAudioConfig

Instructs the speech synthesizer on how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.

OutputAudioEncoding

Audio encoding of the output audio format in Text-To-Speech.

Values:

  • OUTPUT_AUDIO_ENCODING_UNSPECIFIED (0): Not specified.
  • OUTPUT_AUDIO_ENCODING_LINEAR_16 (1): Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header.
  • OUTPUT_AUDIO_ENCODING_MP3 (2): MP3 audio at 32kbps.
  • OUTPUT_AUDIO_ENCODING_MP3_64_KBPS (4): MP3 audio at 64kbps.
  • OUTPUT_AUDIO_ENCODING_OGG_OPUS (3): Opus encoded audio wrapped in an ogg container. The result will be a file which can be played natively on Android, and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate.
  • OUTPUT_AUDIO_ENCODING_MULAW (5): 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
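
A minimal sketch of requesting an MP3 audio reply from DetectIntent by supplying an OutputAudioConfig (project and session IDs are placeholders):

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.SessionsClient()
    session = client.session_path("my-project", "my-session-id")
    response = client.detect_intent(
        request={
            "session": session,
            "query_input": dialogflow.QueryInput(
                text=dialogflow.TextInput(text="hi", language_code="en-US")
            ),
            "output_audio_config": dialogflow.OutputAudioConfig(
                audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_MP3,
                synthesize_speech_config=dialogflow.SynthesizeSpeechConfig(
                    voice=dialogflow.VoiceSelectionParams(
                        ssml_gender=dialogflow.SsmlVoiceGender.SSML_VOICE_GENDER_FEMALE
                    )
                ),
            ),
        }
    )
    with open("reply.mp3", "wb") as out:
        out.write(response.output_audio)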

Participant

Represents a conversation participant (human agent, virtual agent, end-user).

QueryInput

Represents the query input. It can contain either:

  1. An audio config which instructs the speech recognizer how to process the speech audio.

  2. A conversational query in the form of text.

  3. An event that specifies which intent to trigger.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
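
A minimal sketch showing the text form and the oneof behavior: text, audio_config and event are mutually exclusive, so setting one member clears the others:

::

    from google.cloud import dialogflow_v2 as dialogflow

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text="book a haircut", language_code="en-US")
    )
    # Assigning another member of the oneof clears the text input.
    query_input.event = dialogflow.EventInput(name="welcome_event", language_code="en-US")
    print(query_input.text.text)  # prints an empty string: the text member was cleared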

QueryParameters

Represents the parameters of the conversational query.

QueryResult

Represents the result of conversational query or event processing.

ReloadDocumentRequest

Request message for Documents.ReloadDocument.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

RestoreAgentRequest

The request message for Agents.RestoreAgent.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

SearchAgentsRequest

The request message for Agents.SearchAgents.

SearchAgentsResponse

The response message for Agents.SearchAgents.

SearchKnowledgeAnswer

Represents a SearchKnowledge answer.

SearchKnowledgeRequest

The request message for Conversations.SearchKnowledge.

SearchKnowledgeResponse

The response message for Conversations.SearchKnowledge.

Sentiment

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text. See: https://cloud.google.com/natural-language/docs/basics#interpreting_sentiment_analysis_values for how to interpret the result.

SentimentAnalysisRequestConfig

Configures the types of sentiment analysis to perform.

SentimentAnalysisResult

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config.
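
A minimal sketch of enabling sentiment analysis for a detect-intent call through QueryParameters (project and session IDs are placeholders):

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.SessionsClient()
    session = client.session_path("my-project", "my-session-id")
    response = client.detect_intent(
        request={
            "session": session,
            "query_input": dialogflow.QueryInput(
                text=dialogflow.TextInput(text="I love this!", language_code="en-US")
            ),
            "query_params": dialogflow.QueryParameters(
                sentiment_analysis_request_config=dialogflow.SentimentAnalysisRequestConfig(
                    analyze_query_text_sentiment=True
                )
            ),
        }
    )
    sentiment = response.query_result.sentiment_analysis_result.query_text_sentiment
    print(sentiment.score, sentiment.magnitude)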

SessionEntityType

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes.

For more information, see the session entity guide <https://cloud.google.com/dialogflow/docs/entities-session>__.
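
A minimal sketch of overriding an existing custom entity type (here assumed to be named vegetable) for a single session; project and session IDs are placeholders:

::

    from google.cloud import dialogflow_v2 as dialogflow

    client = dialogflow.SessionEntityTypesClient()
    session = dialogflow.SessionsClient.session_path("my-project", "my-session-id")
    session_entity_type = dialogflow.SessionEntityType(
        name=f"{session}/entityTypes/vegetable",
        entity_override_mode=(
            dialogflow.SessionEntityType.EntityOverrideMode.ENTITY_OVERRIDE_MODE_OVERRIDE
        ),
        entities=[
            dialogflow.EntityType.Entity(value="romanesco", synonyms=["romanesco"])
        ],
    )
    created = client.create_session_entity_type(
        request={"parent": session, "session_entity_type": session_entity_type}
    )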

SetAgentRequest

The request message for Agents.SetAgent.

SetSuggestionFeatureConfigOperationMetadata

Metadata for a ConversationProfile.SetSuggestionFeatureConfig operation.

SetSuggestionFeatureConfigRequest

The request message for ConversationProfiles.SetSuggestionFeature.

SmartReplyAnswer

Represents a smart reply answer.

SmartReplyMetrics

The evaluation metrics for a smart reply model.

SmartReplyModelMetadata

Metadata for smart reply models.

SpeechContext

Hints for the speech recognizer to help with recognition in a specific conversation state.

SpeechModelVariant

Variant of the specified Speech model (InputAudioConfig.model) to use.

See the Cloud Speech documentation <https://cloud.google.com/speech-to-text/docs/enhanced-models>__ for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.

Values:

  • SPEECH_MODEL_VARIANT_UNSPECIFIED (0): No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE.
  • USE_BEST_AVAILABLE (1): Use the best available variant of the Speech model that the caller is eligible for. Please see the Dialogflow docs <https://cloud.google.com/dialogflow/docs/data-logging>__ for how to make your project eligible for enhanced models.
  • USE_STANDARD (2): Use standard model variant even if an enhanced model is available. See the Cloud Speech documentation <https://cloud.google.com/speech-to-text/docs/enhanced-models>__ for details about enhanced models.
  • USE_ENHANCED (3): Use an enhanced model variant:

    • If an enhanced variant does not exist for the given model and request language, Dialogflow falls back to the standard variant. The Cloud Speech documentation <https://cloud.google.com/speech-to-text/docs/enhanced-models>__ describes which models have enhanced variants.
    • If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the Dialogflow docs <https://cloud.google.com/dialogflow/docs/data-logging>__ for how to make your project eligible.
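
For example, a sketch of requesting the enhanced variant (together with a speech context hint) in an InputAudioConfig:

::

    from google.cloud import dialogflow_v2 as dialogflow

    audio_config = dialogflow.InputAudioConfig(
        audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
        model_variant=dialogflow.SpeechModelVariant.USE_ENHANCED,
        speech_contexts=[
            dialogflow.SpeechContext(phrases=["book a haircut"], boost=10.0)
        ],
    )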

SpeechToTextConfig

Configures speech transcription for ConversationProfile.

SpeechWordInfo

Information for a word recognized by the speech recognizer.

SsmlVoiceGender

Gender of the voice as described in SSML voice element <https://www.w3.org/TR/speech-synthesis11/#edef_voice>__.

Values: SSML_VOICE_GENDER_UNSPECIFIED (0): An unspecified gender, which means that the client doesn't care which gender the selected voice will have. SSML_VOICE_GENDER_MALE (1): A male voice. SSML_VOICE_GENDER_FEMALE (2): A female voice. SSML_VOICE_GENDER_NEUTRAL (3): A gender-neutral voice.

StreamingAnalyzeContentRequest

The top-level message sent by the client to the Participants.StreamingAnalyzeContent method.

Multiple request messages should be sent in order:

  1. The first message must contain participant, config and optionally query_params. If you want to receive an audio response, it should also contain reply_audio_config. The message must not contain input.

  2. If config in the first message was set to audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. However, note that:

    • Dialogflow will bill you for the audio so far.
    • Dialogflow discards all Speech recognition results in favor of the text input.
  3. If StreamingAnalyzeContentRequest.config in the first message was set to StreamingAnalyzeContentRequest.text_config, then the second message must contain only input_text. Moreover, you must not send more than two messages.

After you have sent all input, you must half-close or abort the request stream.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

StreamingAnalyzeContentResponse

The top-level message returned from the StreamingAnalyzeContent method.

Multiple response messages can be returned in order:

  1. If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.

  2. In virtual agent stage: if enable_partial_automated_agent_reply is true, the following N (currently 1 <= N <= 4) messages contain automated_agent_reply and optionally reply_audio returned by the virtual agent. The first (N-1) automated_agent_reply messages will have automated_agent_reply_type set to PARTIAL. The last automated_agent_reply has automated_agent_reply_type set to FINAL. If enable_partial_automated_agent_reply is not enabled, the response stream only contains the final reply.

    In human assist stage: the following N (N >= 1) messages contain human_agent_suggestion_results, end_user_suggestion_results or message.

StreamingDetectIntentRequest

The top-level message sent by the client to the Sessions.StreamingDetectIntent method.

Multiple request messages should be sent in order:

  1. The first message must contain session, query_input plus optionally query_params. If the client wants to receive an audio response, it should also contain output_audio_config. The message must not contain input_audio.

  2. If query_input was set to query_input.audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide instead to detect an intent from text input after you have already started Speech recognition, send a message with query_input.text.

    However, note that:

    • Dialogflow will bill you for the audio duration so far.
    • Dialogflow discards all Speech recognition results in favor of the input text.
    • Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
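
A minimal sketch of this ordering with a request generator; the project and session IDs are placeholders, and audio_chunks stands in for frames read from, e.g., a microphone:

::

    from google.cloud import dialogflow_v2 as dialogflow

    def request_stream(session, audio_chunks):
        # First message: session and query_input with audio_config, no input_audio.
        yield dialogflow.StreamingDetectIntentRequest(
            session=session,
            query_input=dialogflow.QueryInput(
                audio_config=dialogflow.InputAudioConfig(
                    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
                    sample_rate_hertz=16000,
                    language_code="en-US",
                )
            ),
        )
        # Subsequent messages: audio content only.
        for chunk in audio_chunks:
            yield dialogflow.StreamingDetectIntentRequest(input_audio=chunk)

    client = dialogflow.SessionsClient()
    session = client.session_path("my-project", "my-session-id")
    audio_chunks = [b"\x00\x00"]  # placeholder LINEAR16 frames
    responses = list(
        client.streaming_detect_intent(requests=request_stream(session, audio_chunks))
    )
    print(responses[-1].query_result.intent.display_name)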

StreamingDetectIntentResponse

The top-level message returned from the StreamingDetectIntent method.

Multiple response messages can be returned in order:

  1. If the StreamingDetectIntentRequest.input_audio field was set, the recognition_result field is populated for one or more messages. See the StreamingRecognitionResult message for details about the result message sequence.

  2. The next message contains response_id, query_result and optionally webhook_status if a WebHook was called.

StreamingRecognitionResult

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.

While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a transcript value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the is_final value is set to true and processing continues for the next transcript.

If StreamingDetectIntentRequest.query_input.audio_config.single_utterance was true, and the recognizer has completed processing audio, the message_type value is set to END_OF_SINGLE_UTTERANCE and the following (last) result contains the last finalized transcript.

The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results.

In the following example, single utterance is enabled. In the case where single utterance is not enabled, result 7 would not occur.

::

    Num | transcript              | message_type             | is_final
    ----+-------------------------+--------------------------+---------
      1 | "tube"                  | TRANSCRIPT               | false
      2 | "to be a"               | TRANSCRIPT               | false
      3 | "to be"                 | TRANSCRIPT               | false
      4 | "to be or not to be"    | TRANSCRIPT               | true
      5 | "that's"                | TRANSCRIPT               | false
      6 | "that is"               | TRANSCRIPT               | false
      7 | unset                   | END_OF_SINGLE_UTTERANCE  | unset
      8 | " that is the question" | TRANSCRIPT               | true

Concatenating the finalized transcripts with is_final set to true, the complete utterance becomes "to be or not to be that is the question".
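
A small helper sketch that rebuilds the utterance this way from a sequence of streaming responses:

::

    from typing import Iterable
    from google.cloud import dialogflow_v2 as dialogflow

    def full_utterance(
        responses: Iterable[dialogflow.StreamingDetectIntentResponse],
    ) -> str:
        """Concatenate the finalized transcripts of a streaming response series."""
        parts = []
        for response in responses:
            result = response.recognition_result
            if result.is_final and result.transcript:
                parts.append(result.transcript)
        return "".join(parts)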

SuggestArticlesRequest

The request message for Participants.SuggestArticles.

SuggestArticlesResponse

The response message for Participants.SuggestArticles.

SuggestConversationSummaryRequest

The request message for Conversations.SuggestConversationSummary.

SuggestConversationSummaryResponse

The response message for Conversations.SuggestConversationSummary.

SuggestFaqAnswersRequest

The request message for Participants.SuggestFaqAnswers.

SuggestFaqAnswersResponse

The response message for Participants.SuggestFaqAnswers.

SuggestSmartRepliesRequest

The request message for Participants.SuggestSmartReplies.

SuggestSmartRepliesResponse

The response message for Participants.SuggestSmartReplies.

SuggestionFeature

The type of Human Agent Assistant API suggestion to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.

SuggestionInput

Represents the selection of a suggestion.

SuggestionResult

One suggestion response of a particular type, used in the response of Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, as well as in HumanAgentAssistantEvent.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

SynthesizeSpeechConfig

Configuration of how speech should be synthesized.

TelephonyDtmf

DTMF <https://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling>__ digit in Telephony Gateway.

Values: TELEPHONY_DTMF_UNSPECIFIED (0): Not specified. This value may be used to indicate an absent digit. DTMF_ONE (1): Number: '1'. DTMF_TWO (2): Number: '2'. DTMF_THREE (3): Number: '3'. DTMF_FOUR (4): Number: '4'. DTMF_FIVE (5): Number: '5'. DTMF_SIX (6): Number: '6'. DTMF_SEVEN (7): Number: '7'. DTMF_EIGHT (8): Number: '8'. DTMF_NINE (9): Number: '9'. DTMF_ZERO (10): Number: '0'. DTMF_A (11): Letter: 'A'. DTMF_B (12): Letter: 'B'. DTMF_C (13): Letter: 'C'. DTMF_D (14): Letter: 'D'. DTMF_STAR (15): Asterisk/star: '*'. DTMF_POUND (16): Pound/diamond/hash/square/gate/octothorpe: '#'.

TelephonyDtmfEvents

A wrapper of repeated TelephonyDtmf digits.

TextInput

Represents the natural language text to be processed.

TextToSpeechSettings

Instructs the speech synthesizer on how to generate the output audio content.

TrainAgentRequest

The request message for Agents.TrainAgent.

UndeployConversationModelOperationMetadata

Metadata for a ConversationModels.UndeployConversationModel operation.

UndeployConversationModelRequest

The request message for ConversationModels.UndeployConversationModel

UpdateAnswerRecordRequest

Request message for AnswerRecords.UpdateAnswerRecord.

UpdateContextRequest

The request message for Contexts.UpdateContext.

UpdateConversationProfileRequest

The request message for ConversationProfiles.UpdateConversationProfile.

UpdateDocumentRequest

Request message for Documents.UpdateDocument.

UpdateEntityTypeRequest

The request message for EntityTypes.UpdateEntityType.

UpdateEnvironmentRequest

The request message for Environments.UpdateEnvironment.

UpdateFulfillmentRequest

The request message for Fulfillments.UpdateFulfillment.

UpdateIntentRequest

The request message for Intents.UpdateIntent.

UpdateKnowledgeBaseRequest

Request message for KnowledgeBases.UpdateKnowledgeBase.

UpdateParticipantRequest

The request message for Participants.UpdateParticipant.

UpdateSessionEntityTypeRequest

The request message for SessionEntityTypes.UpdateSessionEntityType.

UpdateVersionRequest

The request message for Versions.UpdateVersion.

ValidationError

Represents a single validation error.

ValidationResult

Represents the output of agent validation.

Version

You can create multiple versions of your agent and publish them to separate environments.

When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent.

When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for:

  • testing
  • development
  • production
  • etc.

For more information, see the versions and environments guide <https://cloud.google.com/dialogflow/docs/agents-versions>__.

VoiceSelectionParams

Description of which voice to use for speech synthesis.

WebhookRequest

The request message for a webhook call.

WebhookResponse

The response message for a webhook call.

This response is validated by the Dialogflow server. If validation fails, an error will be returned in the QueryResult.diagnostic_info field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error:

  • Use "" for empty strings
  • Use {} or null for empty objects
  • Use [] or null for empty arrays

For more information, see the Protocol Buffers Language Guide <https://developers.google.com/protocol-buffers/docs/proto3#json>__.
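
A minimal sketch of a webhook handler body that follows these rules; the JSON field names mirror the WebhookResponse proto3 JSON mapping, and the helper function itself is hypothetical:

::

    import json

    def make_webhook_response(fulfillment_text: str) -> str:
        body = {
            "fulfillmentText": fulfillment_text,
            "fulfillmentMessages": [],  # empty array rather than a wrongly typed value
            "source": "",               # empty string
            "payload": {},              # empty object
        }
        return json.dumps(body)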