Module types (1.1.3)

API documentation for the types module.

Classes

Agent

A Dialogflow agent is a virtual agent that handles conversations with your end-users. It is a natural language understanding module that understands the nuances of human language. Dialogflow translates end-user text or audio during a conversation to structured data that your apps and services can understand. You design and build a Dialogflow agent to handle the types of conversations required for your system. For more information about agents, see the Agent guide <https://cloud.google.com/dialogflow/docs/agents-overview>__.

Required. The name of this agent.

Optional. The list of all languages supported by this agent (except for the default_language_code).

Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.

Optional. Determines whether this agent should log conversation queries.

Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.

Optional. The agent tier. If not specified, TIER_STANDARD is assumed.
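
These agent-level settings are applied when the agent is saved. As a minimal sketch (not an official sample), assuming the pre-2.0 Python client surface (import dialogflow_v2) and a hypothetical project ID, an agent with an explicit classification_threshold could be saved with Agents.SetAgent like this::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"  # hypothetical project ID

    agents_client = dialogflow.AgentsClient()
    parent = agents_client.project_path(project_id)

    # Scores below classification_threshold fall through to a fallback
    # intent (if one is defined) instead of matching a regular intent.
    agent = dialogflow.types.Agent(
        parent=parent,
        display_name="my-agent",
        default_language_code="en",
        time_zone="America/Los_Angeles",
        classification_threshold=0.6,
    )

    # SetAgent creates the agent if it does not exist, or updates it.
    response = agents_client.set_agent(agent)
    print(response.display_name)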

Any

API documentation for Any class.

BatchCreateEntitiesRequest

The request message for [EntityTypes.BatchCreateEntities][google.cloud.dialogflow.v2.EntityTypes.BatchCreateEntities].

Required. The entities to create.
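
As a sketch of batch entity creation, assuming the pre-2.0 Python client surface and hypothetical project and entity type IDs, entities can be added to an existing entity type like this::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"               # hypothetical
    entity_type_id = "your-entity-type-id"  # ID of an existing entity type

    entity_types_client = dialogflow.EntityTypesClient()
    entity_type_path = entity_types_client.entity_type_path(
        project_id, entity_type_id)

    # Each entity has a reference value plus the synonyms that match it.
    entity = dialogflow.types.EntityType.Entity(
        value="broccoli", synonyms=["broccoli", "calabrese"])

    # BatchCreateEntities is a long-running operation.
    operation = entity_types_client.batch_create_entities(
        entity_type_path, [entity])
    operation.result()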

BatchDeleteEntitiesRequest

The request message for [EntityTypes.BatchDeleteEntities][google.cloud.dialogflow.v2.EntityTypes.BatchDeleteEntities].

Required. The reference values of the entities to delete. Note that these are not fully-qualified names, i.e. they don’t start with projects/<Project ID>.

BatchDeleteEntityTypesRequest

The request message for [EntityTypes.BatchDeleteEntityTypes][google.cloud.dialogflow.v2.EntityTypes.BatchDeleteEntityTypes].

Required. The names of the entity types to delete. All names must point to the same agent as parent.

BatchDeleteIntentsRequest

The request message for [Intents.BatchDeleteIntents][google.cloud.dialogflow.v2.Intents.BatchDeleteIntents].

Required. The collection of intents to delete. Only the intent name must be filled in.

BatchUpdateEntitiesRequest

The request message for [EntityTypes.BatchUpdateEntities][google.cloud.dialogflow.v2.EntityTypes.BatchUpdateEntities].

Required. The entities to update or create.

Optional. The mask to control which fields get updated.

BatchUpdateEntityTypesRequest

The request message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2.EntityTypes.BatchUpdateEntityTypes].

The source of the entity type batch. For each entity type in the batch:

  • If name is specified, we update an existing entity type.
  • If name is not specified, we create a new entity type.

The collection of entity types to update or create.

Optional. The mask to control which fields get updated.

BatchUpdateEntityTypesResponse

The response message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2.EntityTypes.BatchUpdateEntityTypes].

BatchUpdateIntentsRequest

The request message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2.Intents.BatchUpdateIntents].

Attributes:

  • parent: Required. The name of the agent to update or create intents in. Format: projects/<Project ID>/agent.
  • intent_batch: The source of the intent batch.
  • intent_batch_uri: The URI to a Google Cloud Storage file containing intents to update or create. The file format can be either a serialized proto (of IntentBatch type) or a JSON object. Note: the URI must start with “gs://”.
  • intent_batch_inline: The collection of intents to update or create.
  • language_code: Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.
  • update_mask: Optional. The mask to control which fields get updated.
  • intent_view: Optional. The resource view to apply to the returned intent.

BatchUpdateIntentsResponse

The response message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2.Intents.BatchUpdateIntents].

CancelOperationRequest

API documentation for CancelOperationRequest class.

Context

Dialogflow contexts are similar to natural language context. If a person says to you “they are orange”, you need context in order to understand what “they” is referring to. Similarly, for Dialogflow to handle an end-user expression like that, it needs to be provided with context in order to correctly match an intent. Using contexts, you can control the flow of a conversation. You can configure contexts for an intent by setting input and output contexts, which are identified by string names. When an intent is matched, any configured output contexts for that intent become active. While any contexts are active, Dialogflow is more likely to match intents that are configured with input contexts that correspond to the currently active contexts. For more information about context, see the Contexts guide <https://cloud.google.com/dialogflow/docs/contexts-overview>__.

Optional. The number of conversational query requests after which the context expires. The default is 0. If set to 0, the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.
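
As a sketch of the lifespan behavior described above, assuming the pre-2.0 Python client surface and hypothetical IDs, a context that survives the next five queries can be created like this::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"   # hypothetical
    session_id = "session-1"    # hypothetical
    context_id = "pizza-order"  # hypothetical

    contexts_client = dialogflow.ContextsClient()
    session_path = contexts_client.session_path(project_id, session_id)
    context_name = contexts_client.context_path(
        project_id, session_id, context_id)

    # lifespan_count=5 keeps the context active for the next five
    # conversational queries (or 20 minutes, whichever comes first).
    context = dialogflow.types.Context(name=context_name, lifespan_count=5)
    response = contexts_client.create_context(session_path, context)
    print(response.name)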

CreateContextRequest

The request message for [Contexts.CreateContext][google.cloud.dialogflow.v2.Contexts.CreateContext].

Required. The context to create.

CreateEntityTypeRequest

The request message for [EntityTypes.CreateEntityType][google.cloud.dialogflow.v2.EntityTypes.CreateEntityType].

Required. The entity type to create.

CreateIntentRequest

The request message for [Intents.CreateIntent][google.cloud.dialogflow.v2.Intents.CreateIntent].

Required. The intent to create.

Optional. The resource view to apply to the returned intent.

CreateSessionEntityTypeRequest

The request message for [SessionEntityTypes.CreateSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.CreateSessionEntityType].

Required. The session entity type to create.

DeleteAgentRequest

The request message for Agents.DeleteAgent.

DeleteAllContextsRequest

The request message for [Contexts.DeleteAllContexts][google.cloud.dialogflow.v2.Contexts.DeleteAllContexts].

DeleteContextRequest

The request message for [Contexts.DeleteContext][google.cloud.dialogflow.v2.Contexts.DeleteContext].

DeleteEntityTypeRequest

The request message for [EntityTypes.DeleteEntityType][google.cloud.dialogflow.v2.EntityTypes.DeleteEntityType].

DeleteIntentRequest

The request message for [Intents.DeleteIntent][google.cloud.dialogflow.v2.Intents.DeleteIntent].

DeleteOperationRequest

API documentation for DeleteOperationRequest class.

DeleteSessionEntityTypeRequest

The request message for [SessionEntityTypes.DeleteSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.DeleteSessionEntityType].

DetectIntentRequest

The request to detect a user’s intent.

The parameters of this query.

Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.

The natural language speech audio to be processed. This field should be populated if and only if query_input is set to an input audio config. A single request can contain up to 1 minute of speech audio data.
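
As a sketch of an audio-based request, assuming the pre-2.0 Python client surface and a hypothetical local WAV file (16 kHz linear PCM): the audio config goes in query_input, while the raw bytes go in the separate input_audio field::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"      # hypothetical
    session_id = "session-1"       # hypothetical
    audio_file_path = "query.wav"  # hypothetical, <= 1 minute of audio

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    audio_config = dialogflow.types.InputAudioConfig(
        audio_encoding=dialogflow.enums.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        language_code="en-US",
        sample_rate_hertz=16000,
    )
    query_input = dialogflow.types.QueryInput(audio_config=audio_config)

    with open(audio_file_path, "rb") as audio_file:
        input_audio = audio_file.read()

    response = session_client.detect_intent(
        session=session, query_input=query_input, input_audio=input_audio)
    print(response.query_result.query_text)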

DetectIntentResponse

The message returned from the DetectIntent method.

The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty. In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.
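
To obtain the output audio described above, the request must carry an OutputAudioConfig. A minimal sketch, assuming the pre-2.0 Python client surface and hypothetical IDs::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"  # hypothetical
    session_id = "session-1"   # hypothetical

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.types.TextInput(text="hello", language_code="en-US")
    query_input = dialogflow.types.QueryInput(text=text_input)

    # Ask for LINEAR16 audio synthesized from the default platform
    # text responses of the matched intent.
    output_audio_config = dialogflow.types.OutputAudioConfig(
        audio_encoding=(dialogflow.enums.OutputAudioEncoding
                        .OUTPUT_AUDIO_ENCODING_LINEAR_16))

    response = session_client.detect_intent(
        session=session, query_input=query_input,
        output_audio_config=output_audio_config)

    # output_audio holds the raw bytes in the requested encoding.
    with open("output.wav", "wb") as out:
        out.write(response.output_audio)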

Duration

API documentation for Duration class.

Empty

API documentation for Empty class.

EntityType

Each intent parameter has a type, called the entity type, which dictates exactly how data from an end-user expression is extracted. Dialogflow provides predefined system entities that can match many common types of data. For example, there are system entities for matching dates, times, colors, email addresses, and so on. You can also create your own custom entities for matching custom data. For example, you could define a vegetable entity that can match the types of vegetables available for purchase with a grocery store agent. For more information, see the Entity guide <https://cloud.google.com/dialogflow/docs/entities-overview>__.

Required. The name of the entity type.

Optional. Indicates whether the entity type can be automatically expanded.

Optional. Enables fuzzy entity extraction during classification.
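
As a sketch of creating the vegetable entity type from the example above, assuming the pre-2.0 Python client surface and a hypothetical project ID::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"  # hypothetical

    entity_types_client = dialogflow.EntityTypesClient()
    parent = entity_types_client.project_agent_path(project_id)

    # KIND_MAP entity types map synonyms to a canonical reference value.
    entity_type = dialogflow.types.EntityType(
        display_name="vegetable",
        kind=dialogflow.enums.EntityType.Kind.KIND_MAP,
    )
    response = entity_types_client.create_entity_type(parent, entity_type)
    print(response.name)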

EntityTypeBatch

This message is a wrapper around a collection of entity types.

Environment

You can create multiple versions of your agent and publish them to separate environments. When you edit an agent, you are editing the draft agent. At any point, you can save the draft agent as an agent version, which is an immutable snapshot of your agent. When you save the draft agent, it is published to the default environment. When you create agent versions, you can publish them to custom environments. You can create a variety of custom environments for testing, development, production, and so on. For more information, see the versions and environments guide <https://cloud.google.com/dialogflow/docs/agents-versions>__.

Optional. The developer-provided description for this environment. The maximum length is 500 characters. If exceeded, the request is rejected.

Output only. The state of this environment. This field is read-only, i.e., it cannot be set by create and update methods.

EventInput

Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".

The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs (a construction sketch follows this list):

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: if the parameter’s entity type is a composite entity, then map; otherwise, string or number, depending on the parameter value type
  • MapValue value: if the parameter’s entity type is a composite entity, then a map from composite entity property names to property values; otherwise, the parameter value
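
A minimal sketch of triggering the welcome_event example above, assuming the pre-2.0 Python client surface and hypothetical IDs; event parameters are passed as a protobuf Struct (the JSON-object representation)::

    import dialogflow_v2 as dialogflow
    from google.protobuf import struct_pb2

    project_id = "my-project"  # hypothetical
    session_id = "session-1"   # hypothetical

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    # Struct behaves like a JSON object: string keys, JSON-typed values.
    parameters = struct_pb2.Struct()
    parameters["name"] = "Sam"

    event_input = dialogflow.types.EventInput(
        name="welcome_event", parameters=parameters, language_code="en-US")
    query_input = dialogflow.types.QueryInput(event=event_input)

    response = session_client.detect_intent(
        session=session, query_input=query_input)
    print(response.query_result.fulfillment_text)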

ExportAgentRequest

The request message for Agents.ExportAgent.

Required. The Google Cloud Storage <https://cloud.google.com/storage/docs/>__ URI to export the agent to. The format of this URI must be gs://<bucket-name>/<object-name>. If left unspecified, the serialized agent is returned inline.
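
A minimal sketch of exporting to the gs:// URI format described above, assuming the pre-2.0 Python client surface and a hypothetical project and bucket; ExportAgent is a long-running operation::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"               # hypothetical
    agent_uri = "gs://my-bucket/agent.zip"  # hypothetical bucket/object

    agents_client = dialogflow.AgentsClient()
    parent = agents_client.project_path(project_id)

    # result() blocks until the agent has been written to Cloud Storage.
    operation = agents_client.export_agent(parent, agent_uri=agent_uri)
    response = operation.result()
    print(response.agent_uri)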

ExportAgentResponse

The response message for Agents.ExportAgent.

The URI to a file containing the exported agent. This field is populated only if agent_uri is specified in ExportAgentRequest.

FieldMask

API documentation for FieldMask class.

GetAgentRequest

The request message for Agents.GetAgent.

GetContextRequest

The request message for Contexts.GetContext.

GetEntityTypeRequest

The request message for [EntityTypes.GetEntityType][google.cloud.dialogflow.v2.EntityTypes.GetEntityType].

Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.

GetIntentRequest

The request message for Intents.GetIntent.

Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.

GetOperationRequest

API documentation for GetOperationRequest class.

GetSessionEntityTypeRequest

The request message for [SessionEntityTypes.GetSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.GetSessionEntityType].

GetValidationResultRequest

The request message for [Agents.GetValidationResult][google.cloud.dialogflow.v2.Agents.GetValidationResult].

Optional. The language for which you want a validation result. If not specified, the agent’s default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language>__ are supported. Note: languages must be enabled in the agent before they can be used.

ImportAgentRequest

The request message for Agents.ImportAgent.

Required. The agent to import.

Zip-compressed raw byte content for the agent.

InputAudioConfig

Instructs the speech recognizer how to process the audio content.

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation <https://cloud.google.com/speech-to-text/docs/basics>__ for more details.

If true, Dialogflow returns SpeechWordInfo in [StreamingRecognitionResult][google.cloud.dialogflow.v2.StreamingRecognitionResult] with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn’t return any word-level information.

Context information to assist speech recognition. See the Cloud Speech documentation <https://cloud.google.com/speech-to-text/docs/basics#phrase-hints>__ for more details.

Which variant of the [Speech model][google.cloud.dialogflow.v2.InputAudioConfig.model] to use.

Intent

An intent categorizes an end-user’s intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the intent guide <https://cloud.google.com/dialogflow/docs/intents-overview>__.

Required. The name of this intent.

Optional. The priority of this intent. Higher numbers represent higher priorities.

  • If the supplied value is unspecified or 0, the service translates the value to 500,000, which corresponds to the Normal priority in the console.
  • If the supplied value is negative, the intent is ignored in runtime detect intent requests.

Optional. Indicates whether Machine Learning is disabled for the intent. Note: If the ml_disabled setting is set to true, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off.

Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent. Event names are limited to 150 characters.

Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.

Optional. Indicates whether to delete all contexts in the current session when this intent is matched.

Optional. The collection of rich messages corresponding to the Response field in the Dialogflow console.

Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output. Format: projects/<Project ID>/agent/intents/<Intent ID>.

Read-only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.
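
Pulling the attributes above together, a minimal sketch of creating an intent with training phrases and a text response, assuming the pre-2.0 Python client surface and a hypothetical project ID::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"  # hypothetical

    intents_client = dialogflow.IntentsClient()
    parent = intents_client.project_agent_path(project_id)

    # Training phrases the intent should match.
    training_phrases = [
        dialogflow.types.Intent.TrainingPhrase(
            parts=[dialogflow.types.Intent.TrainingPhrase.Part(text=text)])
        for text in ("order a pizza", "I want a pizza")
    ]

    # A rich message corresponding to the Response field in the console.
    message = dialogflow.types.Intent.Message(
        text=dialogflow.types.Intent.Message.Text(text=["What toppings?"]))

    intent = dialogflow.types.Intent(
        display_name="order.pizza",
        training_phrases=training_phrases,
        messages=[message],
    )
    response = intents_client.create_intent(parent, intent)
    print(response.name)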

IntentBatch

This message is a wrapper around a collection of intents.

LatLng

API documentation for LatLng class.

ListContextsRequest

The request message for [Contexts.ListContexts][google.cloud.dialogflow.v2.Contexts.ListContexts].

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

ListContextsResponse

The response message for [Contexts.ListContexts][google.cloud.dialogflow.v2.Contexts.ListContexts].

Token to retrieve the next page of results, or empty if there are no more results in the list.
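
In practice the generated client handles next_page_token for you: the iterator returned by list methods fetches further pages on demand. A minimal sketch, assuming the pre-2.0 Python client surface and hypothetical IDs::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"  # hypothetical
    session_id = "session-1"   # hypothetical

    contexts_client = dialogflow.ContextsClient()
    session_path = contexts_client.session_path(project_id, session_id)

    # page_size bounds each page; the iterator passes next_page_token
    # back as the page token of the following request automatically.
    for context in contexts_client.list_contexts(session_path, page_size=100):
        print(context.name, context.lifespan_count)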

ListEntityTypesRequest

The request message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2.EntityTypes.ListEntityTypes].

Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.

Optional. The next_page_token value returned from a previous list request.

ListEntityTypesResponse

The response message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2.EntityTypes.ListEntityTypes].

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListEnvironmentsRequest

The request message for [Environments.ListEnvironments][google.cloud.dialogflow.v2.Environments.ListEnvironments].

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

ListEnvironmentsResponse

The response message for [Environments.ListEnvironments][google.cloud.dialogflow.v2.Environments.ListEnvironments].

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListIntentsRequest

The request message for Intents.ListIntents.

Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

ListIntentsResponse

The response message for Intents.ListIntents.

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListOperationsRequest

API documentation for ListOperationsRequest class.

ListOperationsResponse

API documentation for ListOperationsResponse class.

ListSessionEntityTypesRequest

The request message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2.SessionEntityTypes.ListSessionEntityTypes].

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

ListSessionEntityTypesResponse

The response message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2.SessionEntityTypes.ListSessionEntityTypes].

Token to retrieve the next page of results, or empty if there are no more results in the list.

ListValue

API documentation for ListValue class.

Operation

API documentation for Operation class.

OperationInfo

API documentation for OperationInfo class.

OriginalDetectIntentRequest

Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.

Optional. The version of the protocol used for this request. This field is AoG-specific.

OutputAudioConfig

Instructs the speech synthesizer on how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.

The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice’s natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).

QueryInput

Represents the query input. It can contain one of:

1. An audio config which instructs the speech recognizer how to process the speech audio.

2. A conversational query in the form of text.

3. An event that specifies which intent to trigger.

Instructs the speech recognizer how to process the speech audio.

The event to be processed.
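
A minimal sketch of a text query, assuming the pre-2.0 Python client surface and hypothetical IDs; exactly one of audio_config, text, or event may be set on QueryInput::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"  # hypothetical
    session_id = "session-1"   # hypothetical

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.types.TextInput(
        text="book a table for two", language_code="en-US")
    query_input = dialogflow.types.QueryInput(text=text_input)

    response = session_client.detect_intent(
        session=session, query_input=query_input)

    # QueryResult carries the matched intent, its confidence, and the
    # fulfillment produced for this turn.
    result = response.query_result
    print(result.intent.display_name, result.intent_detection_confidence)
    print(result.fulfillment_text)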

QueryParameters

Represents the parameters of the conversational query.

The geo location of this conversational query.

Specifies whether to delete all contexts in the current session before the new ones are activated.

This field can be used to pass custom data to your webhook. Arbitrary JSON objects are supported. If supplied, the value is used to populate the WebhookRequest.original_detect_intent_request.payload field sent to your webhook.

QueryResult

Represents the result of conversational query or event processing.

The language that was triggered during intent detection. See Language Support <https://cloud.google.com/dialogflow/docs/reference/language>__ for a list of the currently supported language codes.

The action name from the matched intent.

This field is set to:

  • false if the matched intent has required parameters and not all of the required parameter values have been collected.
  • true if all required parameter values have been collected, or if the matched intent doesn’t contain any required parameters.

The collection of rich messages to present to the user.

If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response.

The intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to: name, display_name, end_interaction and is_fallback.

Free-form diagnostic information for the associated detect intent request. The fields of this data can change without notice, so you should not write code that depends on its structure. The data may contain:

  • webhook call latency
  • webhook errors

RestoreAgentRequest

The request message for Agents.RestoreAgent.

Required. The agent to restore.

Zip-compressed raw byte content for the agent.

SearchAgentsRequest

The request message for Agents.SearchAgents.

Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.

SearchAgentsResponse

The response message for Agents.SearchAgents.

Token to retrieve the next page of results, or empty if there are no more results in the list.

Sentiment

The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.

A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).

SentimentAnalysisRequestConfig

Configures the types of sentiment analysis to perform.

SentimentAnalysisResult

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user’s attitude as positive, negative, or neutral. For [Participants.AnalyzeContent][google.cloud.dialogflow.v2.Participants.AnalyzeContent], it needs to be configured in [DetectIntentRequest.query_params][google.cloud.dialogflow.v2.DetectIntentRequest.query_params]. For [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2.Participants.StreamingAnalyzeContent], it needs to be configured in [StreamingDetectIntentRequest.query_params][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_params]. And for [Participants.AnalyzeContent][google.cloud.dialogflow.v2.Participants.AnalyzeContent] and [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2.Participants.StreamingAnalyzeContent], it needs to be configured in [ConversationProfile.human_agent_assistant_config][google.cloud.dialogflow.v2.ConversationProfile.human_agent_assistant_config].

SessionEntityType

A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities, during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes. For more information, see the session entity guide <https://cloud.google.com/dialogflow/docs/entities-session>__.

Required. Indicates whether the additional data should override or supplement the custom entity type definition.
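
A minimal sketch of overriding a custom entity type for one session, assuming the pre-2.0 Python client surface, hypothetical IDs, and an existing custom entity type named vegetable::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"               # hypothetical
    session_id = "session-1"                # hypothetical
    entity_type_display_name = "vegetable"  # existing custom entity type

    client = dialogflow.SessionEntityTypesClient()
    session_path = client.session_path(project_id, session_id)
    name = client.session_entity_type_path(
        project_id, session_id, entity_type_display_name)

    entity = dialogflow.types.EntityType.Entity(
        value="romanesco", synonyms=["romanesco"])

    # OVERRIDE replaces the custom entity type's entities for this
    # session; SUPPLEMENT would extend them instead.
    session_entity_type = dialogflow.types.SessionEntityType(
        name=name,
        entity_override_mode=(dialogflow.enums.SessionEntityType
                              .EntityOverrideMode.ENTITY_OVERRIDE_MODE_OVERRIDE),
        entities=[entity],
    )
    response = client.create_session_entity_type(
        session_path, session_entity_type)
    print(response.name)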

SetAgentRequest

The request message for Agents.SetAgent.

Optional. The mask to control which fields get updated.

SpeechContext

Hints for the speech recognizer to help with recognition in a specific conversation state.

Optional. Boost for this context compared to other contexts:

  • If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar-sounding phrases.
  • If the boost is unspecified or non-positive, Dialogflow will not apply any boost.

Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search, as sketched below.
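
A minimal configuration sketch, assuming the pre-2.0 Python client surface; the phrases and boost value are hypothetical::

    import dialogflow_v2 as dialogflow

    # Boost recognition of domain phrases over similar-sounding words.
    speech_context = dialogflow.types.SpeechContext(
        phrases=["fare", "fair trade", "fairway"],
        boost=10.0,  # start inside (0, 20] and tune by binary search
    )

    audio_config = dialogflow.types.InputAudioConfig(
        audio_encoding=dialogflow.enums.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        language_code="en-US",
        sample_rate_hertz=16000,
        speech_contexts=[speech_context],
    )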

SpeechWordInfo

Information for a word recognized by the speech recognizer.

Time offset relative to the beginning of the audio that corresponds to the start of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.

The Speech confidence between 0.0 and 1.0 for this word. A higher number indicates an estimated greater likelihood that the recognized word is correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be fully stable over time for the same audio input. Users should also not rely on it to always be provided.

Status

API documentation for Status class.

StreamingDetectIntentRequest

The top-level message sent by the client to the [Sessions.StreamingDetectIntent][google.cloud.dialogflow.v2.Sessions.StreamingDetectIntent] method. Multiple request messages should be sent in order:

1. The first message must contain [session][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.session] and [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input], plus optionally [query_params][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].

2. If [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to [query_input.audio_config][google.cloud.dialogflow.v2.InputAudioConfig], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If you decide instead to detect an intent from text input after you have already started Speech recognition, send a message with query_input.text. Note, however, that:

  • Dialogflow will bill you for the audio duration so far.
  • Dialogflow discards all Speech recognition results in favor of the input text.
  • Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
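
A minimal sketch of this message ordering, assuming the pre-2.0 Python client surface and a hypothetical 16 kHz WAV file: the first request carries session and query_input, every later request carries only input_audio, and exhausting the generator half-closes the stream::

    import dialogflow_v2 as dialogflow

    project_id = "my-project"      # hypothetical
    session_id = "session-1"       # hypothetical
    audio_file_path = "query.wav"  # hypothetical

    session_client = dialogflow.SessionsClient()
    session_path = session_client.session_path(project_id, session_id)

    def request_generator():
        audio_config = dialogflow.types.InputAudioConfig(
            audio_encoding=dialogflow.enums.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
            language_code="en-US",
            sample_rate_hertz=16000,
        )
        # 1. The first message: session + query_input, no audio.
        yield dialogflow.types.StreamingDetectIntentRequest(
            session=session_path,
            query_input=dialogflow.types.QueryInput(audio_config=audio_config),
        )
        # 2. Subsequent messages: input_audio chunks only.
        with open(audio_file_path, "rb") as audio_file:
            while True:
                chunk = audio_file.read(4096)
                if not chunk:
                    break
                yield dialogflow.types.StreamingDetectIntentRequest(
                    input_audio=chunk)

    responses = session_client.streaming_detect_intent(request_generator())
    for response in responses:
        if response.recognition_result.transcript:
            print("interim:", response.recognition_result.transcript)
    # The final response carries the query result.
    print("fulfillment:", response.query_result.fulfillment_text)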

The parameters of this query.

Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance] instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio’s voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.

Mask for [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config] indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level. If unspecified or empty, [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config] replaces the agent-level config in its entirety.

StreamingDetectIntentResponse

The top-level message returned from the StreamingDetectIntent method. Multiple response messages can be returned in order:

1. If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.

2. The next message contains response_id, query_result, and optionally webhook_status if a WebHook was called.

The result of speech recognition.

Specifies the status of the webhook request.

The config used by the speech synthesizer to generate the output audio.

StreamingRecognitionResult

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance. Example:

1. transcript: “tube”
2. transcript: “to be a”
3. transcript: “to be”
4. transcript: “to be or not to be”, is_final: true
5. transcript: “ that’s”
6. transcript: “ that is”
7. message_type: END_OF_SINGLE_UTTERANCE
8. transcript: “ that is the question”, is_final: true

Only two of the responses contain final results (#4 and #8, indicated by is_final: true). Concatenating these generates the full transcript: “to be or not to be that is the question”. In each response we populate:

  • for TRANSCRIPT: transcript and possibly is_final.
  • for END_OF_SINGLE_UTTERANCE: only message_type.

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

Struct

API documentation for Struct class.

SynthesizeSpeechConfig

Configuration of how speech should be synthesized.

Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.

Optional. An identifier which selects ‘audio effects’ profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given.

TextInput

Represents the natural language text to be processed.

Required. The language of this conversational query. See Language Support <https://cloud.google.com/dialogflow/docs/reference/language>__ for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

Timestamp

API documentation for Timestamp class.

TrainAgentRequest

The request message for Agents.TrainAgent.

UpdateContextRequest

The request message for [Contexts.UpdateContext][google.cloud.dialogflow.v2.Contexts.UpdateContext].

Optional. The mask to control which fields get updated.

UpdateEntityTypeRequest

The request message for [EntityTypes.UpdateEntityType][google.cloud.dialogflow.v2.EntityTypes.UpdateEntityType].

Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.

UpdateIntentRequest

The request message for [Intents.UpdateIntent][google.cloud.dialogflow.v2.Intents.UpdateIntent].

Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.

Optional. The resource view to apply to the returned intent.

UpdateSessionEntityTypeRequest

The request message for [SessionEntityTypes.UpdateSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.UpdateSessionEntityType].

Optional. The mask to control which fields get updated.

ValidationError

Represents a single validation error.

The names of the entries that the error is associated with. Format:

  • “projects/<Project ID>/agent”, if the error is associated with the entire agent.
  • “projects/<Project ID>/agent/intents/<Intent ID>”, if the error is associated with certain intents.
  • “projects/<Project ID>/agent/intents/<Intent ID>/trainingPhrases/<Training Phrase ID>”, if the error is associated with certain intent training phrases.
  • “projects/<Project ID>/agent/intents/<Intent ID>/parameters/<Parameter ID>”, if the error is associated with certain intent parameters.
  • “projects/<Project ID>/agent/entities/<Entity ID>”, if the error is associated with certain entities.

ValidationResult

Represents the output of agent validation.

Value

API documentation for Value class.

VoiceSelectionParams

Description of which voice to use for speech synthesis.

Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.

WaitOperationRequest

API documentation for WaitOperationRequest class.

WebhookRequest

The request message for a webhook call.

The unique identifier of the response. Contains the same value as [Streaming]DetectIntentResponse.response_id.

Optional. The contents of the original request that was passed to the [Streaming]DetectIntent call.

WebhookResponse

The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the [QueryResult.diagnostic_info][google.cloud.dialogflow.v2.QueryResult.diagnostic_info] field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error:

  • Use "" for empty strings
  • Use {} or null for empty objects
  • Use [] or null for empty arrays

For more information, see the Protocol Buffers Language Guide <https://developers.google.com/protocol-buffers/docs/proto3#json>__.

Optional. The rich response messages intended for the end-user. When provided, Dialogflow uses this field to populate [QueryResult.fulfillment_messages][google.cloud.dialogflow.v2.QueryResult.fulfillment_messages] sent to the integration or API caller.

Optional. This field can be used to pass custom data from your webhook to the integration or API caller. Arbitrary JSON objects are supported. When provided, Dialogflow uses this field to populate [QueryResult.webhook_payload][google.cloud.dialogflow.v2.QueryResult.webhook_payload] sent to the integration or API caller. This field is also used by the Google Assistant integration <https://cloud.google.com/dialogflow/docs/integrations/aog>__ for rich response messages. See the format definition at Google Assistant Dialogflow webhook format <https://developers.google.com/assistant/actions/build/json/dialogflow-webhook-json>__.

Optional. Invokes the supplied events. When this field is set, Dialogflow ignores the fulfillment_text, fulfillment_messages, and payload fields.
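
Since a webhook is just an HTTP endpoint returning the JSON form of WebhookResponse, a minimal sketch using Flask (an assumption, not part of this library) that follows the empty-value rules above might look like this::

    # Hypothetical webhook using Flask; field names use the JSON
    # (lowerCamelCase) form of WebhookResponse.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        body = request.get_json()  # JSON form of WebhookRequest
        action = body["queryResult"].get("action", "")  # "" for empty strings

        return jsonify({
            "fulfillmentText": "Hello from the webhook! Action: " + action,
            "fulfillmentMessages": [],  # [] or null for empty arrays
            "payload": {},              # {} or null for empty objects
        })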