API documentation for the types module.
Classes
Agent
Represents a conversational agent.
Required. The name of this agent.
Optional. The list of all languages supported by this agent (except for the default_language_code).
Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.
Optional. Determines whether this agent should log conversation queries.
Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.
Optional. The agent tier. If not specified, TIER_STANDARD is assumed.
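For illustration, the classification-threshold rule described above can be sketched in plain Python (the helper name is ours, not part of the client library):

```python
def passes_threshold(score: float, threshold: float) -> bool:
    """Return True if a candidate intent with the given confidence
    score survives the agent's classification threshold.
    Per the docs, a threshold of 0.0 means the default of 0.3 is used;
    a score below the effective threshold triggers the fallback intent
    (or no intent, if none is defined)."""
    effective = threshold if threshold > 0.0 else 0.3
    return score >= effective
```

So with the threshold left at 0.0, a match scored 0.25 falls back, while one scored 0.35 is accepted.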
Any
API documentation for Any class.
BatchCreateEntitiesRequest
The request message for EntityTypes.BatchCreateEntities.
Required. The entities to create.
BatchDeleteEntitiesRequest
The request message for EntityTypes.BatchDeleteEntities.
Required. The canonical values of the entities to delete. Note that these are not fully-qualified names, i.e. they don't start with projects/<Project ID>.
BatchDeleteEntityTypesRequest
The request message for EntityTypes.BatchDeleteEntityTypes.
Required. The names of the entity types to delete. All names must point to the same agent as parent.
BatchDeleteIntentsRequest
The request message for Intents.BatchDeleteIntents.
Required. The collection of intents to delete. Only the intent name must be filled in.
BatchUpdateEntitiesRequest
The request message for EntityTypes.BatchUpdateEntities.
Required. The entities to update or create.
Optional. The mask to control which fields get updated.
BatchUpdateEntityTypesRequest
The request message for EntityTypes.BatchUpdateEntityTypes.
The source of the entity type batch. For each entity type in the batch:
- If name is specified, we update an existing entity type.
- If name is not specified, we create a new entity type.
The collection of entity types to update or create.
Optional. The mask to control which fields get updated.
BatchUpdateEntityTypesResponse
The response message for EntityTypes.BatchUpdateEntityTypes.
BatchUpdateIntentsRequest
The request message for Intents.BatchUpdateIntents.
The source of the intent batch.
The collection of intents to update or create.
Optional. The mask to control which fields get updated.
BatchUpdateIntentsResponse
The response message for Intents.BatchUpdateIntents.
CancelOperationRequest
API documentation for CancelOperationRequest class.
Context
Represents a context.
Optional. The number of conversational query requests after which the context expires. If set to 0 (the default) the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.
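The query-count part of that expiry rule can be sketched as follows (the field name lifespan_count and the helper are our assumptions for illustration; the 20-minute time-based expiry is not modelled):

```python
def remaining_after(lifespan_count: int, queries_handled: int) -> int:
    """How many more conversational queries a context stays active for,
    given its lifespan_count (0 = expires immediately) and the number of
    matching queries already handled since it was activated."""
    return max(lifespan_count - queries_handled, 0)
```

For example, a context created with a count of 5 is still active for 3 more queries after 2 have been handled, and one created with 0 is never carried forward.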
CreateContextRequest
The request message for Contexts.CreateContext.
Required. The context to create.
CreateEntityTypeRequest
The request message for EntityTypes.CreateEntityType.
Required. The entity type to create.
CreateIntentRequest
The request message for Intents.CreateIntent.
Required. The intent to create.
Optional. The resource view to apply to the returned intent.
CreateSessionEntityTypeRequest
The request message for SessionEntityTypes.CreateSessionEntityType.
Required. The session entity type to create.
DeleteAgentRequest
The request message for Agents.DeleteAgent.
DeleteAllContextsRequest
The request message for Contexts.DeleteAllContexts.
DeleteContextRequest
The request message for Contexts.DeleteContext.
DeleteEntityTypeRequest
The request message for EntityTypes.DeleteEntityType.
DeleteIntentRequest
The request message for Intents.DeleteIntent.
DeleteOperationRequest
API documentation for DeleteOperationRequest class.
DeleteSessionEntityTypeRequest
The request message for SessionEntityTypes.DeleteSessionEntityType.
DetectIntentRequest
The request to detect user's intent.
Optional. The parameters of this query.
Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
DetectIntentResponse
The message returned from the DetectIntent method.
The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.
Empty
API documentation for Empty class.
EntityType
Represents an entity type. Entity types serve as a tool for extracting parameter values from natural language queries.
Required. The name of the entity type.
Optional. Indicates whether the entity type can be automatically expanded.
Optional. Enables fuzzy entity extraction during classification.
EntityTypeBatch
This message is a wrapper around a collection of entity types.
EventInput
Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".
Optional. The collection of parameters associated with the event.
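The #event_name.parameter substitution shown in the example above can be sketched in plain Python (the helper is illustrative only; Dialogflow performs this expansion server-side):

```python
def render_response(template: str, event: dict) -> str:
    """Expand '#<event_name>.<param>' references in a response template
    using the event's parameters, as in the welcome_event example."""
    text = template
    for key, value in event.get("parameters", {}).items():
        text = text.replace(f"#{event['name']}.{key}", str(value))
    return text

event = {"name": "welcome_event", "parameters": {"name": "Sam"}}
# render_response("Hello #welcome_event.name!", event) -> "Hello Sam!"
```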
ExportAgentRequest
The request message for Agents.ExportAgent.
Required. The Google Cloud Storage <https://cloud.google.com/storage/docs/>__ URI to export the agent to. The format of this URI must be gs://<bucket-name>/<object-name>. If left unspecified, the serialized agent is returned inline.
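A loose structural check of the required gs://<bucket-name>/<object-name> shape can be written as (this is our sketch, not full GCS name validation):

```python
import re

def is_gcs_uri(uri: str) -> bool:
    """Check that a URI has the gs://<bucket-name>/<object-name> shape
    required by agent_uri. Only the overall structure is checked."""
    return re.fullmatch(r"gs://[^/]+/.+", uri) is not None
```

For example, "gs://my-bucket/exported-agent.zip" passes, while an http:// URI or a bucket with no object name does not.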
ExportAgentResponse
The response message for Agents.ExportAgent.
The URI to a file containing the exported agent. This field is populated only if agent_uri is specified in ExportAgentRequest.
FieldMask
API documentation for FieldMask class.
GetAgentRequest
The request message for Agents.GetAgent.
GetContextRequest
The request message for Contexts.GetContext.
GetEntityTypeRequest
The request message for EntityTypes.GetEntityType.
Optional. The language to retrieve entity synonyms for. If not specified, the agent's default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language>__ are supported. Note: languages must be enabled in the agent before they can be used.
GetIntentRequest
The request message for Intents.GetIntent.
Optional. The language to retrieve training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language>__ are supported. Note: languages must be enabled in the agent before they can be used.
GetOperationRequest
API documentation for GetOperationRequest class.
GetSessionEntityTypeRequest
The request message for SessionEntityTypes.GetSessionEntityType.
ImportAgentRequest
The request message for Agents.ImportAgent.
Required. The agent to import.
Zip compressed raw byte content for agent.
InputAudioConfig
Instructs the speech recognizer how to process the audio content.
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation <https://cloud.google.com/speech-to-text/docs/basics>__ for more details.
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation <https://cloud.google.com/speech-to-text/docs/basics#phrase-hints>__ for more details.
Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
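The fields described above can be sketched as a plain dict mirroring the v2 InputAudioConfig shape (the encoding value and sample values are illustrative assumptions, not requirements):

```python
# A minimal sketch of an InputAudioConfig, using the field names
# documented above. LINEAR16 at 16 kHz is an assumed example choice.
input_audio_config = {
    "audio_encoding": "AUDIO_ENCODING_LINEAR_16",
    "sample_rate_hertz": 16000,
    "language_code": "en-US",
    # Phrases the recognizer should favor (phrase hints).
    "phrase_hints": ["to be or not to be"],
    # Stop recognition after a single detected utterance.
    "single_utterance": True,
}
```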
Intent
Represents an intent. Intents convert a number of user expressions or patterns into an action. An action is an extraction of a user command or sentence semantics.
Required. The name of this intent.
Optional. The priority of this intent. Higher numbers represent higher priorities. If this is zero or unspecified, we use the default priority 500000. Negative numbers mean that the intent is disabled.
Optional. Indicates whether Machine Learning is disabled for the intent. Note: If the ml_disabled setting is set to true, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off.
Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent.
Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.
Optional. Indicates whether to delete all contexts in the current session when this intent is matched.
Optional. The collection of rich messages corresponding to the Response field in the Dialogflow console.
Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output. Format: projects/<Project ID>/agent/intents/<Intent ID>.
Read-only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.
IntentBatch
This message is a wrapper around a collection of intents.
LatLng
API documentation for LatLng class.
ListContextsRequest
The request message for Contexts.ListContexts.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
ListContextsResponse
The response message for Contexts.ListContexts.
Token to retrieve the next page of results, or empty if there are no more results in the list.
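The next_page_token contract described here (keep passing the returned token back until it comes back empty) can be sketched generically; the helper and its callable argument are our illustration, not library API:

```python
def list_all(fetch_page, page_size=100):
    """Drain a paginated list method. fetch_page is any callable with
    the shape (page_size, page_token) -> (items, next_page_token);
    an empty next_page_token means there are no more results."""
    items, token = [], ""
    while True:
        batch, token = fetch_page(page_size, token)
        items.extend(batch)
        if not token:
            return items
```

The same loop applies to ListContexts, ListEntityTypes, ListIntents, ListSessionEntityTypes and SearchAgents, which all share this token scheme.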
ListEntityTypesRequest
The request message for EntityTypes.ListEntityTypes.
Optional. The language to list entity synonyms for. If not specified, the agent's default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language>__ are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The next_page_token value returned from a previous list request.
ListEntityTypesResponse
The response message for EntityTypes.ListEntityTypes.
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListIntentsRequest
The request message for Intents.ListIntents.
Optional. The language to list training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language>__ are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
ListIntentsResponse
The response message for Intents.ListIntents.
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListOperationsRequest
API documentation for ListOperationsRequest class.
ListOperationsResponse
API documentation for ListOperationsResponse class.
ListSessionEntityTypesRequest
The request message for SessionEntityTypes.ListSessionEntityTypes.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
ListSessionEntityTypesResponse
The response message for SessionEntityTypes.ListSessionEntityTypes.
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListValue
API documentation for ListValue class.
Operation
API documentation for Operation class.
OperationInfo
API documentation for OperationInfo class.
OriginalDetectIntentRequest
Represents the contents of the original request that was passed to the [Streaming]DetectIntent call.
Optional. The version of the protocol used for this request. This field is AoG-specific.
OutputAudioConfig
Instructs the speech synthesizer on how to generate the output audio content.
Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).
QueryInput
Represents the query input. It can contain either:
An audio config which instructs the speech recognizer how to process the speech audio.
A conversational query in the form of text.
An event that specifies which intent to trigger.
Instructs the speech recognizer how to process the speech audio.
The event to be processed.
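QueryInput is a oneof: exactly one of the three inputs listed above may be set. A sketch of that constraint over a plain-dict representation (the validator is ours, not library API):

```python
def validate_query_input(query_input: dict) -> str:
    """Enforce the QueryInput oneof: exactly one of audio_config,
    text, or event must be set. Returns the name of the set field."""
    set_fields = [f for f in ("audio_config", "text", "event")
                  if f in query_input]
    if len(set_fields) != 1:
        raise ValueError(
            "query_input must set exactly one of audio_config, text, event")
    return set_fields[0]
```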
QueryParameters
Represents the parameters of the conversational query.
Optional. The geo location of this conversational query.
Optional. Specifies whether to delete all contexts in the current session before the new ones are activated.
Optional. This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported.
QueryResult
Represents the result of conversational query or event processing.
The language that was triggered during intent detection. See Language Support <https://cloud.google.com/dialogflow/docs/reference/language>__ for a list of the currently supported language codes.
The action name from the matched intent.
This field is set to:
- false if the matched intent has required parameters and not all of the required parameter values have been collected.
- true if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
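That rule reduces to a simple predicate; the sketch below is our illustration over plain values, not the client library's computation:

```python
def all_required_params_present(required_params, collected) -> bool:
    """True when every required parameter has a collected, non-empty
    value, or when the intent has no required parameters at all."""
    return all(
        param in collected and collected[param] not in ("", None)
        for param in required_params
    )
```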
The collection of rich messages to present to the user.
If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response.
The intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to: name, display_name, end_interaction and is_fallback.
The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice.
RestoreAgentRequest
The request message for Agents.RestoreAgent.
Required. The agent to restore.
Zip compressed raw byte content for agent.
SearchAgentsRequest
The request message for Agents.SearchAgents.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
SearchAgentsResponse
The response message for Agents.SearchAgents.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Sentiment
The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
SentimentAnalysisRequestConfig
Configures the types of sentiment analysis to perform.
SentimentAnalysisResult
The result of sentiment analysis as configured by sentiment_analysis_request_config.
SessionEntityType
Represents a session entity type.
Extends or replaces a developer entity type at the user session level (we refer to the entity types defined at the agent level as "developer entity types").
Note: session entity types apply to all queries, regardless of the language.
Required. Indicates whether the additional data should override or supplement the developer entity type definition.
SetAgentRequest
The request message for Agents.SetAgent.
Optional. The mask to control which fields get updated.
Status
API documentation for Status class.
StreamingDetectIntentRequest
The top-level message sent by the client to the [StreamingDetectIntent][] method.
Multiple request messages should be sent in order:
- The first message must contain StreamingDetectIntentRequest.session, [StreamingDetectIntentRequest.query_input] plus optionally [StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].
If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent messages must contain [StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If you decide to detect an intent from text input instead, after you have already started Speech recognition, send a message with [StreamingDetectIntentRequest.query_input.text][].
However, note that:
- Dialogflow will bill you for the audio duration so far.
- Dialogflow discards all Speech recognition results in favor of the input text.
- Dialogflow will use the language code from the first message.
After you have sent all input, you must half-close or abort the request stream.
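The message ordering above can be sketched as a generator over plain dicts that mirror the request's field names (session, query_input, query_params, input_audio); this is an illustrative shape, not the client library's API:

```python
def streaming_requests(session, audio_config, audio_chunks,
                       query_params=None):
    """Yield streaming detect-intent messages in the required order:
    one configuration-bearing request first (which must NOT carry
    input_audio), then audio-only requests for each chunk."""
    first = {
        "session": session,
        "query_input": {"audio_config": audio_config},
    }
    if query_params:
        first["query_params"] = query_params
    yield first
    for chunk in audio_chunks:
        yield {"input_audio": chunk}
```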
Optional. The parameters of this query.
Optional. Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance] instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.
Optional. The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.
StreamingDetectIntentResponse
The top-level message returned from the StreamingDetectIntent method.
Multiple response messages can be returned in order:
- If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.
- The next message contains response_id, query_result and optionally webhook_status if a WebHook was called.
The result of speech recognition.
Specifies the status of the webhook request.
The config used by the speech synthesizer to generate the output audio.
StreamingRecognitionResult
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.
Example:
1. transcript: "tube"
2. transcript: "to be a"
3. transcript: "to be"
4. transcript: "to be or not to be" is_final: true
5. transcript: " that's"
6. transcript: " that is"
7. message_type: END_OF_SINGLE_UTTERANCE
8. transcript: " that is the question" is_final: true
Only two of the responses contain final results (#4 and #8 indicated by is_final: true). Concatenating these generates the full transcript: "to be or not to be that is the question".
In each response we populate:
- for TRANSCRIPT: transcript and possibly is_final.
- for END_OF_SINGLE_UTTERANCE: only message_type.
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.
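The concatenation rule in the example above can be sketched over plain dicts (the helper is ours; field names follow the message's documented fields):

```python
def full_transcript(results):
    """Concatenate only the is_final recognition results, which is how
    the full transcript is assembled in the example above."""
    return "".join(r["transcript"] for r in results if r.get("is_final"))

# Applied to the eight example responses, only #4 and #8 contribute,
# yielding "to be or not to be that is the question".
```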
Struct
API documentation for Struct class.
SynthesizeSpeechConfig
Configuration of how speech should be synthesized.
Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.
Optional. An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given.
TextInput
Represents the natural language text to be processed.
Required. The language of this conversational query. See Language Support <https://cloud.google.com/dialogflow/docs/reference/language>__ for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
TrainAgentRequest
The request message for Agents.TrainAgent.
UpdateContextRequest
The request message for Contexts.UpdateContext.
Optional. The mask to control which fields get updated.
UpdateEntityTypeRequest
The request message for EntityTypes.UpdateEntityType.
Optional. The language of entity synonyms defined in entity_type. If not specified, the agent's default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language>__ are supported. Note: languages must be enabled in the agent before they can be used.
UpdateIntentRequest
The request message for Intents.UpdateIntent.
Optional. The language of training phrases, parameters and rich messages defined in intent. If not specified, the agent's default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language>__ are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The resource view to apply to the returned intent.
UpdateSessionEntityTypeRequest
The request message for SessionEntityTypes.UpdateSessionEntityType.
Optional. The mask to control which fields get updated.
Value
API documentation for Value class.
VoiceSelectionParams
Description of which voice to use for speech synthesis.
Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.
WaitOperationRequest
API documentation for WaitOperationRequest class.
WebhookRequest
The request message for a webhook call.
The unique identifier of the response. Contains the same value as [Streaming]DetectIntentResponse.response_id.
Optional. The contents of the original request that was passed to the [Streaming]DetectIntent call.
WebhookResponse
The response message for a webhook call.
Optional. The collection of rich messages to present to the user. This value is passed directly to QueryResult.fulfillment_messages.
Optional. This value is passed directly to QueryResult.webhook_payload. See the related fulfillment_messages[i].payload field, which may be used as an alternative to this field. This field can be used for Actions on Google responses. It should have a structure similar to the JSON message shown here. For more information, see Actions on Google Webhook Format <https://developers.google.com/actions/dialogflow/webhook>__
{ "google": { "expectUserResponse": true, "richResponse": { "items": [ { "simpleResponse": { "textToSpeech": "this is a simple response" } } ] } } }
Optional. Makes the platform immediately invoke another DetectIntent call internally with the specified event as input. When this field is set, Dialogflow ignores the fulfillment_text, fulfillment_messages, and payload fields.
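A webhook response that triggers such an event can be sketched as a plain dict; the field name followup_event_input is an assumption about the v2 wire shape, and the builder itself is ours, not library API:

```python
def followup_event_response(event_name, parameters=None):
    """Build a webhook response dict that asks Dialogflow to invoke
    another DetectIntent call with the given event. Per the docs above,
    fulfillment_text, fulfillment_messages and payload would be ignored
    when this field is present, so we set only the event."""
    event = {"name": event_name}
    if parameters:
        event["parameters"] = parameters
    return {"followup_event_input": event}
```

For example, returning followup_event_response("welcome_event", {"name": "Sam"}) from a webhook would hand control back to Dialogflow with that event as input.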