API documentation for the types module.
Classes
Agent
A Dialogflow agent is a virtual agent that handles conversations with
your end-users. It is a natural language understanding module that
understands the nuances of human language. Dialogflow translates end-
user text or audio during a conversation to structured data that your
apps and services can understand. You design and build a Dialogflow
agent to handle the types of conversations required for your system.
For more information about agents, see the Agent guide
<https://cloud.google.com/dialogflow/docs/agents-overview>
__.
Required. The name of this agent.
Optional. The list of all languages supported by this agent (except for the default_language_code).
Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.
Optional. Determines whether this agent should log conversation queries.
Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.
Optional. The agent tier. If not specified, TIER_STANDARD is assumed.
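The classification threshold behavior above can be sketched as a small helper (hypothetical, not part of the library): it applies the documented 0.3 default when the threshold is left at 0.0 and decides whether the matched intent or a fallback fires for a given score.

```python
# Illustrative sketch of the matching rule described above.
# resolve_match is a hypothetical helper, not a library function.

def resolve_match(score, threshold, has_fallback_intent):
    """Return which intent kind fires for a classification score.

    A threshold of 0.0 means "unset", so the documented default
    of 0.3 is used instead.
    """
    effective = 0.3 if threshold == 0.0 else threshold
    if score >= effective:
        return "matched_intent"
    # Below threshold: fall back if a fallback intent exists.
    return "fallback_intent" if has_fallback_intent else None
```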
Any
API documentation for Any class.
AutoApproveSmartMessagingEntriesResponse
Response message for [Documents.AutoApproveSmartMessagingEntries].
Number of smart messaging entries disabled.
BatchCreateEntitiesRequest
The request message for [EntityTypes.BatchCreateEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchCreateEntities].
Required. The entities to create.
BatchDeleteEntitiesRequest
The request message for [EntityTypes.BatchDeleteEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchDeleteEntities].
Required. The reference values of the entities to delete. Note that these are not fully-qualified names, i.e., they don’t start with projects/<Project ID>.
BatchDeleteEntityTypesRequest
The request message for [EntityTypes.BatchDeleteEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchDeleteEntityTypes].
Required. The names of the entity types to delete. All names must point to the same agent as parent.
BatchDeleteIntentsRequest
The request message for [Intents.BatchDeleteIntents][google.cloud.dialogflow.v2beta1.Intents.BatchDeleteIntents].
Required. The collection of intents to delete. Only the intent name must be filled in.
BatchUpdateEntitiesRequest
The request message for [EntityTypes.BatchUpdateEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntities].
Required. The entities to update or create.
Optional. The mask to control which fields get updated.
BatchUpdateEntityTypesRequest
The request message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntityTypes].
The source of the entity type batch. For each entity type in the batch:
- If name is specified, we update an existing entity type.
- If name is not specified, we create a new entity type.
The collection of entity types to update or create.
Optional. The mask to control which fields get updated.
BatchUpdateEntityTypesResponse
The response message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntityTypes].
BatchUpdateIntentsRequest
The request message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2beta1.Intents.BatchUpdateIntents].
Required. The source of the intent batch. For each intent in the batch:
- If name is specified, we update an existing intent.
- If name is not specified, we create a new intent.
The collection of intents to update or create.
Optional. The mask to control which fields get updated.
BatchUpdateIntentsResponse
The response message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2beta1.Intents.BatchUpdateIntents].
CancelOperationRequest
API documentation for CancelOperationRequest class.
Context
Dialogflow contexts are similar to natural language context. If a
person says to you “they are orange”, you need context in order to
understand what “they” is referring to. Similarly, for Dialogflow to
handle an end-user expression like that, it needs to be provided with
context in order to correctly match an intent. Using contexts, you
can control the flow of a conversation. You can configure contexts for
an intent by setting input and output contexts, which are identified
by string names. When an intent is matched, any configured output
contexts for that intent become active. While any contexts are active,
Dialogflow is more likely to match intents that are configured with
input contexts that correspond to the currently active contexts. For
more information about context, see the Contexts guide <https://cloud.google.com/dialogflow/docs/contexts-overview>.
Optional. The number of conversational query requests after which the context expires. The default is 0. If set to 0, the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.
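As an illustration of the lifespan rule above, here is a context resource written as a plain JSON-style dict (the project, session, and context IDs are placeholders):

```python
# A minimal sketch of a context resource as a JSON-style dict.
# The resource IDs below are placeholders, not real values.

context = {
    "name": (
        "projects/<Project ID>/agent/sessions/<Session ID>"
        "/contexts/promo-offer"
    ),
    # Expires after 5 conversational query requests, or after
    # 20 minutes with no matching queries. A value of 0 would
    # make the context expire immediately.
    "lifespanCount": 5,
}
```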
CreateContextRequest
The request message for [Contexts.CreateContext][google.cloud.dialogflow.v2beta1.Contexts.CreateContext].
Required. The context to create.
CreateDocumentRequest
Request message for [Documents.CreateDocument][google.cloud.dialogflow.v2beta1.Documents.CreateDocument].
Required. The document to create.
CreateEntityTypeRequest
The request message for [EntityTypes.CreateEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.CreateEntityType].
Required. The entity type to create.
CreateIntentRequest
The request message for [Intents.CreateIntent][google.cloud.dialogflow.v2beta1.Intents.CreateIntent].
Required. The intent to create.
Optional. The resource view to apply to the returned intent.
CreateKnowledgeBaseRequest
Request message for [KnowledgeBases.CreateKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.CreateKnowledgeBase].
Required. The knowledge base to create.
CreateSessionEntityTypeRequest
The request message for [SessionEntityTypes.CreateSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.CreateSessionEntityType].
Required. The session entity type to create.
DeleteAgentRequest
The request message for [Agents.DeleteAgent][google.cloud.dialogflow.v2beta1.Agents.DeleteAgent].
DeleteAllContextsRequest
The request message for [Contexts.DeleteAllContexts][google.cloud.dialogflow.v2beta1.Contexts.DeleteAllContexts].
DeleteContextRequest
The request message for [Contexts.DeleteContext][google.cloud.dialogflow.v2beta1.Contexts.DeleteContext].
DeleteDocumentRequest
Request message for [Documents.DeleteDocument][google.cloud.dialogflow.v2beta1.Documents.DeleteDocument].
DeleteEntityTypeRequest
The request message for [EntityTypes.DeleteEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.DeleteEntityType].
DeleteIntentRequest
The request message for [Intents.DeleteIntent][google.cloud.dialogflow.v2beta1.Intents.DeleteIntent].
DeleteKnowledgeBaseRequest
Request message for [KnowledgeBases.DeleteKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.DeleteKnowledgeBase].
Optional. Force deletes the knowledge base. When set to true, any documents in the knowledge base are also deleted.
DeleteOperationRequest
API documentation for DeleteOperationRequest class.
DeleteSessionEntityTypeRequest
The request message for [SessionEntityTypes.DeleteSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.DeleteSessionEntityType].
DetectIntentRequest
The request to detect a user’s intent.
The parameters of this query.
Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
The natural language speech audio to be processed. This field should be populated if and only if query_input is set to an input audio config. A single request can contain up to 1 minute of speech audio data.
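The rule above, that input_audio is populated if and only if query_input carries an audio config, can be sketched as a hypothetical validator over a JSON-style request dict (field names follow the REST JSON mapping; the helper itself is not part of the API):

```python
# Hypothetical consistency check for a detect intent request body:
# inputAudio must be present exactly when queryInput is an audio config.

def check_detect_intent_request(request):
    query_input = request.get("queryInput", {})
    has_audio_config = "audioConfig" in query_input
    has_input_audio = bool(request.get("inputAudio"))
    return has_audio_config == has_input_audio
```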
DetectIntentResponse
The message returned from the DetectIntent method.
The selected results of the conversational query or event processing. See alternative_query_results for additional potential results.
Specifies the status of the webhook request.
The config used by the speech synthesizer to generate the output audio.
Document
A knowledge document to be used by a KnowledgeBase. For more information, see the knowledge base guide <https://cloud.google.com/dialogflow/docs/how/knowledge-bases>.
Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents.
Required. The display name of the document. The name must be 1024 bytes or less; otherwise, the creation request fails.
Required. The knowledge type of document content.
The URI where the file content is located. For documents stored in Google Cloud Storage, these URIs must have the form gs://<bucket-name>/<object-name>. NOTE: External URLs must correspond to public webpages, i.e., they must be indexed by Google Search. In particular, URLs for showing documents in Google Cloud Storage (i.e., the URL in your browser) are not supported. Instead, use the gs:// format URI described above.
The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types.
Output only. The time and status of the latest reload. This reload may have been triggered automatically or manually and may not have succeeded.
Duration
API documentation for Duration class.
Empty
API documentation for Empty class.
EntityType
Each intent parameter has a type, called the entity type, which
dictates exactly how data from an end-user expression is extracted.
Dialogflow provides predefined system entities that can match many
common types of data. For example, there are system entities for
matching dates, times, colors, email addresses, and so on. You can
also create your own custom entities for matching custom data. For
example, you could define a vegetable entity that can match the types
of vegetables available for purchase with a grocery store agent. For
more information, see the Entity guide <https://cloud.google.com/dialogflow/docs/entities-overview>.
Required. The name of the entity type.
Optional. Indicates whether the entity type can be automatically expanded.
Optional. Enables fuzzy entity extraction during classification.
EntityTypeBatch
This message is a wrapper around a collection of entity types.
Environment
You can create multiple versions of your agent and publish them to
separate environments. When you edit an agent, you are editing the
draft agent. At any point, you can save the draft agent as an agent
version, which is an immutable snapshot of your agent. When you save
the draft agent, it is published to the default environment. When you
create agent versions, you can publish them to custom environments.
You can create a variety of custom environments, for example for testing, development, and production. For more information, see the versions and environments guide <https://cloud.google.com/dialogflow/docs/agents-versions>.
Optional. The developer-provided description for this environment. The maximum length is 500 characters. If exceeded, the request is rejected.
Output only. The state of this environment. This field is read-only, i.e., it cannot be set by create and update methods.
EventInput
Events allow for matching intents by event name instead of the natural language input. For instance, the input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".
The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:
- MapKey type: string
- MapKey value: parameter name
- MapValue type: if the parameter’s entity type is a composite entity, then map; otherwise, string or number, depending on the parameter value type
- MapValue value: if the parameter’s entity type is a composite entity, then a map from composite entity property names to property values; otherwise, the parameter value
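For illustration, the welcome_event example above written as a JSON-style event input dict, the shape an event takes in a JSON request body (the language code is assumed here for completeness):

```python
# The welcome_event example from the text as a plain dict.
# "Sam" can be referenced in responses as #welcome_event.name.

event_input = {
    "name": "welcome_event",
    "parameters": {"name": "Sam"},
    "languageCode": "en-US",  # assumed value for this sketch
}
```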
ExportAgentRequest
The request message for [Agents.ExportAgent][google.cloud.dialogflow.v2beta1.Agents.ExportAgent].
Optional. The Google Cloud Storage <https://cloud.google.com/storage/docs/> URI to export the agent to. The format of this URI must be gs://<bucket-name>/<object-name>. If left unspecified, the serialized agent is returned inline.
ExportAgentResponse
The response message for [Agents.ExportAgent][google.cloud.dialogflow.v2beta1.Agents.ExportAgent].
The URI to a file containing the exported agent. This field is populated only if agent_uri is specified in ExportAgentRequest.
FieldMask
API documentation for FieldMask class.
GcsSource
Google Cloud Storage location for single input.
GetAgentRequest
The request message for Agents.GetAgent.
GetContextRequest
The request message for [Contexts.GetContext][google.cloud.dialogflow.v2beta1.Contexts.GetContext].
GetDocumentRequest
Request message for [Documents.GetDocument][google.cloud.dialogflow.v2beta1.Documents.GetDocument].
GetEntityTypeRequest
The request message for [EntityTypes.GetEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.GetEntityType].
Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>.
GetIntentRequest
The request message for [Intents.GetIntent][google.cloud.dialogflow.v2beta1.Intents.GetIntent].
Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>.
GetKnowledgeBaseRequest
Request message for [KnowledgeBases.GetKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.GetKnowledgeBase].
GetOperationRequest
API documentation for GetOperationRequest class.
GetSessionEntityTypeRequest
The request message for [SessionEntityTypes.GetSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.GetSessionEntityType].
GetValidationResultRequest
The request message for [Agents.GetValidationResult][google.cloud.dialogflow.v2beta1.Agents.GetValidationResult].
Optional. The language for which you want a validation result. If not specified, the agent’s default language is used. Many languages <https://cloud.google.com/dialogflow/docs/reference/language> are supported. Note: languages must be enabled in the agent before they can be used.
ImportAgentRequest
The request message for [Agents.ImportAgent][google.cloud.dialogflow.v2beta1.Agents.ImportAgent].
Required. The agent to import.
Zip-compressed raw byte content for the agent.
InputAudioConfig
Instructs the speech recognizer on how to process the audio content.
Required. Sample rate (in hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation <https://cloud.google.com/speech-to-text/docs/basics> for more details.
If true, Dialogflow returns [SpeechWordInfo][google.cloud.dialogflow.v2beta1.SpeechWordInfo] in [StreamingRecognitionResult][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult] with information about the recognized speech words, e.g., start and end time offsets. If false or unspecified, Speech doesn’t return any word-level information.
Context information to assist speech recognition. See the Cloud Speech documentation <https://cloud.google.com/speech-to-text/docs/basics#phrase-hints> for more details.
Which variant of the [Speech model][google.cloud.dialogflow.v2beta1.InputAudioConfig.model] to use.
Intent
An intent categorizes an end-user’s intention for one conversation
turn. For each agent, you define many intents, where your combined
intents can handle a complete conversation. When an end-user writes or
says something, referred to as an end-user expression or end-user
input, Dialogflow matches the end-user input to the best intent in
your agent. Matching an intent is also known as intent classification.
For more information, see the intent guide <https://cloud.google.com/dialogflow/docs/intents-overview>.
Required. The name of this intent.
Optional. The priority of this intent. Higher numbers represent higher priorities.
- If the supplied value is unspecified or 0, the service translates the value to 500,000, which corresponds to the Normal priority in the console.
- If the supplied value is negative, the intent is ignored in runtime detect intent requests.
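The priority translation above can be sketched as a small helper (the function name is hypothetical):

```python
# Sketch of the documented priority translation for an intent.

def effective_priority(priority):
    """Return the priority used at runtime, or None when the
    intent is ignored in detect intent requests."""
    if priority is None or priority == 0:
        return 500000  # Normal priority in the console
    if priority < 0:
        return None    # intent ignored at runtime
    return priority
```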
Optional. Indicates whether Machine Learning is enabled for the intent. Note: If the ml_enabled setting is set to false, then this intent is not taken into account during inference in ML ONLY match mode. Also, auto-markup in the UI is turned off. DEPRECATED! Please use the ml_disabled field instead. NOTE: If both ml_enabled and ml_disabled are either not set or false, then the default value is determined as follows:
- Before April 15th, 2018 the default is: ml_enabled = false / ml_disabled = true.
- After April 15th, 2018 the default is: ml_enabled = true / ml_disabled = false.
Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false.
Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent. Event names are limited to 150 characters.
Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.
Optional. Indicates whether to delete all contexts in the current session when this intent is matched.
Optional. The collection of rich messages corresponding to the Response field in the Dialogflow console.
Output only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. Format: projects/<Project ID>/agent/intents/<Intent ID>.
Output only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.
IntentBatch
This message is a wrapper around a collection of intents.
KnowledgeAnswers
Represents the result of querying a Knowledge base.
KnowledgeBase
A knowledge base represents a collection of knowledge documents that
you provide to Dialogflow. Your knowledge documents contain
information that may be useful during conversations with end-users.
Some Dialogflow features use knowledge bases when looking for a
response to an end-user input. For more information, see the
knowledge base guide <https://cloud.google.com/dialogflow/docs/how/knowledge-bases>.
Note: The projects.agent.knowledgeBases resource is deprecated; only use projects.knowledgeBases.
Required. The display name of the knowledge base. The name must be 1024 bytes or less; otherwise, the creation request fails.
KnowledgeOperationMetadata
Metadata in google::longrunning::Operation for Knowledge operations.
LatLng
API documentation for LatLng class.
ListContextsRequest
The request message for [Contexts.ListContexts][google.cloud.dialogflow.v2beta1.Contexts.ListContexts].
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
ListContextsResponse
The response message for [Contexts.ListContexts][google.cloud.dialogflow.v2beta1.Contexts.ListContexts].
Token to retrieve the next page of results, or empty if there are no more results in the list.
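The page-token contract above suggests the usual pagination loop; the sketch below uses a stub list function in place of a real API call, and keeps requesting pages until the next-page token comes back empty.

```python
# Sketch of the page-token contract: request pages until the
# token is empty. list_page stands in for a real list RPC.

def list_all(list_page, page_size=100):
    items, token = [], ""
    while True:
        response = list_page(page_size=page_size, page_token=token)
        items.extend(response["contexts"])
        token = response.get("nextPageToken", "")
        if not token:  # empty token: no more results
            return items

def fake_list_page(page_size, page_token):
    # Stub backend holding five contexts; tokens encode an offset.
    data = [f"ctx-{i}" for i in range(5)]
    start = int(page_token or 0)
    end = start + page_size
    return {
        "contexts": data[start:end],
        "nextPageToken": str(end) if end < len(data) else "",
    }
```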
ListDocumentsRequest
Request message for [Documents.ListDocuments][google.cloud.dialogflow.v2beta1.Documents.ListDocuments].
The maximum number of items to return in a single page. By default 10 and at most 100.
ListDocumentsResponse
Response message for [Documents.ListDocuments][google.cloud.dialogflow.v2beta1.Documents.ListDocuments].
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListEntityTypesRequest
The request message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.ListEntityTypes].
Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>.
Optional. The next_page_token value returned from a previous list request.
ListEntityTypesResponse
The response message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.ListEntityTypes].
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListEnvironmentsRequest
The request message for [Environments.ListEnvironments][google.cloud.dialogflow.v2beta1.Environments.ListEnvironments].
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
ListEnvironmentsResponse
The response message for [Environments.ListEnvironments][google.cloud.dialogflow.v2beta1.Environments.ListEnvironments].
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListIntentsRequest
The request message for [Intents.ListIntents][google.cloud.dialogflow.v2beta1.Intents.ListIntents].
Optional. The language used to access language-specific data. If not specified, the agent’s default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
ListIntentsResponse
The response message for [Intents.ListIntents][google.cloud.dialogflow.v2beta1.Intents.ListIntents].
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListKnowledgeBasesRequest
Request message for [KnowledgeBases.ListKnowledgeBases][google.cloud.dialogflow.v2beta1.KnowledgeBases.ListKnowledgeBases].
Optional. The maximum number of items to return in a single page. By default 10 and at most 100.
ListKnowledgeBasesResponse
Response message for [KnowledgeBases.ListKnowledgeBases][google.cloud.dialogflow.v2beta1.KnowledgeBases.ListKnowledgeBases].
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListOperationsRequest
API documentation for ListOperationsRequest class.
ListOperationsResponse
API documentation for ListOperationsResponse class.
ListSessionEntityTypesRequest
The request message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2beta1.SessionEntityTypes.ListSessionEntityTypes].
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
ListSessionEntityTypesResponse
The response message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2beta1.SessionEntityTypes.ListSessionEntityTypes].
Token to retrieve the next page of results, or empty if there are no more results in the list.
ListValue
API documentation for ListValue class.
Operation
API documentation for Operation class.
OperationInfo
API documentation for OperationInfo class.
OriginalDetectIntentRequest
Represents the contents of the original request that was passed to the
[Streaming]DetectIntent
call.
Optional. The version of the protocol used for this request. This field is AoG-specific.
OutputAudioConfig
Instructs the speech synthesizer how to generate the output audio content. If this audio config is supplied in a request, it overrides all existing text-to-speech settings applied to the agent.
The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice’s natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).
QueryInput
Represents the query input. It can contain one of:
1. An audio config which instructs the speech recognizer how to process the speech audio.
2. A conversational query in the form of text.
3. An event that specifies which intent to trigger.
Instructs the speech recognizer how to process the speech audio.
The event to be processed.
QueryParameters
Represents the parameters of the conversational query.
The geo location of this conversational query.
Specifies whether to delete all contexts in the current session before the new ones are activated.
This field can be used to pass custom data to your webhook. Arbitrary JSON objects are supported. If supplied, the value is used to populate the WebhookRequest.original_detect_intent_request.payload field sent to your webhook.
Configures the type of sentiment analysis to perform. If not provided, sentiment analysis is not performed. Note: Sentiment Analysis is only currently available for Enterprise Edition agents.
This field can be used to pass HTTP headers for a webhook call. These headers will be sent to the webhook along with the headers that have been configured through the Dialogflow web console. The headers defined within this field will overwrite the headers configured through the Dialogflow console if there is a conflict. Header names are case-insensitive. Google’s specified headers are not allowed, including: “Host”, “Content-Length”, “Connection”, “From”, “User-Agent”, “Accept-Encoding”, “If-Modified-Since”, “If-None-Match”, “X-Forwarded-For”, etc.
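A hypothetical sketch of the merge behavior described above, assuming that disallowed reserved headers are simply dropped (the service may instead reject the request) and that console headers carry no reserved names:

```python
# Sketch of the documented header merge: request-level headers
# overwrite console-configured ones case-insensitively, and
# Google-reserved names are dropped (an assumption of this sketch).

RESERVED = {"host", "content-length", "connection", "from",
            "user-agent", "accept-encoding", "if-modified-since",
            "if-none-match", "x-forwarded-for"}

def merge_webhook_headers(console_headers, request_headers):
    merged = {k.lower(): v for k, v in console_headers.items()}
    for name, value in request_headers.items():
        if name.lower() in RESERVED:
            continue                   # reserved names are not allowed
        merged[name.lower()] = value   # request-level wins on conflict
    return merged
```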
QueryResult
Represents the result of conversational query or event processing.
The language that was triggered during intent detection. See Language Support <https://cloud.google.com/dialogflow/docs/reference/language> for a list of the currently supported language codes.
The action name from the matched intent.
This field is set to:
- false if the matched intent has required parameters and not all of the required parameter values have been collected.
- true if all required parameter values have been collected, or if the matched intent doesn’t contain any required parameters.
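The rule above amounts to the following (the helper is hypothetical, shown only to make the flag's semantics concrete):

```python
# Sketch of the all_required_params_present flag semantics.

def all_required_params_present(required_params, collected_values):
    # True when every required parameter has a collected value,
    # including the trivial case of no required parameters.
    return all(p in collected_values and collected_values[p] is not None
               for p in required_params)
```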
The collection of rich messages to present to the user.
If the query was fulfilled by a webhook call, this field is set to the value of the payload field returned in the webhook response.
The intent that matched the conversational query. Only some fields are filled in this message, including but not limited to: name, display_name, end_interaction and is_fallback.
Free-form diagnostic information for the associated detect intent request. The fields of this data can change without notice, so you should not write code that depends on its structure. The data may contain:
- webhook call latency
- webhook errors
The result from Knowledge Connector (if any), ordered by decreasing KnowledgeAnswers.match_confidence.
ReloadDocumentRequest
Request message for [Documents.ReloadDocument][google.cloud.dialogflow.v2beta1.Documents.ReloadDocument].
The source for document reloading. Optional. If provided, the service will load the contents from the source and update the document in the knowledge base.
RestoreAgentRequest
The request message for [Agents.RestoreAgent][google.cloud.dialogflow.v2beta1.Agents.RestoreAgent].
Required. The agent to restore.
Zip-compressed raw byte content for the agent.
SearchAgentsRequest
The request message for [Agents.SearchAgents][google.cloud.dialogflow.v2beta1.Agents.SearchAgents].
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
SearchAgentsResponse
The response message for [Agents.SearchAgents][google.cloud.dialogflow.v2beta1.Agents.SearchAgents].
Token to retrieve the next page of results, or empty if there are no more results in the list.
Sentiment
The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
SentimentAnalysisRequestConfig
Configures the types of sentiment analysis to perform.
SentimentAnalysisResult
The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user’s attitude as positive, negative, or neutral. For [Participants.AnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.AnalyzeContent], it needs to be configured in [DetectIntentRequest.query_params][google.cloud.dialogflow.v2beta1.DetectIntentRequest.query_params]. For [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent], it needs to be configured in [StreamingDetectIntentRequest.query_params][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_params]. And for [Participants.AnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.AnalyzeContent] and [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent], it needs to be configured in [ConversationProfile.human_agent_assistant_config][google.cloud.dialogflow.v2beta1.ConversationProfile.human_agent_assistant_config].
SessionEntityType
A session represents a conversation between a Dialogflow agent and an
end-user. You can create special entities, called session entities,
during a session. Session entities can extend or replace custom entity
types and only exist during the session that they were created for.
All session data, including session entities, is stored by Dialogflow
for 20 minutes. For more information, see the session entity guide <https://cloud.google.com/dialogflow/docs/entities-session>.
Required. Indicates whether the additional data should override or supplement the custom entity type definition.
SetAgentRequest
The request message for Agents.SetAgent.
Optional. The mask to control which fields get updated.
SpeechContext
Hints for the speech recognizer to help with recognition in a specific conversation state.
Optional. Boost for this context compared to other contexts:
- If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar sounding phrases.
- If the boost is unspecified or non-positive, Dialogflow will not apply any boost.
Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search.
SpeechWordInfo
Information for a word recognized by the speech recognizer.
Time offset relative to the beginning of the audio that corresponds to the start of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.
The Speech confidence between 0.0 and 1.0 for this word. A higher number indicates an estimated greater likelihood that the recognized word is correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be fully stable over time for the same audio input. Users should also not rely on it to always be provided.
Status
API documentation for Status class.
StreamingDetectIntentRequest
The top-level message sent by the client to the [Sessions.StreamingDetectIntent][google.cloud.dialogflow.v2beta1.Sessions.StreamingDetectIntent] method. Multiple request messages should be sent in order:
1. The first message must contain [session][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.session] and [query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input], plus optionally [query_params][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio].
2. If [query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input] was set to [query_input.audio_config][google.cloud.dialogflow.v2beta1.InputAudioConfig], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If you decide to rather detect an intent from text input after you already started Speech recognition, please send a message with query_input.text. However, note that:
   - Dialogflow will bill you for the audio duration so far.
   - Dialogflow discards all Speech recognition results in favor of the input text.
   - Dialogflow will use the language code from the first message.
After you sent all input, you must half-close or abort the request stream.
The parameters of this query.
DEPRECATED. Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2beta1.InputAudioConfig.single_utterance] instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects that the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.
Mask for [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config] indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level. If unspecified or empty, [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config] replaces the agent-level config in its entirety.
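The message-ordering rules for StreamingDetectIntentRequest can be sketched as a request generator. This is a minimal illustration using plain dicts shaped like the message fields described above; the session path, encoding, and chunking are placeholder assumptions, not values mandated by the API.

```python
def streaming_requests(session, audio_chunks):
    """Yield StreamingDetectIntentRequest-shaped dicts in the required order.

    The first message carries session and query_input (audio config) and no
    input_audio; every subsequent message carries only input_audio.
    """
    # 1. Configuration message: session + query_input, never input_audio.
    yield {
        "session": session,
        "query_input": {
            "audio_config": {
                "audio_encoding": "AUDIO_ENCODING_LINEAR_16",
                "sample_rate_hertz": 16000,
                "language_code": "en-US",
            }
        },
    }
    # 2. Audio messages: input_audio only. After the last chunk the caller
    #    half-closes the stream, per the description above.
    for chunk in audio_chunks:
        yield {"input_audio": chunk}
```

Sending query_input.text mid-stream (to switch from audio to text) would follow the same pattern, with the caveats about billing and discarded recognition results noted above.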
StreamingDetectIntentResponse
The top-level message returned from the StreamingDetectIntent method. Multiple response messages can be returned in order:
1. If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.
2. The next message contains response_id, query_result, alternative_query_results, and optionally webhook_status if a webhook was called.
3. If output_audio_config was specified in the request or an agent-level speech synthesizer is configured, all subsequent messages contain output_audio and output_audio_config.
The result of speech recognition.
If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing QueryResult.intent_detection_confidence. If Knowledge Connectors are disabled, this field will be empty until multiple responses for regular intents are supported, at which point those additional results will be surfaced here.
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty. In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.
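One way to consume the response ordering described above is to keep only the latest transcript until the final query result arrives. This is a sketch with plain dicts standing in for the response messages; the helper itself is hypothetical, but the field names match the description.

```python
def consume(responses):
    """Walk StreamingDetectIntentResponse-shaped dicts in order.

    Each recognition_result supersedes the previous transcript; the
    query_result message follows the final recognition result.
    """
    transcript = ""
    query_result = None
    for resp in responses:
        rec = resp.get("recognition_result")
        if rec is not None:
            # A more complete transcript of what the user said so far.
            transcript = rec["transcript"]
        elif "query_result" in resp:
            query_result = resp["query_result"]
    return transcript, query_result
```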
StreamingRecognitionResult
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance. Example:
1. transcript: "tube"
2. transcript: "to be a"
3. transcript: "to be"
4. transcript: "to be or not to be" is_final: true
5. transcript: " that's"
6. transcript: " that is"
7. message_type: END_OF_SINGLE_UTTERANCE
8. transcript: " that is the question" is_final: true
Only two of the responses contain final results (#4 and #8, indicated by is_final: true). Concatenating these generates the full transcript: "to be or not to be that is the question". In each response we populate: - for TRANSCRIPT: transcript and possibly is_final. - for END_OF_SINGLE_UTTERANCE: only message_type.
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true, and you should not rely on it being accurate or even set.
Word-specific information for the words recognized by Speech in [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript]. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
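The worked example above (final transcripts at steps 4 and 8) concatenates like this; the dicts below are a sketch of the fields described, not real API objects.

```python
# StreamingRecognitionResult-shaped dicts for the final and end-of-utterance
# messages from the example above.
results = [
    {"transcript": "to be or not to be", "is_final": True},
    {"message_type": "END_OF_SINGLE_UTTERANCE"},
    {"transcript": " that is the question", "is_final": True},
]

# Only results with is_final set contribute to the full transcript.
full = "".join(r["transcript"] for r in results if r.get("is_final"))
# full == "to be or not to be that is the question"
```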
Struct
API documentation for Struct class.
SubAgent
Contains basic configuration for a sub-agent.
Optional. The unique identifier (environment name in the Dialogflow console) of this sub-agent environment. Assumes the draft environment if environment is not set.
SynthesizeSpeechConfig
Configuration of how speech should be synthesized.
Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.
Optional. An identifier which selects ‘audio effects’ profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given.
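As an illustration of the documented pitch range, here is a hypothetical helper that builds a SynthesizeSpeechConfig-shaped dict and keeps pitch within [-20.0, 20.0]. The helper and its clamping behavior are assumptions for the sketch; the service itself defines how out-of-range values are handled.

```python
def make_synthesize_config(pitch=0.0, effects_profile_id=None):
    """Build a SynthesizeSpeechConfig-shaped dict.

    pitch is kept in the documented [-20.0, 20.0] semitone range;
    effects_profile_id is an ordered list, applied on top of each other.
    """
    cfg = {"pitch": max(-20.0, min(20.0, float(pitch)))}
    if effects_profile_id:
        cfg["effects_profile_id"] = list(effects_profile_id)
    return cfg
```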
TextInput
Represents the natural language text to be processed.
Required. The language of this conversational query. See Language Support <https://cloud.google.com/dialogflow/docs/reference/language>__ for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
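A TextInput is typically wrapped in a QueryInput when detecting an intent. A minimal sketch as a plain dict, with placeholder text; the field names follow the descriptions above.

```python
# QueryInput carrying a TextInput. language_code is required and may differ
# between queries in the same session, per the note above.
query_input = {
    "text": {
        "text": "I want to book a flight",
        "language_code": "en-US",
    }
}
```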
Timestamp
API documentation for Timestamp class.
TrainAgentRequest
The request message for [Agents.TrainAgent][google.cloud.dialogflow.v2beta1.Agents.TrainAgent].
UpdateContextRequest
The request message for [Contexts.UpdateContext][google.cloud.dialogflow.v2beta1.Contexts.UpdateContext].
Optional. The mask to control which fields get updated.
UpdateDocumentRequest
Request message for [Documents.UpdateDocument][google.cloud.dialogflow.v2beta1.Documents.UpdateDocument].
Optional. Not specified means update all. Currently, only display_name can be updated; an InvalidArgument error will be returned for attempts to update other fields.
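The update-mask restriction above can be sketched as a request dict. The document name here is a placeholder, and the plain-dict shape is an illustrative stand-in for the request message.

```python
# UpdateDocumentRequest-shaped dict: only display_name may be listed in the
# mask; any other path would result in InvalidArgument, per the docs above.
request = {
    "document": {
        "name": "projects/my-project/knowledgeBases/my-kb/documents/my-doc",
        "display_name": "Renamed FAQ",
    },
    "update_mask": {"paths": ["display_name"]},
}
```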
UpdateEntityTypeRequest
The request message for [EntityTypes.UpdateEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.UpdateEntityType].
Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.
UpdateIntentRequest
The request message for [Intents.UpdateIntent][google.cloud.dialogflow.v2beta1.Intents.UpdateIntent].
Optional. The language used to access language-specific data. If not specified, the agent's default language is used. For more information, see Multilingual intent and entity data <https://cloud.google.com/dialogflow/docs/agents-multilingual#intent-entity>__.
Optional. The resource view to apply to the returned intent.
UpdateKnowledgeBaseRequest
Request message for [KnowledgeBases.UpdateKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.UpdateKnowledgeBase].
Optional. Not specified means update all. Currently, only display_name can be updated; an InvalidArgument error will be returned for attempts to update other fields.
UpdateSessionEntityTypeRequest
The request message for [SessionEntityTypes.UpdateSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.UpdateSessionEntityType].
Optional. The mask to control which fields get updated.
ValidationError
Represents a single validation error.
The names of the entries that the error is associated with. Format:
- "projects/<Project ID>/agent", if the error is associated with the entire agent.
- "projects/<Project ID>/agent/intents/<Intent ID>", if the error is associated with certain intents.
- "projects/<Project ID>/agent/intents/<Intent ID>/trainingPhrases/<Training Phrase ID>", if the error is associated with certain intent training phrases.
- "projects/<Project ID>/agent/intents/<Intent ID>/parameters/<Parameter ID>", if the error is associated with certain intent parameters.
- "projects/<Project ID>/agent/entities/<Entity ID>", if the error is associated with certain entities.
ValidationResult
Represents the output of agent validation.
Value
API documentation for Value class.
VoiceSelectionParams
Description of which voice to use for speech synthesis.
Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and [name][google.cloud.dialogflow.v2beta1.VoiceSelectionParams.name]. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.
WaitOperationRequest
API documentation for WaitOperationRequest class.
WebhookRequest
The request message for a webhook call.
The unique identifier of the response. Contains the same value as [Streaming]DetectIntentResponse.response_id.
Alternative query results from KnowledgeService.
WebhookResponse
The response message for a webhook call. This response is validated by the Dialogflow server. If validation fails, an error will be returned in the [QueryResult.diagnostic_info][google.cloud.dialogflow.v2beta1.QueryResult.diagnostic_info] field. Setting JSON fields to an empty value with the wrong type is a common error. To avoid this error:
- Use "" for empty strings
- Use {} or null for empty objects
- Use [] or null for empty arrays
For more information, see the Protocol Buffers Language Guide <https://developers.google.com/protocol-buffers/docs/proto3#json>__.
Optional. The rich response messages intended for the end-user. When provided, Dialogflow uses this field to populate [QueryResult.fulfillment_messages][google.cloud.dialogflow.v2beta1.QueryResult.fulfillment_messages] sent to the integration or API caller.
Optional. This field can be used to pass custom data from your webhook to the integration or API caller. Arbitrary JSON objects are supported. When provided, Dialogflow uses this field to populate [QueryResult.webhook_payload][google.cloud.dialogflow.v2beta1.QueryResult.webhook_payload] sent to the integration or API caller. This field is also used by the Google Assistant integration <https://cloud.google.com/dialogflow/docs/integrations/aog>__ for rich response messages. See the format definition at Google Assistant Dialogflow webhook format <https://developers.google.com/assistant/actions/build/json/dialogflow-webhook-json>__.
Optional. Invokes the supplied events. When this field is set, Dialogflow ignores the fulfillment_text, fulfillment_messages, and payload fields.
Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session. Setting this data from a webhook overwrites the session entity types that have been set using detectIntent, streamingDetectIntent, or [SessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityType] management methods.
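The empty-value rules for WebhookResponse noted above can be demonstrated with a minimal JSON body. The JSON field names use the proto3 JSON mapping (lowerCamelCase); the specific values are placeholders.

```python
import json

# A minimal WebhookResponse body illustrating the empty-value conventions:
# "" for empty strings, {} (or null) for empty objects, [] (or null) for
# empty arrays. Mixing these up fails server-side validation.
response_body = {
    "fulfillmentText": "",        # empty string, not null or {}
    "payload": {},                # empty object, not ""
    "fulfillmentMessages": [],    # empty array, not ""
    "outputContexts": [],
}
body = json.dumps(response_body)
```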