Index
Agents (interface)
AsynchronousFulfillment (interface)
Contexts (interface)
Documents (interface)
EntityTypes (interface)
Intents (interface)
KnowledgeBases (interface)
SessionEntityTypes (interface)
Sessions (interface)
Agent (message)
Agent.ApiVersion (enum)
Agent.MatchMode (enum)
Agent.Tier (enum)
AnnotatedConversationDataset (message)
AudioEncoding (enum)
BatchCreateEntitiesRequest (message)
BatchDeleteEntitiesRequest (message)
BatchDeleteEntityTypesRequest (message)
BatchDeleteIntentsRequest (message)
BatchUpdateEntitiesRequest (message)
BatchUpdateEntityTypesRequest (message)
BatchUpdateEntityTypesResponse (message)
BatchUpdateIntentsRequest (message)
BatchUpdateIntentsResponse (message)
Context (message)
CreateContextRequest (message)
CreateDocumentRequest (message)
CreateEntityTypeRequest (message)
CreateIntentRequest (message)
CreateKnowledgeBaseRequest (message)
CreateSessionEntityTypeRequest (message)
DeleteAgentRequest (message)
DeleteAllContextsRequest (message)
DeleteContextRequest (message)
DeleteDocumentRequest (message)
DeleteEntityTypeRequest (message)
DeleteIntentRequest (message)
DeleteKnowledgeBaseRequest (message)
DeleteSessionEntityTypeRequest (message)
DetectIntentRequest (message)
DetectIntentResponse (message)
Document (message)
Document.KnowledgeType (enum)
EntityType (message)
EntityType.AutoExpansionMode (enum)
EntityType.Entity (message)
EntityType.Kind (enum)
EntityTypeBatch (message)
EventInput (message)
ExportAgentRequest (message)
ExportAgentResponse (message)
GcsSource (message)
GetAgentRequest (message)
GetContextRequest (message)
GetDocumentRequest (message)
GetEntityTypeRequest (message)
GetIntentRequest (message)
GetKnowledgeBaseRequest (message)
GetSessionEntityTypeRequest (message)
GetValidationResultRequest (message)
ImportAgentRequest (message)
InputAudioConfig (message)
Intent (message)
Intent.FollowupIntentInfo (message)
Intent.Message (message)
Intent.Message.BasicCard (message)
Intent.Message.BasicCard.Button (message)
Intent.Message.BasicCard.Button.OpenUriAction (message)
Intent.Message.BrowseCarouselCard (message)
Intent.Message.BrowseCarouselCard.BrowseCarouselCardItem (message)
Intent.Message.BrowseCarouselCard.BrowseCarouselCardItem.OpenUrlAction (message)
Intent.Message.BrowseCarouselCard.BrowseCarouselCardItem.OpenUrlAction.UrlTypeHint (enum)
Intent.Message.BrowseCarouselCard.ImageDisplayOptions (enum)
Intent.Message.Card (message)
Intent.Message.Card.Button (message)
Intent.Message.CarouselSelect (message)
Intent.Message.CarouselSelect.Item (message)
Intent.Message.ColumnProperties (message)
Intent.Message.ColumnProperties.HorizontalAlignment (enum)
Intent.Message.Image (message)
Intent.Message.LinkOutSuggestion (message)
Intent.Message.ListSelect (message)
Intent.Message.ListSelect.Item (message)
Intent.Message.MediaContent (message)
Intent.Message.MediaContent.ResponseMediaObject (message)
Intent.Message.MediaContent.ResponseMediaType (enum)
Intent.Message.Platform (enum)
Intent.Message.QuickReplies (message)
Intent.Message.RbmCardContent (message)
Intent.Message.RbmCardContent.RbmMedia (message)
Intent.Message.RbmCardContent.RbmMedia.Height (enum)
Intent.Message.RbmCarouselCard (message)
Intent.Message.RbmCarouselCard.CardWidth (enum)
Intent.Message.RbmStandaloneCard (message)
Intent.Message.RbmStandaloneCard.CardOrientation (enum)
Intent.Message.RbmStandaloneCard.ThumbnailImageAlignment (enum)
Intent.Message.RbmSuggestedAction (message)
Intent.Message.RbmSuggestedAction.RbmSuggestedActionDial (message)
Intent.Message.RbmSuggestedAction.RbmSuggestedActionOpenUri (message)
Intent.Message.RbmSuggestedAction.RbmSuggestedActionShareLocation (message)
Intent.Message.RbmSuggestedReply (message)
Intent.Message.RbmSuggestion (message)
Intent.Message.RbmText (message)
Intent.Message.SelectItemInfo (message)
Intent.Message.SimpleResponse (message)
Intent.Message.SimpleResponses (message)
Intent.Message.Suggestion (message)
Intent.Message.Suggestions (message)
Intent.Message.TableCard (message)
Intent.Message.TableCardCell (message)
Intent.Message.TableCardRow (message)
Intent.Message.TelephonyPlayAudio (message)
Intent.Message.TelephonySynthesizeSpeech (message)
Intent.Message.TelephonyTransferCall (message)
Intent.Message.Text (message)
Intent.Parameter (message)
Intent.TrainingPhrase (message)
Intent.TrainingPhrase.Part (message)
Intent.TrainingPhrase.Type (enum)
Intent.WebhookState (enum)
IntentBatch (message)
IntentView (enum)
KnowledgeAnswers (message)
KnowledgeAnswers.Answer (message)
KnowledgeAnswers.Answer.MatchConfidenceLevel (enum)
KnowledgeBase (message)
KnowledgeOperationMetadata (message)
KnowledgeOperationMetadata.State (enum)
LabelConversationResponse (message)
ListContextsRequest (message)
ListContextsResponse (message)
ListDocumentsRequest (message)
ListDocumentsResponse (message)
ListEntityTypesRequest (message)
ListEntityTypesResponse (message)
ListIntentsRequest (message)
ListIntentsResponse (message)
ListKnowledgeBasesRequest (message)
ListKnowledgeBasesResponse (message)
ListSessionEntityTypesRequest (message)
ListSessionEntityTypesResponse (message)
OriginalDetectIntentRequest (message)
OutputAudioConfig (message)
OutputAudioEncoding (enum)
QueryInput (message)
QueryParameters (message)
QueryResult (message)
ReloadDocumentRequest (message)
RestoreAgentRequest (message)
SearchAgentsRequest (message)
SearchAgentsResponse (message)
Sentiment (message)
SentimentAnalysisRequestConfig (message)
SentimentAnalysisResult (message)
SessionEntityType (message)
SessionEntityType.EntityOverrideMode (enum)
SetAgentRequest (message)
SpeechContext (message)
SpeechModelVariant (enum)
SpeechWordInfo (message)
SsmlVoiceGender (enum)
StreamingDetectIntentRequest (message)
StreamingDetectIntentResponse (message)
StreamingRecognitionResult (message)
StreamingRecognitionResult.MessageType (enum)
SynthesizeSpeechConfig (message)
TextInput (message)
TrainAgentRequest (message)
UpdateContextRequest (message)
UpdateDocumentRequest (message)
UpdateEntityTypeRequest (message)
UpdateIntentRequest (message)
UpdateKnowledgeBaseRequest (message)
UpdateSessionEntityTypeRequest (message)
ValidationError (message)
ValidationError.Severity (enum)
ValidationResult (message)
VoiceSelectionParams (message)
WebhookRequest (message)
WebhookResponse (message)
Agents
Agents are best described as Natural Language Understanding (NLU) modules that transform user requests into actionable data. You can include agents in your app, product, or service to determine user intent and respond to the user in a natural way.
After you create an agent, you can add Intents
, Contexts
, Entity Types
, Webhooks
, and so on to manage the flow of a conversation and match user input to predefined intents and actions.
You can create an agent using both Dialogflow Standard Edition and Dialogflow Enterprise Edition. For details, see Dialogflow Editions.
You can save your agent for backup or versioning by exporting the agent by using the ExportAgent
method. You can import a saved agent by using the ImportAgent
method.
Dialogflow provides several prebuilt agents for common conversation scenarios such as determining a date and time, converting currency, and so on.
For more information about agents, see the Dialogflow documentation.
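As an illustration of the backup workflow described above, here is a minimal, hedged sketch assuming the google-cloud-dialogflow Python package and its v2beta1 client surface; the project ID and Cloud Storage bucket are placeholders, not real resources.

```python
# Hedged sketch: backing up an agent with ExportAgent.
# Assumes the google-cloud-dialogflow package; "my-project" and the bucket are placeholders.
from google.cloud import dialogflow_v2beta1 as dialogflow

agents_client = dialogflow.AgentsClient()
parent = "projects/my-project"  # project the agent belongs to

# ExportAgent is a long-running operation; the agent ZIP is written to agent_uri.
operation = agents_client.export_agent(
    request={"parent": parent, "agent_uri": "gs://my-agent-backups/agent.zip"}
)
operation.result()  # blocks until the export completes
```

A saved ZIP produced this way can later be passed back through ImportAgent or RestoreAgent, per the method descriptions below.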
| Method | Description |
|---|---|
| DeleteAgent | Deletes the specified agent. |
| ExportAgent | Exports the specified agent to a ZIP file. Operation <response: |
| GetAgent | Retrieves the specified agent. |
| GetValidationResult | Gets agent validation result. Agent validation is performed during training and is updated automatically when training completes. |
| ImportAgent | Imports the specified agent from a ZIP file. Uploads new intents and entity types without deleting the existing ones. Intents and entity types with the same name are replaced with the new versions from ImportAgentRequest. Operation <response: |
| RestoreAgent | Restores the specified agent from a ZIP file. Replaces the current agent version with a new one. All the intents and entity types in the older version are deleted. Operation <response: |
| SearchAgents | Returns the list of agents. Since there is at most one conversational agent per project, this method is useful primarily for listing all agents across projects the caller has access to. This can be achieved with the wildcard project collection ID "-". Refer to List Sub-Collections. |
| SetAgent | Creates or updates the specified agent. |
| TrainAgent | Trains the specified agent. Operation <response: |
AsynchronousFulfillment
Fulfillment is code that's deployed as a webhook and lets your Dialogflow agent call business logic on an intent-by-intent basis. Some fulfillment calls can take a long time (on the order of minutes). With asynchronous fulfillment, customers can send an on-hold message while their end users are waiting for an asynchronous backend call to return. The asynchronous fulfillment notifies Dialogflow of its completion through the PushFulfillmentResult method.
Contexts
A context represents additional information included with user input or with an intent returned by the Dialogflow API. Contexts are helpful for differentiating user input which may be vague or have a different meaning depending on additional details from your application such as user setting and preferences, previous user input, where the user is in your application, geographic location, and so on.
You can include contexts as input parameters of a DetectIntent
(or StreamingDetectIntent
) request, or as output contexts included in the returned intent. Contexts expire when an intent is matched, after the number of DetectIntent
requests specified by the lifespan_count
parameter, or after 20 minutes if no intents are matched for a DetectIntent
request.
For more information about contexts, see the Dialogflow documentation.
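To make the lifecycle above concrete, the following is a hedged sketch of creating a context for a session, assuming the google-cloud-dialogflow Python package; the project ID, session ID, and context ID are placeholders.

```python
# Hedged sketch: creating a context so that follow-up queries in the same
# session can be disambiguated. Resource name formats follow this reference.
from google.cloud import dialogflow_v2beta1 as dialogflow

contexts_client = dialogflow.ContextsClient()
session = "projects/my-project/agent/sessions/my-session-id"  # placeholder IDs

context = dialogflow.Context(
    name=f"{session}/contexts/play-music-followup",
    lifespan_count=5,  # expires after 5 DetectIntent requests (or after 20 minutes)
)
created = contexts_client.create_context(
    request={"parent": session, "context": context}
)
print(created.name)
```

The same Context messages can instead be passed as input contexts in QueryParameters of a DetectIntent request, which is often preferable when contexts should apply to a single query only.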
| Method | Description |
|---|---|
| CreateContext | Creates a context. If the specified context already exists, overrides the context. |
| DeleteAllContexts | Deletes all active contexts in the specified session. |
| DeleteContext | Deletes the specified context. |
| GetContext | Retrieves the specified context. |
| ListContexts | Returns the list of all contexts in the specified session. |
| UpdateContext | Updates the specified context. |
Documents
Manages documents of a knowledge base.
| Method | Description |
|---|---|
| CreateDocument | Creates a new document. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents. Operation <response: |
| DeleteDocument | Deletes the specified document. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents. Operation <response: |
| GetDocument | Retrieves the specified document. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents. |
| ListDocuments | Returns the list of all documents of the knowledge base. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents. |
| ReloadDocument | Reloads the specified document from its specified source, content_uri or content. The previously loaded content of the document will be deleted. Note: Even when the content of the document has not changed, there still may be side effects because of internal implementation changes. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents. Operation <response: |
| UpdateDocument | Updates the specified document. Note: The projects.agent.knowledgeBases.documents resource is deprecated; only use projects.knowledgeBases.documents. Operation <response: |
EntityTypes
Entities are extracted from user input and represent parameters that are meaningful to your application. For example, a date range, a proper name such as a geographic location or landmark, and so on. Entities represent actionable data for your application.
When you define an entity, you can also include synonyms that all map to that entity. For example, "soft drink", "soda", "pop", and so on.
There are three types of entities:
System - entities that are defined by the Dialogflow API for common data types such as date, time, currency, and so on. A system entity is represented by the
EntityType
type.Developer - entities that are defined by you that represent actionable data that is meaningful to your application. For example, you could define a
pizza.sauce
entity for red or white pizza sauce, apizza.cheese
entity for the different types of cheese on a pizza, apizza.topping
entity for different toppings, and so on. A developer entity is represented by theEntityType
type.User - entities that are built for an individual user such as favorites, preferences, playlists, and so on. A user entity is represented by the
SessionEntityType
type.
For more information about entity types, see the Dialogflow documentation.
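To illustrate the developer entity example above (pizza.sauce), here is a hedged sketch assuming the google-cloud-dialogflow Python package; the project ID and all values are placeholders.

```python
# Hedged sketch: defining a developer (KIND_MAP) entity type with synonyms.
from google.cloud import dialogflow_v2beta1 as dialogflow

entity_types_client = dialogflow.EntityTypesClient()
parent = "projects/my-project/agent"  # placeholder project

entity_type = dialogflow.EntityType(
    display_name="pizza.sauce",
    kind=dialogflow.EntityType.Kind.KIND_MAP,
    entities=[
        # Each canonical value maps a set of synonyms back to itself.
        dialogflow.EntityType.Entity(value="red", synonyms=["red", "marinara", "tomato"]),
        dialogflow.EntityType.Entity(value="white", synonyms=["white", "alfredo"]),
    ],
)
created = entity_types_client.create_entity_type(
    request={"parent": parent, "entity_type": entity_type}
)
print(created.name)
```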
| Method | Description |
|---|---|
| BatchCreateEntities | Creates multiple new entities in the specified entity type. Operation <response: |
| BatchDeleteEntities | Deletes entities in the specified entity type. Operation <response: |
| BatchDeleteEntityTypes | Deletes entity types in the specified agent. Operation <response: |
| BatchUpdateEntities | Updates or creates multiple entities in the specified entity type. This method does not affect entities in the entity type that aren't explicitly specified in the request. Operation <response: |
| BatchUpdateEntityTypes | Updates or creates multiple entity types in the specified agent. Operation <response: |
| CreateEntityType | Creates an entity type in the specified agent. |
| DeleteEntityType | Deletes the specified entity type. |
| GetEntityType | Retrieves the specified entity type. |
| ListEntityTypes | Returns the list of all entity types in the specified agent. |
| UpdateEntityType | Updates the specified entity type. |
Intents
An intent represents a mapping between input from a user and an action to be taken by your application. When you pass user input to the DetectIntent
(or StreamingDetectIntent
) method, the Dialogflow API analyzes the input and searches for a matching intent. If no match is found, the Dialogflow API returns a fallback intent (is_fallback
= true).
You can provide additional information for the Dialogflow API to use to match user input to an intent by adding the following to your intent.
Contexts - provide additional context for intent analysis. For example, if an intent is related to an object in your application that plays music, you can provide a context to determine when to match the intent if the user input is "turn it off". You can include a context that matches the intent when there is previous user input of "play music", and not when there is previous user input of "turn on the light".
Events - allow for matching an intent by using an event name instead of user input. Your application can provide an event name and related parameters to the Dialogflow API to match an intent. For example, when your application starts, you can send a welcome event with a user name parameter to the Dialogflow API to match an intent with a personalized welcome message for the user.
Training phrases - provide examples of user input to train the Dialogflow API agent to better match intents.
For more information about intents, see the Dialogflow documentation.
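As a concrete companion to the description above, here is a hedged sketch of creating an intent with training phrases and a text response, assuming the google-cloud-dialogflow Python package; all names and phrases are placeholders.

```python
# Hedged sketch: creating an intent that matches music-playback requests.
from google.cloud import dialogflow_v2beta1 as dialogflow

intents_client = dialogflow.IntentsClient()
parent = "projects/my-project/agent"  # placeholder project

intent = dialogflow.Intent(
    display_name="play.music",
    training_phrases=[
        dialogflow.Intent.TrainingPhrase(
            parts=[dialogflow.Intent.TrainingPhrase.Part(text="play some music")]
        ),
        dialogflow.Intent.TrainingPhrase(
            parts=[dialogflow.Intent.TrainingPhrase.Part(text="turn on the radio")]
        ),
    ],
    messages=[
        dialogflow.Intent.Message(
            text=dialogflow.Intent.Message.Text(text=["Playing music."])
        )
    ],
)
created = intents_client.create_intent(request={"parent": parent, "intent": intent})
print(created.name)
```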
| Method | Description |
|---|---|
| BatchDeleteIntents | Deletes intents in the specified agent. Operation <response: |
| BatchUpdateIntents | Updates or creates multiple intents in the specified agent. Operation <response: |
| CreateIntent | Creates an intent in the specified agent. |
| DeleteIntent | Deletes the specified intent and its direct or indirect followup intents. |
| GetIntent | Retrieves the specified intent. |
| ListIntents | Returns the list of all intents in the specified agent. |
| UpdateIntent | Updates the specified intent. |
KnowledgeBases
Manages knowledge bases.
Allows users to set up and maintain knowledge bases with their knowledge data.
| Method | Description |
|---|---|
| CreateKnowledgeBase | Creates a knowledge base. Note: The |
| DeleteKnowledgeBase | Deletes the specified knowledge base. Note: The |
| GetKnowledgeBase | Retrieves the specified knowledge base. Note: The |
| ListKnowledgeBases | Returns the list of all knowledge bases of the specified agent. Note: The |
| UpdateKnowledgeBase | Updates the specified knowledge base. Note: The |
SessionEntityTypes
Entities are extracted from user input and represent parameters that are meaningful to your application. For example, a date range, a proper name such as a geographic location or landmark, and so on. Entities represent actionable data for your application.
Session entity types are referred to as User entity types and are entities that are built for an individual user such as favorites, preferences, playlists, and so on. You can redefine a session entity type at the session level.
Session entity methods do not work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
For more information about entity types, see the Dialogflow documentation.
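To show how a session-level redefinition looks in practice, here is a hedged sketch assuming the google-cloud-dialogflow Python package; the project ID, session ID, entity type display name, and values are placeholders.

```python
# Hedged sketch: overriding an entity type with user-specific values for one session.
from google.cloud import dialogflow_v2beta1 as dialogflow

client = dialogflow.SessionEntityTypesClient()
session = "projects/my-project/agent/sessions/my-session-id"  # placeholder IDs

session_entity_type = dialogflow.SessionEntityType(
    # The last path segment must match the display name of an existing entity type.
    name=f"{session}/entityTypes/playlist",
    entity_override_mode=(
        dialogflow.SessionEntityType.EntityOverrideMode.ENTITY_OVERRIDE_MODE_OVERRIDE
    ),
    entities=[
        dialogflow.EntityType.Entity(
            value="workout mix", synonyms=["workout mix", "gym playlist"]
        ),
    ],
)
created = client.create_session_entity_type(
    request={"parent": session, "session_entity_type": session_entity_type}
)
print(created.name)
```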
| Method | Description |
|---|---|
| CreateSessionEntityType | Creates a session entity type. If the specified session entity type already exists, overrides the session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration. |
| DeleteSessionEntityType | Deletes the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration. |
| GetSessionEntityType | Retrieves the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration. |
| ListSessionEntityTypes | Returns the list of all session entity types in the specified session. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration. |
| UpdateSessionEntityType | Updates the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration. |
Sessions
A session represents an interaction with a user. You retrieve user input and pass it to the DetectIntent
(or StreamingDetectIntent
) method to determine user intent and respond.
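A minimal, hedged sketch of that flow for a text query, assuming the google-cloud-dialogflow Python package; the project ID, session ID, and query text are placeholders.

```python
# Hedged sketch: sending a text query to DetectIntent and reading the result.
from google.cloud import dialogflow_v2beta1 as dialogflow

sessions_client = dialogflow.SessionsClient()
session = "projects/my-project/agent/sessions/my-session-id"  # placeholder IDs

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="play some jazz", language_code="en-US")
)
response = sessions_client.detect_intent(
    request={"session": session, "query_input": query_input}
)
print(response.query_result.intent.display_name)
print(response.query_result.fulfillment_text)
```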
| Method | Description |
|---|---|
| DetectIntent | Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect results of future queries. |
| StreamingDetectIntent | Processes a natural language query in audio format in a streaming fashion and returns structured, actionable data as a result. This method is only available via the gRPC API (not REST). |
Agent
Represents a conversational agent.
Fields | |
---|---|
parent |
Required. The project of this agent. Format: |
display_name |
Required. The name of this agent. |
default_language_code |
Required. The default language of the agent as a language tag. See Language Support for a list of the currently supported language codes. This field cannot be set by the |
supported_language_codes[] |
Optional. The list of all languages supported by this agent (except for the |
time_zone |
Required. The time zone of this agent from the time zone database, e.g., America/New_York, Europe/Paris. |
description |
Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected. |
avatar_uri |
Optional. The URI of the agent's avatar. Avatars are used throughout the Dialogflow console and in the self-hosted Web Demo integration. |
enable_logging |
Optional. Determines whether this agent should log conversation queries. |
match_mode |
Optional. Determines how intents are detected from user queries. |
classification_threshold |
Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used. |
api_version |
Optional. API version displayed in Dialogflow console. If not specified, V2 API is assumed. Clients are free to query different service endpoints for different API versions. However, bots connectors and webhook calls will follow the specified API version. |
tier |
Optional. The agent tier. If not specified, TIER_STANDARD is assumed. |
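For illustration, a hedged sketch of creating or updating an agent with these fields via SetAgent, assuming the google-cloud-dialogflow Python package; the project ID and field values are placeholders.

```python
# Hedged sketch: creating or updating an agent with SetAgent.
from google.cloud import dialogflow_v2beta1 as dialogflow

agents_client = dialogflow.AgentsClient()

agent = dialogflow.Agent(
    parent="projects/my-project",      # placeholder project
    display_name="Music Agent",
    default_language_code="en",
    time_zone="America/New_York",
    classification_threshold=0.4,      # scores below this trigger the fallback intent
)
updated = agents_client.set_agent(request={"agent": agent})
print(updated.display_name)
```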
ApiVersion
API version for the agent.
Enums | |
---|---|
API_VERSION_UNSPECIFIED |
Not specified. |
API_VERSION_V1 |
Legacy V1 API. |
API_VERSION_V2 |
V2 API. |
API_VERSION_V2_BETA_1 |
V2beta1 API. |
MatchMode
Match mode determines how intents are detected from user queries.
Enums | |
---|---|
MATCH_MODE_UNSPECIFIED |
Not specified. |
MATCH_MODE_HYBRID |
Best for agents with a small number of examples in intents and/or wide use of templates syntax and composite entities. |
MATCH_MODE_ML_ONLY |
Can be used for agents with a large number of examples in intents, especially the ones using @sys.any or very large developer entities. |
Tier
Represents the agent tier.
Enums | |
---|---|
TIER_UNSPECIFIED |
Not specified. This value should never be used. |
TIER_STANDARD |
Standard tier. |
TIER_ENTERPRISE |
Enterprise tier (Essentials). |
TIER_ENTERPRISE_PLUS |
Enterprise tier (Plus). |
AnnotatedConversationDataset
Represents an annotated conversation dataset. A ConversationDataset can have multiple AnnotatedConversationDatasets, each representing one result from one annotation task. An AnnotatedConversationDataset can only be generated from an annotation task, which is triggered by LabelConversation.
Fields | |
---|---|
name |
Output only. AnnotatedConversationDataset resource name. Format: |
display_name |
Required. The display name of the annotated conversation dataset. It's specified when user starts an annotation task. Maximum of 64 bytes. |
description |
Optional. The description of the annotated conversation dataset. Maximum of 10000 bytes. |
create_time |
Output only. Creation time of this annotated conversation dataset. |
example_count |
Output only. Number of examples in the annotated conversation dataset. |
completed_example_count |
Output only. Number of examples that have annotations in the annotated conversation dataset. |
question_type_name |
Output only. Question type name that identifies a labeling task. A question is a single task that a worker answers. A question type is set of related questions. Each question belongs to a particular question type. It can be used in CrowdCompute UI to filter and manage labeling tasks. |
AudioEncoding
Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.
Enums | |
---|---|
AUDIO_ENCODING_UNSPECIFIED |
Not specified. |
AUDIO_ENCODING_LINEAR_16 |
Uncompressed 16-bit signed little-endian samples (Linear PCM). |
AUDIO_ENCODING_FLAC |
FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16 . FLAC stream encoding supports 16-bit and 24-bit samples, however, not all fields in STREAMINFO are supported. |
AUDIO_ENCODING_MULAW |
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law. |
AUDIO_ENCODING_AMR |
Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000. |
AUDIO_ENCODING_AMR_WB |
Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000. |
AUDIO_ENCODING_OGG_OPUS |
Opus encoded audio frames in Ogg container (OggOpus). sample_rate_hertz must be 16000. |
AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE |
Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte . It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sample_rate_hertz must be 16000. |
BatchCreateEntitiesRequest
The request message for EntityTypes.BatchCreateEntities
.
Fields | |
---|---|
parent |
Required. The name of the entity type to create entities in. Format: Authorization requires the following Google IAM permission on the specified resource
|
entities[] |
Required. The entities to create. |
language_code |
Optional. The language of entity synonyms defined in |
BatchDeleteEntitiesRequest
The request message for EntityTypes.BatchDeleteEntities
.
Fields | |
---|---|
parent |
Required. The name of the entity type to delete entries for. Format: Authorization requires the following Google IAM permission on the specified resource
|
entity_values[] |
Required. The canonical |
language_code |
Optional. The language of entity synonyms defined in |
BatchDeleteEntityTypesRequest
The request message for EntityTypes.BatchDeleteEntityTypes
.
Fields | |
---|---|
parent |
Required. The name of the agent to delete all entity types for. Format: Authorization requires the following Google IAM permission on the specified resource
|
entity_type_names[] |
Required. The names of the entity types to delete. All names must point to the same agent as |
BatchDeleteIntentsRequest
The request message for Intents.BatchDeleteIntents
.
Fields | |
---|---|
parent |
Required. The name of the agent to delete all entity types for. Format: Authorization requires the following Google IAM permission on the specified resource
|
intents[] |
Required. The collection of intents to delete. Only intent |
BatchUpdateEntitiesRequest
The request message for EntityTypes.BatchUpdateEntities
.
Fields | |
---|---|
parent |
Required. The name of the entity type to update or create entities in. Format: Authorization requires the following Google IAM permission on the specified resource
|
entities[] |
Required. The entities to update or create. |
language_code |
Optional. The language of entity synonyms defined in |
update_mask |
Optional. The mask to control which fields get updated. |
BatchUpdateEntityTypesRequest
The request message for EntityTypes.BatchUpdateEntityTypes
.
Fields | ||
---|---|---|
parent |
Required. The name of the agent to update or create entity types in. Format: Authorization requires the following Google IAM permission on the specified resource
|
|
language_code |
Optional. The language of entity synonyms defined in |
|
update_mask |
Optional. The mask to control which fields get updated. |
|
Union field For each entity type in the batch:
|
||
entity_type_batch_uri |
The URI to a Google Cloud Storage file containing entity types to update or create. The file format can either be a serialized proto (of EntityBatch type) or a JSON object. Note: The URI must start with "gs://". |
|
entity_type_batch_inline |
The collection of entity types to update or create. |
BatchUpdateEntityTypesResponse
The response message for EntityTypes.BatchUpdateEntityTypes
.
Fields | |
---|---|
entity_types[] |
The collection of updated or created entity types. |
BatchUpdateIntentsRequest
The request message for Intents.BatchUpdateIntents
.
Fields | ||
---|---|---|
parent |
Required. The name of the agent to update or create intents in. Format: Authorization requires the following Google IAM permission on the specified resource
|
|
language_code |
Optional. The language of training phrases, parameters and rich messages defined in |
|
update_mask |
Optional. The mask to control which fields get updated. |
|
intent_view |
Optional. The resource view to apply to the returned intent. |
|
Union field For each intent in the batch:
|
||
intent_batch_uri |
The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://". |
|
intent_batch_inline |
The collection of intents to update or create. |
BatchUpdateIntentsResponse
The response message for Intents.BatchUpdateIntents
.
Fields | |
---|---|
intents[] |
The collection of updated or created intents. |
Context
Represents a context.
Fields | |
---|---|
name |
Required. The unique identifier of the context. Format: The If |
lifespan_count |
Optional. The number of conversational query requests after which the context expires. If set to |
parameters |
Optional. The collection of parameters associated with this context. Refer to this doc for syntax. |
CreateContextRequest
The request message for Contexts.CreateContext
.
Fields | |
---|---|
parent |
Required. The session to create a context for. Format: Authorization requires the following Google IAM permission on the specified resource
|
context |
Required. The context to create. |
CreateDocumentRequest
Request message for Documents.CreateDocument
.
Fields | |
---|---|
parent |
Required. The knowledge base to create a document for. Format: Authorization requires the following Google IAM permission on the specified resource
|
document |
Required. The document to create. |
CreateEntityTypeRequest
The request message for EntityTypes.CreateEntityType
.
Fields | |
---|---|
parent |
Required. The agent to create an entity type for. Format: Authorization requires the following Google IAM permission on the specified resource
|
entity_type |
Required. The entity type to create. |
language_code |
Optional. The language of entity synonyms defined in |
CreateIntentRequest
The request message for Intents.CreateIntent
.
Fields | |
---|---|
parent |
Required. The agent to create an intent for. Format: Authorization requires the following Google IAM permission on the specified resource
|
intent |
Required. The intent to create. |
language_code |
Optional. The language of training phrases, parameters and rich messages defined in |
intent_view |
Optional. The resource view to apply to the returned intent. |
CreateKnowledgeBaseRequest
Request message for KnowledgeBases.CreateKnowledgeBase
.
Fields | |
---|---|
parent |
Required. The project to create a knowledge base for. Format: Authorization requires the following Google IAM permission on the specified resource
|
knowledge_base |
Required. The knowledge base to create. |
CreateSessionEntityTypeRequest
The request message for SessionEntityTypes.CreateSessionEntityType
.
Fields | |
---|---|
parent |
Required. The session to create a session entity type for. Format: Authorization requires the following Google IAM permission on the specified resource
|
session_entity_type |
Required. The session entity type to create. |
DeleteAgentRequest
The request message for Agents.DeleteAgent
.
Fields | |
---|---|
parent |
Required. The project that the agent to delete is associated with. Format: Authorization requires the following Google IAM permission on the specified resource
|
DeleteAllContextsRequest
The request message for Contexts.DeleteAllContexts
.
Fields | |
---|---|
parent |
Required. The name of the session to delete all contexts from. Format: Authorization requires the following Google IAM permission on the specified resource
|
DeleteContextRequest
The request message for Contexts.DeleteContext
.
Fields | |
---|---|
name |
Required. The name of the context to delete. Format: Authorization requires the following Google IAM permission on the specified resource
|
DeleteDocumentRequest
Request message for Documents.DeleteDocument
.
Fields | |
---|---|
name |
The name of the document to delete. Format: Authorization requires the following Google IAM permission on the specified resource
|
DeleteEntityTypeRequest
The request message for EntityTypes.DeleteEntityType
.
Fields | |
---|---|
name |
Required. The name of the entity type to delete. Format: Authorization requires the following Google IAM permission on the specified resource
|
DeleteIntentRequest
The request message for Intents.DeleteIntent
.
Fields | |
---|---|
name |
Required. The name of the intent to delete. If this intent has direct or indirect followup intents, we also delete them. Format: Authorization requires the following Google IAM permission on the specified resource
|
DeleteKnowledgeBaseRequest
Request message for KnowledgeBases.DeleteKnowledgeBase
.
Fields | |
---|---|
name |
Required. The name of the knowledge base to delete. Format: Authorization requires the following Google IAM permission on the specified resource
|
force |
Optional. Force deletes the knowledge base. When set to true, any documents in the knowledge base are also deleted. |
DeleteSessionEntityTypeRequest
The request message for SessionEntityTypes.DeleteSessionEntityType
.
Fields | |
---|---|
name |
Required. The name of the entity type to delete. Format: Authorization requires the following Google IAM permission on the specified resource
|
DetectIntentRequest
The request to detect user's intent.
Fields | |
---|---|
session |
Required. The name of the session this query is sent to. Format: Authorization requires the following Google IAM permission on the specified resource
|
query_params |
Optional. The parameters of this query. |
query_input |
Required. The input specification. It can be set to:
|
output_audio_config |
Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated. |
input_audio |
Optional. The natural language speech audio to be processed. This field should be populated iff |
DetectIntentResponse
The message returned from the DetectIntent method.
Fields | |
---|---|
response_id |
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues. |
query_result |
The selected results of the conversational query or event processing. See |
alternative_query_results[] |
If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing |
webhook_status |
Specifies the status of the webhook request. |
output_audio |
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the |
output_audio_config |
The config used by the speech synthesizer to generate the output audio. |
Document
A document resource.
Note: The projects.agent.knowledgeBases.documents
resource is deprecated; only use projects.knowledgeBases.documents
.
Fields | ||
---|---|---|
name |
The document resource name. The name must be empty when creating a document. Format: |
|
display_name |
Required. The display name of the document. The name must be 1024 bytes or less; otherwise, the creation request fails. |
|
mime_type |
Required. The MIME type of this document. |
|
knowledge_types[] |
Required. The knowledge type of document content. |
|
Union field source . Required. The source of this document. source can be only one of the following: |
||
content_uri |
The URI where the file content is located. For documents stored in Google Cloud Storage, these URIs must have the form NOTE: External URLs must correspond to public webpages, i.e., they must be indexed by Google Search. In particular, URLs for showing documents in Google Cloud Storage (i.e. the URL in your browser) are not supported. Instead use the |
|
content |
The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types. Note: This field is in the process of being deprecated, please use raw_content instead. |
|
raw_content |
The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types. |
KnowledgeType
The knowledge type of document content.
Enums | |
---|---|
KNOWLEDGE_TYPE_UNSPECIFIED |
The type is unspecified or arbitrary. |
FAQ |
The document content contains question and answer pairs as either HTML or CSV. Typical FAQ HTML formats are parsed accurately, but unusual formats may fail to be parsed. CSV must have questions in the first column and answers in the second, with no header. Because of this explicit format, they are always parsed accurately. |
EXTRACTIVE_QA |
Documents for which unstructured text is extracted and used for question answering. |
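To tie the Document fields and knowledge types together, here is a hedged sketch of adding an FAQ document from Cloud Storage, assuming the google-cloud-dialogflow Python package; the knowledge base name, bucket, and file are placeholders.

```python
# Hedged sketch: creating an FAQ document in a knowledge base from a CSV in GCS.
from google.cloud import dialogflow_v2beta1 as dialogflow

documents_client = dialogflow.DocumentsClient()
knowledge_base = "projects/my-project/knowledgeBases/my-kb-id"  # placeholder

document = dialogflow.Document(
    display_name="Store FAQ",
    mime_type="text/csv",
    knowledge_types=[dialogflow.Document.KnowledgeType.FAQ],
    content_uri="gs://my-bucket/faq.csv",  # questions in column 1, answers in column 2
)
# CreateDocument is a long-running operation.
operation = documents_client.create_document(
    request={"parent": knowledge_base, "document": document}
)
print(operation.result().name)
```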
EntityType
Represents an entity type. Entity types serve as a tool for extracting parameter values from natural language queries.
Fields | |
---|---|
name |
The unique identifier of the entity type. Required for |
display_name |
Required. The name of the entity type. |
kind |
Required. Indicates the kind of entity type. |
auto_expansion_mode |
Optional. Indicates whether the entity type can be automatically expanded. |
entities[] |
Optional. The collection of entity entries associated with the entity type. |
enable_fuzzy_extraction |
Optional. Enables fuzzy entity extraction during classification. |
AutoExpansionMode
Represents different entity type expansion modes. Automated expansion allows an agent to recognize values that have not been explicitly listed in the entity (for example, new kinds of shopping list items).
Enums | |
---|---|
AUTO_EXPANSION_MODE_UNSPECIFIED |
Auto expansion disabled for the entity. |
AUTO_EXPANSION_MODE_DEFAULT |
Allows an agent to recognize values that have not been explicitly listed in the entity. |
Entity
An entity entry for an associated entity type.
Fields | |
---|---|
value |
Required. The primary value associated with this entity entry. For example, if the entity type is vegetable, the value could be scallions. For
For
|
synonyms[] |
Required. A collection of value synonyms. For example, if the entity type is vegetable, and For
|
Kind
Represents kinds of entities.
Enums | |
---|---|
KIND_UNSPECIFIED |
Not specified. This value should never be used. |
KIND_MAP |
Map entity types allow mapping of a group of synonyms to a canonical value. |
KIND_LIST |
List entity types contain a set of entries that do not map to canonical values. However, list entity types can contain references to other entity types (with or without aliases). |
KIND_REGEXP |
Regexp entity types allow you to specify regular expressions in entry values. |
EntityTypeBatch
This message is a wrapper around a collection of entity types.
Fields | |
---|---|
entity_types[] |
A collection of entity types. |
EventInput
Events allow for matching intents by event name instead of the natural language input. For instance, input <event: { name: "welcome_event", parameters: { name: "Sam" } }> can trigger a personalized welcome response. The parameter name may be used by the agent in the response: "Hello #welcome_event.name! What can I do for you today?".
Fields | |
---|---|
name |
Required. The unique identifier of the event. |
parameters |
Optional. The collection of parameters associated with the event. |
language_code |
Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. |
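Mirroring the welcome_event example above, here is a hedged sketch of triggering an intent by event through DetectIntent, assuming the google-cloud-dialogflow Python package; the IDs are placeholders, and the assumption is that the client converts the parameters dict into a protobuf Struct.

```python
# Hedged sketch: matching an intent by event name instead of user text.
from google.cloud import dialogflow_v2beta1 as dialogflow

sessions_client = dialogflow.SessionsClient()
session = "projects/my-project/agent/sessions/my-session-id"  # placeholder IDs

event_input = dialogflow.EventInput(
    name="welcome_event",
    parameters={"name": "Sam"},  # referenced as #welcome_event.name in the response
    language_code="en-US",
)
response = sessions_client.detect_intent(
    request={"session": session, "query_input": dialogflow.QueryInput(event=event_input)}
)
print(response.query_result.fulfillment_text)
```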
ExportAgentRequest
The request message for Agents.ExportAgent
.
Fields | |
---|---|
parent |
Required. The project that the agent to export is associated with. Format: Authorization requires the following Google IAM permission on the specified resource
|
agent_uri |
Optional. The Google Cloud Storage URI to export the agent to. The format of this URI must be |
ExportAgentResponse
The response message for Agents.ExportAgent
.
Fields | ||
---|---|---|
Union field agent . The exported agent. agent can be only one of the following: |
||
agent_uri |
The URI to a file containing the exported agent. This field is populated only if |
|
agent_content |
Zip compressed raw byte content for agent. |
GcsSource
Google Cloud Storage location for single input.
Fields | |
---|---|
uri |
Required. The Google Cloud Storage URIs for the inputs. A URI is of the form: gs://bucket/object-prefix-or-name Whether a prefix or name is used depends on the use case. |
GetAgentRequest
The request message for Agents.GetAgent
.
Fields | |
---|---|
parent |
Required. The project that the agent to fetch is associated with. Format: Authorization requires the following Google IAM permission on the specified resource
|
GetContextRequest
The request message for Contexts.GetContext
.
Fields | |
---|---|
name |
Required. The name of the context. Format: Authorization requires the following Google IAM permission on the specified resource
|
GetDocumentRequest
Request message for Documents.GetDocument
.
Fields | |
---|---|
name |
Required. The name of the document to retrieve. Format Authorization requires the following Google IAM permission on the specified resource
|
GetEntityTypeRequest
The request message for EntityTypes.GetEntityType
.
Fields | |
---|---|
name |
Required. The name of the entity type. Format: Authorization requires the following Google IAM permission on the specified resource
|
language_code |
Optional. The language to retrieve entity synonyms for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used. |
GetIntentRequest
The request message for Intents.GetIntent
.
Fields | |
---|---|
name |
Required. The name of the intent. Format: Authorization requires the following Google IAM permission on the specified resource
|
language_code |
Optional. The language to retrieve training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used. |
intent_view |
Optional. The resource view to apply to the returned intent. |
GetKnowledgeBaseRequest
Request message for [KnowledgeBase.GetDocument][].
Fields | |
---|---|
name |
Required. The name of the knowledge base to retrieve. Format Authorization requires the following Google IAM permission on the specified resource
|
GetSessionEntityTypeRequest
The request message for SessionEntityTypes.GetSessionEntityType
.
Fields | |
---|---|
name |
Required. The name of the session entity type. Format: Authorization requires the following Google IAM permission on the specified resource
|
GetValidationResultRequest
The request message for Agents.GetValidationResult
.
Fields | |
---|---|
parent |
Required. The project that the agent is associated with. Format: Authorization requires the following Google IAM permission on the specified resource
|
language_code |
Optional. The language for which you want a validation result. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used. |
ImportAgentRequest
The request message for Agents.ImportAgent
.
Fields | ||
---|---|---|
parent |
Required. The project that the agent to import is associated with. Format: Authorization requires the following Google IAM permission on the specified resource
|
|
Union field agent . Required. The agent to import. agent can be only one of the following: |
||
agent_uri |
The URI to a Google Cloud Storage file containing the agent to import. Note: The URI must start with "gs://". |
|
agent_content |
Zip compressed raw byte content for agent. |
InputAudioConfig
Instructs the speech recognizer on how to process the audio content.
Fields | |
---|---|
audio_encoding |
Required. Audio encoding of the audio content to process. |
sample_rate_hertz |
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details. |
language_code |
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. |
enable_word_info |
Optional. If |
phrase_hints[] |
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. |
speech_contexts[] |
Optional. Context information to assist speech recognition. See the Cloud Speech documentation for more details. |
model |
Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. |
model_variant |
Optional. Which variant of the |
single_utterance |
Optional. If |
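Putting these fields together, here is a hedged sketch of detecting intent from a short audio file, assuming the google-cloud-dialogflow Python package and 16 kHz LINEAR16 audio; the file path and IDs are placeholders.

```python
# Hedged sketch: DetectIntent with audio input configured via InputAudioConfig.
from google.cloud import dialogflow_v2beta1 as dialogflow

sessions_client = dialogflow.SessionsClient()
session = "projects/my-project/agent/sessions/my-session-id"  # placeholder IDs

audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
    phrase_hints=["jazz", "playlist"],  # bias recognition toward expected words
)
with open("query.wav", "rb") as f:  # placeholder file of raw LINEAR16 audio
    input_audio = f.read()

response = sessions_client.detect_intent(
    request={
        "session": session,
        "query_input": dialogflow.QueryInput(audio_config=audio_config),
        "input_audio": input_audio,
    }
)
print(response.query_result.query_text)
```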
Intent
Represents an intent. Intents convert a number of user expressions or patterns into an action. An action is an extraction of a user command or sentence semantics.
Fields | |
---|---|
name |
The unique identifier of this intent. Required for |
display_name |
Required. The name of this intent. |
webhook_state |
Optional. Indicates whether webhooks are enabled for the intent. |
priority |
The priority of this intent. Higher numbers represent higher priorities.
|
is_fallback |
Optional. Indicates whether this is a fallback intent. |
ml_enabled |
Optional. Indicates whether Machine Learning is enabled for the intent. Note: If |
ml_disabled |
Optional. Indicates whether Machine Learning is disabled for the intent. Note: If |
end_interaction |
Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false. |
input_context_names[] |
Optional. The list of context names required for this intent to be triggered. Format: |
events[] |
Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent. |
training_phrases[] |
Optional. The collection of examples that the agent is trained on. |
action |
Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces. |
output_contexts[] |
Optional. The collection of contexts that are activated when the intent is matched. Context messages in this collection should not set the parameters field. Setting the |
reset_contexts |
Optional. Indicates whether to delete all contexts in the current session when this intent is matched. |
parameters[] |
Optional. The collection of parameters associated with the intent. |
messages[] |
Optional. The collection of rich messages corresponding to the |
default_response_platforms[] |
Optional. The list of platforms for which the first responses will be copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform). |
root_followup_intent_name |
Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output. Format: |
parent_followup_intent_name |
Read-only after creation. The unique identifier of the parent intent in the chain of followup intents. You can set this field when creating an intent, for example with [CreateIntent][] or [BatchUpdateIntents][], in order to make this intent a followup intent. It identifies the parent followup intent. Format: |
followup_intent_info[] |
Read-only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output. |
FollowupIntentInfo
Represents a single followup intent in the chain.
Fields | |
---|---|
followup_intent_name |
The unique identifier of the followup intent. Format: |
parent_followup_intent_name |
The unique identifier of the followup intent's parent. Format: |
Message
Corresponds to the Response
field in the Dialogflow console.
Fields | ||
---|---|---|
platform |
Optional. The platform that this message is intended for. |
|
Union field message . Required. The rich response message. message can be only one of the following: |
||
text |
Returns a text response. |
|
image |
Displays an image. |
|
quick_replies |
Displays quick replies. |
|
card |
Displays a card. |
|
payload |
Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform. |
|
simple_responses |
Returns a voice or text-only response for Actions on Google. |
|
basic_card |
Displays a basic card for Actions on Google. |
|
suggestions |
Displays suggestion chips for Actions on Google. |
|
link_out_suggestion |
Displays a link out suggestion chip for Actions on Google. |
|
list_select |
Displays a list card for Actions on Google. |
|
carousel_select |
Displays a carousel card for Actions on Google. |
|
telephony_play_audio |
Plays audio from a file in Telephony Gateway. |
|
telephony_synthesize_speech |
Synthesizes speech in Telephony Gateway. |
|
telephony_transfer_call |
Transfers the call in Telephony Gateway. |
|
rbm_text |
Rich Business Messaging (RBM) text response. RBM allows businesses to send enriched and branded versions of SMS. See https://jibe.google.com/business-messaging. |
|
rbm_standalone_rich_card |
Standalone Rich Business Messaging (RBM) rich card response. |
|
rbm_carousel_rich_card |
Rich Business Messaging (RBM) carousel rich card response. |
|
browse_carousel_card |
Browse carousel card for Actions on Google. |
|
table_card |
Table card for Actions on Google. |
|
media_content |
The media content card for Actions on Google. |
BasicCard
The basic card message. Useful for displaying information.
Fields | |
---|---|
title |
Optional. The title of the card. |
subtitle |
Optional. The subtitle of the card. |
formatted_text |
Required, unless image is present. The body text of the card. |
image |
Optional. The image for the card. |
buttons[] |
Optional. The collection of card buttons. |
Button
The button object that appears at the bottom of a card.
Fields | |
---|---|
title |
Required. The title of the button. |
open_uri_action |
Required. Action to take when a user taps on the button. |
OpenUriAction
Opens the given URI.
Fields | |
---|---|
uri |
Required. The HTTP or HTTPS scheme URI. |
BrowseCarouselCard
Browse Carousel Card for Actions on Google. https://developers.google.com/actions/assistant/responses#browsing_carousel
Fields | |
---|---|
items[] |
Required. List of items in the Browse Carousel Card. Minimum of two items, maximum of ten. |
image_display_options |
Optional. Settings for displaying the image. Applies to every image in |
BrowseCarouselCardItem
Browsing carousel tile
Fields | |
---|---|
open_uri_action |
Required. Action to present to the user. |
title |
Required. Title of the carousel item. Maximum of two lines of text. |
description |
Optional. Description of the carousel item. Maximum of four lines of text. |
image |
Optional. Hero image for the carousel item. |
footer |
Optional. Text that appears at the bottom of the Browse Carousel Card. Maximum of one line of text. |
OpenUrlAction
Actions on Google action to open a given url.
Fields | |
---|---|
url |
Required. URL |
url_type_hint |
Optional. Specifies the type of viewer that is used when opening the URL. Defaults to opening via web browser. |
UrlTypeHint
Type of the URI.
Enums | |
---|---|
URL_TYPE_HINT_UNSPECIFIED |
Unspecified |
AMP_ACTION |
URL would be an AMP action. |
AMP_CONTENT |
URL that points directly to AMP content, or to a canonical URL which refers to AMP content via <link rel="amphtml">. |
ImageDisplayOptions
Image display options for Actions on Google. This should be used for when the image's aspect ratio does not match the image container's aspect ratio.
Enums | |
---|---|
IMAGE_DISPLAY_OPTIONS_UNSPECIFIED |
Fill the gaps between the image and the image container with gray bars. |
GRAY |
Fill the gaps between the image and the image container with gray bars. |
WHITE |
Fill the gaps between the image and the image container with white bars. |
CROPPED |
Image is scaled such that the image width and height match or exceed the container dimensions. This may crop the top and bottom of the image if the scaled image height is greater than the container height, or crop the left and right of the image if the scaled image width is greater than the container width. This is similar to "Zoom Mode" on a widescreen TV when playing a 4:3 video. |
BLURRED_BACKGROUND |
Pad the gaps between image and image frame with a blurred copy of the same image. |
Card
The card response message.
Fields | |
---|---|
title |
Optional. The title of the card. |
subtitle |
Optional. The subtitle of the card. |
image_uri |
Optional. The public URI to an image file for the card. |
buttons[] |
Optional. The collection of card buttons. |
Button
Optional. Contains information about a button.
Fields | |
---|---|
text |
Optional. The text to show on the button. |
postback |
Optional. The text to send back to the Dialogflow API or a URI to open. |
CarouselSelect
The card for presenting a carousel of options to select from.
Fields | |
---|---|
items[] |
Required. Carousel items. |
Item
An item in the carousel.
Fields | |
---|---|
info |
Required. Additional info about the option item. |
title |
Required. Title of the carousel item. |
description |
Optional. The body text of the card. |
image |
Optional. The image to display. |
ColumnProperties
Column properties for TableCard
.
Fields | |
---|---|
header |
Required. Column heading. |
horizontal_alignment |
Optional. Defines text alignment for all cells in this column. |
HorizontalAlignment
Text alignments within a cell.
Enums | |
---|---|
HORIZONTAL_ALIGNMENT_UNSPECIFIED |
Text is aligned to the leading edge of the column. |
LEADING |
Text is aligned to the leading edge of the column. |
CENTER |
Text is centered in the column. |
TRAILING |
Text is aligned to the trailing edge of the column. |
Image
The image response message.
Fields | |
---|---|
image_uri |
Optional. The public URI to an image file. |
accessibility_text |
A text description of the image to be used for accessibility, e.g., screen readers. Required if image_uri is set for CarouselSelect. |
LinkOutSuggestion
The suggestion chip message that allows the user to jump out to the app or website associated with this agent.
Fields | |
---|---|
destination_name |
Required. The name of the app or site this chip is linking to. |
uri |
Required. The URI of the app or site to open when the user taps the suggestion chip. |
ListSelect
The card for presenting a list of options to select from.
Fields | |
---|---|
title |
Optional. The overall title of the list. |
items[] |
Required. List items. |
subtitle |
Optional. Subtitle of the list. |
Item
An item in the list.
Fields | |
---|---|
info |
Required. Additional information about this option. |
title |
Required. The title of the list item. |
description |
Optional. The main text describing the item. |
image |
Optional. The image to display. |
MediaContent
The media content card for Actions on Google.
Fields | |
---|---|
media_type |
Optional. The type of media that the content is (e.g., "audio"). |
media_objects[] |
Required. List of media objects. |
ResponseMediaObject
Response media object for media content card.
Fields | ||
---|---|---|
name |
Required. Name of media card. |
|
description |
Optional. Description of media card. |
|
content_url |
Required. Url where the media is stored. |
|
Union field image . Image to show with the media card. image can be only one of the following: |
||
large_image |
Optional. Image to display above media content. |
|
icon |
Optional. Icon to display above media content. |
ResponseMediaType
Format of response media type.
Enums | |
---|---|
RESPONSE_MEDIA_TYPE_UNSPECIFIED |
Unspecified. |
AUDIO |
Response media type is audio. |
Platform
Represents different platforms that a rich message can be intended for.
Enums | |
---|---|
PLATFORM_UNSPECIFIED |
Not specified. |
FACEBOOK |
Facebook. |
SLACK |
Slack. |
TELEGRAM |
Telegram. |
KIK |
Kik. |
SKYPE |
Skype. |
LINE |
Line. |
VIBER |
Viber. |
ACTIONS_ON_GOOGLE |
Actions on Google. When using Actions on Google, you can choose one of the specific Intent.Message types that mention support for Actions on Google, or you can use the advanced Intent.Message.payload field. The payload field provides access to AoG features not available in the specific message types. If using the Intent.Message.payload field, it should have a structure similar to the JSON message shown here. For more information, see Actions on Google Webhook Format.

    {
      "expectUserResponse": true,
      "isSsml": false,
      "noInputPrompts": [],
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "displayText": "hi",
              "textToSpeech": "hello"
            }
          }
        ],
        "suggestions": [
          { "title": "Say this" },
          { "title": "or this" }
        ]
      },
      "systemIntent": {
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "listSelect": {
            "items": [
              {
                "optionInfo": {
                  "key": "key1",
                  "synonyms": [ "key one" ]
                },
                "title": "must not be empty, but unique"
              },
              {
                "optionInfo": {
                  "key": "key2",
                  "synonyms": [ "key two" ]
                },
                "title": "must not be empty, but unique"
              }
            ]
          }
        },
        "intent": "actions.intent.OPTION"
      }
    }
TELEPHONY |
Telephony Gateway. |
GOOGLE_HANGOUTS |
Google Hangouts. |
QuickReplies
The quick replies response message.
Fields | |
---|---|
title |
Optional. The title of the collection of quick replies. |
quick_replies[] |
Optional. The collection of quick replies. |
RbmCardContent
Rich Business Messaging (RBM) Card content
Fields | |
---|---|
title |
Optional. Title of the card (at most 200 bytes). At least one of the title, description or media must be set. |
description |
Optional. Description of the card (at most 2000 bytes). At least one of the title, description or media must be set. |
media |
Optional. Media (image, GIF, or video) to include in the card. At least one of the title, description, or media must be set. |
suggestions[] |
Optional. List of suggestions to include in the card. |
RbmMedia
Rich Business Messaging (RBM) media displayed in cards. The following media types are currently supported:
Image types
image/jpeg, image/jpg, image/gif, image/png
Video types
video/h263, video/m4v, video/mp4, video/mpeg, video/mpeg4, video/webm
Fields | |
---|---|
file_uri |
Required. Publicly reachable URI of the file. The RBM platform determines the MIME type of the file from the content-type field in the HTTP headers when the platform fetches the file. The content-type field must be present and accurate in the HTTP response from the URL. |
thumbnail_uri |
Optional. Publicly reachable URI of the thumbnail. If you don't provide a thumbnail URI, the RBM platform displays a blank placeholder thumbnail until the user's device downloads the file. Depending on the user's setting, the file may not download automatically and may require the user to tap a download button. |
height |
Required for cards with vertical orientation. The height of the media within a rich card with a vertical layout. (https://goo.gl/NeFCjz). For a standalone card with horizontal layout, height is not customizable, and this field is ignored. |
Height
Media height
Enums | |
---|---|
HEIGHT_UNSPECIFIED |
Not specified. |
SHORT |
112 DP. |
MEDIUM |
168 DP. |
TALL |
264 DP. Not available for rich card carousels when the card width is set to small. |
RbmCarouselCard
Carousel Rich Business Messaging (RBM) rich card.
Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions.
For more details about RBM rich cards, please see: https://developers.google.com/rcs-business-messaging/rbm/guides/build/send-messages#rich-cards. If you want to show a single card with more control over the layout, please use RbmStandaloneCard
instead.
Fields | |
---|---|
card_width |
Required. The width of the cards in the carousel. |
card_contents[] |
Required. The cards in the carousel. A carousel must have at least 2 cards and at most 10. |
CardWidth
The width of the cards in the carousel.
Enums | |
---|---|
CARD_WIDTH_UNSPECIFIED |
Not specified. |
SMALL |
120 DP. Note that tall media cannot be used. |
MEDIUM |
232 DP. |
RbmStandaloneCard
Standalone Rich Business Messaging (RBM) rich card.
Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions.
For more details about RBM rich cards, please see: https://developers.google.com/rcs-business-messaging/rbm/guides/build/send-messages#rich-cards. You can group multiple rich cards into one using RbmCarouselCard
but carousel cards will give you less control over the card layout.
Fields | |
---|---|
card_orientation |
Required. Orientation of the card. |
thumbnail_image_alignment |
Required if orientation is horizontal. Image preview alignment for standalone cards with horizontal layout. |
card_content |
Required. Card content. |
CardOrientation
Orientation of the card.
Enums | |
---|---|
CARD_ORIENTATION_UNSPECIFIED |
Not specified. |
HORIZONTAL |
Horizontal layout. |
VERTICAL |
Vertical layout. |
ThumbnailImageAlignment
Thumbnail preview alignment for standalone cards with horizontal layout.
Enums | |
---|---|
THUMBNAIL_IMAGE_ALIGNMENT_UNSPECIFIED |
Not specified. |
LEFT |
Thumbnail preview is left-aligned. |
RIGHT |
Thumbnail preview is right-aligned. |
RbmSuggestedAction
Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.
Fields | ||
---|---|---|
text |
Text to display alongside the action. |
|
postback_data |
Opaque payload that Dialogflow receives in a user event when the user taps the suggested action. This data will also be forwarded to the webhook to allow performing custom business logic. |
|
Union field action . Action that needs to be triggered. action can be only one of the following: |
||
dial |
Suggested client-side action: dial a phone number. |
|
open_url |
Suggested client-side action: open a URI on the device. |
|
share_location |
Suggested client-side action: share the user's location. |
RbmSuggestedActionDial
Opens the user's default dialer app with the specified phone number but does not dial automatically (https://goo.gl/ergbB2).
Fields | |
---|---|
phone_number |
Required. The phone number to fill in the default dialer app. This field should be in E.164 format. An example of a correctly formatted phone number: +15556767888. |
RbmSuggestedActionOpenUri
Opens the user's default web browser app to the specified URI (https://goo.gl/6GLJD2). If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.
Fields | |
---|---|
uri |
Required. The URI to open on the user's device. |
RbmSuggestedReply
Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.
Fields | |
---|---|
text |
Suggested reply text. |
postback_data |
Opaque payload that Dialogflow receives in a user event when the user taps the suggested reply. This data will also be forwarded to the webhook to allow performing custom business logic. |
RbmSuggestion
Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select/click a predefined response or perform an action (like opening a web URI).
Fields | ||
---|---|---|
Union field suggestion . Predefined suggested response or action for the user to choose. suggestion can be only one of the following: |
||
reply |
Predefined replies for the user to select instead of typing. |
|
action |
Predefined client-side actions that the user can choose. |
RbmText
Rich Business Messaging (RBM) text response with suggestions.
Fields | |
---|---|
text |
Required. Text sent and displayed to the user. |
rbm_suggestion[] |
Optional. One or more suggestions to show to the user. |
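As a rough illustration of how the RBM messages above compose, here is a hypothetical RbmStandaloneCard built as a Python dict using the documented field names; the media URI, text, and postback values are invented placeholders, not values from this reference.

```python
# Hypothetical RbmStandaloneCard with a horizontal layout.
rbm_standalone_card = {
    "card_orientation": "HORIZONTAL",           # CardOrientation enum
    "thumbnail_image_alignment": "LEFT",        # required for horizontal cards
    "card_content": {                           # RbmCardContent
        "title": "Spring sale",                             # at most 200 bytes
        "description": "Up to 30% off selected items.",     # at most 2000 bytes
        "media": {                              # RbmMedia
            "file_uri": "https://example.com/media/sale.jpg",
            # height is ignored for standalone cards with horizontal layout
        },
        "suggestions": [                        # RbmSuggestion list
            {"reply": {"text": "Show me more", "postback_data": "more"}},
            {"action": {
                "text": "Call us",
                "postback_data": "dial",
                "dial": {"phone_number": "+15556767888"},   # E.164 format
            }},
        ],
    },
}
```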
SelectItemInfo
Additional info about the select item for when it is triggered in a dialog.
Fields | |
---|---|
key |
Required. A unique key that will be sent back to the agent if this response is given. |
synonyms[] |
Optional. A list of synonyms that can also be used to trigger this item in dialog. |
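The SelectItemInfo key above is what a ListSelect or CarouselSelect item sends back when it is selected. A minimal, hypothetical ListSelect message might look like the following sketch; the field names come from the tables above, while the titles, keys, and synonyms are invented.

```python
# Hypothetical ListSelect with two items; each item's info.key is echoed
# back to the agent when the user selects that item.
list_select = {
    "title": "Pick a room",
    "subtitle": "Available today",
    "items": [
        {
            "info": {"key": "room_small", "synonyms": ["small", "cosy room"]},
            "title": "Small meeting room",
            "description": "Seats 4 people.",
        },
        {
            "info": {"key": "room_large", "synonyms": ["large", "big room"]},
            "title": "Large meeting room",
            "description": "Seats 12 people.",
        },
    ],
}
```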
SimpleResponse
The simple response message containing speech or text.
Fields | |
---|---|
text_to_speech |
One of text_to_speech or ssml must be provided. The plain text of the speech output. Mutually exclusive with ssml. |
ssml |
One of text_to_speech or ssml must be provided. Structured spoken response to the user in the SSML format. Mutually exclusive with text_to_speech. |
display_text |
Optional. The text to display. |
SimpleResponses
The collection of simple response candidates. This message in QueryResult.fulfillment_messages
and WebhookResponse.fulfillment_messages
should contain only one SimpleResponse
.
Fields | |
---|---|
simple_responses[] |
Required. The list of simple responses. |
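To make the text_to_speech/ssml exclusivity concrete, here is a small, hypothetical SimpleResponses message with a single SimpleResponse, using the field names documented above; the text is illustrative.

```python
# A SimpleResponses wrapper should contain exactly one SimpleResponse in
# QueryResult.fulfillment_messages / WebhookResponse.fulfillment_messages.
simple_responses = {
    "simple_responses": [
        {
            # Exactly one of text_to_speech or ssml may be set.
            "text_to_speech": "Hello, how can I help you?",
            "display_text": "Hello! How can I help?",
        }
    ]
}
```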
Suggestion
The suggestion chip message that the user can tap to quickly post a reply to the conversation.
Fields | |
---|---|
title |
Required. The text shown in the suggestion chip. |
Suggestions
The collection of suggestions.
Fields | |
---|---|
suggestions[] |
Required. The list of suggested replies. |
TableCard
Table card for Actions on Google.
Fields | |
---|---|
title |
Required. Title of the card. |
subtitle |
Optional. Subtitle to the title. |
image |
Optional. Image which should be displayed on the card. |
column_properties[] |
Optional. Display properties for the columns in this table. |
rows[] |
Optional. Rows in this table of data. |
buttons[] |
Optional. List of buttons for the card. |
TableCardCell
Cell of TableCardRow
.
Fields | |
---|---|
text |
Required. Text in this cell. |
TableCardRow
Row of TableCard
.
Fields | |
---|---|
cells[] |
Optional. List of cells that make up this row. |
divider_after |
Optional. Whether to add a visual divider after this row. |
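The TableCard, TableCardRow, and TableCardCell messages nest as shown in this hypothetical sketch. The data is invented; the header field is assumed to belong to the ColumnProperties message documented earlier in this reference, and the alignment values come from the HorizontalAlignment enum above.

```python
# Hypothetical TableCard: two columns, two data rows with a divider.
table_card = {
    "title": "Store hours",
    "subtitle": "Downtown branch",
    "column_properties": [
        {"header": "Day", "horizontal_alignment": "LEADING"},
        {"header": "Hours", "horizontal_alignment": "TRAILING"},
    ],
    "rows": [
        {
            "cells": [{"text": "Saturday"}, {"text": "10:00-18:00"}],
            "divider_after": True,
        },
        {
            "cells": [{"text": "Sunday"}, {"text": "Closed"}],
            "divider_after": False,
        },
    ],
}
```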
TelephonyPlayAudio
Plays audio from a file in Telephony Gateway.
Fields | |
---|---|
audio_uri |
Required. URI to a Google Cloud Storage object containing the audio to play, e.g., "gs://bucket/object". The object must contain a single channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz. This object must be readable by the Dialogflow service account. For audio from other sources, consider using the TelephonySynthesizeSpeech message with SSML. |
TelephonySynthesizeSpeech
Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway.
Telephony Gateway takes the synthesizer settings from DetectIntentResponse.output_audio_config
which can either be set at request-level or can come from the agent-level synthesizer config.
Fields | ||
---|---|---|
Union field source . Required. The source to be synthesized. source can be only one of the following: |
||
text |
The raw text to be synthesized. |
|
ssml |
The SSML to be synthesized. For more information, see SSML. |
TelephonyTransferCall
Transfers the call in Telephony Gateway.
Fields | |
---|---|
phone_number |
Required. The phone number to transfer the call to in E.164 format. We currently only allow transferring to US numbers (+1xxxyyyzzzz). |
Text
The text response message.
Fields | |
---|---|
text[] |
Optional. The collection of the agent's responses. |
Parameter
Represents intent parameters.
Fields | |
---|---|
name |
The unique identifier of this parameter. |
display_name |
Required. The name of the parameter. |
value |
Optional. The definition of the parameter value. It can be: - a constant string, - a parameter value defined as |
default_value |
Optional. The default value to use when the |
entity_type_display_name |
Optional. The name of the entity type, prefixed with |
mandatory |
Optional. Indicates whether the parameter is required. That is, whether the intent cannot be completed without collecting the parameter value. |
prompts[] |
Optional. The collection of prompts that the agent can present to the user in order to collect a value for the parameter. |
is_list |
Optional. Indicates whether the parameter represents a list of values. |
TrainingPhrase
Represents an example that the agent is trained on.
Fields | |
---|---|
name |
Output only. The unique identifier of this training phrase. |
type |
Required. The type of the training phrase. |
parts[] |
Required. The ordered list of training phrase parts. The parts are concatenated in order to form the training phrase. Note: The API does not automatically annotate training phrases like the Dialogflow Console does. Note: Do not forget to include whitespace at part boundaries, so the training phrase is well formatted when the parts are concatenated. If the training phrase does not need to be annotated with parameters, you just need a single part with only the text field set. If you want to annotate the training phrase, you must create multiple parts, where the fields of each part are populated in one of two ways:
|
times_added_count |
Optional. Indicates how many times this example was added to the intent. Each time a developer adds an existing sample by editing an intent or training, this counter is increased. |
Part
Represents a part of a training phrase.
Fields | |
---|---|
text |
Required. The text for this part. |
entity_type |
Optional. The entity type name prefixed with |
alias |
Optional. The parameter name for the value extracted from the annotated part of the example. This field is required for annotated parts of the training phrase. |
user_defined |
Optional. Indicates whether the text was manually annotated. This field is set to true when the Dialogflow Console is used to manually annotate the part. When creating an annotated part with the API, you must set this to true. |
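As a concrete illustration of the annotation rules above, the following hypothetical TrainingPhrase shows how annotated and plain parts interleave, keeping whitespace at part boundaries. The entity type and alias names are invented for illustration.

```python
# Hypothetical annotated training phrase: "book a room for 4 people".
# Parts with entity_type/alias are annotations; plain parts are just text.
training_phrase = {
    "type": "EXAMPLE",
    "parts": [
        {"text": "book a room for "},       # plain text, note the trailing space
        {
            "text": "4",
            "entity_type": "@sys.number",   # entity type name prefixed with @
            "alias": "guests",              # parameter the extracted value maps to
            "user_defined": True,           # must be True for API-created annotations
        },
        {"text": " people"},                # leading space keeps the phrase well formatted
    ],
}
```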
Type
Represents different types of training phrases.
Enums | |
---|---|
TYPE_UNSPECIFIED |
Not specified. This value should never be used. |
EXAMPLE |
Examples do not contain @-prefixed entity type names, but example parts can be annotated with entity types. |
TEMPLATE |
Templates are not annotated with entity types, but they can contain @-prefixed entity type names as substrings. Template mode has been deprecated. Example mode is the only supported way to create new training phrases. If you have existing training phrases that you've created in template mode, those will continue to work. |
WebhookState
Represents the different states that webhooks can be in.
Enums | |
---|---|
WEBHOOK_STATE_UNSPECIFIED |
Webhook is disabled in the agent and in the intent. |
WEBHOOK_STATE_ENABLED |
Webhook is enabled in the agent and in the intent. |
WEBHOOK_STATE_ENABLED_FOR_SLOT_FILLING |
Webhook is enabled in the agent and in the intent. Also, each slot filling prompt is forwarded to the webhook. |
IntentBatch
This message is a wrapper around a collection of intents.
Fields | |
---|---|
intents[] |
A collection of intents. |
IntentView
Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.
Enums | |
---|---|
INTENT_VIEW_UNSPECIFIED |
Training phrases field is not populated in the response. |
INTENT_VIEW_FULL |
All fields are populated. |
KnowledgeAnswers
Represents the result of querying a Knowledge base.
Fields | |
---|---|
answers[] |
A list of answers from Knowledge Connector. |
Answer
An answer from Knowledge Connector.
Fields | |
---|---|
source |
Indicates which Knowledge Document this answer was extracted from. Format: |
faq_question |
The corresponding FAQ question if the answer was extracted from a FAQ Document, empty otherwise. |
answer |
The piece of text from the |
match_confidence_level |
The system's confidence level that this knowledge answer is a good match for this conversational query. NOTE: The confidence level for a given |
match_confidence |
The system's confidence score that this Knowledge answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). Note: The confidence score is likely to vary somewhat (possibly even for identical requests), as the underlying model is under constant improvement. It may be deprecated in the future. We recommend using |
MatchConfidenceLevel
Represents the system's confidence that this knowledge answer is a good match for this conversational query.
Enums | |
---|---|
MATCH_CONFIDENCE_LEVEL_UNSPECIFIED |
Not specified. |
LOW |
Indicates that the confidence is low. |
MEDIUM |
Indicates our confidence is medium. |
HIGH |
Indicates our confidence is high. |
KnowledgeBase
Represents a knowledge base resource.
Note: The projects.agent.knowledgeBases
resource is deprecated; only use projects.knowledgeBases
.
Fields | |
---|---|
name |
The knowledge base resource name. The name must be empty when creating a knowledge base. Format: |
display_name |
Required. The display name of the knowledge base. The name must be 1024 bytes or less; otherwise, the creation request fails. |
language_code |
Language which represents the KnowledgeBase. When the KnowledgeBase is created/updated, this is populated for all non en-us languages. If not populated, the default language en-us applies. |
KnowledgeOperationMetadata
Metadata in google::longrunning::Operation for Knowledge operations.
Fields | |
---|---|
state |
Required. The current state of this operation. |
State
States of the operation.
Enums | |
---|---|
STATE_UNSPECIFIED |
State unspecified. |
PENDING |
The operation has been created. |
RUNNING |
The operation is currently running. |
DONE |
The operation is done, either cancelled or completed. |
LabelConversationResponse
The response for ConversationDatasets.LabelConversation
Fields | |
---|---|
annotated_conversation_dataset |
New annotated conversation dataset created by the labeling task. |
ListContextsRequest
The request message for Contexts.ListContexts
.
Fields | |
---|---|
parent |
Required. The session to list all contexts from. Format: Authorization requires the following Google IAM permission on the specified resource
|
page_size |
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000. |
page_token |
Optional. The next_page_token value returned from a previous list request. |
ListContextsResponse
The response message for Contexts.ListContexts
.
Fields | |
---|---|
contexts[] |
The list of contexts. There will be a maximum number of items returned based on the page_size field in the request. |
next_page_token |
Token to retrieve the next page of results, or empty if there are no more results in the list. |
ListDocumentsRequest
Request message for Documents.ListDocuments
.
Fields | |
---|---|
parent |
Required. The knowledge base to list all documents for. Format: Authorization requires the following Google IAM permission on the specified resource
|
page_size |
Optional. The maximum number of items to return in a single page. By default 10 and at most 100. |
page_token |
Optional. The next_page_token value returned from a previous list request. |
ListDocumentsResponse
Response message for Documents.ListDocuments
.
Fields | |
---|---|
documents[] |
The list of documents. |
next_page_token |
Token to retrieve the next page of results, or empty if there are no more results in the list. |
ListEntityTypesRequest
The request message for EntityTypes.ListEntityTypes
.
Fields | |
---|---|
parent |
Required. The agent to list all entity types from. Format: Authorization requires the following Google IAM permission on the specified resource
|
language_code |
Optional. The language to list entity synonyms for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used. |
page_size |
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000. |
page_token |
Optional. The next_page_token value returned from a previous list request. |
ListEntityTypesResponse
The response message for EntityTypes.ListEntityTypes
.
Fields | |
---|---|
entity_types[] |
The list of agent entity types. There will be a maximum number of items returned based on the page_size field in the request. |
next_page_token |
Token to retrieve the next page of results, or empty if there are no more results in the list. |
ListIntentsRequest
The request message for Intents.ListIntents
.
Fields | |
---|---|
parent |
Required. The agent to list all intents from. Format: Authorization requires the following Google IAM permission on the specified resource
|
language_code |
Optional. The language to list training phrases, parameters and rich messages for. If not specified, the agent's default language is used. Many languages are supported. Note: languages must be enabled in the agent before they can be used. |
intent_view |
Optional. The resource view to apply to the returned intent. |
page_size |
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000. |
page_token |
Optional. The next_page_token value returned from a previous list request. |
ListIntentsResponse
The response message for Intents.ListIntents
.
Fields | |
---|---|
intents[] |
The list of agent intents. There will be a maximum number of items returned based on the page_size field in the request. |
next_page_token |
Token to retrieve the next page of results, or empty if there are no more results in the list. |
ListKnowledgeBasesRequest
Request message for KnowledgeBases.ListKnowledgeBases
.
Fields | |
---|---|
parent |
Required. The project to list knowledge bases for. Format: Authorization requires the following Google IAM permission on the specified resource
|
page_size |
Optional. The maximum number of items to return in a single page. By default 10 and at most 100. |
page_token |
Optional. The next_page_token value returned from a previous list request. |
ListKnowledgeBasesResponse
Response message for KnowledgeBases.ListKnowledgeBases
.
Fields | |
---|---|
knowledge_bases[] |
The list of knowledge bases. |
next_page_token |
Token to retrieve the next page of results, or empty if there are no more results in the list. |
ListSessionEntityTypesRequest
The request message for SessionEntityTypes.ListSessionEntityTypes
.
Fields | |
---|---|
parent |
Required. The session to list all session entity types from. Format: Authorization requires the following Google IAM permission on the specified resource
|
page_size |
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000. |
page_token |
Optional. The next_page_token value returned from a previous list request. |
ListSessionEntityTypesResponse
The response message for SessionEntityTypes.ListSessionEntityTypes
.
Fields | |
---|---|
session_entity_types[] |
The list of session entity types. There will be a maximum number of items returned based on the page_size field in the request. |
next_page_token |
Token to retrieve the next page of results, or empty if there are no more results in the list. |
OriginalDetectIntentRequest
Represents the contents of the original request that was passed to the [Streaming]DetectIntent
call.
Fields | |
---|---|
source |
The source of this request, e.g., |
version |
Optional. The version of the protocol used for this request. This field is AoG-specific. |
payload |
Optional. This field is set to the value of the In particular for the Telephony Gateway this field has the form: { "telephony": { "caller_id": "+18558363987" } } Note: The caller ID field ( |
OutputAudioConfig
Instructs the speech synthesizer how to generate the output audio content.
Fields | |
---|---|
audio_encoding |
Required. Audio encoding of the synthesized audio content. |
sample_rate_hertz |
Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality). |
synthesize_speech_config |
Optional. Configuration of how speech should be synthesized. |
OutputAudioEncoding
Audio encoding of the output audio format in Text-To-Speech.
Enums | |
---|---|
OUTPUT_AUDIO_ENCODING_UNSPECIFIED |
Not specified. |
OUTPUT_AUDIO_ENCODING_LINEAR_16 |
Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header. |
OUTPUT_AUDIO_ENCODING_MP3 |
MP3 audio at 32kbps. |
OUTPUT_AUDIO_ENCODING_OGG_OPUS |
Opus encoded audio wrapped in an ogg container. The result will be a file which can be played natively on Android, and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate. |
QueryInput
Represents the query input. It can contain one of the following:
An audio config which instructs the speech recognizer how to process the speech audio.
A conversational query in the form of text.
An event that specifies which intent to trigger.
Fields | ||
---|---|---|
Union field input . Required. The input specification. input can be only one of the following: |
||
audio_config |
Instructs the speech recognizer how to process the speech audio. |
|
text |
The natural language text to be processed. |
|
event |
The event to be processed. |
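Because input is a union, a QueryInput carries exactly one of the three inputs above. A minimal, hypothetical text query could look like this sketch (the TextInput fields are documented further below; the query string is invented).

```python
# QueryInput with natural language text; exactly one of audio_config,
# text, or event may be set.
query_input = {
    "text": {
        "text": "I want to book a room for tomorrow",  # at most 256 characters
        "language_code": "en-US",
    }
}
```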
QueryParameters
Represents the parameters of the conversational query.
Fields | |
---|---|
time_zone |
Optional. The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used. |
geo_location |
Optional. The geo location of this conversational query. |
contexts[] |
Optional. The collection of contexts to be activated before this query is executed. |
reset_contexts |
Optional. Specifies whether to delete all contexts in the current session before the new ones are activated. |
session_entity_types[] |
Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query. |
payload |
Optional. This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported. |
knowledge_base_names[] |
Optional. KnowledgeBases to get alternative results from. If not set, the KnowledgeBases enabled in the agent (through UI) will be used. Format: |
sentiment_analysis_request_config |
Optional. Configures the type of sentiment analysis to perform. If not provided, sentiment analysis is not performed. Note: Sentiment Analysis is currently only available for Enterprise Edition agents. |
webhook_headers |
Optional. This field can be used to pass HTTP headers for a webhook call. These headers will be sent to the webhook along with the headers that have been configured through the Dialogflow web console. The headers defined within this field will overwrite the headers configured through the Dialogflow console if there is a conflict. Header names are case-insensitive. Google's specified headers are not allowed, including: "Host", "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc. |
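A hypothetical QueryParameters sketch combining a few of the optional fields above. The context resource path is a placeholder, the lifespan_count field is assumed from the Context message documented earlier in this reference, and the payload and header values are invented.

```python
# Hypothetical QueryParameters for a single detect-intent call.
query_params = {
    "time_zone": "America/New_York",
    "reset_contexts": False,
    "contexts": [
        # Context resource name; the path segments here are placeholders.
        {"name": "projects/<project>/agent/sessions/<session>/contexts/booking",
         "lifespan_count": 3},
    ],
    "payload": {"source_channel": "web"},               # arbitrary JSON passed to the webhook
    "webhook_headers": {"X-Custom-Auth": "placeholder-token"},  # Google-specified headers not allowed
    "sentiment_analysis_request_config": {"analyze_query_text_sentiment": True},
}
```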
QueryResult
Represents the result of conversational query or event processing.
Fields | |
---|---|
query_text |
The original conversational query text:
|
language_code |
The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes. |
speech_recognition_confidence |
The Speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be accurate or set. In particular this field isn't set for StreamingDetectIntent since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult. |
action |
The action name from the matched intent. |
parameters |
The collection of extracted parameters. |
all_required_params_present |
This field is set to:
|
fulfillment_text |
The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, |
fulfillment_messages[] |
The collection of rich messages to present to the user. |
webhook_source |
If the query was fulfilled by a webhook call, this field is set to the value of the |
webhook_payload |
If the query was fulfilled by a webhook call, this field is set to the value of the |
output_contexts[] |
The collection of output contexts. If applicable, |
intent |
The intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to: |
intent_detection_confidence |
The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are |
diagnostic_info |
The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice. |
sentiment_analysis_result |
The sentiment analysis result, which depends on the |
knowledge_answers |
The result from Knowledge Connector (if any), ordered by decreasing |
ReloadDocumentRequest
Request message for Documents.ReloadDocument
.
Fields | |
---|---|
name |
The name of the document to reload. Format: |
gcs_source |
The path of the GCS source file for reloading the document content. |
RestoreAgentRequest
The request message for Agents.RestoreAgent
.
Fields | ||
---|---|---|
parent |
Required. The project that the agent to restore is associated with. Format: Authorization requires the following Google IAM permission on the specified resource
|
|
Union field agent . Required. The agent to restore. agent can be only one of the following: |
||
agent_uri |
The URI to a Google Cloud Storage file containing the agent to restore. Note: The URI must start with "gs://". |
|
agent_content |
Zip compressed raw byte content for agent. |
SearchAgentsRequest
The request message for Agents.SearchAgents
.
Fields | |
---|---|
parent |
Required. The project to list agents from. Format: Authorization requires the following Google IAM permission on the specified resource
|
page_size |
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000. |
page_token |
Optional. The next_page_token value returned from a previous list request. |
SearchAgentsResponse
The response message for Agents.SearchAgents
.
Fields | |
---|---|
agents[] |
The list of agents. There will be a maximum number of items returned based on the page_size field in the request. |
next_page_token |
Token to retrieve the next page of results, or empty if there are no more results in the list. |
Sentiment
The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.
Fields | |
---|---|
score |
Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment). |
magnitude |
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative). |
SentimentAnalysisRequestConfig
Configures the types of sentiment analysis to perform.
Fields | |
---|---|
analyze_query_text_sentiment |
Optional. Instructs the service to perform sentiment analysis on |
SentimentAnalysisResult
The result of sentiment analysis as configured by sentiment_analysis_request_config
.
Fields | |
---|---|
query_text_sentiment |
The sentiment analysis result for |
SessionEntityType
Represents a session entity type.
Extends or replaces a developer entity type at the user session level (we refer to the entity types defined at the agent level as "developer entity types").
Note: session entity types apply to all queries, regardless of the language.
Fields | |
---|---|
name |
Required. The unique identifier of this session entity type. Format:
|
entity_override_mode |
Required. Indicates whether the additional data should override or supplement the developer entity type definition. |
entities[] |
Required. The collection of entities associated with this session entity type. |
EntityOverrideMode
The types of modifications for a session entity type.
Enums | |
---|---|
ENTITY_OVERRIDE_MODE_UNSPECIFIED |
Not specified. This value should never be used. |
ENTITY_OVERRIDE_MODE_OVERRIDE |
The collection of session entities overrides the collection of entities in the corresponding developer entity type. |
ENTITY_OVERRIDE_MODE_SUPPLEMENT |
The collection of session entities extends the collection of entities in the corresponding developer entity type. Note: Even in this override mode calls to |
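Putting the two pieces together, a hypothetical SessionEntityType that supplements a developer entity type might look like this. The resource path and entity values are placeholders, and the value/synonyms fields are assumed from the EntityType.Entity message documented earlier in this reference.

```python
# Hypothetical session entity type extending a developer "room-type" entity
# for the lifetime of one session.
session_entity_type = {
    # Full resource name; the path segments here are placeholders.
    "name": "projects/<project>/agent/sessions/<session>/entityTypes/room-type",
    "entity_override_mode": "ENTITY_OVERRIDE_MODE_SUPPLEMENT",
    "entities": [
        {"value": "penthouse", "synonyms": ["penthouse", "top floor suite"]},
    ],
}
```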
SetAgentRequest
The request message for Agents.SetAgent
.
Fields | |
---|---|
agent |
Required. The agent to update. Authorization requires the following Google IAM permission on the specified resource
|
update_mask |
Optional. The mask to control which fields get updated. |
SpeechContext
Hints for the speech recognizer to help with recognition in a specific conversation state.
Fields | |
---|---|
phrases[] |
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. This list can be used to: * improve accuracy for words and phrases you expect the user to say, e.g. typical commands for your Dialogflow agent * add additional words to the speech recognizer vocabulary * ... See the Cloud Speech documentation for usage limits. |
boost |
Optional. Boost for this context compared to other contexts: * If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar sounding phrases. * If the boost is unspecified or non-positive, Dialogflow will not apply any boost. Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search. |
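A small, hypothetical SpeechContext hint; the phrases are invented, and the boost value follows the recommended (0, 20] range above.

```python
# Hypothetical speech context biasing recognition toward booking vocabulary.
speech_context = {
    "phrases": ["book a room", "double room", "check-in time"],
    "boost": 5.0,  # positive boost within the recommended (0, 20] range
}
```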
SpeechModelVariant
Variant of the specified Speech model
to use.
See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.
Enums | |
---|---|
SPEECH_MODEL_VARIANT_UNSPECIFIED |
No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE. |
USE_BEST_AVAILABLE |
Use the best available variant of the Speech model that the caller is eligible for. Please see the Dialogflow docs for how to make your project eligible for enhanced models. |
USE_STANDARD |
Use standard model variant even if an enhanced model is available. See the Cloud Speech documentation for details about enhanced models. |
USE_ENHANCED |
Use an enhanced model variant:
The Cloud Speech documentation describes which models have enhanced variants.
|
SpeechWordInfo
Information for a word recognized by the speech recognizer.
Fields | |
---|---|
word |
The word this info is for. |
start_offset |
Time offset relative to the beginning of the audio that corresponds to the start of the spoken word. This is an experimental feature and the accuracy of the time offset can vary. |
end_offset |
Time offset relative to the beginning of the audio that corresponds to the end of the spoken word. This is an experimental feature and the accuracy of the time offset can vary. |
confidence |
The Speech confidence between 0.0 and 1.0 for this word. A higher number indicates an estimated greater likelihood that the recognized word is correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be fully stable over time for the same audio input. Users should also not rely on it to always be provided. |
SsmlVoiceGender
Gender of the voice as described in SSML voice element.
Enums | |
---|---|
SSML_VOICE_GENDER_UNSPECIFIED |
An unspecified gender, which means that the client doesn't care which gender the selected voice will have. |
SSML_VOICE_GENDER_MALE |
A male voice. |
SSML_VOICE_GENDER_FEMALE |
A female voice. |
SSML_VOICE_GENDER_NEUTRAL |
A gender-neutral voice. |
StreamingDetectIntentRequest
The top-level message sent by the client to the StreamingDetectIntent method.
Multiple request messages should be sent in order:

- The first message must contain StreamingDetectIntentRequest.session and StreamingDetectIntentRequest.query_input, plus optionally StreamingDetectIntentRequest.query_params. If the client wants to receive an audio response, it should also contain StreamingDetectIntentRequest.output_audio_config. The message must not contain StreamingDetectIntentRequest.input_audio.
- If StreamingDetectIntentRequest.query_input was set to StreamingDetectIntentRequest.query_input.audio_config, all subsequent messages must contain StreamingDetectIntentRequest.input_audio to continue with Speech recognition. If you decide to detect an intent from text input instead, after you have already started Speech recognition, please send a message with StreamingDetectIntentRequest.query_input.text.

However, note that:

* Dialogflow will bill you for the audio duration so far.
* Dialogflow discards all Speech recognition results in favor of the input text.
* Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
Fields | |
---|---|
session |
Required. The name of the session the query is sent to. Format of the session name: Authorization requires the following Google IAM permission on the specified resource
|
query_params |
Optional. The parameters of this query. |
query_input |
Required. The input specification. It can be set to:
|
single_utterance |
DEPRECATED. Please use |
output_audio_config |
Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated. |
input_audio |
Optional. The input audio content to be recognized. Must be sent if |
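The ordering rules above translate naturally into a request generator: one configuration message first, then audio chunks. This is only a sketch, assuming the yielded dicts are consumed by a gRPC streaming detect-intent call; the session path and audio source are placeholders, and the audio_config field names are assumptions taken from the InputAudioConfig message documented earlier in this reference.

```python
# Sketch of the streaming request sequence described above (not a full client).
def streaming_requests(session, audio_chunks, sample_rate_hz=16000):
    """Yield StreamingDetectIntentRequest-shaped dicts in the required order."""
    # 1) First message: session + query_input (audio_config), no input_audio.
    yield {
        "session": session,  # e.g. a projects/.../sessions/... path (placeholder)
        "query_input": {
            "audio_config": {
                "audio_encoding": "AUDIO_ENCODING_LINEAR_16",
                "sample_rate_hertz": sample_rate_hz,
                "language_code": "en-US",
            }
        },
        "output_audio_config": {"audio_encoding": "OUTPUT_AUDIO_ENCODING_LINEAR_16"},
    }
    # 2) Subsequent messages: only input_audio, until the caller half-closes.
    for chunk in audio_chunks:
        yield {"input_audio": chunk}
```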
StreamingDetectIntentResponse
The top-level message returned from the StreamingDetectIntent
method.
Multiple response messages can be returned in order:
- If the input was set to streaming audio, the first one or more messages contain recognition_result. Each recognition_result represents a more complete transcript of what the user said. The last recognition_result has is_final set to true.
- The next message contains response_id, query_result, alternative_query_results and optionally webhook_status if a WebHook was called.
- If output_audio_config was specified in the request or the agent-level speech synthesizer is configured, all subsequent messages contain output_audio and output_audio_config.
Fields | |
---|---|
response_id |
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues. |
recognition_result |
The result of speech recognition. |
query_result |
The selected results of the conversational query or event processing. See |
alternative_query_results[] |
If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing |
webhook_status |
Specifies the status of the webhook request. |
output_audio |
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the |
output_audio_config |
The config used by the speech synthesizer to generate the output audio. |
StreamingRecognitionResult
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.
Example:

1. transcript: "tube"
2. transcript: "to be a"
3. transcript: "to be"
4. transcript: "to be or not to be" is_final: true
5. transcript: " that's"
6. transcript: " that is"
7. message_type: END_OF_SINGLE_UTTERANCE
8. transcript: " that is the question" is_final: true

Only two of the responses contain final results (#4 and #8, indicated by is_final: true). Concatenating these generates the full transcript: "to be or not to be that is the question".

In each response we populate:

- for TRANSCRIPT: transcript and possibly is_final.
- for END_OF_SINGLE_UTTERANCE: only message_type.
Fields | |
---|---|
message_type |
Type of the result message. |
transcript |
Transcript text representing the words that the user spoke. Populated if and only if |
is_final |
If |
confidence |
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if |
stability |
An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result: * If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for |
speech_word_info[] |
Word-specific information for the words recognized by Speech in |
speech_end_offset |
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for |
MessageType
Type of the response message.
Enums | |
---|---|
MESSAGE_TYPE_UNSPECIFIED |
Not specified. Should never be used. |
TRANSCRIPT |
Message contains a (possibly partial) transcript. |
END_OF_SINGLE_UTTERANCE |
Event indicates that the server has detected the end of the user's speech utterance and expects no additional speech. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This message is only sent if single_utterance was set to true, and is not used otherwise. |
SynthesizeSpeechConfig
Configuration of how speech should be synthesized.
Fields | |
---|---|
speaking_rate |
Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other values < 0.25 or > 4.0 will return an error. |
pitch |
Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch. |
volume_gain_db |
Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB) will play at approximately half the amplitude of the normal native signal amplitude. A value of +6.0 (dB) will play at approximately twice the amplitude of the normal native signal amplitude. We strongly recommend not to exceed +10 (dB) as there's usually no effective increase in loudness for any value greater than that. |
effects_profile_id[] |
Optional. An identifier which selects 'audio effects' profiles that are applied on (post synthesized) text to speech. Effects are applied on top of each other in the order they are given. |
voice |
Optional. The desired voice of the synthesized audio. |
TextInput
Represents the natural language text to be processed.
Fields | |
---|---|
text |
Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters. |
language_code |
Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. |
TrainAgentRequest
The request message for Agents.TrainAgent
.
Fields | |
---|---|
parent |
Required. The project that the agent to train is associated with. Format: Authorization requires the following Google IAM permission on the specified resource
|
UpdateContextRequest
The request message for Contexts.UpdateContext
.
Fields | |
---|---|
context |
Required. The context to update. Authorization requires the following Google IAM permission on the specified resource
|
update_mask |
Optional. The mask to control which fields get updated. |
UpdateDocumentRequest
Request message for Documents.UpdateDocument
.
Fields | |
---|---|
document |
Required. The document to update. |
update_mask |
Optional. Not specified means |
UpdateEntityTypeRequest
The request message for EntityTypes.UpdateEntityType
.
Fields | |
---|---|
entity_type |
Required. The entity type to update. Authorization requires the following Google IAM permission on the specified resource
|
language_code |
Optional. The language of entity synonyms defined in |
update_mask |
Optional. The mask to control which fields get updated. |
UpdateIntentRequest
The request message for Intents.UpdateIntent
.
Fields | |
---|---|
intent |
Required. The intent to update. Authorization requires the following Google IAM permission on the specified resource
|
language_code |
Optional. The language of training phrases, parameters and rich messages defined in |
update_mask |
Optional. The mask to control which fields get updated. |
intent_view |
Optional. The resource view to apply to the returned intent. |
UpdateKnowledgeBaseRequest
Request message for KnowledgeBases.UpdateKnowledgeBase
.
Fields | |
---|---|
knowledge_base |
Required. The knowledge base to update. |
update_mask |
Optional. Not specified means |
UpdateSessionEntityTypeRequest
The request message for SessionEntityTypes.UpdateSessionEntityType
.
Fields | |
---|---|
session_entity_type |
Required. The entity type to update. Format: Authorization requires the following Google IAM permission on the specified resource
|
update_mask |
Optional. The mask to control which fields get updated. |
ValidationError
Represents a single validation error.
Fields | |
---|---|
severity |
The severity of the error. |
entries[] |
The names of the entries that the error is associated with. Format:
|
error_message |
The detailed error message. |
Severity
Represents a level of severity.
Enums | |
---|---|
SEVERITY_UNSPECIFIED |
Not specified. This value should never be used. |
INFO |
The agent doesn't follow Dialogflow best practices. |
WARNING |
The agent may not behave as expected. |
ERROR |
The agent may experience partial failures. |
CRITICAL |
The agent may completely fail. |
ValidationResult
Represents the output of agent validation.
Fields | |
---|---|
validation_errors[] |
Contains all validation errors. |
VoiceSelectionParams
Description of which voice to use for speech synthesis.
Fields | |
---|---|
name |
Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and |
ssml_gender |
Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and |
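Combining OutputAudioConfig, SynthesizeSpeechConfig, and VoiceSelectionParams from the tables above, a hypothetical output-audio configuration could be sketched like this; the voice name is a placeholder and the numeric values simply sit inside the documented ranges.

```python
# Hypothetical OutputAudioConfig asking for slightly faster, quieter MP3 audio.
output_audio_config = {
    "audio_encoding": "OUTPUT_AUDIO_ENCODING_MP3",
    "sample_rate_hertz": 24000,          # optional; synthesizer default used if omitted
    "synthesize_speech_config": {
        "speaking_rate": 1.2,            # within [0.25, 4.0]
        "pitch": -2.0,                   # within [-20.0, 20.0] semitones
        "volume_gain_db": -3.0,          # within [-96.0, 16.0]
        "voice": {
            "name": "en-US-Standard-C",  # placeholder voice name
            "ssml_gender": "SSML_VOICE_GENDER_FEMALE",
        },
    },
}
```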
WebhookRequest
The request message for a webhook call.
Fields | |
---|---|
session |
The unique identifier of the detectIntent request session. Can be used to identify the end-user inside the webhook implementation. Format: |
response_id |
The unique identifier of the response. Contains the same value as |
query_result |
The result of the conversational query or event processing. Contains the same value as |
alternative_query_results[] |
Alternative query results from KnowledgeService. |
original_detect_intent_request |
Optional. The contents of the original request that was passed to |
WebhookResponse
The response message for a webhook call.
Fields | |
---|---|
fulfillment_text |
Optional. The text to be shown on the screen. This value is passed directly to |
fulfillment_messages[] |
Optional. The collection of rich messages to present to the user. This value is passed directly to |
source |
Optional. This value is passed directly to |
payload |
Optional. This value is passed directly to This field can be used for Actions on Google responses. It should have a structure similar to the JSON message shown here. For more information, see Actions on Google Webhook Format { "google": { "expectUserResponse": true, "richResponse": { "items": [ { "simpleResponse": { "textToSpeech": "this is a simple response" } } ] } } } |
output_contexts[] |
Optional. The collection of output contexts. This value is passed directly to |
followup_event_input |
Optional. Makes the platform immediately invoke another |
end_interaction |
Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false. |
session_entity_types[] |
Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query. Setting the session entity types inside webhook overwrites the session entity types that have been set through |
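A hypothetical WebhookResponse body tying several of the fields above together. The fulfillment text is invented, the context resource path is a placeholder, and the lifespan_count field is assumed from the Context message documented earlier in this reference.

```python
# Hypothetical webhook response: plain fulfillment text plus one output context.
webhook_response = {
    "fulfillment_text": "Your room is booked for tomorrow at 10 AM.",
    "fulfillment_messages": [
        # A Text message; text[] is the collection of the agent's responses.
        {"text": {"text": ["Your room is booked for tomorrow at 10 AM."]}}
    ],
    "source": "example.com",   # passed through to QueryResult.webhook_source
    "output_contexts": [
        # Context resource name; the path segments here are placeholders.
        {"name": "projects/<project>/agent/sessions/<session>/contexts/booking-followup",
         "lifespan_count": 2},
    ],
    "end_interaction": False,
}
```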