REST Resource: projects.locations.agents.conversations

Resource: Conversation

Represents a conversation.

JSON representation
{
  "name": string,
  "type": enum (Type),
  "languageCode": string,
  "startTime": string,
  "duration": string,
  "metrics": {
    object (Metrics)
  },
  "intents": [
    {
      object (Intent)
    }
  ],
  "flows": [
    {
      object (Flow)
    }
  ],
  "pages": [
    {
      object (Page)
    }
  ],
  "interactions": [
    {
      object (Interaction)
    }
  ],
  "environment": {
    object (Environment)
  },
  "flowVersions": {
    string: string,
    ...
  }
}
Fields
name

string

Identifier. The identifier of the conversation. If a conversation ID is reused, interactions that happen more than 48 hours after the conversation's create time are ignored. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/conversations/<ConversationID>

type

enum (Type)

The type of the conversation.

languageCode

string

The language of the conversation, which is the language of the first request in the conversation.

startTime

string (Timestamp format)

Start time of the conversation, which is the time of the first request of the conversation.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

duration

string (Duration format)

Duration of the conversation.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
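
Durations such as duration, inputAudioDuration, and maxWebhookLatency all use this string encoding. A minimal Python sketch of converting one to a float, assuming well-formed values:

def parse_duration_seconds(duration: str) -> float:
    # Durations are encoded as a decimal number of seconds with up to
    # nine fractional digits, ending with 's', e.g. "3.5s".
    if not duration.endswith("s"):
        raise ValueError(f"unexpected duration format: {duration!r}")
    return float(duration[:-1])

print(parse_duration_seconds("3.5s"))  # 3.5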

metrics

object (Metrics)

Conversation metrics.

intents[]

object (Intent)

All the intents matched in the conversation. Only name and displayName are filled in this message.

flows[]

object (Flow)

All the flows the conversation has gone through. Only name and displayName are filled in this message.

pages[]

object (Page)

All the pages the conversation has gone through. Only name and displayName are filled in this message.

interactions[]

object (Interaction)

Interactions of the conversation. Only populated for conversations.get and empty for conversations.list.

environment

object (Environment)

Environment of the conversation. Only name and displayName are filled in this message.

flowVersions

map (key: string, value: string (int64 format))

Flow versions used in the conversation.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

Type

Represents the type of a conversation.

Enums
TYPE_UNSPECIFIED Not specified. This value should never be used.
AUDIO Audio conversation. A conversation is classified as an audio conversation if any request has STT input audio or any response has TTS output audio.
TEXT Text conversation. A conversation is classified as a text conversation if any request has text input and no request has STT input audio and no response has TTS output audio.
UNDETERMINED Default conversation type for a conversation. A conversation is classified as undetermined if none of the requests contain text or audio input (e.g., event or intent input).
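
The classification rules above amount to a simple precedence check (audio beats text, text beats undetermined). An illustrative Python sketch; the boolean flags are hypothetical summaries of the request/response stream, not API fields:

def classify_conversation_type(has_stt_input: bool,
                               has_tts_output: bool,
                               has_text_input: bool) -> str:
    # AUDIO wins if any request had STT input audio or any response had
    # TTS output audio.
    if has_stt_input or has_tts_output:
        return "AUDIO"
    # TEXT requires at least one text input and no audio on either side.
    if has_text_input:
        return "TEXT"
    # Only event or intent input: the type cannot be determined.
    return "UNDETERMINED"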

Metrics

Represents metrics for the conversation.

JSON representation
{
  "interactionCount": integer,
  "inputAudioDuration": string,
  "outputAudioDuration": string,
  "maxWebhookLatency": string,
  "hasEndInteraction": boolean,
  "hasLiveAgentHandoff": boolean,
  "averageMatchConfidence": number,
  "queryInputCount": {
    object (QueryInputCount)
  },
  "matchTypeCount": {
    object (MatchTypeCount)
  }
}
Fields
interactionCount

integer

The number of interactions in the conversation.

inputAudioDuration

string (Duration format)

Duration of all the input audio in the conversation.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

outputAudioDuration

string (Duration format)

Duration of all the output audio in the conversation.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

maxWebhookLatency

string (Duration format)

Maximum latency of the Webhook calls in the conversation.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

hasEndInteraction

boolean

A signal that indicates the interaction with the Dialogflow agent has ended. If any response has the ResponseMessage.end_interaction signal, this is set to true.

hasLiveAgentHandoff

boolean

A signal that indicates the conversation was handed off to a human agent. If any response has the ResponseMessage.live_agent_handoff signal, this is set to true.

averageMatchConfidence

number

The average confidence of all the matches in the conversation. Values range from 0.0 (completely uncertain) to 1.0 (completely certain).

queryInputCount

object (QueryInputCount)

Query input counts.

matchTypeCount

object (MatchTypeCount)

Match type counts.

QueryInputCount

Count by types of QueryInput of the requests in the conversation.

JSON representation
{
  "textCount": integer,
  "intentCount": integer,
  "audioCount": integer,
  "eventCount": integer,
  "dtmfCount": integer
}
Fields
textCount

integer

The number of TextInput in the conversation.

intentCount

integer

The number of IntentInput in the conversation.

audioCount

integer

The number of AudioInput in the conversation.

eventCount

integer

The number of EventInput in the conversation.

dtmfCount

integer

The number of DtmfInput in the conversation.

MatchTypeCount

Count by Match.MatchType of the matches in the conversation.

JSON representation
{
  "unspecifiedCount": integer,
  "intentCount": integer,
  "directIntentCount": integer,
  "parameterFillingCount": integer,
  "noMatchCount": integer,
  "noInputCount": integer,
  "eventCount": integer
}
Fields
unspecifiedCount

integer

The number of matches with type Match.MatchType.MATCH_TYPE_UNSPECIFIED.

intentCount

integer

The number of matches with type Match.MatchType.INTENT.

directIntentCount

integer

The number of matches with type Match.MatchType.DIRECT_INTENT.

parameterFillingCount

integer

The number of matches with type Match.MatchType.PARAMETER_FILLING.

noMatchCount

integer

The number of matches with type Match.MatchType.NO_MATCH.

noInputCount

integer

The number of matches with type Match.MatchType.NO_INPUT.

eventCount

integer

The number of matches with type Match.MatchType.EVENT.
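
Because MatchTypeCount and QueryInputCount are plain integer counters, simple quality metrics fall out directly. A hedged Python sketch computing a no-match rate from a Conversation resource parsed as a dict (field names as documented above):

def no_match_rate(conversation: dict) -> float:
    # Fraction of all matches that were NO_MATCH, from a Conversation
    # resource parsed as a dict. Returns 0.0 if there were no matches.
    counts = conversation.get("metrics", {}).get("matchTypeCount", {})
    total = sum(counts.values())
    return counts.get("noMatchCount", 0) / total if total else 0.0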

Interaction

Represents an interaction between an end user and a Dialogflow CX agent using the V3 DetectIntent or StreamingDetectIntent API, or an interaction between an end user and a Dialogflow CX agent using the V2 AnalyzeContent or StreamingAnalyzeContent API.

JSON representation
{
  "request": {
    object (DetectIntentRequest)
  },
  "response": {
    object (DetectIntentResponse)
  },
  "partialResponses": [
    {
      object (DetectIntentResponse)
    }
  ],
  "requestUtterances": string,
  "responseUtterances": string,
  "createTime": string,
  "missingTransition": {
    object (MissingTransition)
  }
}
Fields
request

object (DetectIntentRequest)

The request of the interaction.

response

object (DetectIntentResponse)

The final response of the interaction.

partialResponses[]

object (DetectIntentResponse)

The partial responses of the interaction. Empty if there is no partial response in the interaction. See the partial response documentation: https://cloud.google.com/dialogflow/cx/docs/concept/fulfillment#queue

requestUtterances

string

The input text or the transcript of the input audio in the request.

responseUtterances

string

The output text or the transcript of the output audio in the responses. If multiple output messages are returned, they will be concatenated into one.

createTime

string (Timestamp format)

The time that the interaction was created.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

missingTransition

object (MissingTransition)

Missing transition predicted for the interaction. This field is set only if the interaction match type was no-match.

DetectIntentRequest

The request to detect user's intent.

JSON representation
{
  "session": string,
  "queryParams": {
    object (QueryParameters)
  },
  "queryInput": {
    object (QueryInput)
  },
  "outputAudioConfig": {
    object (OutputAudioConfig)
  }
}
Fields
session

string

Required. The name of the session this query is sent to. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/sessions/<SessionID> or projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/environments/<EnvironmentID>/sessions/<SessionID>. If Environment ID is not specified, the default 'draft' environment is assumed. It is up to the API caller to choose an appropriate Session ID. It can be a random number or some other type of session identifier (preferably hashed). The length of the Session ID must not exceed 36 characters.

For more information, see the sessions guide.

Note: Always use agent versions for production traffic. See Versions and environments.
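
A minimal Python sketch of building a session name under these rules; the project, location, agent, and environment IDs are placeholders, and a random UUID keeps the session ID at the documented 36-character maximum:

import uuid

def session_name(project: str, location: str, agent: str,
                 environment: str | None = None) -> str:
    # A UUID4 string is 36 characters, the documented maximum length.
    session_id = str(uuid.uuid4())
    base = f"projects/{project}/locations/{location}/agents/{agent}"
    if environment:
        return f"{base}/environments/{environment}/sessions/{session_id}"
    # No environment: Dialogflow assumes the default 'draft' environment.
    return f"{base}/sessions/{session_id}"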

queryParams

object (QueryParameters)

The parameters of this query.

queryInput

object (QueryInput)

Required. The input specification.

outputAudioConfig

object (OutputAudioConfig)

Instructs the speech synthesizer how to generate the output audio.

QueryParameters

Represents the parameters of a conversational query.

JSON representation
{
  "timeZone": string,
  "geoLocation": {
    object (LatLng)
  },
  "sessionEntityTypes": [
    {
      object (SessionEntityType)
    }
  ],
  "payload": {
    object
  },
  "parameters": {
    object
  },
  "currentPage": string,
  "disableWebhook": boolean,
  "analyzeQueryTextSentiment": boolean,
  "webhookHeaders": {
    string: string,
    ...
  },
  "flowVersions": [
    string
  ],
  "currentPlaybook": string,
  "llmModelSettings": {
    object (LlmModelSettings)
  },
  "channel": string,
  "sessionTtl": string,
  "endUserMetadata": {
    object
  },
  "searchConfig": {
    object (SearchConfig)
  },
  "populateDataStoreConnectionSignals": boolean
}
Fields
timeZone

string

The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in the agent is used.

geoLocation

object (LatLng)

The geo location of this conversational query.

sessionEntityTypes[]

object (SessionEntityType)

Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query.

payload

object (Struct format)

This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported. Some integrations that query a Dialogflow agent may provide additional information in the payload. In particular, for the Dialogflow Phone Gateway integration, this field has the form:

{
 "telephony": {
   "caller_id": "+18558363987"
 }
}
parameters

object (Struct format)

Additional parameters to be put into session parameters (SessionInfo.parameters). To remove a parameter from the session, clients should explicitly set the parameter value to null.

You can reference the session parameters in the agent with the following format: $session.params.parameter-id.

Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: If the parameter's entity type is a composite entity, then use map; otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
  • MapValue value: If the parameter's entity type is a composite entity, then use a map from composite entity property names to property values; otherwise, use the parameter value.
currentPage

string

The unique identifier of the page to override the current page (QueryResult.current_page) in the session. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/flows/<FlowID>/pages/<PageID>.

If currentPage is specified, the previous state of the session will be ignored by Dialogflow, including the previous page (QueryResult.current_page) and the previous session parameters (QueryResult.parameters). In most cases, currentPage and parameters should be configured together to direct a session to a specific state.
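
For example, a queryParams payload that pins the session to a specific page and seeds (or clears) session parameters might look like the following Python sketch; all resource IDs are placeholders:

query_params = {
    # Override the session state to start at a specific page.
    "currentPage": ("projects/my-project/locations/global/agents/my-agent"
                    "/flows/my-flow/pages/my-page"),
    "parameters": {
        # Referenced in the agent as $session.params.order-id.
        "order-id": "12345",
        # Explicitly setting a parameter to null removes it from the session.
        "coupon-code": None,
    },
}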

disableWebhook

boolean

Whether to disable webhook calls for this request.

analyzeQueryTextSentiment

boolean

Configures whether sentiment analysis should be performed. If not provided, sentiment analysis is not performed.

webhookHeaders

map (key: string, value: string)

This field can be used to pass HTTP headers for a webhook call. These headers will be sent to the webhook along with the headers that have been configured through the Dialogflow web console. The headers defined within this field will overwrite the headers configured through the Dialogflow console if there is a conflict. Header names are case-insensitive. Google-specified headers are not allowed, including "Host", "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

flowVersions[]

string

A list of flow versions to override for the request. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/flows/<FlowID>/versions/<VersionID>.

If version 1 of flow X is included in this list, the traffic of flow X will go through version 1 regardless of the version configuration in the environment. Each flow can have at most one version specified in this list.

currentPlaybook

string

Optional. Start the session with the specified playbook. You can only specify the playbook at the beginning of the session. Otherwise, an error will be thrown.

Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/playbooks/<PlaybookID>.

llmModelSettings

object (LlmModelSettings)

Optional. Use the specified LLM model settings for processing the request.

channel

string

The channel which this query is for.

If specified, only the ResponseMessage associated with the channel will be returned. If no ResponseMessage is associated with the channel, it falls back to the ResponseMessage with unspecified channel.

If unspecified, the ResponseMessage with unspecified channel will be returned.

sessionTtl

string (Duration format)

Optional. Configures the lifetime of the Dialogflow session. By default, a Dialogflow session remains active and its data is stored for 30 minutes after the last request is sent for the session. This value should be no longer than 1 day.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

endUserMetadata

object (Struct format)

Optional. Information about the end-user to improve the relevance and accuracy of generative answers.

This will be interpreted and used by a language model, so, for good results, the data should be self-descriptive and in a simple structure.

Example:

{
  "subscription plan": "Business Premium Plus",
  "devices owned": [
    {"model": "Google Pixel 7"},
    {"model": "Google Pixel Tablet"}
  ]
}
searchConfig

object (SearchConfig)

Optional. Search configuration for UCS search queries.

populateDataStoreConnectionSignals

boolean

Optional. If set to true and data stores are involved in serving the request, then DetectIntentResponse.query_result.data_store_connection_signals will be filled with data that can help evaluations.

LatLng

An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.

JSON representation
{
  "latitude": number,
  "longitude": number
}
Fields
latitude

number

The latitude in degrees. It must be in the range [-90.0, +90.0].

longitude

number

The longitude in degrees. It must be in the range [-180.0, +180.0].

SearchConfig

Search configuration for UCS search queries.

JSON representation
{
  "boostSpecs": [
    {
      object (BoostSpecs)
    }
  ],
  "filterSpecs": [
    {
      object (FilterSpecs)
    }
  ]
}
Fields
boostSpecs[]

object (BoostSpecs)

Optional. Boosting configuration for the datastores.

filterSpecs[]

object (FilterSpecs)

Optional. Filter configuration for the datastores.

BoostSpecs

Boost specifications for data stores.

JSON representation
{
  "dataStores": [
    string
  ],
  "spec": [
    {
      object (BoostSpec)
    }
  ]
}
Fields
dataStores[]

string

Optional. Data Stores where the boosting configuration is applied. The full names of the referenced data stores. Formats: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} or projects/{project}/locations/{location}/dataStores/{dataStore}.

spec[]

object (BoostSpec)

Optional. A list of boosting specifications.

BoostSpec

Boost specification to boost certain documents. A copy of google.cloud.discoveryengine.v1main.BoostSpec; field documentation is available at https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1alpha/BoostSpec

JSON representation
{
  "conditionBoostSpecs": [
    {
      object (ConditionBoostSpec)
    }
  ]
}
Fields
conditionBoostSpecs[]

object (ConditionBoostSpec)

Optional. Condition boost specifications. If a document matches multiple conditions in the specifications, boost scores from these specifications are all applied and combined in a non-linear way. Maximum number of specifications is 20.

ConditionBoostSpec

Boost applies to documents which match a condition.

JSON representation
{
  "condition": string,
  "boost": number,
  "boostControlSpec": {
    object (BoostControlSpec)
  }
}
Fields
condition

string

Optional. An expression which specifies a boost condition. The syntax and supported fields are the same as a filter expression. Examples:

  • To boost documents with document ID "doc_1" or "doc_2", and color "Red" or "Blue":
    • (id: ANY("doc_1", "doc_2")) AND (color: ANY("Red","Blue"))
boost

number

Optional. Strength of the condition boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0.

Setting to 1.0 gives the document a big promotion. However, it does not necessarily mean that the boosted document will be the top result at all times, nor that other documents will be excluded. Results could still be shown even when none of them matches the condition. And results that are significantly more relevant to the search query can still trump your heavily favored but irrelevant documents.

Setting to -1.0 gives the document a big demotion. However, results that are deeply relevant might still be shown. The document will have an uphill battle to get a fairly high ranking, but it is not blocked out completely.

Setting to 0.0 means no boost applied. The boosting condition is ignored.
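
Putting condition and boost together, a hedged example of a BoostSpecs payload that promotes two documents; the data store name, document IDs, and boost value are placeholders:

boost_specs = {
    "dataStores": [
        "projects/my-project/locations/global/dataStores/my-store"
    ],
    "spec": [{
        "conditionBoostSpecs": [{
            # Same syntax as a filter expression (see the condition field).
            "condition": '(id: ANY("doc_1", "doc_2")) AND (color: ANY("Red","Blue"))',
            # Strong promotion; 1.0 is the maximum, negative values demote.
            "boost": 0.8,
        }],
    }],
}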

boostControlSpec

object (BoostControlSpec)

Optional. Complex specification for custom ranking based on a customer-defined attribute value.

BoostControlSpec

Specification for custom ranking based on a customer-specified attribute value. It provides more controls for customized ranking than the simple (condition, boost) combination above.

JSON representation
{
  "fieldName": string,
  "attributeType": enum (AttributeType),
  "interpolationType": enum (InterpolationType),
  "controlPoints": [
    {
      object (ControlPoint)
    }
  ]
}
Fields
fieldName

string

Optional. The name of the field whose value will be used to determine the boost amount.

attributeType

enum (AttributeType)

Optional. The attribute type to be used to determine the boost amount. The attribute value can be derived from the field value of the specified fieldName. In the case of NUMERICAL it is straightforward, i.e., attributeValue = numerical_field_value. In the case of FRESHNESS, however, attributeValue = (time.now() - datetime_field_value).

interpolationType

enum (InterpolationType)

Optional. The interpolation type to be applied to connect the control points listed below.

controlPoints[]

object (ControlPoint)

Optional. The control points used to define the curve. The monotonic function (defined through the interpolationType above) passes through the control points listed here.

AttributeType

The attribute (or function) for which the custom ranking is to be applied.

Enums
ATTRIBUTE_TYPE_UNSPECIFIED Unspecified AttributeType.
NUMERICAL The value of the numerical field will be used to dynamically update the boost amount. In this case, the attributeValue (the x value) of the control point will be the actual value of the numerical field for which the boostAmount is specified.
FRESHNESS For the freshness use case the attribute value will be the duration between the current time and the date in the datetime field specified. The value must be formatted as an XSD dayTimeDuration value (a restricted subset of an ISO 8601 duration value). The pattern for this is: [nD][T[nH][nM][nS]]. E.g. 5D, 3DT12H30M, T24H.

InterpolationType

The interpolation type to be applied. Default will be linear (Piecewise Linear).

Enums
INTERPOLATION_TYPE_UNSPECIFIED Interpolation type is unspecified. In this case, it defaults to Linear.
LINEAR Piecewise linear interpolation will be applied.

ControlPoint

The control points used to define the curve. The curve defined through these control points can only be monotonically increasing or decreasing (constant values are acceptable).

JSON representation
{
  "attributeValue": string,
  "boostAmount": number
}
Fields
attributeValue

string

Optional. Can be one of: 1. The numerical field value. 2. The duration spec for freshness: The value must be formatted as an XSD dayTimeDuration value (a restricted subset of an ISO 8601 duration value). The pattern for this is: [nD][T[nH][nM][nS]].

boostAmount

number

Optional. The value, between -1 and 1, by which to boost the score if the attributeValue evaluates to the value specified above.
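
As an illustration, a freshness-based BoostControlSpec (all values hypothetical) that strongly boosts documents up to one day old and tapers the boost to zero at one week:

boost_control_spec = {
    "fieldName": "publish_date",        # hypothetical datetime field
    "attributeType": "FRESHNESS",       # attributeValue = time.now() - field value
    "interpolationType": "LINEAR",
    "controlPoints": [
        # Documents up to one day old get a strong boost...
        {"attributeValue": "1D", "boostAmount": 0.9},
        # ...decaying linearly to no boost at one week old.
        {"attributeValue": "7D", "boostAmount": 0.0},
    ],
}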

FilterSpecs

Filter specifications for data stores.

JSON representation
{
  "dataStores": [
    string
  ],
  "filter": string
}
Fields
dataStores[]

string

Optional. Data Stores where the filter configuration is applied. The full names of the referenced data stores. Formats: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} or projects/{project}/locations/{location}/dataStores/{dataStore}.

filter

string

Optional. The filter expression to be applied. Expression syntax is documented at https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata#filter-expression-syntax

OutputAudioConfig

Instructs the speech synthesizer how to generate the output audio content.

JSON representation
{
  "audioEncoding": enum (OutputAudioEncoding),
  "sampleRateHertz": integer,
  "synthesizeSpeechConfig": {
    object (SynthesizeSpeechConfig)
  }
}
Fields
audioEncoding

enum (OutputAudioEncoding)

Required. Audio encoding of the synthesized audio content.

sampleRateHertz

integer

Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).

synthesizeSpeechConfig

object (SynthesizeSpeechConfig)

Optional. Configuration of how speech should be synthesized. If not specified, Agent.text_to_speech_settings is applied.

OutputAudioEncoding

Audio encoding of the output audio format in Text-To-Speech.

Enums
OUTPUT_AUDIO_ENCODING_UNSPECIFIED Not specified.
OUTPUT_AUDIO_ENCODING_LINEAR_16 Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header.
OUTPUT_AUDIO_ENCODING_MP3 MP3 audio at 32kbps.
OUTPUT_AUDIO_ENCODING_MP3_64_KBPS MP3 audio at 64kbps.
OUTPUT_AUDIO_ENCODING_OGG_OPUS Opus encoded audio wrapped in an ogg container. The result will be a file which can be played natively on Android, and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate.
OUTPUT_AUDIO_ENCODING_MULAW 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
OUTPUT_AUDIO_ENCODING_ALAW 8-bit samples that compand 13-bit audio samples using G.711 PCMA/A-law.

DetectIntentResponse

The message returned from the DetectIntent method.

JSON representation
{
  "responseId": string,
  "queryResult": {
    object (QueryResult)
  },
  "outputAudio": string,
  "outputAudioConfig": {
    object (OutputAudioConfig)
  },
  "responseType": enum (ResponseType),
  "allowCancellation": boolean
}
Fields
responseId

string

Output only. The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.

queryResult

object (QueryResult)

The result of the conversational query.

outputAudio

string (bytes format)

The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the queryResult.response_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.

A base64-encoded string.
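
Since LINEAR16 output already contains a WAV header, saving the synthesized audio is a one-step decode. A minimal Python sketch, assuming a DetectIntentResponse already parsed into a dict:

import base64

def save_output_audio(response: dict, path: str = "reply.wav") -> None:
    # Decode the base64 outputAudio field of a DetectIntentResponse and
    # write it to disk. With OUTPUT_AUDIO_ENCODING_LINEAR_16 the bytes
    # already include a WAV header, so the file is directly playable.
    audio_b64 = response.get("outputAudio")
    if not audio_b64:
        return  # no default platform text responses -> empty audio
    with open(path, "wb") as f:
        f.write(base64.b64decode(audio_b64))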

outputAudioConfig

object (OutputAudioConfig)

The config used by the speech synthesizer to generate the output audio.

responseType

enum (ResponseType)

Response type.

allowCancellation

boolean

Indicates whether the partial response can be cancelled when a later response arrives, e.g., if the agent specified some music as a partial response, it can be cancelled.

QueryResult

Represents the result of a conversational query.

JSON representation
{
  "languageCode": string,
  "parameters": {
    object
  },
  "responseMessages": [
    {
      object (ResponseMessage)
    }
  ],
  "webhookIds": [
    string
  ],
  "webhookDisplayNames": [
    string
  ],
  "webhookLatencies": [
    string
  ],
  "webhookTags": [
    string
  ],
  "webhookStatuses": [
    {
      object (Status)
    }
  ],
  "webhookPayloads": [
    {
      object
    }
  ],
  "currentPage": {
    object (Page)
  },
  "currentFlow": {
    object (Flow)
  },
  "intent": {
    object (Intent)
  },
  "intentDetectionConfidence": number,
  "match": {
    object (Match)
  },
  "diagnosticInfo": {
    object
  },
  "generativeInfo": {
    object (GenerativeInfo)
  },
  "sentimentAnalysisResult": {
    object (SentimentAnalysisResult)
  },
  "advancedSettings": {
    object (AdvancedSettings)
  },
  "allowAnswerFeedback": boolean,
  "dataStoreConnectionSignals": {
    object (DataStoreConnectionSignals)
  },

  // Union field query can be only one of the following:
  "text": string,
  "triggerIntent": string,
  "transcript": string,
  "triggerEvent": string,
  "dtmf": {
    object (DtmfInput)
  }
  // End of list of possible types for union field query.
}
Fields
languageCode

string

The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes.

parameters

object (Struct format)

The collected session parameters.

Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: If the parameter's entity type is a composite entity, then use map; otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
  • MapValue value: If the parameter's entity type is a composite entity, then use a map from composite entity property names to property values; otherwise, use the parameter value.
responseMessages[]

object (ResponseMessage)

The list of rich messages returned to the client. Responses vary from simple text messages to more sophisticated, structured payloads used to drive complex logic.

webhookIds[]

string

The list of webhook IDs in the order of call sequence.

webhookDisplayNames[]

string

The list of webhook display names in the order of call sequence.

webhookLatencies[]

string (Duration format)

The list of webhook latencies in the order of call sequence.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

webhookTags[]

string

The list of webhook tags in the order of call sequence.

webhookStatuses[]

object (Status)

The list of webhook call statuses in the order of call sequence.

webhookPayloads[]

object (Struct format)

The list of webhook payloads in WebhookResponse.payload, in the order of call sequence. If a webhook call fails or doesn't return any payload, an empty Struct is used instead.

currentPage

object (Page)

The current Page. Some, but not all, fields are filled in this message, including but not limited to name and displayName.

currentFlow

object (Flow)

The current Flow. Some, but not all, fields are filled in this message, including but not limited to name and displayName.

intent
(deprecated)

object (Intent)

The Intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to name and displayName. This field is deprecated; use QueryResult.match instead.

intentDetectionConfidence
(deprecated)

number

The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. This field is deprecated; use QueryResult.match instead.

match

object (Match)

Intent match result; it could be an intent or an event.

diagnosticInfo

object (Struct format)

The free-form diagnostic info. For example, this field could contain webhook call latency. The fields of this data can change without notice, so you should not write code that depends on its structure.

One of the fields is called "Alternative Matched Intents", which may aid with debugging. The following describes these intent results:

  • The list is empty if no intent was matched to end-user input.
  • Only intents that are referenced in the currently active flow are included.
  • The matched intent is included.
  • Other intents that could have matched end-user input, but did not match because they are referenced by intent routes that are out of scope, are included.
  • Other intents referenced by intent routes in scope that matched end-user input but had a lower confidence score are included.
generativeInfo

object (GenerativeInfo)

The information of a query if handled by generative agent resources.

sentimentAnalysisResult

object (SentimentAnalysisResult)

The sentiment analysis result, which depends on analyzeQueryTextSentiment specified in the request.

advancedSettings

object (AdvancedSettings)

Returns the current advanced settings including IVR settings. Even though the operations configured by these settings are performed by Dialogflow, the client may need to perform special logic at the moment. For example, if Dialogflow exports audio to Google Cloud Storage, then the client may need to wait for the resulting object to appear in the bucket before proceeding.

allowAnswerFeedback

boolean

Indicates whether the thumbs up/thumbs down rating controls need to be shown for the response in the Dialogflow Messenger widget.

dataStoreConnectionSignals

object (DataStoreConnectionSignals)

Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query and DetectIntentRequest.populate_data_store_connection_signals is set to true in the request.

Union field query. The original conversational query. query can be only one of the following:
text

string

If natural language text was provided as input, this field will contain a copy of the text.

triggerIntent

string

If an intent was provided as input, this field will contain a copy of the intent identifier. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/intents/<IntentID>.

transcript

string

If natural language speech audio was provided as input, this field will contain the transcript for the audio.

triggerEvent

string

If an event was provided as input, this field will contain the name of the event.

dtmf

object (DtmfInput)

If a DTMF was provided as input, this field will contain a copy of the DtmfInput.

Match

Represents one match result of MatchIntent.

JSON representation
{
  "intent": {
    object (Intent)
  },
  "event": string,
  "parameters": {
    object
  },
  "resolvedInput": string,
  "matchType": enum (MatchType),
  "confidence": number
}
Fields
intent

object (Intent)

The Intent that matched the query. Some, but not all, fields are filled in this message, including but not limited to name and displayName. Only filled for the INTENT match type.

event

string

The event that matched the query. Filled for EVENT, NO_MATCH and NO_INPUT match types.

parameters

object (Struct format)

The collection of parameters extracted from the query.

Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

  • MapKey type: string
  • MapKey value: parameter name
  • MapValue type: If the parameter's entity type is a composite entity, then use map; otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
  • MapValue value: If the parameter's entity type is a composite entity, then use a map from composite entity property names to property values; otherwise, use the parameter value.
resolvedInput

string

Final text input which was matched during MatchIntent. This value can be different from the original input sent in the request because of spelling correction or other processing.

matchType

enum (MatchType)

Type of this Match.

confidence

number

The confidence of this match. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation.

MatchType

Type of a Match.

Enums
MATCH_TYPE_UNSPECIFIED Not specified. Should never be used.
INTENT The query was matched to an intent.
DIRECT_INTENT The query directly triggered an intent.
PARAMETER_FILLING The query was used for parameter filling.
NO_MATCH No match was found for the query.
NO_INPUT Indicates an empty query.
EVENT The query directly triggered an event.
KNOWLEDGE_CONNECTOR The query was matched to a Knowledge Connector answer.
PLAYBOOK The query was handled by a Playbook.

GenerativeInfo

Represents the information of a query if handled by generative agent resources.

JSON representation
{
  "currentPlaybooks": [
    string
  ],
  "actionTracingInfo": {
    object (Example)
  }
}
Fields
currentPlaybooks[]

string

The stack of playbooks that the conversation has currently entered, with the most recent one on the top.

actionTracingInfo

object (Example)

The actions performed by the generative playbook for the current agent response.

SentimentAnalysisResult

The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral.

JSON representation
{
  "score": number,
  "magnitude": number
}
Fields
score

number

Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).

magnitude

number

A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
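
One common way to read score and magnitude together is to treat low magnitude as neutral regardless of score. An illustrative Python sketch; the 0.25 thresholds are arbitrary, not part of the API:

def describe_sentiment(score: float, magnitude: float) -> str:
    # Low magnitude means weak overall sentiment, regardless of score.
    if magnitude < 0.25:
        return "neutral"
    if score > 0.25:
        return "positive"
    if score < -0.25:
        return "negative"
    # Strong magnitude but a near-zero score suggests mixed sentiment.
    return "mixed"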

DataStoreConnectionSignals

Data store connection feature output signals. Might be only partially filled if processing stops before the final answer. Reasons for this include, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ...

JSON representation
{
  "rewriterModelCallSignals": {
    object (RewriterModelCallSignals)
  },
  "rewrittenQuery": string,
  "searchSnippets": [
    {
      object (SearchSnippet)
    }
  ],
  "answerGenerationModelCallSignals": {
    object (AnswerGenerationModelCallSignals)
  },
  "answer": string,
  "answerParts": [
    {
      object (AnswerPart)
    }
  ],
  "citedSnippets": [
    {
      object (CitedSnippet)
    }
  ],
  "groundingSignals": {
    object (GroundingSignals)
  },
  "safetySignals": {
    object (SafetySignals)
  }
}
Fields
rewriterModelCallSignals

object (RewriterModelCallSignals)

Optional. Diagnostic info related to the rewriter model call.

rewrittenQuery

string

Optional. Rewritten string query used for search.

searchSnippets[]

object (SearchSnippet)

Optional. Search snippets included in the answer generation prompt.

answerGenerationModelCallSignals

object (AnswerGenerationModelCallSignals)

Optional. Diagnostic info related to the answer generation model call.

answer

string

Optional. The final compiled answer.

answerParts[]

object (AnswerPart)

Optional. Answer parts with relevant citations. Concatenation of the texts should add up to the answer (not counting whitespace).

citedSnippets[]

object (CitedSnippet)

Optional. Snippets cited by the answer generation model from the most to least relevant.

groundingSignals

object (GroundingSignals)

Optional. Grounding signals.

safetySignals

object (SafetySignals)

Optional. Safety check result.

RewriterModelCallSignals

Diagnostic info related to the rewriter model call.

JSON representation
{
  "renderedPrompt": string,
  "modelOutput": string,
  "model": string
}
Fields
renderedPrompt

string

Prompt as sent to the model.

modelOutput

string

Output of the generative model.

model

string

Name of the generative model. For example, "gemini-ultra", "gemini-pro", "gemini-1.5-flash" etc. Defaults to "Other" if the model is unknown.

SearchSnippet

Search snippet details.

JSON representation
{
  "documentTitle": string,
  "documentUri": string,
  "text": string
}
Fields
documentTitle

string

Title of the enclosing document.

documentUri

string

URI for the document. Present if specified for the document.

text

string

Text included in the prompt.

AnswerGenerationModelCallSignals

Diagnostic info related to the answer generation model call.

JSON representation
{
  "renderedPrompt": string,
  "modelOutput": string,
  "model": string
}
Fields
renderedPrompt

string

Prompt as sent to the model.

modelOutput

string

Output of the generative model.

model

string

Name of the generative model. For example, "gemini-ultra", "gemini-pro", "gemini-1.5-flash" etc. Defaults to "Other" if the model is unknown.

AnswerPart

Answer part with citation.

JSON representation
{
  "text": string,
  "supportingIndices": [
    integer
  ]
}
Fields
text

string

Substring of the answer.

supportingIndices[]

integer

Citations for this answer part. Indices of searchSnippets.
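
A hedged Python sketch of rendering the compiled answer with citation markers, using answerParts and supportingIndices from a DataStoreConnectionSignals dict as documented above:

def answer_with_citations(signals: dict) -> str:
    # Concatenate answerParts (which add up to the full answer) and append
    # [n] markers pointing into the searchSnippets list for each part.
    rendered = []
    for part in signals.get("answerParts", []):
        marks = "".join(f"[{i}]" for i in part.get("supportingIndices", []))
        rendered.append(part.get("text", "") + marks)
    return "".join(rendered)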

CitedSnippet

Snippet cited by the answer generation model.

JSON representation
{
  "searchSnippet": {
    object (SearchSnippet)
  },
  "snippetIndex": integer
}
Fields
searchSnippet

object (SearchSnippet)

Details of the snippet.

snippetIndex

integer

Index of the snippet in searchSnippets field.

GroundingSignals

Grounding signals.

JSON representation
{
  "decision": enum (GroundingDecision),
  "score": enum (GroundingScoreBucket)
}
Fields
decision

enum (GroundingDecision)

Represents the decision of the grounding check.

score

enum (GroundingScoreBucket)

Grounding score bucket setting.

GroundingDecision

Represents the decision of the grounding check.

Enums
GROUNDING_DECISION_UNSPECIFIED Decision not specified.
ACCEPTED_BY_GROUNDING Grounding has accepted the answer.
REJECTED_BY_GROUNDING Grounding has rejected the answer.

GroundingScoreBucket

Grounding score buckets.

Enums
GROUNDING_SCORE_BUCKET_UNSPECIFIED Score not specified.
VERY_LOW We have very low confidence that the answer is grounded.
LOW We have low confidence that the answer is grounded.
MEDIUM We have medium confidence that the answer is grounded.
HIGH We have high confidence that the answer is grounded.
VERY_HIGH We have very high confidence that the answer is grounded.

SafetySignals

Safety check results.

JSON representation
{
  "decision": enum (SafetyDecision),
  "bannedPhraseMatch": enum (BannedPhraseMatch),
  "matchedBannedPhrase": string
}
Fields
decision

enum (SafetyDecision)

Safety decision.

bannedPhraseMatch

enum (BannedPhraseMatch)

Specifies banned phrase match subject.

matchedBannedPhrase

string

The matched banned phrase if there was a match.

SafetyDecision

Safety decision. All kinds of checks are incorporated into this final decision, including the banned phrases check.

Enums
SAFETY_DECISION_UNSPECIFIED Decision not specified.
ACCEPTED_BY_SAFETY_CHECK No manual or automatic safety check fired.
REJECTED_BY_SAFETY_CHECK One or more safety checks fired.

BannedPhraseMatch

Specifies banned phrase match subject.

Enums
BANNED_PHRASE_MATCH_UNSPECIFIED No banned phrase check was executed.
BANNED_PHRASE_MATCH_NONE All banned phrase checks led to no match.
BANNED_PHRASE_MATCH_QUERY A banned phrase matched the query.
BANNED_PHRASE_MATCH_RESPONSE A banned phrase matched the response.

ResponseType

Represents different DetectIntentResponse types.

Enums
RESPONSE_TYPE_UNSPECIFIED Not specified. This should never happen.
PARTIAL Partial response. For example, aggregated responses in a fulfillment that enables return_partial_response can be returned as a partial response. WARNING: partial responses are not eligible for barge-in.
FINAL Final response.

MissingTransition

Information collected for Dialogflow CX agents when NLU predicted an intent that was filtered out as being inactive, which may indicate a missing transition and/or absent functionality.

JSON representation
{
  "intentDisplayName": string,
  "score": number
}
Fields
intentDisplayName

string

Name of the intent that could have triggered.

score

number

Score of the above intent. The higher it is, the more likely a transition was missed on a given page.

Methods

delete

Deletes the specified conversation.

get

Retrieves the specified conversation.

list

Returns the list of all conversations.
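
A minimal sketch of calling these methods over REST with the Python requests library, assuming the v3beta1 surface where this resource lives and an OAuth 2.0 access token; all IDs are placeholders:

import requests

# Regional agents (e.g. us-central1) require the matching regional
# endpoint; 'global' agents use the global endpoint shown here.
BASE = "https://dialogflow.googleapis.com/v3beta1"
HEADERS = {"Authorization": "Bearer ya29...."}  # placeholder access token

agent = "projects/my-project/locations/global/agents/my-agent"

# list: returns the list of all conversations for the agent.
listing = requests.get(f"{BASE}/{agent}/conversations", headers=HEADERS).json()

# get: retrieves one conversation; interactions are only populated here.
name = listing["conversations"][0]["name"]  # assumes at least one exists
conversation = requests.get(f"{BASE}/{name}", headers=HEADERS).json()

# delete: deletes the specified conversation.
requests.delete(f"{BASE}/{name}", headers=HEADERS)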