- Resource: Conversation
- Type
- Metrics
- QueryInputCount
- MatchTypeCount
- Interaction
- DetectIntentRequest
- QueryParameters
- LatLng
- SearchConfig
- BoostSpecs
- BoostSpec
- ConditionBoostSpec
- BoostControlSpec
- AttributeType
- InterpolationType
- ControlPoint
- FilterSpecs
- OutputAudioConfig
- OutputAudioEncoding
- DetectIntentResponse
- QueryResult
- Match
- MatchType
- GenerativeInfo
- SentimentAnalysisResult
- DataStoreConnectionSignals
- RewriterModelCallSignals
- SearchSnippet
- AnswerGenerationModelCallSignals
- AnswerPart
- CitedSnippet
- GroundingSignals
- GroundingDecision
- GroundingScoreBucket
- SafetySignals
- SafetyDecision
- BannedPhraseMatch
- ResponseType
- MissingTransition
- Methods
Resource: Conversation
Represents a conversation.
JSON representation |
---|
{ "name": string, "type": enum ( |
Fields | |
---|---|
name |
Identifier. The identifier of the conversation. If a conversation ID is reused, interactions that happen more than 48 hours after the conversation's create time will be ignored. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/conversations/<ConversationID>. |
type |
The type of the conversation. |
languageCode |
The language of the conversation, which is the language of the first request in the conversation. |
startTime |
Start time of the conversation, which is the time of the first request of the conversation. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". |
duration |
Duration of the conversation. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
metrics |
Conversation metrics. |
intents[] |
All the matched intents in the conversation. Only name and displayName are filled in this message. |
flows[] |
All the flows the conversation has gone through. Only name and displayName are filled in this message. |
pages[] |
All the pages the conversation has gone through. Only name and displayName are filled in this message. |
interactions[] |
Interactions of the conversation. Only populated for |
environment |
Environment of the conversation. Only name and displayName are filled in this message. |
flowVersions |
Flow versions used in the conversation. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. |
Type
Represents the type of a conversation.
Enums | |
---|---|
TYPE_UNSPECIFIED |
Not specified. This value should never be used. |
AUDIO |
Audio conversation. A conversation is classified as an audio conversation if any request has STT input audio or any response has TTS output audio. |
TEXT |
Text conversation. A conversation is classified as a text conversation if any request has text input and no request has STT input audio and no response has TTS output audio. |
UNDETERMINED |
Default conversation type for a conversation. A conversation is classified as undetermined if none of the requests contain text or audio input (e.g., event or intent input). |
Metrics
Represents metrics for the conversation.
JSON representation |
---|
{ "interactionCount": integer, "inputAudioDuration": string, "outputAudioDuration": string, "maxWebhookLatency": string, "hasEndInteraction": boolean, "hasLiveAgentHandoff": boolean, "averageMatchConfidence": number, "queryInputCount": { object ( |
Fields | |
---|---|
interactionCount |
The number of interactions in the conversation. |
inputAudioDuration |
Duration of all the input audio in the conversation. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
outputAudioDuration |
Duration of all the output audio in the conversation. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
maxWebhookLatency |
Maximum latency of the webhook calls in the conversation. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
hasEndInteraction |
A signal that indicates the interaction with the Dialogflow agent has ended. If any response in the conversation contains the end_interaction response message, this field is set to true. |
hasLiveAgentHandoff |
Hands off the conversation to a human agent. If any response in the conversation contains the live_agent_handoff response message, this field is set to true. |
averageMatchConfidence |
The average confidence of all of the Match results in the conversation. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). |
queryInputCount |
Query input counts. |
matchTypeCount |
Match type counts. |
QueryInputCount
Count by types of QueryInput
of the requests in the conversation.
JSON representation |
---|
{ "textCount": integer, "intentCount": integer, "audioCount": integer, "eventCount": integer, "dtmfCount": integer } |
Fields | |
---|---|
textCount |
The number of TextInput requests in the conversation. |
intentCount |
The number of IntentInput requests in the conversation. |
audioCount |
The number of AudioInput requests in the conversation. |
eventCount |
The number of EventInput requests in the conversation. |
dtmfCount |
The number of DtmfInput requests in the conversation. |
MatchTypeCount
Count by Match.MatchType
of the matches in the conversation.
JSON representation |
---|
{ "unspecifiedCount": integer, "intentCount": integer, "directIntentCount": integer, "parameterFillingCount": integer, "noMatchCount": integer, "noInputCount": integer, "eventCount": integer } |
Fields | |
---|---|
unspecifiedCount |
The number of matches with type MATCH_TYPE_UNSPECIFIED. |
intentCount |
The number of matches with type INTENT. |
directIntentCount |
The number of matches with type DIRECT_INTENT. |
parameterFillingCount |
The number of matches with type PARAMETER_FILLING. |
noMatchCount |
The number of matches with type NO_MATCH. |
noInputCount |
The number of matches with type NO_INPUT. |
eventCount |
The number of matches with type EVENT. |
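To make these count fields concrete, here is a minimal sketch of how a client might derive a no-match rate from a conversation's metrics. It assumes `conversation` is the parsed JSON of a Conversation resource as documented above; the helper and its threshold-free logic are illustrative, not part of the API.

```python
# Minimal sketch: derive a no-match rate from a Conversation's metrics,
# assuming `conversation` is the parsed JSON returned by a conversations.get call.
def no_match_rate(conversation: dict) -> float:
    metrics = conversation.get("metrics", {})
    match_counts = metrics.get("matchTypeCount", {})
    total = sum(
        match_counts.get(key, 0)
        for key in (
            "unspecifiedCount", "intentCount", "directIntentCount",
            "parameterFillingCount", "noMatchCount", "noInputCount", "eventCount",
        )
    )
    if total == 0:
        return 0.0
    return match_counts.get("noMatchCount", 0) / total

# Example with a hand-written metrics payload:
sample = {"metrics": {"matchTypeCount": {"intentCount": 8, "noMatchCount": 2}}}
print(no_match_rate(sample))  # 0.2
```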
Interaction
Represents an interaction between an end user and a Dialogflow CX agent using the V3 (Streaming)DetectIntent API, or an interaction between an end user and a Dialogflow CX agent using the V2 (Streaming)AnalyzeContent API.
JSON representation |
---|
{ "request": { object ( |
Fields | |
---|---|
request |
The request of the interaction. |
response |
The final response of the interaction. |
partialResponses[] |
The partial responses of the interaction. Empty if there is no partial response in the interaction. See the partial response documentation: https://cloud.google.com/dialogflow/cx/docs/concept/fulfillment#queue |
requestUtterances |
The input text or the transcript of the input audio in the request. |
responseUtterances |
The output text or the transcript of the output audio in the responses. If multiple output messages are returned, they will be concatenated into one. |
createTime |
The time that the interaction was created. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". |
missingTransition |
Missing transition predicted for the interaction. This field is set only if the interaction match type was no-match. |
DetectIntentRequest
The request to detect user's intent.
JSON representation |
---|
{ "session": string, "queryParams": { object ( |
Fields | |
---|---|
session |
Required. The name of the session this query is sent to. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/sessions/<SessionID> or projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/environments/<EnvironmentID>/sessions/<SessionID>. For more information, see the sessions guide. Note: Always use agent versions for production traffic. See Versions and environments. |
queryParams |
The parameters of this query. |
queryInput |
Required. The input specification. |
outputAudioConfig |
Instructs the speech synthesizer how to generate the output audio. |
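The sketch below shows one way to send such a request over REST. It is illustrative only: the project, location, agent, session, and access-token values are placeholders, the API version segment in the URL may differ for your deployment, and the shape of the text `queryInput` is not documented in this section, so treat it as an assumption.

```python
import requests

# Placeholders (hypothetical values) -- substitute your own.
PROJECT_ID, LOCATION_ID, AGENT_ID, SESSION_ID = "my-project", "global", "my-agent", "session-123"
ACCESS_TOKEN = "..."  # e.g. obtained with `gcloud auth print-access-token`

session_path = (
    f"projects/{PROJECT_ID}/locations/{LOCATION_ID}/agents/{AGENT_ID}/sessions/{SESSION_ID}"
)
url = f"https://dialogflow.googleapis.com/v3/{session_path}:detectIntent"

# Request body mirroring the DetectIntentRequest fields documented above.
body = {
    "queryInput": {
        "text": {"text": "I want to change my subscription"},  # assumed text-input shape
        "languageCode": "en",
    },
    "queryParams": {"timeZone": "America/New_York"},
    "outputAudioConfig": {"audioEncoding": "OUTPUT_AUDIO_ENCODING_LINEAR_16"},
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
print(resp.json()["queryResult"]["responseMessages"])
```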
QueryParameters
Represents the parameters of a conversational query.
JSON representation |
---|
{ "timeZone": string, "geoLocation": { object ( |
Fields | |
---|---|
timeZone |
The time zone of this conversational query from the time zone database, e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in the agent is used. |
geoLocation |
The geo location of this conversational query. |
sessionEntityTypes[] |
Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query. |
payload |
This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported. Some integrations that query a Dialogflow agent may provide additional information in the payload. In particular, for the Dialogflow Phone Gateway integration, this field has the form: { "telephony": { "caller_id": "+18558363987" } } |
parameters |
Additional parameters to be put into session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null. You can reference the session parameters in the agent with the following format: $session.params.parameter-id. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs. |
currentPage |
The unique identifier of the page to override the current page in the session. If currentPage is specified, the previous state of the session will be ignored by Dialogflow. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/flows/<FlowID>/pages/<PageID>. |
disableWebhook |
Whether to disable webhook calls for this request. |
analyzeQueryTextSentiment |
Configures whether sentiment analysis should be performed. If not provided, sentiment analysis is not performed. |
webhookHeaders |
This field can be used to pass HTTP headers for a webhook call. These headers will be sent to the webhook along with the headers that have been configured through the Dialogflow web console. The headers defined within this field will overwrite the headers configured through the Dialogflow console if there is a conflict. Header names are case-insensitive. Google's specified headers are not allowed, including "Host", "Content-Length", "Connection", "From", "User-Agent", "Accept-Encoding", "If-Modified-Since", "If-None-Match", "X-Forwarded-For", etc. An object containing a list of "key": value pairs. |
flowVersions[] |
A list of flow versions to override for the request. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/flows/<FlowID>/versions/<VersionID>. If version 1 of flow X is included in this list, the traffic of flow X will go through version 1 regardless of the version configuration in the environment. Each flow can have at most one version specified in this list. |
currentPlaybook |
Optional. Start the session with the specified playbook. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/playbooks/<PlaybookID>. |
llmModelSettings |
Optional. Use the specified LLM model settings for processing the request. |
channel |
The channel which this query is for. If specified, only the response messages associated with the channel will be returned; if no response message is associated with the channel, the response messages with an unspecified channel are returned. If unspecified, the response messages with an unspecified channel will be returned. |
sessionTtl |
Optional. Configure the lifetime of the Dialogflow session. By default, a Dialogflow session remains active and its data is stored for 30 minutes after the last request is sent for the session. This value should be no longer than 1 day. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
endUserMetadata |
Optional. Information about the end-user to improve the relevance and accuracy of generative answers. This will be interpreted and used by a language model, so, for good results, the data should be self-descriptive and in a simple structure. See the sketch after this field list for an example. |
searchConfig |
Optional. Search configuration for UCS search queries. |
populateDataStoreConnectionSignals |
Optional. If set to true and data stores are involved in serving the request, then DetectIntentResponse.query_result.data_store_connection_signals will be filled with data that can help evaluations. |
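As referenced in the endUserMetadata entry above, here is an illustrative queryParams object. All parameter names and values are hypothetical; the only API behaviors it relies on are those documented in this field list (explicit null removes a session parameter, payload is forwarded to the webhook, endUserMetadata is free-form data for the language model).

```python
# Illustrative queryParams payload (all values are hypothetical).
query_params = {
    "timeZone": "Europe/Paris",
    "parameters": {
        "order-id": "A-1042",   # referenced in the agent as $session.params.order-id
        "promo-code": None,     # explicit null removes the parameter from the session
    },
    "payload": {
        "support-tier": "premium",  # arbitrary JSON forwarded to the webhook
    },
    "endUserMetadata": {
        "subscription-plan": "Basic",
        "preferred-language": "French",
    },
    "disableWebhook": False,
}
```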
LatLng
An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
JSON representation |
---|
{ "latitude": number, "longitude": number } |
Fields | |
---|---|
latitude |
The latitude in degrees. It must be in the range [-90.0, +90.0]. |
longitude |
The longitude in degrees. It must be in the range [-180.0, +180.0]. |
SearchConfig
Search configuration for UCS search queries.
JSON representation |
---|
{ "boostSpecs": [ { object ( |
Fields | |
---|---|
boostSpecs[] |
Optional. Boosting configuration for the data stores. |
filterSpecs[] |
Optional. Filter configuration for the data stores. |
BoostSpecs
Boost specifications for data stores.
JSON representation |
---|
{ "dataStores": [ string ], "spec": [ { object (BoostSpec) } ] } |
Fields | |
---|---|
dataStores[] |
Optional. Data stores where the boosting configuration is applied. The full names of the referenced data stores. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} |
spec[] |
Optional. A list of boosting specifications. |
BoostSpec
Boost specification to boost certain documents. A copy of google.cloud.discoveryengine.v1main.BoostSpec; field documentation is available at https://cloud.google.com/generative-ai-app-builder/docs/reference/rest/v1alpha/BoostSpec
JSON representation |
---|
{ "conditionBoostSpecs": [ { object (ConditionBoostSpec) } ] } |
Fields | |
---|---|
conditionBoostSpecs[] |
Optional. Condition boost specifications. If a document matches multiple conditions in the specifications, boost scores from these specifications are all applied and combined in a non-linear way. Maximum number of specifications is 20. |
ConditionBoostSpec
Boost applies to documents which match a condition.
JSON representation |
---|
{ "condition": string, "boost": number, "boostControlSpec": { object (BoostControlSpec) } } |
Fields | |
---|---|
condition |
Optional. An expression which specifies a boost condition. The syntax and supported fields are the same as a filter expression. Example: to boost documents with document ID "doc_1" or "doc_2" and color "Red" or "Blue": (id: ANY("doc_1", "doc_2")) AND (color: ANY("Red", "Blue")) |
boost |
Optional. Strength of the condition boost, which should be in [-1, 1]. Negative boost means demotion. Default is 0.0. Setting to 1.0 gives the document a big promotion. However, it does not necessarily mean that the boosted document will be the top result at all times, nor that other documents will be excluded. Results could still be shown even when none of them matches the condition. And results that are significantly more relevant to the search query can still trump your heavily favored but irrelevant documents. Setting to -1.0 gives the document a big demotion. However, results that are deeply relevant might still be shown. The document will have an upstream battle to get a fairly high ranking, but it is not blocked out completely. Setting to 0.0 means no boost applied. The boosting condition is ignored. |
boostControlSpec |
Optional. Complex specification for custom ranking based on customer defined attribute value. |
BoostControlSpec
Specification for custom ranking based on customer specified attribute value. It provides more controls for customized ranking than the simple (condition, boost) combination above.
JSON representation |
---|
{ "fieldName": string, "attributeType": enum ( |
Fields | |
---|---|
fieldName |
Optional. The name of the field whose value will be used to determine the boost amount. |
attributeType |
Optional. The attribute type to be used to determine the boost amount. The attribute value can be derived from the field value of the specified fieldName. In the case of numerical it is straightforward, i.e. attributeValue = numerical_field_value. In the case of freshness, however, attributeValue = (time.now() - datetime_field_value). |
interpolationType |
Optional. The interpolation type to be applied to connect the control points listed below. |
controlPoints[] |
Optional. The control points used to define the curve. The monotonic function (defined through the interpolationType above) passes through the control points listed here. |
AttributeType
The attribute (or function) for which the custom ranking is to be applied.
Enums | |
---|---|
ATTRIBUTE_TYPE_UNSPECIFIED |
Unspecified AttributeType. |
NUMERICAL |
The value of the numerical field will be used to dynamically update the boost amount. In this case, the attributeValue (the x value) of the control point will be the actual value of the numerical field for which the boostAmount is specified. |
FRESHNESS |
For the freshness use case the attribute value will be the duration between the current time and the date in the datetime field specified. The value must be formatted as an XSD dayTimeDuration value (a restricted subset of an ISO 8601 duration value). The pattern for this is: [nD][T[nH][nM][nS]] . E.g. 5D , 3DT12H30M , T24H . |
InterpolationType
The interpolation type to be applied. Default will be linear (Piecewise Linear).
Enums | |
---|---|
INTERPOLATION_TYPE_UNSPECIFIED |
Interpolation type is unspecified. In this case, it defaults to Linear. |
LINEAR |
Piecewise linear interpolation will be applied. |
ControlPoint
The control points used to define the curve. The curve defined through these control points can only be monotonically increasing or decreasing (constant values are acceptable).
JSON representation |
---|
{ "attributeValue": string, "boostAmount": number } |
Fields | |
---|---|
attributeValue |
Optional. Can be one of: 1. The numerical field value. 2. The duration spec for freshness: the value must be formatted as an XSD dayTimeDuration value (a restricted subset of an ISO 8601 duration value). The pattern for this is: [nD][T[nH][nM][nS]]. |
boostAmount |
Optional. The value between -1 and 1 by which to boost the score if the attributeValue evaluates to the value specified above. |
FilterSpecs
Filter specifications for data stores.
JSON representation |
---|
{ "dataStores": [ string ], "filter": string } |
Fields | |
---|---|
dataStores[] |
Optional. Data stores where the filter configuration is applied. The full names of the referenced data stores. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} |
filter |
Optional. The filter expression to be applied. Expression syntax is documented at https://cloud.google.com/generative-ai-app-builder/docs/filter-search-metadata#filter-expression-syntax |
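The following sketch combines the structures above into one searchConfig value, with a freshness-based boost and a metadata filter. The data store name and the field names used ("publish_date", "category") are hypothetical; only the overall shape follows the BoostSpecs, BoostControlSpec, ControlPoint, and FilterSpecs definitions in this section.

```python
# Sketch of a searchConfig combining a boost and a filter (hypothetical names).
DATA_STORE = (
    "projects/my-project/locations/global/collections/default_collection/"
    "dataStores/my-datastore"
)

search_config = {
    "boostSpecs": [{
        "dataStores": [DATA_STORE],
        "spec": [{
            "conditionBoostSpecs": [{
                # Promote recent documents: freshness-based custom ranking.
                "boostControlSpec": {
                    "fieldName": "publish_date",
                    "attributeType": "FRESHNESS",
                    "interpolationType": "LINEAR",
                    "controlPoints": [
                        # Boost decreases monotonically as documents get older.
                        {"attributeValue": "7D", "boostAmount": 0.8},
                        {"attributeValue": "30D", "boostAmount": 0.2},
                    ],
                },
            }],
        }],
    }],
    "filterSpecs": [{
        "dataStores": [DATA_STORE],
        "filter": 'category: ANY("billing")',
    }],
}
```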
OutputAudioConfig
Instructs the speech synthesizer how to generate the output audio content.
JSON representation |
---|
{ "audioEncoding": enum ( |
Fields | |
---|---|
audioEncoding |
Required. Audio encoding of the synthesized audio content. |
sampleRateHertz |
Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality). |
synthesizeSpeechConfig |
Optional. Configuration of how speech should be synthesized. If not specified, the agent's text-to-speech settings are applied. |
OutputAudioEncoding
Audio encoding of the output audio format in Text-To-Speech.
Enums | |
---|---|
OUTPUT_AUDIO_ENCODING_UNSPECIFIED |
Not specified. |
OUTPUT_AUDIO_ENCODING_LINEAR_16 |
Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header. |
OUTPUT_AUDIO_ENCODING_MP3 |
MP3 audio at 32kbps. |
OUTPUT_AUDIO_ENCODING_MP3_64_KBPS |
MP3 audio at 64kbps. |
OUTPUT_AUDIO_ENCODING_OGG_OPUS |
Opus encoded audio wrapped in an ogg container. The result will be a file which can be played natively on Android, and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate. |
OUTPUT_AUDIO_ENCODING_MULAW |
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law. |
OUTPUT_AUDIO_ENCODING_ALAW |
8-bit samples that compand 13-bit audio samples using G.711 PCMA/A-law. |
DetectIntentResponse
The message returned from the DetectIntent method.
JSON representation |
---|
{ "responseId": string, "queryResult": { object ( |
Fields | |
---|---|
responseId |
Output only. The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues. |
queryResult |
The result of the conversational query. |
outputAudio |
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of the default platform text responses found in the queryResult.responseMessages field. In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content. A base64-encoded string. |
outputAudioConfig |
The config used by the speech synthesizer to generate the output audio. |
responseType |
Response type. |
allowCancellation |
Indicates whether the partial response can be cancelled when a later response arrives, e.g., if the agent specified some music as a partial response, it can be cancelled. |
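Here is a small sketch of how a client might read the most commonly used parts of a parsed DetectIntentResponse. The helper name is arbitrary, and the inner shape of text response messages (a "text" object carrying a list of strings) is not documented in this section, so treat it as an assumption.

```python
# Sketch: pull the plain-text replies and the match result out of a parsed
# DetectIntentResponse (`response` is the JSON body returned by detectIntent).
def summarize_response(response: dict) -> dict:
    query_result = response.get("queryResult", {})
    texts = []
    for message in query_result.get("responseMessages", []):
        # Assumed shape: text response messages carry {"text": {"text": [..]}}.
        if "text" in message:
            texts.extend(message["text"].get("text", []))
    match = query_result.get("match", {})
    return {
        "responseId": response.get("responseId"),
        "replies": texts,
        "matchType": match.get("matchType"),
        "confidence": match.get("confidence"),
    }
```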
QueryResult
Represents the result of a conversational query.
JSON representation |
---|
{ "languageCode": string, "parameters": { object }, "responseMessages": [ { object ( |
Fields | |
---|---|
languageCode |
The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes. |
parameters |
The collected session parameters. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs. |
responseMessages[] |
The list of rich messages returned to the client. Responses vary from simple text messages to more sophisticated, structured payloads used to drive complex logic. |
webhookIds[] |
The list of webhook IDs in the order of call sequence. |
webhookDisplayNames[] |
The list of webhook display names in the order of call sequence. |
webhookLatencies[] |
The list of webhook latencies in the order of call sequence. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
webhookTags[] |
The list of webhook tags in the order of call sequence. |
webhookStatuses[] |
The list of webhook call statuses in the order of call sequence. |
webhookPayloads[] |
The list of webhook payloads in the order of call sequence. |
currentPage |
The current Page. Some, not all fields are filled in this message, including but not limited to name and displayName. |
currentFlow |
The current Flow. Some, not all fields are filled in this message, including but not limited to name and displayName. |
intent |
The Intent that matched the conversational query. Some, not all fields are filled in this message, including but not limited to name and displayName. This field is deprecated; please use the match field instead. |
intentDetectionConfidence |
The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. This field is deprecated; please use the match field instead. |
match |
Intent match result, could be an intent or an event. |
diagnosticInfo |
The free-form diagnostic info. For example, this field could contain webhook call latency. The fields of this data can change without notice, so you should not write code that depends on its structure. One of the fields is called "Alternative Matched Intents", which may aid with debugging. |
generativeInfo |
The information of a query if handled by generative agent resources. |
sentimentAnalysisResult |
The sentiment analysis result, which depends on analyzeQueryTextSentiment specified in the request. |
advancedSettings |
Returns the current advanced settings including IVR settings. Even though the operations configured by these settings are performed by Dialogflow, the client may need to perform special logic at the moment. For example, if Dialogflow exports audio to Google Cloud Storage, then the client may need to wait for the resulting object to appear in the bucket before proceeding. |
allowAnswerFeedback |
Indicates whether the Thumbs up/Thumbs down rating controls need to be shown for the response in the Dialogflow Messenger widget. |
dataStoreConnectionSignals |
Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query and DetectIntentRequest.populate_data_store_connection_signals is set to true in the request. |
Union field query. The original conversational query. query can be only one of the following: |
text |
If natural language text was provided as input, this field will contain a copy of the text. |
triggerIntent |
If an intent was provided as input, this field will contain a copy of the intent identifier. |
transcript |
If natural language speech audio was provided as input, this field will contain the transcript for the audio. |
triggerEvent |
If an event was provided as input, this field will contain the name of the event. |
dtmf |
If a DTMF input was provided as input, this field will contain a copy of the DTMF input. |
Match
Represents one match result of MatchIntent.
JSON representation |
---|
{ "intent": { object ( |
Fields | |
---|---|
intent |
The Intent that matched the query. Some, not all fields are filled in this message, including but not limited to name and displayName. Only filled for INTENT match type. |
event |
The event that matched the query. Filled for EVENT, NO_MATCH and NO_INPUT match types. |
parameters |
The collection of parameters extracted from the query. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs. |
resolvedInput |
Final text input which was matched during MatchIntent. This value can be different from the original input sent in the request because of spelling correction or other processing. |
matchType |
Type of this Match. |
confidence |
The confidence of this match. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. |
MatchType
Type of a Match.
Enums | |
---|---|
MATCH_TYPE_UNSPECIFIED |
Not specified. Should never be used. |
INTENT |
The query was matched to an intent. |
DIRECT_INTENT |
The query directly triggered an intent. |
PARAMETER_FILLING |
The query was used for parameter filling. |
NO_MATCH |
No match was found for the query. |
NO_INPUT |
Indicates an empty query. |
EVENT |
The query directly triggered an event. |
KNOWLEDGE_CONNECTOR |
The query was matched to a Knowledge Connector answer. |
PLAYBOOK |
The query was handled by a playbook. |
GenerativeInfo
Represents the information of a query if handled by generative agent resources.
JSON representation |
---|
{ "currentPlaybooks": [ string ], "actionTracingInfo": { object (Example) } } |
Fields | |
---|---|
currentPlaybooks[] |
The stack of playbooks that the conversation has currently entered, with the most recent one on the top. |
actionTracingInfo |
The actions performed by the generative playbook for the current agent response. |
SentimentAnalysisResult
The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral.
JSON representation |
---|
{ "score": number, "magnitude": number } |
Fields | |
---|---|
score |
Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment). |
magnitude |
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative). |
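As a small illustration of how the two fields combine, here is a sketch that buckets a SentimentAnalysisResult into a label. The helper and the 0.25 threshold are arbitrary illustrative choices, not part of the API.

```python
# Sketch: bucket a SentimentAnalysisResult into a label (threshold is illustrative).
def sentiment_label(result: dict) -> str:
    score = result.get("score", 0.0)          # -1.0 (negative) .. 1.0 (positive)
    magnitude = result.get("magnitude", 0.0)  # overall strength of sentiment
    if magnitude == 0.0:
        return "neutral"
    if score >= 0.25:
        return "positive"
    if score <= -0.25:
        return "negative"
    return "mixed/neutral"

print(sentiment_label({"score": -0.6, "magnitude": 1.2}))  # negative
```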
DataStoreConnectionSignals
Data store connection feature output signals. Might be only partially filled if processing stops before the final answer is generated. Reasons for this can include, but are not limited to: empty UCS search results, a positive RAI check outcome, or grounding failure.
JSON representation |
---|
{ "rewriterModelCallSignals": { object ( |
Fields | |
---|---|
rewriterModelCallSignals |
Optional. Diagnostic info related to the rewriter model call. |
rewrittenQuery |
Optional. Rewritten string query used for search. |
searchSnippets[] |
Optional. Search snippets included in the answer generation prompt. |
answerGenerationModelCallSignals |
Optional. Diagnostic info related to the answer generation model call. |
answer |
Optional. The final compiled answer. |
answerParts[] |
Optional. Answer parts with relevant citations. Concatenation of their texts should add up to the full answer. |
citedSnippets[] |
Optional. Snippets cited by the answer generation model, from most to least relevant. |
groundingSignals |
Optional. Grounding signals. |
safetySignals |
Optional. Safety check result. |
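The sketch below shows how a client might request these signals and inspect them. It reuses the detectIntent request sketch from earlier in this section; the helper name is arbitrary, and the field shapes follow the DataStoreConnectionSignals documentation above.

```python
# Sketch: request evaluation signals and inspect grounding/safety outcomes.
body = {
    "queryInput": {"text": {"text": "What is your refund policy?"}, "languageCode": "en"},
    "queryParams": {"populateDataStoreConnectionSignals": True},
}
# ... send `body` with requests.post as in the detectIntent sketch above ...

def inspect_signals(query_result: dict) -> None:
    signals = query_result.get("dataStoreConnectionSignals", {})
    print("Rewritten query:", signals.get("rewrittenQuery"))
    print("Answer:", signals.get("answer"))
    grounding = signals.get("groundingSignals", {})
    print("Grounding decision:", grounding.get("decision"),
          "bucket:", grounding.get("scoreBucket"))
    safety = signals.get("safetySignals", {})
    print("Safety decision:", safety.get("decision"))
    for snippet in signals.get("citedSnippets", []):
        doc = snippet.get("searchSnippet", {})
        print("Cited:", doc.get("documentTitle"), doc.get("documentUri"))
```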
RewriterModelCallSignals
Diagnostic info related to the rewriter model call.
JSON representation |
---|
{ "renderedPrompt": string, "modelOutput": string, "model": string } |
Fields | |
---|---|
renderedPrompt |
Prompt as sent to the model. |
modelOutput |
Output of the generative model. |
model |
Name of the generative model. For example, "gemini-ultra", "gemini-pro", "gemini-1.5-flash", etc. Defaults to "Other" if the model is unknown. |
SearchSnippet
Search snippet details.
JSON representation |
---|
{ "documentTitle": string, "documentUri": string, "text": string } |
Fields | |
---|---|
documentTitle |
Title of the enclosing document. |
documentUri |
URI for the document. Present if specified for the document. |
text |
Text included in the prompt. |
AnswerGenerationModelCallSignals
Diagnostic info related to the answer generation model call.
JSON representation |
---|
{ "renderedPrompt": string, "modelOutput": string, "model": string } |
Fields | |
---|---|
renderedPrompt |
Prompt as sent to the model. |
modelOutput |
Output of the generative model. |
model |
Name of the generative model. For example, "gemini-ultra", "gemini-pro", "gemini-1.5-flash", etc. Defaults to "Other" if the model is unknown. |
AnswerPart
Answer part with citation.
JSON representation |
---|
{ "text": string, "supportingIndices": [ integer ] } |
Fields | |
---|---|
text |
Substring of the answer. |
supportingIndices[] |
Citations for this answer part. Indices of |
CitedSnippet
Snippet cited by the answer generation model.
JSON representation |
---|
{ "searchSnippet": { object (SearchSnippet) }, "snippetIndex": integer } |
Fields | |
---|---|
searchSnippet |
Details of the snippet. |
snippetIndex |
Index of the snippet in the searchSnippets field. |
GroundingSignals
Grounding signals.
JSON representation |
---|
{ "decision": enum ( |
Fields | |
---|---|
decision |
Represents the decision of the grounding check. |
scoreBucket |
Grounding score bucket setting. |
GroundingDecision
Represents the decision of the grounding check.
Enums | |
---|---|
GROUNDING_DECISION_UNSPECIFIED |
Decision not specified. |
ACCEPTED_BY_GROUNDING |
Grounding has accepted the answer. |
REJECTED_BY_GROUNDING |
Grounding has rejected the answer. |
GroundingScoreBucket
Grounding score buckets.
Enums | |
---|---|
GROUNDING_SCORE_BUCKET_UNSPECIFIED |
Score not specified. |
VERY_LOW |
We have very low confidence that the answer is grounded. |
LOW |
We have low confidence that the answer is grounded. |
MEDIUM |
We have medium confidence that the answer is grounded. |
HIGH |
We have high confidence that the answer is grounded. |
VERY_HIGH |
We have very high confidence that the answer is grounded. |
SafetySignals
Safety check results.
JSON representation |
---|
{ "decision": enum ( |
Fields | |
---|---|
decision |
Safety decision. |
bannedPhraseMatch |
Specifies banned phrase match subject. |
matchedBannedPhrase |
The matched banned phrase if there was a match. |
SafetyDecision
Safety decision. All kinds of check are incorporated into this final decision, including banned phrases check.
Enums | |
---|---|
SAFETY_DECISION_UNSPECIFIED |
Decision not specified. |
ACCEPTED_BY_SAFETY_CHECK |
No manual or automatic safety check fired. |
REJECTED_BY_SAFETY_CHECK |
One or more safety checks fired. |
BannedPhraseMatch
Specifies banned phrase match subject.
Enums | |
---|---|
BANNED_PHRASE_MATCH_UNSPECIFIED |
No banned phrase check was executed. |
BANNED_PHRASE_MATCH_NONE |
All banned phrase checks led to no match. |
BANNED_PHRASE_MATCH_QUERY |
A banned phrase matched the query. |
BANNED_PHRASE_MATCH_RESPONSE |
A banned phrase matched the response. |
ResponseType
Represents different DetectIntentResponse types.
Enums | |
---|---|
RESPONSE_TYPE_UNSPECIFIED |
Not specified. This should never happen. |
PARTIAL |
Partial response. e.g. Aggregated responses in a Fulfillment that enables return_partial_response can be returned as partial response. WARNING: partial response is not eligible for barge-in. |
FINAL |
Final response. |
MissingTransition
Information collected for Dialogflow CX agents when NLU predicted an intent that was filtered out as inactive, which may indicate a missing transition and/or absent functionality.
JSON representation |
---|
{ "intentDisplayName": string, "score": number } |
Fields | |
---|---|
intentDisplayName |
Name of the intent that could have triggered. |
score |
Score of the above intent. The higher it is, the more likely a transition was missed on a given page. |
Methods | |
---|---|
delete |
Deletes the specified conversation. |
get |
Retrieves the specified conversation. |
list |
Returns the list of all conversations. |
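For completeness, here is a sketch of calling the list and get methods over REST. The v3beta1 path segment is an assumption about where the Conversation resource is exposed (adjust to the API version you use), and the project, agent, and token values are placeholders.

```python
import requests

# Sketch: list and fetch conversations over REST (paths and values are assumptions).
BASE = "https://dialogflow.googleapis.com/v3beta1"
AGENT = "projects/my-project/locations/global/agents/my-agent"
HEADERS = {"Authorization": "Bearer ..."}  # e.g. from `gcloud auth print-access-token`

# Returns the list of all conversations for the agent.
listing = requests.get(f"{BASE}/{AGENT}/conversations", headers=HEADERS)
listing.raise_for_status()
conversations = listing.json().get("conversations", [])

# Retrieves the first conversation, including its metrics.
if conversations:
    name = conversations[0]["name"]
    detail = requests.get(f"{BASE}/{name}", headers=HEADERS).json()
    print(detail.get("type"), detail.get("metrics", {}).get("interactionCount"))
```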