StreamingAnalyzeContentResponse(
mapping=None, *, ignore_unknown_fields=False, **kwargs
)
The top-level message returned from the StreamingAnalyzeContent
method.

Multiple response messages can be returned in order:

- If the input was set to streaming audio, the first one or more messages
  contain `recognition_result`. Each `recognition_result` represents a more
  complete transcript of what the user said. The last `recognition_result`
  has `is_final` set to `true`.
- In the virtual agent stage: if `enable_partial_automated_agent_reply` is
  true, the following N (currently 1 <= N <= 4) messages contain
  `automated_agent_reply` and optionally `reply_audio` returned by the
  virtual agent. The first (N-1) `automated_agent_reply` messages will have
  `automated_agent_reply_type` set to `PARTIAL`. The last
  `automated_agent_reply` has `automated_agent_reply_type` set to `FINAL`.
  If `enable_partial_automated_agent_reply` is not enabled, the response
  stream only contains the final reply.
- In the human assist stage: the following N (N >= 1) messages contain
  `human_agent_suggestion_results`, `end_user_suggestion_results`, or
  `message`.
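The ordering above can be sketched as a consumption loop. The snippet below is a minimal illustration using stand-in dataclasses rather than the real `google.cloud.dialogflow_v2.types` classes; in practice each response comes from iterating the stream returned by the client's `streaming_analyze_content` call.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in types for illustration only; the real classes live in
# google.cloud.dialogflow_v2.types and carry many more fields.
@dataclass
class RecognitionResult:
    transcript: str
    is_final: bool

@dataclass
class AutomatedAgentReply:
    reply_text: str
    automated_agent_reply_type: str  # "PARTIAL" or "FINAL"

@dataclass
class Response:
    recognition_result: Optional[RecognitionResult] = None
    automated_agent_reply: Optional[AutomatedAgentReply] = None

def consume(stream):
    """Walk the response stream in the order described above."""
    final_transcript = None
    partial_replies, final_reply = [], None
    for response in stream:
        if response.recognition_result:
            # Each result is a more complete transcript; the last one
            # in the stream has is_final set to true.
            if response.recognition_result.is_final:
                final_transcript = response.recognition_result.transcript
        elif response.automated_agent_reply:
            reply = response.automated_agent_reply
            if reply.automated_agent_reply_type == "PARTIAL":
                partial_replies.append(reply.reply_text)
            else:  # FINAL: the last automated_agent_reply in the stream
                final_reply = reply.reply_text
    return final_transcript, partial_replies, final_reply
```

Note that when `enable_partial_automated_agent_reply` is not enabled, the `PARTIAL` branch never fires and only the `FINAL` reply arrives.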
Attributes

| Name | Type | Description |
|---|---|---|
| `recognition_result` | `google.cloud.dialogflow_v2.types.StreamingRecognitionResult` | The result of speech recognition. |
| `reply_text` | `str` | The output text content. This field is set if an automated agent responded with text for the user. |
| `reply_audio` | `google.cloud.dialogflow_v2.types.OutputAudio` | The audio data bytes encoded as specified in the request. This field is set if the `reply_audio_config` field is specified in the request and the automated agent that this output comes from responded with audio. In that case, the `reply_audio.config` field contains the settings used to synthesize the speech. In some scenarios, multiple output audio fields may be present in the response structure; in these cases, only the top-most-level audio output has content. |
| `automated_agent_reply` | `google.cloud.dialogflow_v2.types.AutomatedAgentReply` | Note that in [AutomatedAgentReply.DetectIntentResponse][], [Sessions.DetectIntentResponse.output_audio][] and [Sessions.DetectIntentResponse.output_audio_config][] are always empty; use `reply_audio` instead. |
| `message` | `google.cloud.dialogflow_v2.types.Message` | Message analyzed by CCAI. |
| `human_agent_suggestion_results` | `MutableSequence[google.cloud.dialogflow_v2.types.SuggestionResult]` | The suggestions for the most recent human agent. The order is the same as `HumanAgentAssistantConfig.SuggestionConfig.feature_configs` of `HumanAgentAssistantConfig.human_agent_suggestion_config`. |
| `end_user_suggestion_results` | `MutableSequence[google.cloud.dialogflow_v2.types.SuggestionResult]` | The suggestions for the end user. The order is the same as `HumanAgentAssistantConfig.SuggestionConfig.feature_configs` of `HumanAgentAssistantConfig.end_user_suggestion_config`. |
| `dtmf_parameters` | `google.cloud.dialogflow_v2.types.DtmfParameters` | Indicates the parameters of DTMF. |
| `debugging_info` | `google.cloud.dialogflow_v2.types.CloudConversationDebuggingInfo` | Debugging info that gets populated when `StreamingAnalyzeContentRequest.enable_debugging_info` is set to true. |
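Per the `reply_audio` and `automated_agent_reply` notes above, the nested `DetectIntentResponse.output_audio` is always empty and only the top-most-level audio field carries content. A minimal sketch of selecting the right audio bytes, using hypothetical stand-in objects rather than the real response types:

```python
def select_audio(response):
    """Return the playable audio bytes from a streaming response.

    Prefer the top-level reply_audio field; per the reference above,
    the output_audio nested inside automated_agent_reply's
    DetectIntentResponse is documented as always empty.
    """
    reply_audio = getattr(response, "reply_audio", None)
    if reply_audio is not None and reply_audio.audio:
        return reply_audio.audio
    return None

# Hypothetical stand-ins mimicking the response shape for illustration.
class FakeAudio:
    def __init__(self, audio):
        self.audio = audio

class FakeResponse:
    def __init__(self, reply_audio=None):
        self.reply_audio = reply_audio
```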