public sealed class StreamingAnalyzeContentRequest : IMessage<StreamingAnalyzeContentRequest>, IEquatable<StreamingAnalyzeContentRequest>, IDeepCloneable<StreamingAnalyzeContentRequest>, IBufferMessage, IMessage
Reference documentation and code samples for the Google Cloud Dialogflow v2beta1 API class StreamingAnalyzeContentRequest.
The top-level message sent by the client to the [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2beta1.Participants.StreamingAnalyzeContent] method.
Multiple request messages should be sent in order:

1. The first message must contain [participant][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.participant], [config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.config] and optionally [query_params][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.query_params]. If you want to receive an audio response, it should also contain [reply_audio_config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.reply_audio_config]. The message must not contain [input][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.input].
2. If [config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.config] in the first message was set to [audio_config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.audio_config], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.input_audio] to continue with Speech recognition. If you decide to analyze text input instead after you have already started Speech recognition, send a message with [StreamingAnalyzeContentRequest.input_text][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.input_text]. However, note that:
   - Dialogflow will bill you for the audio so far.
   - Dialogflow discards all Speech recognition results in favor of the text input.
3. If [StreamingAnalyzeContentRequest.config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.config] in the first message was set to [StreamingAnalyzeContentRequest.text_config][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.text_config], then the second message must contain only [input_text][google.cloud.dialogflow.v2beta1.StreamingAnalyzeContentRequest.input_text]. Moreover, you must not send more than two messages.

After you have sent all input, you must half-close or abort the request stream.
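The sketch below walks through that ordering for the audio case: a first message carrying the participant, audio config, and an optional reply audio config, followed by audio-only messages, a half-close, and then the responses. It is a minimal illustration, assuming the generated ParticipantsClient bidirectional-streaming surface (StreamingAnalyzeContent, WriteAsync, WriteCompleteAsync, GetResponseStream); the participant name, file path, and audio parameters are placeholders.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;
using Google.Protobuf;
using System;
using System.IO;
using System.Threading.Tasks;

public static class StreamingAnalyzeContentSketch
{
    public static async Task StreamAudioAsync(string participantName, string audioFile)
    {
        ParticipantsClient client = await ParticipantsClient.CreateAsync();
        ParticipantsClient.StreamingAnalyzeContentStream call = client.StreamingAnalyzeContent();

        // 1. First message: participant + config (+ optional reply_audio_config), no input.
        await call.WriteAsync(new StreamingAnalyzeContentRequest
        {
            Participant = participantName,
            AudioConfig = new InputAudioConfig
            {
                AudioEncoding = AudioEncoding.Linear16,
                SampleRateHertz = 16000,
                LanguageCode = "en-US"
            },
            ReplyAudioConfig = new OutputAudioConfig
            {
                AudioEncoding = OutputAudioEncoding.Linear16
            }
        });

        // 2. Subsequent messages: audio chunks only (no more than 1 minute in total).
        byte[] buffer = new byte[4096];
        using (FileStream audio = File.OpenRead(audioFile))
        {
            int read;
            while ((read = await audio.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await call.WriteAsync(new StreamingAnalyzeContentRequest
                {
                    InputAudio = ByteString.CopyFrom(buffer, 0, read)
                });
            }
        }

        // 3. Half-close once all input has been sent, then read the responses.
        await call.WriteCompleteAsync();
        await foreach (StreamingAnalyzeContentResponse response in call.GetResponseStream())
        {
            Console.WriteLine(response.ReplyText);
        }
    }
}
```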
Implements
IMessage<StreamingAnalyzeContentRequest>, IEquatable<StreamingAnalyzeContentRequest>, IDeepCloneable<StreamingAnalyzeContentRequest>, IBufferMessage, IMessage
Namespace
Google.Cloud.Dialogflow.V2Beta1
Assembly
Google.Cloud.Dialogflow.V2Beta1.dll
Constructors
StreamingAnalyzeContentRequest()
public StreamingAnalyzeContentRequest()
StreamingAnalyzeContentRequest(StreamingAnalyzeContentRequest)
public StreamingAnalyzeContentRequest(StreamingAnalyzeContentRequest other)
Parameter

| Name | Description |
| --- | --- |
| other | StreamingAnalyzeContentRequest |
Properties
AssistQueryParams
public AssistQueryParameters AssistQueryParams { get; set; }
Parameters for a human assist query.
Property Value

| Type | Description |
| --- | --- |
| AssistQueryParameters | |
AudioConfig
public InputAudioConfig AudioConfig { get; set; }
Instructs the speech recognizer how to process the speech audio.
Property Value

| Type | Description |
| --- | --- |
| InputAudioConfig | |
ConfigCase
public StreamingAnalyzeContentRequest.ConfigOneofCase ConfigCase { get; }
Property Value

| Type | Description |
| --- | --- |
| StreamingAnalyzeContentRequest.ConfigOneofCase | |
CxCurrentPage
public string CxCurrentPage { get; set; }
The unique identifier of the CX page to override the current_page in the session.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/pages/<Page ID>.
If cx_current_page is specified, the previous state of the session will be ignored by Dialogflow CX, including the [previous page][QueryResult.current_page] and the [previous session parameters][QueryResult.parameters]. In most cases, cx_current_page and cx_parameters should be configured together to direct a session to a specific state.
Note: this field should only be used if you are connecting to a Dialogflow CX agent.
Property Value

| Type | Description |
| --- | --- |
| string | |
CxParameters
public Struct CxParameters { get; set; }
Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null.
Note: this field should only be used if you are connecting to a Dialogflow CX agent.
Property Value

| Type | Description |
| --- | --- |
| Struct | |
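Because cx_current_page and cx_parameters are normally configured together, here is a minimal sketch of a text-mode first message that overrides the CX page and seeds the session parameters it expects; the resource IDs and parameter names are placeholders, not values defined by this API.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;
using Google.Protobuf.WellKnownTypes;

// First request of a text-mode stream that points the CX session at a specific
// page and seeds the parameters that page expects. All IDs are placeholders.
var request = new StreamingAnalyzeContentRequest
{
    Participant = "projects/my-project/locations/us-central1/conversations/my-conversation/participants/my-participant",
    TextConfig = new InputTextConfig { LanguageCode = "en-US" },
    CxCurrentPage = "projects/my-project/locations/us-central1/agents/my-agent/flows/my-flow/pages/my-page",
    CxParameters = new Struct
    {
        Fields =
        {
            ["order-id"] = Value.ForString("12345"),
            // Setting a parameter's value to null removes it from the session.
            ["obsolete-param"] = Value.ForNull()
        }
    }
};
```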
EnableDebuggingInfo
public bool EnableDebuggingInfo { get; set; }
If true, StreamingAnalyzeContentResponse.debugging_info will be populated.
Property Value

| Type | Description |
| --- | --- |
| bool | |
EnableExtendedStreaming
public bool EnableExtendedStreaming { get; set; }
Optional. Enable full bidirectional streaming. You can keep streaming the audio until timeout, and there's no need to half-close the stream to get the response.
Restrictions:
- Timeout: 3 mins.
- Audio Encoding: only supports [AudioEncoding.AUDIO_ENCODING_LINEAR_16][google.cloud.dialogflow.v2beta1.AudioEncoding.AUDIO_ENCODING_LINEAR_16] and [AudioEncoding.AUDIO_ENCODING_MULAW][google.cloud.dialogflow.v2beta1.AudioEncoding.AUDIO_ENCODING_MULAW]
- Lifecycle: the conversation should be in the Assist Stage; see [Conversation.CreateConversation][] for more information.

An InvalidArgument error will be returned if any of the restriction checks fails.
You can find more details at https://cloud.google.com/agent-assist/docs/extended-streaming
Property Value

| Type | Description |
| --- | --- |
| bool | |
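A minimal sketch of the first request for an extended-streaming session, using the Linear16 variant of the two supported encodings; the participant name and audio parameters are placeholders.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;

// First message of an extended-streaming call: one of the two supported audio
// encodings plus EnableExtendedStreaming. Audio chunks follow as usual, but the
// stream does not need to be half-closed to start receiving responses.
var firstRequest = new StreamingAnalyzeContentRequest
{
    Participant = "projects/my-project/conversations/my-conversation/participants/my-participant",
    AudioConfig = new InputAudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16,   // or AudioEncoding.Mulaw
        SampleRateHertz = 8000,
        LanguageCode = "en-US"
    },
    EnableExtendedStreaming = true
};
```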
EnablePartialAutomatedAgentReply
public bool EnablePartialAutomatedAgentReply { get; set; }
Enable partial virtual agent responses. If this flag is not enabled, the response stream still contains only one final response, even if some Fulfillments in the Dialogflow virtual agent have been configured to return partial responses.
Property Value

| Type | Description |
| --- | --- |
| bool | |
HasInputAudio
public bool HasInputAudio { get; }
Gets whether the "input_audio" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
HasInputEvent
public bool HasInputEvent { get; }
Gets whether the "input_event" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
HasInputIntent
public bool HasInputIntent { get; }
Gets whether the "input_intent" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
HasInputText
public bool HasInputText { get; }
Gets whether the "input_text" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
InputAudio
public ByteString InputAudio { get; set; }
The input audio content to be recognized. Must be sent if audio_config is set in the first message. The complete audio over all streaming messages must not exceed 1 minute.
Property Value

| Type | Description |
| --- | --- |
| ByteString | |
InputCase
public StreamingAnalyzeContentRequest.InputOneofCase InputCase { get; }
Property Value

| Type | Description |
| --- | --- |
| StreamingAnalyzeContentRequest.InputOneofCase | |
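The oneof case properties make it easy to see which, if any, of the input fields is populated. A small sketch using InputCase; the helper name DescribeInput is illustrative only.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;

// Report which member of the "input" oneof a request carries.
static string DescribeInput(StreamingAnalyzeContentRequest request) =>
    request.InputCase switch
    {
        StreamingAnalyzeContentRequest.InputOneofCase.InputAudio => $"{request.InputAudio.Length} audio bytes",
        StreamingAnalyzeContentRequest.InputOneofCase.InputText => $"text: {request.InputText}",
        StreamingAnalyzeContentRequest.InputOneofCase.InputEvent => $"event: {request.InputEvent}",
        StreamingAnalyzeContentRequest.InputOneofCase.InputDtmf => "DTMF digits",
        StreamingAnalyzeContentRequest.InputOneofCase.InputIntent => $"intent: {request.InputIntent}",
        _ => "no input set"
    };
```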
InputDtmf
public TelephonyDtmfEvents InputDtmf { get; set; }
The DTMF digits used to invoke an intent and fill in parameter values.
This input is ignored if the previous response indicated that DTMF input is not accepted.
Property Value

| Type | Description |
| --- | --- |
| TelephonyDtmfEvents | |
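A hedged sketch of a mid-stream request carrying DTMF digits; it assumes the generated TelephonyDtmf enum exposes values named in the DtmfOne/DtmfTwo style.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;

// Mid-stream message carrying the caller's DTMF digits ("1" then "2") instead of audio.
var dtmfRequest = new StreamingAnalyzeContentRequest
{
    InputDtmf = new TelephonyDtmfEvents
    {
        DtmfEvents = { TelephonyDtmf.DtmfOne, TelephonyDtmf.DtmfTwo }
    }
};
```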
InputEvent
public string InputEvent { get; set; }
The input event name. This can only be sent once, and it cancels any ongoing speech recognition.
Property Value

| Type | Description |
| --- | --- |
| string | |
InputIntent
public string InputIntent { get; set; }
The intent to be triggered on the V3 agent.
Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/intents/<Intent ID>.
Property Value

| Type | Description |
| --- | --- |
| string | |
InputText
public string InputText { get; set; }
The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once, and it cancels any ongoing speech recognition.
Property Value

| Type | Description |
| --- | --- |
| string | |
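A short continuation of the streaming sketch in the class remarks, switching from audio to a single text message; call stands for the open ParticipantsClient.StreamingAnalyzeContentStream from that sketch, and the text is a placeholder.

```csharp
// Replace ongoing speech recognition with a single text input; Dialogflow
// discards the recognition results so far and bills for the audio already sent.
await call.WriteAsync(new StreamingAnalyzeContentRequest
{
    InputText = "I'd like to check my order status."
});
await call.WriteCompleteAsync();
```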
Participant
public string Participant { get; set; }
Required. The name of the participant this text comes from.
Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.
Property Value

| Type | Description |
| --- | --- |
| string | |
ParticipantAsParticipantName
public ParticipantName ParticipantAsParticipantName { get; set; }
ParticipantName-typed view over the Participant resource name property.
Property Value

| Type | Description |
| --- | --- |
| ParticipantName | |
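A minimal sketch using the typed view instead of formatting the resource name by hand; it assumes the ParticipantName.FromProjectConversationParticipant factory generated for the projects/{project}/conversations/{conversation}/participants/{participant} pattern, and the IDs are placeholders.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;

// Build the participant resource name with the strongly typed helper; the
// string-valued Participant property is populated from it automatically.
var request = new StreamingAnalyzeContentRequest
{
    ParticipantAsParticipantName = ParticipantName.FromProjectConversationParticipant(
        "my-project", "my-conversation", "my-participant"),
    TextConfig = new InputTextConfig { LanguageCode = "en-US" }
};
// request.Participant is now
// "projects/my-project/conversations/my-conversation/participants/my-participant".
```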
QueryParams
public QueryParameters QueryParams { get; set; }
Parameters for a Dialogflow virtual-agent query.
Property Value

| Type | Description |
| --- | --- |
| QueryParameters | |
ReplyAudioConfig
public OutputAudioConfig ReplyAudioConfig { get; set; }
Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.
Property Value

| Type | Description |
| --- | --- |
| OutputAudioConfig | |
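A hedged sketch of a first message that opts into synthesized audio replies; the participant name and synthesis settings are placeholders, and omitting ReplyAudioConfig would leave speech synthesis disabled for the call.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;

// First message requesting synthesized audio replies alongside the text results.
var firstRequest = new StreamingAnalyzeContentRequest
{
    Participant = "projects/my-project/conversations/my-conversation/participants/my-participant",
    AudioConfig = new InputAudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        LanguageCode = "en-US"
    },
    ReplyAudioConfig = new OutputAudioConfig
    {
        AudioEncoding = OutputAudioEncoding.Linear16,
        SampleRateHertz = 16000,
        SynthesizeSpeechConfig = new SynthesizeSpeechConfig { SpeakingRate = 1.0 }
    }
};
```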
TextConfig
public InputTextConfig TextConfig { get; set; }
The natural language text to be processed.
Property Value

| Type | Description |
| --- | --- |
| InputTextConfig | |
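For the text-only mode described in the class remarks (text_config in the first message, input_text in the second, and no more than two messages), here is a minimal sketch assuming the same ParticipantsClient streaming surface as the earlier example; the participant name and text are supplied by the caller.

```csharp
using Google.Cloud.Dialogflow.V2Beta1;
using System;
using System.Threading.Tasks;

public static class TextOnlySketch
{
    public static async Task AnalyzeTextAsync(string participantName, string text)
    {
        ParticipantsClient client = await ParticipantsClient.CreateAsync();
        ParticipantsClient.StreamingAnalyzeContentStream call = client.StreamingAnalyzeContent();

        // Message 1: participant + text_config, no input.
        await call.WriteAsync(new StreamingAnalyzeContentRequest
        {
            Participant = participantName,
            TextConfig = new InputTextConfig { LanguageCode = "en-US" }
        });

        // Message 2: the text itself (at most 256 bytes for virtual agent interactions).
        await call.WriteAsync(new StreamingAnalyzeContentRequest { InputText = text });

        // No more than two messages may be sent; half-close and read the replies.
        await call.WriteCompleteAsync();
        await foreach (StreamingAnalyzeContentResponse response in call.GetResponseStream())
        {
            Console.WriteLine(response.ReplyText);
        }
    }
}
```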