Google Cloud Dialogflow v2 API - Class StreamingAnalyzeContentRequest (4.6.0)

public sealed class StreamingAnalyzeContentRequest : IMessage<StreamingAnalyzeContentRequest>, IEquatable<StreamingAnalyzeContentRequest>, IDeepCloneable<StreamingAnalyzeContentRequest>, IBufferMessage, IMessage

Reference documentation and code samples for the Google Cloud Dialogflow v2 API class StreamingAnalyzeContentRequest.

The top-level message sent by the client to the [Participants.StreamingAnalyzeContent][google.cloud.dialogflow.v2.Participants.StreamingAnalyzeContent] method.

Multiple request messages should be sent in order:

  1. The first message must contain [participant][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.participant], [config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.config] and optionally [query_params][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.query_params]. If you want to receive an audio response, it should also contain [reply_audio_config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.reply_audio_config]. The message must not contain [input][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.input].

  2. If [config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.config] in the first message was set to [audio_config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.audio_config], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.input_audio] to continue with Speech recognition. However, note that:

     • Dialogflow will bill you for the audio so far.
     • Dialogflow discards all Speech recognition results in favor of the text input.

  3. If [StreamingAnalyzeContentRequest.config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.config] in the first message was set to [StreamingAnalyzeContentRequest.text_config][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.text_config], then the second message must contain only [input_text][google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest.input_text]. Moreover, you must not send more than two messages.

After you have sent all input, you must half-close or abort the request stream.
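
The ordering rules above can be sketched as a typical audio session. This is an illustrative sketch, not a verbatim sample from this page: the project, conversation, and participant IDs, the audio file, and the encoding settings are placeholder assumptions, and it assumes the `ParticipantsClient.StreamingAnalyzeContent()` bidirectional streaming surface from the Google.Cloud.Dialogflow.V2 package.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Dialogflow.V2;
using Google.Protobuf;

public class StreamingAnalyzeContentSketch
{
    // Sketch only: "my-project", the conversation/participant IDs, and
    // "audio.raw" are placeholders, not values from this reference page.
    public static async Task StreamAudioAsync()
    {
        ParticipantsClient client = await ParticipantsClient.CreateAsync();
        var stream = client.StreamingAnalyzeContent();

        // 1. First message: participant + config (+ optional
        //    reply_audio_config), and no input.
        await stream.WriteAsync(new StreamingAnalyzeContentRequest
        {
            ParticipantAsParticipantName = ParticipantName.FromProjectConversationParticipant(
                "my-project", "my-conversation", "my-participant"),
            AudioConfig = new InputAudioConfig
            {
                AudioEncoding = AudioEncoding.Linear16,
                SampleRateHertz = 16000,
                LanguageCode = "en-US",
            },
        });

        // 2. Subsequent messages: input_audio chunks. The complete audio
        //    across all messages must not exceed 1 minute.
        byte[] chunk = File.ReadAllBytes("audio.raw");
        await stream.WriteAsync(new StreamingAnalyzeContentRequest
        {
            InputAudio = ByteString.CopyFrom(chunk),
        });

        // 3. Half-close the request stream once all input has been sent,
        //    then drain the responses.
        await stream.WriteCompleteAsync();
        await foreach (StreamingAnalyzeContentResponse response in stream.GetResponseStream())
        {
            Console.WriteLine(response.ReplyText);
        }
    }
}
```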

Inheritance

Object > StreamingAnalyzeContentRequest

Namespace

Google.Cloud.Dialogflow.V2

Assembly

Google.Cloud.Dialogflow.V2.dll

Constructors

StreamingAnalyzeContentRequest()

public StreamingAnalyzeContentRequest()

StreamingAnalyzeContentRequest(StreamingAnalyzeContentRequest)

public StreamingAnalyzeContentRequest(StreamingAnalyzeContentRequest other)
Parameter
Name: other
Type: StreamingAnalyzeContentRequest

Properties

AssistQueryParams

public AssistQueryParameters AssistQueryParams { get; set; }

Parameters for a human assist query.

Property Value
Type: AssistQueryParameters

AudioConfig

public InputAudioConfig AudioConfig { get; set; }

Instructs the speech recognizer how to process the speech audio.

Property Value
Type: InputAudioConfig

ConfigCase

public StreamingAnalyzeContentRequest.ConfigOneofCase ConfigCase { get; }
Property Value
Type: StreamingAnalyzeContentRequest.ConfigOneofCase

CxParameters

public Struct CxParameters { get; set; }

Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null.

Note: this field should only be used if you are connecting to a Dialogflow CX agent.

Property Value
Type: Struct

EnablePartialAutomatedAgentReply

public bool EnablePartialAutomatedAgentReply { get; set; }

Enable partial virtual agent responses. If this flag is not enabled, the response stream still contains only a single final response, even if some Fulfillments in the Dialogflow virtual agent have been configured to return partial responses.

Property Value
Type: Boolean

InputAudio

public ByteString InputAudio { get; set; }

The input audio content to be recognized. Must be sent if audio_config is set in the first message. The complete audio over all streaming messages must not exceed 1 minute.

Property Value
Type: ByteString

InputCase

public StreamingAnalyzeContentRequest.InputOneofCase InputCase { get; }
Property Value
Type: StreamingAnalyzeContentRequest.InputOneofCase

InputDtmf

public TelephonyDtmfEvents InputDtmf { get; set; }

The DTMF digits used to invoke intent and fill in parameter value.

This input is ignored if the previous response indicated that DTMF input is not accepted.

Property Value
Type: TelephonyDtmfEvents

InputText

public string InputText { get; set; }

The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once.

Property Value
Type: String
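
Because input_text may be sent only once, the text path is exactly two messages. The sketch below illustrates this under stated assumptions: it presumes a ParticipantsClient named `client` has already been created, and the resource name and text are placeholders.

```csharp
// Sketch of the text-only path: per the ordering rules above, exactly two
// messages are sent when config is text_config. IDs are placeholders.
var stream = client.StreamingAnalyzeContent();

// Message 1: participant + text_config, no input.
await stream.WriteAsync(new StreamingAnalyzeContentRequest
{
    Participant = "projects/my-project/conversations/my-conversation/participants/my-participant",
    TextConfig = new InputTextConfig { LanguageCode = "en-US" },
});

// Message 2: input_text only (max 256 bytes for virtual agent interactions).
// input_text can only be sent once, so half-close immediately afterwards.
await stream.WriteAsync(new StreamingAnalyzeContentRequest
{
    InputText = "I'd like to check my order status",
});
await stream.WriteCompleteAsync();
```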

Participant

public string Participant { get; set; }

Required. The name of the participant this text comes from. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.

Property Value
Type: String

ParticipantAsParticipantName

public ParticipantName ParticipantAsParticipantName { get; set; }

ParticipantName-typed view over the Participant resource name property.

Property Value
Type: ParticipantName
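
This typed view lets you build the resource name from its components instead of formatting the string by hand. A minimal sketch, assuming placeholder IDs and the ParticipantName factory methods from Google.Cloud.Dialogflow.V2:

```csharp
// Set the participant via the strongly typed view; IDs are placeholders.
var request = new StreamingAnalyzeContentRequest
{
    ParticipantAsParticipantName = ParticipantName.FromProjectConversationParticipant(
        "my-project", "my-conversation", "my-participant"),
};

// The underlying Participant string property now holds the formatted
// resource name (projects/.../conversations/.../participants/...).
Console.WriteLine(request.Participant);
```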

QueryParams

public QueryParameters QueryParams { get; set; }

Parameters for a Dialogflow virtual-agent query.

Property Value
Type: QueryParameters

ReplyAudioConfig

public OutputAudioConfig ReplyAudioConfig { get; set; }

Speech synthesis configuration. Note that any speech synthesis settings configured for a virtual agent in the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.

Property Value
Type: OutputAudioConfig

TextConfig

public InputTextConfig TextConfig { get; set; }

The natural language text to be processed.

Property Value
Type: InputTextConfig