Class StreamingAnalyzeContentRequest (2.27.0)

    StreamingAnalyzeContentRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)

The top-level message sent by the client to the Participants.StreamingAnalyzeContent method.

Multiple request messages should be sent in order:

  1. The first message must contain participant, config and optionally query_params. If you want to receive an audio response, it should also contain reply_audio_config. The message must not contain input.

  2. If config in the first message was set to audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to analyze text input instead after you have already started Speech recognition, send a message with input_text. However, note that:

    • Dialogflow will bill you for the audio so far.
    • Dialogflow discards all Speech recognition results in favor of the text input.
  3. If StreamingAnalyzeContentRequest.config in the first message was set to StreamingAnalyzeContentRequest.text_config, then the second message must contain only input_text. Moreover, you must not send more than two messages.

After you have sent all input, you must half-close or abort the request stream.
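The audio message ordering above can be sketched as a request generator. This is a minimal illustration that uses plain dicts in place of the real StreamingAnalyzeContentRequest messages; the participant name, config values, and audio chunks are hypothetical placeholders:

```python
def audio_request_stream(participant, audio_config, reply_audio_config, audio_chunks):
    """Yield messages in the order StreamingAnalyzeContent expects for audio."""
    # 1. First message: participant + config (and optionally
    #    reply_audio_config), and never any input.
    yield {
        "participant": participant,
        "audio_config": audio_config,            # the `config` oneof
        "reply_audio_config": reply_audio_config,
    }
    # 2. Every subsequent message carries only input_audio (the `input` oneof).
    for chunk in audio_chunks:
        yield {"input_audio": chunk}
    # When the generator is exhausted, the caller half-closes the stream.

messages = list(audio_request_stream(
    "projects/my-project/conversations/c1/participants/p1",  # hypothetical name
    {"audio_encoding": "AUDIO_ENCODING_LINEAR_16", "sample_rate_hertz": 16000},
    {"audio_encoding": "OUTPUT_AUDIO_ENCODING_LINEAR_16"},
    [b"\x00\x01", b"\x02\x03"],
))
```

With a real gRPC streaming call you would pass such a generator to the client; the stream is half-closed automatically when the generator ends.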

This message has oneof fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.



participant str
Required. The name of the participant this text comes from. Format: projects/.
audio_config google.cloud.dialogflow_v2.types.InputAudioConfig
Instructs the speech recognizer how to process the speech audio. This field is a member of oneof config.
text_config google.cloud.dialogflow_v2.types.InputTextConfig
The natural language text to be processed. This field is a member of oneof config.
reply_audio_config google.cloud.dialogflow_v2.types.OutputAudioConfig
Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.
input_audio bytes
The input audio content to be recognized. Must be sent if audio_config is set in the first message. The complete audio over all streaming messages must not exceed 1 minute. This field is a member of oneof input.
input_text str
The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once; it cancels any ongoing speech recognition. This field is a member of oneof input.
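For the text flow, the rules above reduce to exactly two messages. A minimal sketch, again using plain dicts with a hypothetical participant name in place of real StreamingAnalyzeContentRequest messages:

```python
def text_request_stream(participant, text_config, text):
    """Yield the exactly-two-message sequence required for text input."""
    # First message: participant + text_config, no input.
    yield {"participant": participant, "text_config": text_config}
    # Second and final message: only input_text. No further messages
    # may be sent; the caller then half-closes the stream.
    yield {"input_text": text}

msgs = list(text_request_stream(
    "projects/my-project/conversations/c1/participants/p1",  # hypothetical
    {"language_code": "en-US"},
    "I'd like to check my order status",
))
```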
input_dtmf google.cloud.dialogflow_v2.types.TelephonyDtmfEvents
The DTMF digits used to invoke intent and fill in parameter value. This input is ignored if the previous response indicated that DTMF input is not accepted. This field is a member of oneof input.
query_params google.cloud.dialogflow_v2.types.QueryParameters
Parameters for a Dialogflow virtual-agent query.
assist_query_params google.cloud.dialogflow_v2.types.AssistQueryParameters
Parameters for a human assist query.
cx_parameters google.protobuf.struct_pb2.Struct
Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null. Note: this field should only be used if you are connecting to a Dialogflow CX agent.
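The null-removes-a-parameter rule can be illustrated with a small merge sketch. This models the documented behavior with plain dicts (using Python's None for JSON null); it is not the service's actual implementation:

```python
def merge_cx_parameters(session_params, cx_parameters):
    """Apply cx_parameters to a session: null removes a key, anything else sets it."""
    merged = dict(session_params)
    for key, value in cx_parameters.items():
        if value is None:          # explicit null => remove from the session
            merged.pop(key, None)
        else:
            merged[key] = value
    return merged

# Hypothetical session state and update:
session = {"order_id": "A-123", "customer_tier": "gold"}
update = {"customer_tier": None, "issue_type": "billing"}
merged = merge_cx_parameters(session, update)
# → {"order_id": "A-123", "issue_type": "billing"}
```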
enable_extended_streaming bool
Optional. Enable full bidirectional streaming. You can keep streaming the audio until timeout; there is no need to half-close the stream to get the response. Restrictions:

  • Timeout: 3 minutes.
  • Audio encoding: only AudioEncoding.AUDIO_ENCODING_LINEAR_16 and AudioEncoding.AUDIO_ENCODING_MULAW are supported.
  • Lifecycle: the conversation should be in the Assist stage; go to [Conversation.CreateConversation][] for more information.

An InvalidArgument error will be returned if one of the restriction checks fails. You can find more details in
enable_partial_automated_agent_reply bool
Enable partial virtual agent responses. If this flag is not enabled, the response stream still contains only one final response, even if some Fulfillments in the Dialogflow virtual agent have been configured to return partial responses.
enable_debugging_info bool
If true, StreamingAnalyzeContentResponse.debugging_info will be populated.