StreamingAnalyzeContentRequest(
mapping=None, *, ignore_unknown_fields=False, **kwargs
)
The top-level message sent by the client to the Participants.StreamingAnalyzeContent method.
Multiple request messages should be sent in order:
1. The first message must contain participant, config, and optionally query_params. If you want to receive an audio response, it should also contain reply_audio_config. The message must not contain input.
2. If config in the first message was set to audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to analyze text input after you have already started Speech recognition, send a message with input_text instead. However, note that:
   - Dialogflow will bill you for the audio duration so far.
   - Dialogflow discards all Speech recognition results in favor of the text input.
3. If StreamingAnalyzeContentRequest.config in the first message was set to StreamingAnalyzeContentRequest.text_config, then the second message must contain only input_text. Moreover, you must not send more than two messages.

After you have sent all input, you must half-close or abort the request stream.
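The ordering rules above can be sketched as a request generator. This is an illustration only: the messages are modeled as plain dicts standing in for StreamingAnalyzeContentRequest protobufs, and the audio-config values are hypothetical placeholders.

```python
def audio_request_stream(participant, audio_chunks):
    """Yield messages in the order required for audio input.

    Sketch only: dicts stand in for StreamingAnalyzeContentRequest
    protobufs; field names mirror the attributes documented below.
    """
    # 1. First message: participant + config (+ optional query_params),
    #    and never any input.
    yield {
        "participant": participant,
        "audio_config": {
            "audio_encoding": "AUDIO_ENCODING_LINEAR_16",  # placeholder values
            "sample_rate_hertz": 16000,
            "language_code": "en-US",
        },
    }
    # 2. Every subsequent message carries only input_audio.
    for chunk in audio_chunks:
        yield {"input_audio": chunk}
    # 3. The generator ending corresponds to the caller
    #    half-closing the request stream.
```

With the real client library, an iterator like this would typically be passed to ParticipantsClient.streaming_analyze_content, which consumes a stream of requests.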
This message has [oneof](https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields) fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time; setting any member of the oneof automatically clears all other members.
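The clearing behavior can be illustrated with a tiny stand-in class (not proto-plus itself) that mimics the `input` oneof: assigning one member resets its siblings to None.

```python
class OneofDemo:
    """Toy stand-in for the `input` oneof of this message."""

    _ONEOF = ("input_audio", "input_text", "input_dtmf")

    def __init__(self):
        # Start with every oneof member unset.
        for name in self._ONEOF:
            object.__setattr__(self, name, None)

    def __setattr__(self, name, value):
        if name in self._ONEOF:
            # Setting any member clears all other members first.
            for other in self._ONEOF:
                object.__setattr__(self, other, None)
        object.__setattr__(self, name, value)


msg = OneofDemo()
msg.input_audio = b"\x00\x01"
msg.input_text = "hello"  # this assignment clears input_audio
```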
Attributes

| Name | Type | Description |
|---|---|---|
| participant | str | Required. The name of the participant this text comes from. Format: `projects/`. |
| audio_config | google.cloud.dialogflow_v2.types.InputAudioConfig | Instructs the speech recognizer how to process the speech audio. This field is a member of oneof `config`. |
| text_config | google.cloud.dialogflow_v2.types.InputTextConfig | The natural language text to be processed. This field is a member of oneof `config`. |
| reply_audio_config | google.cloud.dialogflow_v2.types.OutputAudioConfig | Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled. |
| input_audio | bytes | The input audio content to be recognized. Must be sent if audio_config is set in the first message. The complete audio over all streaming messages must not exceed 1 minute. This field is a member of oneof `input`. |
| input_text | str | The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once; it cancels any ongoing speech recognition. This field is a member of oneof `input`. |
| input_dtmf | google.cloud.dialogflow_v2.types.TelephonyDtmfEvents | The DTMF digits used to invoke intents and fill in parameter values. This input is ignored if the previous response indicated that DTMF input is not accepted. This field is a member of oneof `input`. |
| query_params | google.cloud.dialogflow_v2.types.QueryParameters | Parameters for a Dialogflow virtual-agent query. |
| assist_query_params | google.cloud.dialogflow_v2.types.AssistQueryParameters | Parameters for a human assist query. |
| cx_parameters | google.protobuf.struct_pb2.Struct | Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null. Note: this field should only be used if you are connecting to a Dialogflow CX agent. |
| enable_partial_automated_agent_reply | bool | Enable partial virtual agent responses. If this flag is not enabled, the response stream contains only one final response, even if some Fulfillments in the Dialogflow virtual agent have been configured to return partial responses. |
| enable_debugging_info | bool | If true, StreamingAnalyzeContentResponse.debugging_info will be populated. |
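Because the complete audio across all streaming messages must not exceed 1 minute, callers typically stream small fixed-size input_audio payloads. A minimal sketch, assuming 16 kHz, 16-bit mono LINEAR16 audio (hypothetical parameters chosen for illustration, not mandated by the API):

```python
def chunk_audio(pcm: bytes, sample_rate=16000, bytes_per_sample=2, chunk_ms=100):
    """Split raw PCM bytes into ~chunk_ms payloads for input_audio messages."""
    chunk_size = sample_rate * bytes_per_sample * chunk_ms // 1000
    max_bytes = sample_rate * bytes_per_sample * 60  # the 1-minute streaming cap
    if len(pcm) > max_bytes:
        raise ValueError("total audio exceeds the 1-minute streaming limit")
    return [pcm[i:i + chunk_size] for i in range(0, len(pcm), chunk_size)]
```

Each returned chunk would become the input_audio field of one streaming request after the initial configuration message.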