Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class StreamingDetectIntentResponse.
The top-level message returned from the StreamingDetectIntent method.
Multiple response messages can be returned in order:

- If the StreamingDetectIntentRequest.input_audio field was set, the recognition_result field is populated for one or more messages. See the StreamingRecognitionResult message for details about the result message sequence.
- The next message contains response_id, query_result, and optionally webhook_status if a WebHook was called (see the consumption sketch after this list).
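A minimal sketch of consuming these messages, assuming the bidi-streaming SessionsClient::streamingDetectIntent() surface and Google\ApiCore\BidiStream from google/cloud-dialogflow (the exact client class varies by library version); the project, session, and audio file names are illustrative placeholders, not part of this page:

```php
<?php
// Sketch only: the client surface and identifiers below are assumptions, not part of this page.
use Google\Cloud\Dialogflow\V2\AudioEncoding;
use Google\Cloud\Dialogflow\V2\InputAudioConfig;
use Google\Cloud\Dialogflow\V2\QueryInput;
use Google\Cloud\Dialogflow\V2\SessionsClient;
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentRequest;

$sessions = new SessionsClient();
$session = SessionsClient::sessionName('my-project', 'my-session'); // illustrative IDs

// The first request carries the session and audio config; later requests carry audio chunks.
$requests = [
    new StreamingDetectIntentRequest([
        'session' => $session,
        'query_input' => new QueryInput([
            'audio_config' => new InputAudioConfig([
                'audio_encoding' => AudioEncoding::AUDIO_ENCODING_LINEAR_16,
                'sample_rate_hertz' => 16000,
                'language_code' => 'en-US',
            ]),
        ]),
    ]),
    new StreamingDetectIntentRequest([
        'input_audio' => file_get_contents('request.raw'), // illustrative audio file
    ]),
];

$stream = $sessions->streamingDetectIntent();
foreach ($stream->closeWriteAndReadAll($requests) as $response) {
    /** @var \Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse $response */
    if ($response->hasRecognitionResult()) {
        // One or more recognition_result messages arrive first.
        printf("Transcript: %s\n", $response->getRecognitionResult()->getTranscript());
    }
    if ($response->hasQueryResult()) {
        // The final message carries query_result (and optionally webhook_status).
        printf("Fulfillment: %s\n", $response->getQueryResult()->getFulfillmentText());
    }
}
$sessions->close();
```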
Generated from protobuf message google.cloud.dialogflow.v2.StreamingDetectIntentResponse
Namespace
Google \ Cloud \ Dialogflow \ V2

Methods
__construct
Constructor.
Parameters

| Name | Description |
|---|---|
| data | array. Optional. Data for populating the Message object. |
| ↳ response_id | string. The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues. |
| ↳ recognition_result | StreamingRecognitionResult. The result of speech recognition. |
| ↳ query_result | QueryResult. The result of the conversational query or event processing. |
| ↳ webhook_status | Google\Rpc\Status. Specifies the status of the webhook request. |
| ↳ output_audio | string. The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty. |
| ↳ output_audio_config | OutputAudioConfig. The config used by the speech synthesizer to generate the output audio. |
| ↳ debugging_info | CloudConversationDebuggingInfo. Debugging info that would get populated when StreamingDetectIntentRequest.enable_debugging_info is set to true. |
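In practice this message is produced by the API rather than by application code, but the constructor's data array is handy when deserializing or building test fixtures. A small sketch using only keys documented in the table above; the values and nested QueryResult/Status fields are illustrative:

```php
<?php
use Google\Cloud\Dialogflow\V2\QueryResult;
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;
use Google\Rpc\Status;

// Keys follow the proto field names listed in the Parameters table above.
$response = new StreamingDetectIntentResponse([
    'response_id' => 'example-response-id', // illustrative value
    'query_result' => new QueryResult([
        'query_text' => 'book a room',
        'fulfillment_text' => 'What day would you like?',
    ]),
    'webhook_status' => new Status(['code' => 0]), // 0 = OK
]);
```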
getResponseId
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.
Returns

| Type | Description |
|---|---|
| string | |
setResponseId
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.
Parameter

| Name | Description |
|---|---|
| var | string |

Returns

| Type | Description |
|---|---|
| $this | |
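Because the setters return $this, calls can be chained; a small sketch with an illustrative ID:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;

$response = (new StreamingDetectIntentResponse())
    ->setResponseId('example-response-id'); // illustrative value

// Useful for reporting issues or locating the response in the training example set.
echo $response->getResponseId(), PHP_EOL;
```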
getRecognitionResult
The result of speech recognition.
Returns

| Type | Description |
|---|---|
| StreamingRecognitionResult\|null | |
hasRecognitionResult
clearRecognitionResult
setRecognitionResult
The result of speech recognition.
Parameter

| Name | Description |
|---|---|
| var | StreamingRecognitionResult |

Returns

| Type | Description |
|---|---|
| $this | |
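A small sketch of reading interim transcription from a received message, assuming StreamingRecognitionResult exposes getTranscript() and getIsFinal() for its transcript and is_final fields:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;

function printTranscript(StreamingDetectIntentResponse $response): void
{
    if (!$response->hasRecognitionResult()) {
        return; // This message carries query_result / webhook_status instead.
    }
    $result = $response->getRecognitionResult();
    // transcript / is_final getters are assumed from the StreamingRecognitionResult message.
    printf(
        "[%s] %s\n",
        $result->getIsFinal() ? 'final' : 'interim',
        $result->getTranscript()
    );
}
```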
getQueryResult
The result of the conversational query or event processing.
Returns

| Type | Description |
|---|---|
| QueryResult\|null | |
hasQueryResult
clearQueryResult
setQueryResult
The result of the conversational query or event processing.
Parameter

| Name | Description |
|---|---|
| var | QueryResult |

Returns

| Type | Description |
|---|---|
| $this | |
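A small sketch of reading the final result from a received message, assuming the usual QueryResult getters (getIntent(), getFulfillmentText()) and Intent::getDisplayName():

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;

function printQueryResult(StreamingDetectIntentResponse $response): void
{
    if (!$response->hasQueryResult()) {
        return; // Not the final message in the stream.
    }
    $result = $response->getQueryResult();
    // Intent and fulfillment getters are assumed from the QueryResult message.
    if ($result->getIntent() !== null) {
        printf("Matched intent: %s\n", $result->getIntent()->getDisplayName());
    }
    printf("Fulfillment text: %s\n", $result->getFulfillmentText());
}
```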
getWebhookStatus
Specifies the status of the webhook request.
Returns

| Type | Description |
|---|---|
| Google\Rpc\Status\|null | |
hasWebhookStatus
clearWebhookStatus
setWebhookStatus
Specifies the status of the webhook request.
Parameter

| Name | Description |
|---|---|
| var | Google\Rpc\Status |

Returns

| Type | Description |
|---|---|
| $this | |
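A small sketch of checking the webhook outcome, assuming Google\Rpc\Status::getCode()/getMessage() and the Google\Rpc\Code constants from google/common-protos:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;
use Google\Rpc\Code;

function checkWebhookStatus(StreamingDetectIntentResponse $response): void
{
    if (!$response->hasWebhookStatus()) {
        return; // No webhook was called for this query.
    }
    $status = $response->getWebhookStatus();
    // google.rpc.Status: code 0 (OK) means the webhook call succeeded.
    if ($status->getCode() !== Code::OK) {
        printf("Webhook failed (%d): %s\n", $status->getCode(), $status->getMessage());
    }
}
```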
getOutputAudio
The audio data bytes encoded as specified in the request.

Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.

Returns

| Type | Description |
|---|---|
| string | |
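Since the audio may be empty (no default platform text responses) and only the top-most-level audio output carries content, it is worth checking before writing it out; a small sketch with an illustrative output path:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;

function saveOutputAudio(StreamingDetectIntentResponse $response, string $path): void
{
    $audio = $response->getOutputAudio();
    // Empty when no default platform text responses were available to synthesize.
    if ($audio === '') {
        return;
    }
    // Bytes are already encoded per the request's OutputAudioConfig (e.g. LINEAR16, MP3).
    file_put_contents($path, $audio);
}

// Illustrative usage, given a $response from the stream:
// saveOutputAudio($response, 'output.raw');
```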
setOutputAudio
The audio data bytes encoded as specified in the request.

Note: The output audio is generated based on the values of default platform text responses found in the query_result.fulfillment_messages field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.

In some scenarios, multiple output audio fields may be present in the response structure. In these cases, only the top-most-level audio output has content.

Parameter

| Name | Description |
|---|---|
| var | string |

Returns

| Type | Description |
|---|---|
| $this | |
getOutputAudioConfig
The config used by the speech synthesizer to generate the output audio.
Returns

| Type | Description |
|---|---|
| OutputAudioConfig\|null | |
hasOutputAudioConfig
clearOutputAudioConfig
setOutputAudioConfig
The config used by the speech synthesizer to generate the output audio.
Parameter

| Name | Description |
|---|---|
| var | OutputAudioConfig |

Returns

| Type | Description |
|---|---|
| $this | |
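A small sketch of inspecting the synthesis settings echoed back by the API, assuming OutputAudioConfig exposes getAudioEncoding() and getSampleRateHertz() for its audio_encoding and sample_rate_hertz fields:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;

function describeOutputAudio(StreamingDetectIntentResponse $response): string
{
    if (!$response->hasOutputAudioConfig()) {
        return 'no output audio config returned';
    }
    $config = $response->getOutputAudioConfig();
    // getAudioEncoding() returns an OutputAudioEncoding enum value (an integer).
    return sprintf(
        'encoding %d at %d Hz',
        $config->getAudioEncoding(),
        $config->getSampleRateHertz()
    );
}
```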
getDebuggingInfo
Debugging info that would get populated when StreamingDetectIntentRequest.enable_debugging_info is set to true.
Returns

| Type | Description |
|---|---|
| CloudConversationDebuggingInfo\|null | |
hasDebuggingInfo
clearDebuggingInfo
setDebuggingInfo
Debugging info that would get populated when StreamingDetectIntentRequest.enable_debugging_info is set to true.
Parameter

| Name | Description |
|---|---|
| var | CloudConversationDebuggingInfo |

Returns

| Type | Description |
|---|---|
| $this | |
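A small sketch of surfacing the debugging payload without depending on CloudConversationDebuggingInfo's individual fields, assuming only the serializeToJsonString() helper available on generated protobuf messages:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingDetectIntentResponse;

function dumpDebuggingInfo(StreamingDetectIntentResponse $response): void
{
    // Populated only when StreamingDetectIntentRequest.enable_debugging_info was true.
    if (!$response->hasDebuggingInfo()) {
        return;
    }
    // serializeToJsonString() renders the whole message as JSON, which keeps
    // this sketch independent of CloudConversationDebuggingInfo's field list.
    echo $response->getDebuggingInfo()->serializeToJsonString(), PHP_EOL;
}
```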