- JSON representation
- TextInput
- IntentInput
- AudioInput
- InputAudioConfig
- AudioEncoding
- SpeechModelVariant
- BargeInConfig
- EventInput
- ToolCallResult
- Error
Represents the query input. It can contain one of:

- A conversational query in the form of text.
- An intent query that specifies which intent to trigger.
- Natural language speech audio to be processed.
- An event to be triggered.
- DTMF digits to invoke an intent and fill in parameter values.
- The results of a tool executed by the client.
JSON representation

```json
{
  "languageCode": string,

  // Union field input can be only one of the following:
  "text": { object (TextInput) },
  "intent": { object (IntentInput) },
  "audio": { object (AudioInput) },
  "event": { object (EventInput) },
  "dtmf": { object (DtmfInput) },
  "toolCallResult": { object (ToolCallResult) }
  // End of list of possible types for union field input.
}
```

| Field | Description |
|---|---|
| `languageCode` | Required. The language of the input. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. |
| Union field `input`. Required. The input specification. `input` can be only one of the following: | |
| `text` | The natural language text to be processed. |
| `intent` | The intent to be triggered. |
| `audio` | The natural language speech audio to be processed. |
| `event` | The event to be triggered. |
| `dtmf` | The DTMF event to be handled. |
| `toolCallResult` | The results of a tool executed by the client. |
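For illustration, a QueryInput that carries a plain text query might look like the following; the language code and query text are example values, not defaults:

```json
{
  "languageCode": "en",
  "text": {
    "text": "I want to book a flight to Tokyo."
  }
}
```

Only one union member (`text`, `intent`, `audio`, `event`, `dtmf`, or `toolCallResult`) may be set in a single request.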
TextInput
Represents the natural language text to be processed.
JSON representation

```json
{
  "text": string
}
```

| Field | Description |
|---|---|
| `text` | Required. The UTF-8 encoded natural language text to be processed. |
IntentInput
Represents the intent to trigger programmatically rather than as a result of natural language processing.
JSON representation

```json
{
  "intent": string
}
```

| Field | Description |
|---|---|
| `intent` | Required. The unique identifier of the intent. Format: `projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/intents/<IntentID>`. |
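As a sketch, an IntentInput that triggers an intent directly; the project, location, agent, and intent IDs in the resource name are placeholders:

```json
{
  "intent": "projects/my-project/locations/global/agents/my-agent/intents/00000000-0000-0000-0000-000000000000"
}
```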
AudioInput
Represents the natural speech audio to be processed.
JSON representation

```json
{
  "config": {
    object (InputAudioConfig)
  },
  "audio": string
}
```

| Field | Description |
|---|---|
| `config` | Required. Instructs the speech recognizer how to process the speech audio. |
| `audio` | The natural language speech audio to be processed. A single request can contain up to 2 minutes of speech audio data. The transcribed text cannot contain more than 256 bytes. For non-streaming audio detect intent, both `config` and `audio` must be provided. For streaming audio detect intent, `config` must be provided in the first request and `audio` must be provided in all following requests. A base64-encoded string. |
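A sketch of an AudioInput for a non-streaming detect-intent call, where both `config` and `audio` are set in one request; the base64 payload is truncated for brevity:

```json
{
  "config": {
    "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
    "sampleRateHertz": 16000
  },
  "audio": "UklGRiQAAABXQVZF..."
}
```

In the streaming case, only the first request would carry `config`, and each subsequent request would carry an `audio` chunk.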
InputAudioConfig
Instructs the speech recognizer on how to process the audio content.
JSON representation

```json
{
  "audioEncoding": enum (AudioEncoding),
  "sampleRateHertz": integer,
  "enableWordInfo": boolean,
  "phraseHints": [ string ],
  "model": string,
  "modelVariant": enum (SpeechModelVariant),
  "singleUtterance": boolean,
  "bargeInConfig": { object (BargeInConfig) },
  "optOutConformerModelMigration": boolean
}
```

| Field | Description |
|---|---|
| `audioEncoding` | Required. Audio encoding of the audio content to process. |
| `sampleRateHertz` | Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details. |
| `enableWordInfo` | Optional. If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, such as start and end time offsets. If `false` or unspecified, Speech doesn't return any word-level information. |
| `phraseHints` | Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. |
| `model` | Optional. Which Speech model to select for the given request. For more information, see Speech models. |
| `modelVariant` | Optional. Which variant of the Speech model to use. |
| `singleUtterance` | Optional. If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer detects a single spoken utterance in the input audio and ceases recognition when that utterance ends. Relevant only for streaming methods. |
| `bargeInConfig` | Configuration of barge-in behavior during the streaming of input audio. |
| `optOutConformerModelMigration` | If `true`, the request opts out of the Speech-to-Text conformer model migration. |
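For example, a config requesting lossless FLAC audio and the enhanced variant of the `phone_call` model; all values here are illustrative, and model availability varies by language:

```json
{
  "audioEncoding": "AUDIO_ENCODING_FLAC",
  "sampleRateHertz": 16000,
  "enableWordInfo": true,
  "phraseHints": ["book a flight", "Tokyo"],
  "model": "phone_call",
  "modelVariant": "USE_ENHANCED",
  "singleUtterance": true
}
```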
AudioEncoding
Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.
| Enum | Description |
|---|---|
| `AUDIO_ENCODING_UNSPECIFIED` | Not specified. |
| `AUDIO_ENCODING_LINEAR_16` | Uncompressed 16-bit signed little-endian samples (Linear PCM). |
| `AUDIO_ENCODING_FLAC` | FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (so recognition is not compromised) and requires only about half the bandwidth of `LINEAR16`. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in `STREAMINFO` are supported. |
| `AUDIO_ENCODING_MULAW` | 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law. |
| `AUDIO_ENCODING_AMR` | Adaptive Multi-Rate Narrowband codec. `sampleRateHertz` must be 8000. |
| `AUDIO_ENCODING_AMR_WB` | Adaptive Multi-Rate Wideband codec. `sampleRateHertz` must be 16000. |
| `AUDIO_ENCODING_OGG_OPUS` | Opus encoded audio frames in Ogg container (OggOpus). `sampleRateHertz` must be 16000. |
| `AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE` | Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, `OGG_OPUS` is highly preferred over Speex encoding. The Speex encoding supported by the Dialogflow API has a header byte in each block, as in MIME type `audio/x-speex-with-header-byte`. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. `sampleRateHertz` must be 16000. |
| `AUDIO_ENCODING_ALAW` | 8-bit samples that compand 13-bit audio samples using G.711 PCMA/A-law. |
SpeechModelVariant
Variant of the specified Speech model to use.
See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.
| Enum | Description |
|---|---|
| `SPEECH_MODEL_VARIANT_UNSPECIFIED` | No model variant specified. In this case Dialogflow defaults to `USE_BEST_AVAILABLE`. |
| `USE_BEST_AVAILABLE` | Use the best available variant of the Speech model (`InputAudioConfig.model`) that the caller is eligible for. |
| `USE_STANDARD` | Use the standard model variant even if an enhanced model is available. See the Cloud Speech documentation for details about enhanced models. |
| `USE_ENHANCED` | Use an enhanced model variant. If an enhanced variant does not exist for the given model and request language, Dialogflow falls back to the standard variant. The Cloud Speech documentation describes which models have enhanced variants. |
BargeInConfig
Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready to receive the responses for the current request.

Barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled as two phases:

- No barge-in phase: comes first, and speech detection should not be carried out during it.
- Barge-in phase: follows the no barge-in phase; during it the API starts speech detection and may inform the client that an utterance has been detected. Note that a no-speech event is not expected in this phase.

The client provides this configuration in terms of the durations of the two phases. The durations are measured from the start of the input audio.

A no-speech event is a response with `END_OF_UTTERANCE` and no transcript following up.
JSON representation

```json
{
  "noBargeInDuration": string,
  "totalDuration": string
}
```

| Field | Description |
|---|---|
| `noBargeInDuration` | Duration that is not eligible for barge-in at the beginning of the input audio. A duration in seconds with up to nine fractional digits, ending with '`s`'. Example: `"3.5s"`. |
| `totalDuration` | Total duration for the playback at the beginning of the input audio. A duration in seconds with up to nine fractional digits, ending with '`s`'. Example: `"3.5s"`. |
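To illustrate, a configuration that disables barge-in for the first 5 seconds of a 30-second playback; both durations are arbitrary example values:

```json
{
  "noBargeInDuration": "5s",
  "totalDuration": "30s"
}
```

Under this configuration, the first 5 seconds of input audio are ignored for barge-in purposes, and the barge-in phase spans the remainder of the 30-second playback window.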
EventInput
Represents the event to trigger.
JSON representation

```json
{
  "event": string
}
```

| Field | Description |
|---|---|
| `event` | Name of the event. |
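An EventInput is just a named event; the event name below is a hypothetical custom event:

```json
{
  "event": "welcome-event"
}
```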
ToolCallResult
The result of calling a tool's action that has been executed by the client.
JSON representation

```json
{
  "tool": string,
  "action": string,

  // Union field result can be only one of the following:
  "error": { object (Error) },
  "outputParameters": { object }
  // End of list of possible types for union field result.
}
```

| Field | Description |
|---|---|
| `tool` | Required. The tool associated with this call. Format: `projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/tools/<ToolID>`. |
| `action` | Required. The name of the tool's action associated with this call. |
| Union field `result`. The tool call's result. `result` can be only one of the following: | |
| `error` | The tool call's error. |
| `outputParameters` | The tool call's output parameters. |
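A ToolCallResult reporting a successful call might look like this; the tool resource name, action name, and output parameters are all placeholders:

```json
{
  "tool": "projects/my-project/locations/global/agents/my-agent/tools/my-tool",
  "action": "get-order-status",
  "outputParameters": {
    "status": "shipped",
    "eta": "2024-07-01"
  }
}
```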
Error
An error produced by the tool call.
JSON representation

```json
{
  "message": string
}
```

| Field | Description |
|---|---|
| `message` | Optional. The error message of the function. |
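And the error variant of the same tool call result, again with placeholder names and message:

```json
{
  "tool": "projects/my-project/locations/global/agents/my-agent/tools/my-tool",
  "action": "get-order-status",
  "error": {
    "message": "Order service timed out."
  }
}
```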