Represents the query input. It can contain one of:

- An audio config which instructs the speech recognizer how to process the speech audio.
- A conversational query in the form of text.
- An event that specifies which intent to trigger.
JSON representation

```
{

  // Union field input can be only one of the following:
  "audioConfig": {
    object (InputAudioConfig)
  },
  "text": {
    object (TextInput)
  },
  "event": {
    object (EventInput)
  }
  // End of list of possible types for union field input.
}
```

Fields

Union field `input`. Required. The input specification. `input` can be only one of the following:

| Field | Type | Description |
|---|---|---|
| `audioConfig` | `object (InputAudioConfig)` | Instructs the speech recognizer how to process the speech audio. |
| `text` | `object (TextInput)` | The natural language text to be processed. |
| `event` | `object (EventInput)` | The event to be processed. |
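For example, a minimal QueryInput that carries a text query sets only the `text` member of the union (the utterance and language here are invented for illustration):

```
{
  "text": {
    "text": "book a flight to Paris",
    "languageCode": "en-US"
  }
}
```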
InputAudioConfig
Instructs the speech recognizer on how to process the audio content.
JSON representation

```
{
  "audioEncoding": enum (AudioEncoding),
  "sampleRateHertz": integer,
  "languageCode": string,
  "enableWordInfo": boolean,
  "phraseHints": [
    string
  ],
  "speechContexts": [
    {
      object (SpeechContext)
    }
  ],
  "model": string,
  "modelVariant": enum (SpeechModelVariant),
  "singleUtterance": boolean
}
```

Fields

| Field | Type | Description |
|---|---|---|
| `audioEncoding` | `enum (AudioEncoding)` | Required. Audio encoding of the audio content to process. |
| `sampleRateHertz` | `integer` | Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation for more details. |
| `languageCode` | `string` | Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. |
| `enableWordInfo` | `boolean` | If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If `false` or unspecified, Speech doesn't return any word-level information. |
| `phraseHints[]` | `string` | A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated; use `speechContexts` instead. If you specify both `phraseHints` and `speechContexts`, Dialogflow treats the `phraseHints` as a single additional `SpeechContext`. |
| `speechContexts[]` | `object (SpeechContext)` | Context information to assist speech recognition. See the Cloud Speech documentation for more details. |
| `model` | `string` | Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, Dialogflow auto-selects a model based on the other parameters in the InputAudioConfig. If the enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details. |
| `modelVariant` | `enum (SpeechModelVariant)` | Which variant of the Speech model to use. |
| `singleUtterance` | `boolean` | If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer detects a single spoken utterance in the input audio, and recognition ceases when it detects that the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is only relevant for streaming methods. |
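A sketch of a typical InputAudioConfig for 16 kHz linear PCM audio with one speech context; the specific phrase, boost, and flag values are illustrative choices, not requirements:

```
{
  "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
  "sampleRateHertz": 16000,
  "languageCode": "en-US",
  "enableWordInfo": true,
  "speechContexts": [
    {
      "phrases": ["confirm my booking"],
      "boost": 10
    }
  ],
  "singleUtterance": true
}
```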
AudioEncoding
Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.
| Enum | Description |
|---|---|
| `AUDIO_ENCODING_UNSPECIFIED` | Not specified. |
| `AUDIO_ENCODING_LINEAR_16` | Uncompressed 16-bit signed little-endian samples (Linear PCM). |
| `AUDIO_ENCODING_FLAC` | FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in STREAMINFO are supported. |
| `AUDIO_ENCODING_MULAW` | 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law. |
| `AUDIO_ENCODING_AMR` | Adaptive Multi-Rate Narrowband codec. `sampleRateHertz` must be 8000. |
| `AUDIO_ENCODING_AMR_WB` | Adaptive Multi-Rate Wideband codec. `sampleRateHertz` must be 16000. |
| `AUDIO_ENCODING_OGG_OPUS` | Opus encoded audio frames in an Ogg container (OggOpus). `sampleRateHertz` must be 16000. |
| `AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE` | Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by the Dialogflow API has a header byte in each block, as in MIME type `audio/x-speex-with-header-byte`. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. `sampleRateHertz` must be 16000. |
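Note that several codecs constrain `sampleRateHertz`; for instance, an AMR narrowband payload must declare 8000 Hz. An illustrative config pairing the encoding with its required rate:

```
{
  "audioEncoding": "AUDIO_ENCODING_AMR",
  "sampleRateHertz": 8000,
  "languageCode": "en-US"
}
```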
SpeechContext
Hints for the speech recognizer to help with recognition in a specific conversation state.
JSON representation

```
{
  "phrases": [
    string
  ],
  "boost": number
}
```

Fields

| Field | Type | Description |
|---|---|---|
| `phrases[]` | `string` | Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. This list can be used to improve accuracy for specific words and phrases (for example, commands your users typically speak) and to add additional words to the vocabulary of the recognizer. See the Cloud Speech documentation for usage limits. |
| `boost` | `number` | Optional. Boost for this context compared to other contexts. If the boost is positive, Dialogflow increases the probability that the phrases in this context are recognized over similar-sounding phrases. If the boost is unspecified or non-positive, Dialogflow does not apply any boost. Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search. |
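For instance, a hypothetical context biasing recognition toward terms the recognizer might otherwise miss; the phrases and boost value are invented for illustration and sit inside the recommended (0, 20] range:

```
{
  "phrases": [
    "Dialogflow",
    "speech contexts",
    "model variant"
  ],
  "boost": 15
}
```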
SpeechModelVariant
Variant of the specified Speech model to use.

See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.

| Enum | Description |
|---|---|
| `SPEECH_MODEL_VARIANT_UNSPECIFIED` | No model variant specified. In this case Dialogflow defaults to `USE_BEST_AVAILABLE`. |
| `USE_BEST_AVAILABLE` | Use the best available variant of the Speech model that the caller is eligible for. Please see the Dialogflow docs for how to make your project eligible for enhanced models. |
| `USE_STANDARD` | Use the standard model variant even if an enhanced model is available. See the Cloud Speech documentation for details about enhanced models. |
| `USE_ENHANCED` | Use an enhanced model variant. If an enhanced variant does not exist for the given model and request language, Dialogflow falls back to the standard variant. The Cloud Speech documentation describes which models have enhanced variants. |
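For example, to request the enhanced variant of the "phone_call" model (falling back to the standard variant per the rules above if none exists), an InputAudioConfig might combine `model` and `modelVariant` like this; the values are illustrative:

```
{
  "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
  "sampleRateHertz": 8000,
  "languageCode": "en-US",
  "model": "phone_call",
  "modelVariant": "USE_ENHANCED"
}
```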
TextInput
Represents the natural language text to be processed.
JSON representation

```
{
  "text": string,
  "languageCode": string
}
```

Fields

| Field | Type | Description |
|---|---|---|
| `text` | `string` | Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters. |
| `languageCode` | `string` | Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. |
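As a sketch, a text-only request body in the shape used by the session `detectIntent` method would wrap a TextInput in the `queryInput` union (the utterance is invented for illustration):

```
{
  "queryInput": {
    "text": {
      "text": "What are your opening hours?",
      "languageCode": "en-US"
    }
  }
}
```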
EventInput
Events allow for matching intents by event name instead of the natural language input. For instance, input `<event: { name: "welcome_event", parameters: { name: "Sam" } }>` can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.
JSON representation

```
{
  "name": string,
  "parameters": {
    object
  },
  "languageCode": string
}
```

Fields

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Required. The unique identifier of the event. |
| `parameters` | `object (Struct format)` | The collection of parameters associated with the event. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs, where each MapKey is a string holding the parameter name and each MapValue holds the parameter value, whose type (string, number, boolean, null, list, or map) depends on the parameter's value type. |
| `languageCode` | `string` | Required. The language of this query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. |
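Putting the welcome example from the description into the JSON representation, an EventInput might look like this (the values mirror that illustrative event):

```
{
  "name": "welcome_event",
  "parameters": {
    "name": "Sam"
  },
  "languageCode": "en-US"
}
```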