QueryInput

Represents the query input. It can contain one of the following:

  1. An audio config which instructs the speech recognizer how to process the speech audio.

  2. A conversational query in the form of text.

  3. An event that specifies which intent to trigger.

JSON representation
{
  // Union field input can be only one of the following:
  "audioConfig": {
    object(InputAudioConfig)
  },
  "text": {
    object(TextInput)
  },
  "event": {
    object(EventInput)
  }
  // End of list of possible types for union field input.
}
Fields
Union field input. Required. The input specification. input can be only one of the following:
audioConfig

object(InputAudioConfig)

Instructs the speech recognizer how to process the speech audio.

text

object(TextInput)

The natural language text to be processed.

event

object(EventInput)

The event to be processed.
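
For example, a QueryInput that carries a conversational text query populates only the text member of the union (the values below are placeholders):

{
  "text": {
    "text": "I want a large pizza",
    "languageCode": "en-US"
  }
}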

InputAudioConfig

Instructs the speech recognizer how to process the audio content.

JSON representation
{
  "audioEncoding": enum(AudioEncoding),
  "sampleRateHertz": number,
  "languageCode": string,
  "phraseHints": [
    string
  ],
  "model": string
}
Fields
audioEncoding

enum(AudioEncoding)

Required. Audio encoding of the audio content to process.

sampleRateHertz

number

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

languageCode

string

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

phraseHints[]

string

Optional. The collection of phrase hints which are used to boost accuracy of speech recognition. Refer to Cloud Speech API documentation for more details.

model

string

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
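
For example, the following InputAudioConfig describes uncompressed 16 kHz English audio with a few phrase hints; the values, including the model name, are illustrative rather than defaults:

{
  "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
  "sampleRateHertz": 16000,
  "languageCode": "en-US",
  "phraseHints": [
    "pepperoni",
    "margherita"
  ],
  "model": "phone_call"
}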

AudioEncoding

Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.

Enums
AUDIO_ENCODING_UNSPECIFIED

Not specified.

AUDIO_ENCODING_LINEAR_16

Uncompressed 16-bit signed little-endian samples (Linear PCM).

AUDIO_ENCODING_FLAC

FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in STREAMINFO are supported.

AUDIO_ENCODING_MULAW

8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.

AUDIO_ENCODING_AMR

Adaptive Multi-Rate Narrowband codec. sampleRateHertz must be 8000.

AUDIO_ENCODING_AMR_WB

Adaptive Multi-Rate Wideband codec. sampleRateHertz must be 16000.

AUDIO_ENCODING_OGG_OPUS

Opus encoded audio frames in an Ogg container (OggOpus). sampleRateHertz must be 16000.

AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE

Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by the Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sampleRateHertz must be 16000.
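
Note that several of these encodings fix the sample rate, so audioEncoding and sampleRateHertz must be chosen as a matching pair. For instance, a narrowband AMR stream would be described as follows (illustrative values):

{
  "audioEncoding": "AUDIO_ENCODING_AMR",
  "sampleRateHertz": 8000,
  "languageCode": "en-US"
}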

TextInput

Represents the natural language text to be processed.

JSON representation
{
  "text": string,
  "languageCode": string
}
Fields
text

string

Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters.

languageCode

string

Required. The language of this conversational query. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
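
For example, a minimal TextInput with placeholder values:

{
  "text": "Book a table for two tomorrow at 7pm",
  "languageCode": "en-US"
}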
