RecognitionConfig

Provides information to the recognizer that specifies how to process the request.

JSON representation
{
  "encoding": enum (AudioEncoding),
  "sampleRateHertz": integer,
  "audioChannelCount": integer,
  "enableSeparateRecognitionPerChannel": boolean,
  "languageCode": string,
  "maxAlternatives": integer,
  "profanityFilter": boolean,
  "speechContexts": [
    object (SpeechContext)
  ],
  "enableWordTimeOffsets": boolean,
  "enableAutomaticPunctuation": boolean,
  "diarizationConfig": {
    object (SpeakerDiarizationConfig)
  },
  "metadata": {
    object (RecognitionMetadata)
  },
  "model": string,
  "useEnhanced": boolean
}

encoding
enum (AudioEncoding)

Encoding of audio data sent in all RecognitionAudio messages. This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see AudioEncoding.



sampleRateHertz
integer

Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are 8000-48000; 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see AudioEncoding.
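For illustration, a minimal config for raw PCM audio might look like the following sketch (the values shown are typical choices for a 16 kHz mono recording, not defaults):

  {
    "encoding": "LINEAR16",
    "sampleRateHertz": 16000,
    "languageCode": "en-US"
  }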



audioChannelCount
integer

The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16 and FLAC are 1-8. Valid values for OGG_OPUS are 1-254. The only valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is 1. If 0 or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel, set enableSeparateRecognitionPerChannel to true.



enableSeparateRecognitionPerChannel
boolean

This must be explicitly set to true, and audioChannelCount must be greater than 1, for each channel to be recognized separately. The recognition result will contain a channelTag field stating which channel that result belongs to. If this is not set to true, only the first channel is recognized. The request is billed cumulatively for all channels recognized: audioChannelCount multiplied by the length of the audio.
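For example, a config sketch for a stereo LINEAR16 file where each channel should be transcribed independently (billing would cover both channels, as noted above):

  {
    "encoding": "LINEAR16",
    "sampleRateHertz": 16000,
    "languageCode": "en-US",
    "audioChannelCount": 2,
    "enableSeparateRecognitionPerChannel": true
  }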



languageCode
string

Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.



maxAlternatives
integer

Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than maxAlternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one alternative; if omitted, a maximum of one is returned.



profanityFilter
boolean

If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.


speechContexts[]
object (SpeechContext)

Array of SpeechContext. A means to provide context to assist the speech recognition. For more information, see speech adaptation.



enableWordTimeOffsets
boolean

If true, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false, no word-level time offset information is returned. The default is false.



enableAutomaticPunctuation
boolean

If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages; setting this for requests in other languages has no effect. The default 'false' value does not add punctuation to result hypotheses.
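As a sketch, a config that requests both word-level time offsets and automatic punctuation (assuming a language, such as en-US, for which punctuation is available):

  {
    "encoding": "FLAC",
    "languageCode": "en-US",
    "enableWordTimeOffsets": true,
    "enableAutomaticPunctuation": true
  }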


diarizationConfig
object (SpeakerDiarizationConfig)

Config to enable speaker diarization and set additional parameters to make diarization better suited to your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done to improve our speaker tags, as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results are provided only in the top alternative of the FINAL SpeechRecognitionResult.


metadata
object (RecognitionMetadata)

Metadata regarding this request.



model
string

Which model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, we auto-select a model based on the parameters in the RecognitionConfig.

Model Description

command_and_search Best for short queries such as voice commands or voice search.

phone_call Best for audio that originated from a phone call (typically recorded at an 8khz sampling rate).

video Best for audio that originated from video or that includes multiple speakers. Ideally, the audio is recorded at a 16khz or greater sampling rate. This is a premium model that costs more than the standard rate.

default Best for audio that is not one of the specific audio models, for example, long-form audio. Ideally, the audio is high-fidelity, recorded at a 16khz or greater sampling rate.



useEnhanced
boolean

Set to true to use an enhanced model for speech recognition. If useEnhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio.

If useEnhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
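For instance, a sketch of a config for an 8 kHz phone recording that requests the phone_call model with an enhanced variant where one exists:

  {
    "encoding": "MULAW",
    "sampleRateHertz": 8000,
    "languageCode": "en-US",
    "model": "phone_call",
    "useEnhanced": true
  }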


AudioEncoding

The encoding of the audio data sent in the request.

All encodings support only 1 channel (mono) audio, unless the audioChannelCount and enableSeparateRecognitionPerChannel fields are set.

For best results, the audio source should be captured and transmitted using a lossless encoding (FLAC or LINEAR16). The accuracy of the speech recognition can be reduced if lossy codecs are used to capture or transmit audio, particularly if background noise is present. Lossy codecs include MULAW, AMR, AMR_WB, OGG_OPUS, SPEEX_WITH_HEADER_BYTE, and MP3.

The FLAC and WAV audio file formats include a header that describes the included audio content. You can request recognition for WAV files that contain either LINEAR16 or MULAW encoded audio. If you send FLAC or WAV audio file format in your request, you do not need to specify an AudioEncoding; the audio encoding format is determined from the file header. If you specify an AudioEncoding when you send FLAC or WAV audio, the encoding configuration must match the encoding described in the audio header; otherwise the request returns a google.rpc.Code.INVALID_ARGUMENT error code.
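Because the header already describes the audio, a request for a FLAC or WAV file can omit both encoding and sampleRateHertz. Below is a sketch of a full recognize request body, assuming a hypothetical Cloud Storage URI:

  {
    "config": {
      "languageCode": "en-US"
    },
    "audio": {
      "uri": "gs://my-bucket/recording.flac"
    }
  }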

ENCODING_UNSPECIFIED Not specified.
LINEAR16 Uncompressed 16-bit signed little-endian samples (Linear PCM).
FLAC FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless (recognition is therefore not compromised) and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in STREAMINFO are supported.
MULAW 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
AMR Adaptive Multi-Rate Narrowband codec. sampleRateHertz must be 8000.
AMR_WB Adaptive Multi-Rate Wideband codec. sampleRateHertz must be 16000.
OGG_OPUS Opus encoded audio frames in an Ogg container (OggOpus). sampleRateHertz must be one of 8000, 12000, 16000, 24000, or 48000.
SPEEX_WITH_HEADER_BYTE Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by the Cloud Speech API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sampleRateHertz must be 16000.


SpeechContext

Provides "hints" to the speech recognizer to favor specific words and phrases in the results.

JSON representation
{
  "phrases": [
    string
  ]
}


phrases[]
string

A list of strings containing word and phrase "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits.

List items can also be set to classes for groups of words that represent common concepts that occur in natural language. For example, rather than providing phrase hints for every month of the year, using the $MONTH class improves the likelihood of correctly transcribing audio that includes months.
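As an illustrative sketch, phrase hints and a class token can be combined in the speechContexts field of the config (the phrases shown are hypothetical, not recommendations):

  "speechContexts": [
    {
      "phrases": ["weather forecast for", "$MONTH"]
    }
  ]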


SpeakerDiarizationConfig

Config to enable speaker diarization.

JSON representation
{
  "enableSpeakerDiarization": boolean,
  "minSpeakerCount": integer,
  "maxSpeakerCount": integer,
  "speakerTag": integer
}


enableSpeakerDiarization
boolean

If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result, using a speakerTag provided in the WordInfo.



minSpeakerCount
integer

Minimum number of speakers in the conversation. Setting a range gives the system flexibility to automatically determine the correct number of speakers. If not set, the default value is 2.



maxSpeakerCount
integer

Maximum number of speakers in the conversation. Setting a range gives the system flexibility to automatically determine the correct number of speakers. If not set, the default value is 6.



speakerTag
integer

Output only. Unused.
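Putting these fields together, a sketch of a diarization config fragment for a conversation expected to contain between two and four speakers (the bounds are illustrative):

  "diarizationConfig": {
    "enableSpeakerDiarization": true,
    "minSpeakerCount": 2,
    "maxSpeakerCount": 4
  }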


RecognitionMetadata

Description of audio data to be recognized.

JSON representation
{
  "interactionType": enum (InteractionType),
  "industryNaicsCodeOfAudio": integer,
  "microphoneDistance": enum (MicrophoneDistance),
  "originalMediaType": enum (OriginalMediaType),
  "recordingDeviceType": enum (RecordingDeviceType),
  "recordingDeviceName": string,
  "originalMimeType": string,
  "audioTopic": string
}

interactionType
enum (InteractionType)

The use case most closely describing the audio content to be recognized.


industryNaicsCodeOfAudio
integer (uint32 format)

The industry vertical to which this speech recognition request most closely applies. This is most indicative of the topics contained in the audio. Use the 6-digit NAICS code to identify the industry vertical.


microphoneDistance
enum (MicrophoneDistance)

The audio type that most closely describes the audio being recognized.


originalMediaType
enum (OriginalMediaType)

The original media the speech was recorded on.


recordingDeviceType
enum (RecordingDeviceType)

The type of device the speech was recorded with.



recordingDeviceName
string

The device used to make the recording. Examples: 'Nexus 5X', 'Polycom SoundStation IP 6000', 'POTS', 'VoIP', or 'Cardioid Microphone'.



originalMimeType
string

MIME type of the original audio file. For example: audio/m4a, audio/x-alaw-basic, audio/mp3, audio/3gpp. A list of possible audio MIME types is maintained in the IANA media types registry.



audioTopic
string

Description of the content. For example: "Recordings of federal supreme court hearings from 2012".
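As a sketch, a metadata fragment for a recorded customer phone call might be filled in as follows (the device name, MIME type, and topic are illustrative values; the enum values are defined in the sections below):

  "metadata": {
    "interactionType": "PHONE_CALL",
    "microphoneDistance": "NEARFIELD",
    "originalMediaType": "AUDIO",
    "recordingDeviceType": "PHONE_LINE",
    "recordingDeviceName": "POTS",
    "originalMimeType": "audio/x-alaw-basic",
    "audioTopic": "customer support calls"
  }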


InteractionType

Use case categories that the audio recognition request can be described by.

INTERACTION_TYPE_UNSPECIFIED Use case is either unknown or is something other than one of the other values below.
DISCUSSION Multiple people in a conversation or discussion. For example, in a meeting with two or more people actively participating. Typically all the primary people speaking would be in the same room (if not, see PHONE_CALL).
PRESENTATION One or more persons lecturing or presenting to others, mostly uninterrupted.
PHONE_CALL A phone-call or video-conference in which two or more people, who are not in the same room, are actively participating.
VOICEMAIL A recorded message intended for another person to listen to.
PROFESSIONALLY_PRODUCED Professionally produced audio (e.g. a TV show or podcast).
VOICE_COMMAND Transcribe voice commands, such as for controlling a device.
DICTATION Transcribe speech to text to create a written document, such as a text-message, email or report.


MicrophoneDistance

Enumerates the types of capture settings describing an audio file.

MICROPHONE_DISTANCE_UNSPECIFIED Audio type is not known.
NEARFIELD The audio was captured from a closely placed microphone, e.g. a phone, dictaphone, or handheld microphone, generally with the speaker within 1 meter of the microphone.
MIDFIELD The speaker is within 3 meters of the microphone.
FARFIELD The speaker is more than 3 meters away from the microphone.


OriginalMediaType

The original media the speech was recorded on.

ORIGINAL_MEDIA_TYPE_UNSPECIFIED Unknown original media type.
AUDIO The speech data is an audio recording.
VIDEO The speech data was originally recorded on a video.


RecordingDeviceType

The type of device the speech was recorded with.

RECORDING_DEVICE_TYPE_UNSPECIFIED The recording device is unknown.
SMARTPHONE Speech was recorded on a smartphone.
PC Speech was recorded using a personal computer or tablet.
PHONE_LINE Speech was recorded over a phone line.
VEHICLE Speech was recorded in a vehicle.
OTHER_OUTDOOR_DEVICE Speech was recorded outdoors.
OTHER_INDOOR_DEVICE Speech was recorded indoors.