Provides information to the recognizer that specifies how to process the request.
Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1.RecognitionConfig.AudioEncoding].
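For illustration, a minimal sketch using the 2.x google-cloud-speech Python client; the LINEAR16 encoding, the gs:// URI, and the language code are placeholder assumptions::

    from google.cloud import speech

    client = speech.SpeechClient()

    # 16000 Hz is optimal, but do not re-sample audio recorded at another rate.
    # For FLAC and WAV files the field may be omitted and read from the header.
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,  # required for LINEAR16
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.raw")  # placeholder URI

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)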
This must be set to true explicitly, and audio_channel_count must be greater than 1, to get each channel recognized separately. The recognition result will contain a channel_tag field indicating which channel the result belongs to. If this is not set to true, only the first channel is recognized. The request is billed cumulatively for all channels recognized: audio_channel_count multiplied by the length of the audio.
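A sketch of per-channel recognition with the same client; the two-channel source and its URI are assumptions::

    from google.cloud import speech

    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        audio_channel_count=2,                         # stereo source (assumption)
        enable_separate_recognition_per_channel=True,  # must be set explicitly
    )
    audio = speech.RecognitionAudio(uri="gs://my-bucket/stereo.raw")  # placeholder URI

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        # channel_tag indicates which channel produced this result
        print(result.channel_tag, result.alternatives[0].transcript)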
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one; if omitted, a maximum of one is returned.
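A hedged sketch of requesting multiple hypotheses; the FLAC file and the value 5 are illustrative::

    from google.cloud import speech

    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,  # sample rate read from the FLAC header
        language_code="en-US",
        max_alternatives=5,  # server may return fewer
    )
    audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.flac")  # placeholder URI

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        # The first alternative is the most probable.
        for alternative in result.alternatives:
            print(alternative.confidence, alternative.transcript)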
Array of SpeechContext. A means to provide context to assist the speech recognition. For more information, see `speech adaptation <https://cloud.google.com/speech-to-text/docs/context-strength>`__.
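A minimal sketch of passing speech contexts; the phrases are illustrative assumptions::

    from google.cloud import speech

    # Phrases bias recognition toward terms expected in the audio.
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        speech_contexts=[
            speech.SpeechContext(phrases=["weather", "forecast", "humidity"]),
        ],
    )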
If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses. Note: This is currently offered as an experimental service, complimentary to all users. In the future this may be exclusively available as a premium feature.
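For example (sketch; en-US is assumed to be one of the supported languages for this feature)::

    from google.cloud import speech

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_automatic_punctuation=True,  # defaults to False
    )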
Metadata regarding this request.
Set to true to use an enhanced model for speech recognition. If use_enhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio. If use_enhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
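A sketch combining use_enhanced with an explicit model; "phone_call" and the 8000 Hz rate are used here only as illustrative assumptions::

    from google.cloud import speech

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=8000,   # typical telephony rate (assumption)
        language_code="en-US",
        use_enhanced=True,
        model="phone_call",       # if model is unset, an appropriate enhanced model is chosen
    )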