`InputAudioConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)`
Instructs the speech recognizer how to process the audio content.
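
A minimal usage sketch (not part of the generated reference): constructing an `InputAudioConfig` and passing it to `SessionsClient.detect_intent` via a `QueryInput`. The project ID, session ID, file name, and field values below are placeholders chosen for illustration.

```python
from google.cloud import dialogflow_v2

# Describe the audio we are about to send: 16 kHz linear PCM, US English,
# with per-word timing info requested (see enable_word_info below).
audio_config = dialogflow_v2.InputAudioConfig(
    audio_encoding=dialogflow_v2.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_word_info=True,
)

client = dialogflow_v2.SessionsClient()
session = client.session_path("my-project", "my-session")  # placeholders

with open("utterance.wav", "rb") as f:  # placeholder audio file
    audio_bytes = f.read()

response = client.detect_intent(
    request={
        "session": session,
        "query_input": dialogflow_v2.QueryInput(audio_config=audio_config),
        "input_audio": audio_bytes,
    }
)
print(response.query_result.fulfillment_text)
```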
Attributes

| Name | Description |
| --- | --- |
| `audio_encoding` | `google.cloud.dialogflow_v2.types.AudioEncoding` Required. Audio encoding of the audio content to process. |
| `sample_rate_hertz` | `int` Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation for more details. |
| `language_code` | `str` Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. |
| `enable_word_info` | `bool` If `true`, Dialogflow returns `SpeechWordInfo` in `StreamingRecognitionResult` with information about the recognized speech words, e.g. start and end time offsets. If `false` or unspecified, Speech doesn't return any word-level information. |
| `phrase_hints` | `MutableSequence[str]` A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. |
| `speech_contexts` | `MutableSequence[google.cloud.dialogflow_v2.types.SpeechContext]` Context information to assist speech recognition. See the Cloud Speech documentation for more details. |
| `model` | `str` Optional. Which Speech model to select for the given request. For more information, see Speech models. |
| `model_variant` | `google.cloud.dialogflow_v2.types.SpeechModelVariant` Which variant of the Speech `model` to use. |
| `single_utterance` | `bool` If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer detects a single spoken utterance in the input audio, and recognition ceases when it detects that the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: this setting is relevant only for streaming methods (see the sketch after this table). Note: when specified, `InputAudioConfig.single_utterance` takes precedence over `StreamingDetectIntentRequest.single_utterance`. |
| `disable_no_speech_recognized_event` | `bool` Only used in `Participants.AnalyzeContent` and `Participants.StreamingAnalyzeContent`. If `false` and recognition doesn't return any result, triggers a `NO_SPEECH_RECOGNIZED` event to the Dialogflow agent. |
| `enable_automatic_punctuation` | `bool` Enables the automatic punctuation option at the speech backend. |
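
A hedged sketch of the streaming flow that `single_utterance` describes, assuming the standard `streaming_detect_intent` pattern: the first `StreamingDetectIntentRequest` carries only the session and audio config, and subsequent requests carry raw audio chunks. The session, file name, and chunk size are placeholders.

```python
from google.cloud import dialogflow_v2


def audio_chunks(path, chunk_size=4096):
    """Yield raw audio bytes from a file in fixed-size chunks."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk


def request_generator(session, chunks):
    audio_config = dialogflow_v2.InputAudioConfig(
        audio_encoding=dialogflow_v2.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
        # Takes precedence over StreamingDetectIntentRequest.single_utterance;
        # the server ends recognition after one utterance, so the client
        # should then close the stream and open a new one as needed.
        single_utterance=True,
    )
    # First request: session and configuration only, no audio.
    yield dialogflow_v2.StreamingDetectIntentRequest(
        session=session,
        query_input=dialogflow_v2.QueryInput(audio_config=audio_config),
    )
    # Subsequent requests: raw audio bytes.
    for chunk in chunks:
        yield dialogflow_v2.StreamingDetectIntentRequest(input_audio=chunk)


client = dialogflow_v2.SessionsClient()
session = client.session_path("my-project", "my-session")  # placeholders
requests = request_generator(session, audio_chunks("utterance.wav"))

for response in client.streaming_detect_intent(requests=requests):
    # Interim and final transcripts arrive as recognition results;
    # the matched intent arrives in query_result at the end.
    if response.recognition_result.transcript:
        print("Transcript:", response.recognition_result.transcript)
    if response.query_result.fulfillment_text:
        print("Reply:", response.query_result.fulfillment_text)
```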