public final class InputAudioConfig extends GeneratedMessageV3 implements InputAudioConfigOrBuilder
Instructs the speech recognizer on how to process the audio content.
Protobuf type google.cloud.dialogflow.v2beta1.InputAudioConfig
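Example (a minimal sketch, not taken from the reference itself): building an InputAudioConfig through the generated builder. The encoding, sample rate, and language code below are illustrative values chosen for the sketch, not requirements of the API.
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class InputAudioConfigExample {
  public static void main(String[] args) {
    // Messages are immutable; all configuration goes through the builder.
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16) // encoding of the audio bytes
            .setSampleRateHertz(16000)                                // must match the audio actually sent
            .setLanguageCode("en-US")                                 // language of the supplied audio
            .build();

    System.out.println(config.getLanguageCode()); // "en-US"
  }
}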
Fields
public static final int AUDIO_ENCODING_FIELD_NUMBER
Field Value
public static final int DISABLE_NO_SPEECH_RECOGNIZED_EVENT_FIELD_NUMBER
Field Value
public static final int ENABLE_WORD_INFO_FIELD_NUMBER
Field Value
public static final int LANGUAGE_CODE_FIELD_NUMBER
Field Value
public static final int MODEL_FIELD_NUMBER
Field Value
public static final int MODEL_VARIANT_FIELD_NUMBER
Field Value
public static final int PHRASE_HINTS_FIELD_NUMBER
Field Value
public static final int SAMPLE_RATE_HERTZ_FIELD_NUMBER
Field Value
public static final int SINGLE_UTTERANCE_FIELD_NUMBER
Field Value
public static final int SPEECH_CONTEXTS_FIELD_NUMBER
Field Value
Methods
public boolean equals(Object obj)
Parameter
Returns
Overrides
public AudioEncoding getAudioEncoding()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;
Returns
public int getAudioEncodingValue()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;
Returns
Type | Description
int | The enum numeric value on the wire for audioEncoding.
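A short sketch of how the two accessors relate (fragment; assumes the imports from the example above):
InputAudioConfig config =
    InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .build();

AudioEncoding encoding = config.getAudioEncoding();  // typed enum view of the field
int wireValue = config.getAudioEncodingValue();      // raw numeric value as stored on the wire
// If the message carries a value this client version does not know,
// getAudioEncoding() returns AudioEncoding.UNRECOGNIZED while
// getAudioEncodingValue() still exposes the raw number.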
public static InputAudioConfig getDefaultInstance()
Returns
public InputAudioConfig getDefaultInstanceForType()
Returns
public static final Descriptors.Descriptor getDescriptor()
Returns
public boolean getDisableNoSpeechRecognizedEvent()
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
bool disable_no_speech_recognized_event = 14;
Returns
Type | Description
boolean | The disableNoSpeechRecognizedEvent.
public boolean getEnableWordInfo()
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool enable_word_info = 13;
Returns
Type | Description
boolean | The enableWordInfo.
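Request-side sketch only; how the resulting SpeechWordInfo entries are surfaced depends on the streaming method used (fragment, same imports as the first example):
InputAudioConfig config =
    InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .setEnableWordInfo(true) // ask Speech to return per-word start/end time offsets
        .build();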
public String getLanguageCode()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3;
Returns
Type | Description
String | The languageCode.
public ByteString getLanguageCodeBytes()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3;
Returns
Type | Description
ByteString | The bytes for languageCode.
public String getModel()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string model = 7;
Returns
Type | Description
String | The model.
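A sketch of selecting a model by name (fragment; "phone_call" is one of the Cloud Speech model names and is used here purely as an illustration, since availability varies by language and agent settings):
InputAudioConfig config =
    InputAudioConfig.newBuilder()
        .setLanguageCode("en-US")
        .setModel("phone_call") // illustrative model name; leave unset to auto-select
        .build();

String model = config.getModel();                              // "phone_call"
com.google.protobuf.ByteString bytes = config.getModelBytes(); // same value as UTF-8 bytes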
public ByteString getModelBytes()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string model = 7;
Returns
public SpeechModelVariant getModelVariant()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Returns
public int getModelVariantValue()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Returns
Type | Description
int | The enum numeric value on the wire for modelVariant.
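A fragment showing the variant accessors (assumes an import of com.google.cloud.dialogflow.v2beta1.SpeechModelVariant; USE_ENHANCED is one of the SpeechModelVariant values):
InputAudioConfig config =
    InputAudioConfig.newBuilder()
        .setModelVariant(SpeechModelVariant.USE_ENHANCED) // prefer the enhanced variant when available
        .build();

SpeechModelVariant variant = config.getModelVariant();
int variantWireValue = config.getModelVariantValue(); // numeric form of the same field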
public Parser<InputAudioConfig> getParserForType()
Returns
Overrides
public String getPhraseHints(int index)
Parameter
Name | Description
index | int. The index of the element to return.
Returns
Type | Description
String | The phraseHints at the given index.
public ByteString getPhraseHintsBytes(int index)
Parameter
Name | Description
index | int. The index of the value to return.
Returns
Type | Description
ByteString | The bytes of the phraseHints at the given index.
public int getPhraseHintsCount()
Returns
Type | Description
int | The count of phraseHints.
public ProtocolStringList getPhraseHintsList()
Returns
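A fragment exercising the phrase-hint accessors; the hint strings are illustrative, and newer code typically supplies hints through speech_contexts instead:
InputAudioConfig config =
    InputAudioConfig.newBuilder()
        .addPhraseHints("order number")
        .addPhraseHints("account balance")
        .build();

for (int i = 0; i < config.getPhraseHintsCount(); i++) {
  System.out.println(config.getPhraseHints(i)); // element at the given index
}
// getPhraseHintsList() exposes the same values as an immutable ProtocolStringList.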
public int getSampleRateHertz()
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
int32 sample_rate_hertz = 2;
Returns
Type | Description
int | The sampleRateHertz.
public int getSerializedSize()
Returns
Overrides
public boolean getSingleUtterance()
If false (default), recognition does not cease until the client closes the stream.
If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool single_utterance = 8;
Returns
Type | Description
boolean | The singleUtterance.
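A request-side fragment for streaming use; the surrounding StreamingDetectIntentRequest plumbing is omitted (same imports as the first example):
InputAudioConfig streamingConfig =
    InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .setSingleUtterance(true) // stop recognizing after the first detected utterance
        .build();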
public SpeechContext getSpeechContexts(int index)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Returns
public int getSpeechContextsCount()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
public List<SpeechContext> getSpeechContextsList()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
public SpeechContextOrBuilder getSpeechContextsOrBuilder(int index)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Returns
public List<? extends SpeechContextOrBuilder> getSpeechContextsOrBuilderList()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
Type | Description
List<? extends com.google.cloud.dialogflow.v2beta1.SpeechContextOrBuilder> |
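A sketch of attaching a SpeechContext (fragment; assumes an import of com.google.cloud.dialogflow.v2beta1.SpeechContext, and the phrases and boost value are illustrative):
SpeechContext productNames =
    SpeechContext.newBuilder()
        .addPhrases("standing desk")
        .addPhrases("ergonomic chair")
        .setBoost(10.0f) // optional weighting for these phrases
        .build();

InputAudioConfig config =
    InputAudioConfig.newBuilder()
        .addSpeechContexts(productNames)
        .build();

SpeechContext first = config.getSpeechContexts(0);
int count = config.getSpeechContextsCount(); // 1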
public final UnknownFieldSet getUnknownFields()
Returns
Overrides
public int hashCode()
Returns
Overrides
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Overrides
public final boolean isInitialized()
Returns
Overrides
public static InputAudioConfig.Builder newBuilder()
Returns
public static InputAudioConfig.Builder newBuilder(InputAudioConfig prototype)
Parameter
Returns
public InputAudioConfig.Builder newBuilderForType()
Returns
protected InputAudioConfig.Builder newBuilderForType(GeneratedMessageV3.BuilderParent parent)
Parameter
Returns
Overrides
protected Object newInstance(GeneratedMessageV3.UnusedPrivateParameter unused)
Parameter
Returns
Overrides
public static InputAudioConfig parseDelimitedFrom(InputStream input)
Parameter
Returns
Exceptions
public static InputAudioConfig parseDelimitedFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Exceptions
public static InputAudioConfig parseFrom(byte[] data)
Parameter
Name | Description
data | byte[]
Returns
Exceptions
public static InputAudioConfig parseFrom(byte[] data, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Exceptions
public static InputAudioConfig parseFrom(ByteString data)
Parameter
Returns
Exceptions
public static InputAudioConfig parseFrom(ByteString data, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Exceptions
public static InputAudioConfig parseFrom(CodedInputStream input)
Parameter
Returns
Exceptions
public static InputAudioConfig parseFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Exceptions
public static InputAudioConfig parseFrom(InputStream input)
Parameter
Returns
Exceptions
public static InputAudioConfig parseFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Exceptions
public static InputAudioConfig parseFrom(ByteBuffer data)
Parameter
Returns
Exceptions
public static InputAudioConfig parseFrom(ByteBuffer data, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Exceptions
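A round-trip sketch showing how the parseFrom overloads reverse serialization (fragment; assumes an import of com.google.protobuf.InvalidProtocolBufferException):
InputAudioConfig original =
    InputAudioConfig.newBuilder().setLanguageCode("en-US").build();

byte[] wireBytes = original.toByteArray(); // serialize to the protobuf wire format

try {
  InputAudioConfig parsed = InputAudioConfig.parseFrom(wireBytes);
  // parsed.equals(original) is true: parsing restores the original message.
} catch (InvalidProtocolBufferException e) {
  // thrown when the bytes do not form a valid InputAudioConfig message
}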
public static Parser<InputAudioConfig> parser()
Returns
public InputAudioConfig.Builder toBuilder()
Returns
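A fragment showing how toBuilder() derives a modified copy without mutating the original message:
InputAudioConfig base =
    InputAudioConfig.newBuilder()
        .setLanguageCode("en-US")
        .setSampleRateHertz(16000)
        .build();

InputAudioConfig hiFi = base.toBuilder()
    .setSampleRateHertz(44100) // only the copy changes; base keeps 16000
    .build();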
public void writeTo(CodedOutputStream output)
Parameter
Overrides
Exceptions