Version 4.59.0 (latest)
public static final class InputAudioConfig.Builder extends GeneratedMessageV3.Builder<InputAudioConfig.Builder> implements InputAudioConfigOrBuilder
Instructs the speech recognizer on how to process the audio content.
Protobuf type google.cloud.dialogflow.v2beta1.InputAudioConfig
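A minimal usage sketch (assumes the google-cloud-dialogflow client library is on the classpath; the encoding, sample rate, and language below are illustrative values, not defaults):

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class InputAudioConfigExample {
  public static void main(String[] args) {
    // Build an InputAudioConfig with the required fields:
    // encoding, sample rate, and language of the supplied audio.
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(16000) // must match the audio actually sent
            .setLanguageCode("en-US")
            .build();
    System.out.println(config.getLanguageCode());
  }
}
```

The builder is obtained via `InputAudioConfig.newBuilder()`; `build()` validates nothing beyond proto defaults, so required fields left unset will surface as API errors at request time.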
Inheritance

Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > InputAudioConfig.Builder

Implements

InputAudioConfigOrBuilder

Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Type | Description |
Descriptor |
Methods
addAllPhraseHints(Iterable<String> values)
public InputAudioConfig.Builder addAllPhraseHints(Iterable<String> values)
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Name | Description |
values | Iterable<String> The phraseHints to add. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
addAllSpeechContexts(Iterable<? extends SpeechContext> values)
public InputAudioConfig.Builder addAllSpeechContexts(Iterable<? extends SpeechContext> values)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
values | Iterable<? extends com.google.cloud.dialogflow.v2beta1.SpeechContext> |
Type | Description |
InputAudioConfig.Builder |
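Since phrase_hints is deprecated, the same hints can be supplied through speech_contexts. A sketch (the phrase strings and boost value are illustrative):

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechContext;

public class SpeechContextExample {
  static InputAudioConfig.Builder withHints() {
    // One SpeechContext carrying the phrases that used to go in phrase_hints.
    SpeechContext context =
        SpeechContext.newBuilder()
            .addPhrases("order status")
            .addPhrases("tracking number")
            .setBoost(10.0f) // optional: strength of the hint
            .build();
    return InputAudioConfig.newBuilder().addSpeechContexts(context);
  }
}
```

Note that if both fields are set, Dialogflow folds the phrase_hints into a single additional SpeechContext, so mixing the two styles is redundant.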
addPhraseHints(String value)
public InputAudioConfig.Builder addPhraseHints(String value)
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Name | Description |
value | String The phraseHints to add. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
addPhraseHintsBytes(ByteString value)
public InputAudioConfig.Builder addPhraseHintsBytes(ByteString value)
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Name | Description |
value | ByteString The bytes of the phraseHints to add. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public InputAudioConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Name | Description |
field | FieldDescriptor |
value | Object |
Type | Description |
InputAudioConfig.Builder |
addSpeechContexts(SpeechContext value)
public InputAudioConfig.Builder addSpeechContexts(SpeechContext value)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
value | SpeechContext |
Type | Description |
InputAudioConfig.Builder |
addSpeechContexts(SpeechContext.Builder builderForValue)
public InputAudioConfig.Builder addSpeechContexts(SpeechContext.Builder builderForValue)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
builderForValue | SpeechContext.Builder |
Type | Description |
InputAudioConfig.Builder |
addSpeechContexts(int index, SpeechContext value)
public InputAudioConfig.Builder addSpeechContexts(int index, SpeechContext value)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
value | SpeechContext |
Type | Description |
InputAudioConfig.Builder |
addSpeechContexts(int index, SpeechContext.Builder builderForValue)
public InputAudioConfig.Builder addSpeechContexts(int index, SpeechContext.Builder builderForValue)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
builderForValue | SpeechContext.Builder |
Type | Description |
InputAudioConfig.Builder |
addSpeechContextsBuilder()
public SpeechContext.Builder addSpeechContextsBuilder()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Type | Description |
SpeechContext.Builder |
addSpeechContextsBuilder(int index)
public SpeechContext.Builder addSpeechContextsBuilder(int index)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
Type | Description |
SpeechContext.Builder |
build()
public InputAudioConfig build()
Type | Description |
InputAudioConfig |
buildPartial()
public InputAudioConfig buildPartial()
Type | Description |
InputAudioConfig |
clear()
public InputAudioConfig.Builder clear()
Type | Description |
InputAudioConfig.Builder |
clearAudioEncoding()
public InputAudioConfig.Builder clearAudioEncoding()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearDisableNoSpeechRecognizedEvent()
public InputAudioConfig.Builder clearDisableNoSpeechRecognizedEvent()
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
bool disable_no_speech_recognized_event = 14;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearEnableWordInfo()
public InputAudioConfig.Builder clearEnableWordInfo()
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool enable_word_info = 13;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearField(Descriptors.FieldDescriptor field)
public InputAudioConfig.Builder clearField(Descriptors.FieldDescriptor field)
Name | Description |
field | FieldDescriptor |
Type | Description |
InputAudioConfig.Builder |
clearLanguageCode()
public InputAudioConfig.Builder clearLanguageCode()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearModel()
public InputAudioConfig.Builder clearModel()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string model = 7;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearModelVariant()
public InputAudioConfig.Builder clearModelVariant()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearOneof(Descriptors.OneofDescriptor oneof)
public InputAudioConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Name | Description |
oneof | OneofDescriptor |
Type | Description |
InputAudioConfig.Builder |
clearPhraseHints()
public InputAudioConfig.Builder clearPhraseHints()
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearSampleRateHertz()
public InputAudioConfig.Builder clearSampleRateHertz()
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
int32 sample_rate_hertz = 2;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearSingleUtterance()
public InputAudioConfig.Builder clearSingleUtterance()
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool single_utterance = 8;
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
clearSpeechContexts()
public InputAudioConfig.Builder clearSpeechContexts()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Type | Description |
InputAudioConfig.Builder |
clone()
public InputAudioConfig.Builder clone()
Type | Description |
InputAudioConfig.Builder |
getAudioEncoding()
public AudioEncoding getAudioEncoding()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;
Type | Description |
AudioEncoding | The audioEncoding. |
getAudioEncodingValue()
public int getAudioEncodingValue()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;
Type | Description |
int | The enum numeric value on the wire for audioEncoding. |
getDefaultInstanceForType()
public InputAudioConfig getDefaultInstanceForType()
Type | Description |
InputAudioConfig |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Type | Description |
Descriptor |
getDisableNoSpeechRecognizedEvent()
public boolean getDisableNoSpeechRecognizedEvent()
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
bool disable_no_speech_recognized_event = 14;
Type | Description |
boolean | The disableNoSpeechRecognizedEvent. |
getEnableWordInfo()
public boolean getEnableWordInfo()
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool enable_word_info = 13;
Type | Description |
boolean | The enableWordInfo. |
getLanguageCode()
public String getLanguageCode()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3;
Type | Description |
String | The languageCode. |
getLanguageCodeBytes()
public ByteString getLanguageCodeBytes()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3;
Type | Description |
ByteString | The bytes for languageCode. |
getModel()
public String getModel()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string model = 7;
Type | Description |
String | The model. |
getModelBytes()
public ByteString getModelBytes()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string model = 7;
Type | Description |
ByteString | The bytes for model. |
getModelVariant()
public SpeechModelVariant getModelVariant()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Type | Description |
SpeechModelVariant | The modelVariant. |
getModelVariantValue()
public int getModelVariantValue()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Type | Description |
int | The enum numeric value on the wire for modelVariant. |
getPhraseHints(int index)
public String getPhraseHints(int index)
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Name | Description |
index | int The index of the element to return. |
Type | Description |
String | The phraseHints at the given index. |
getPhraseHintsBytes(int index)
public ByteString getPhraseHintsBytes(int index)
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Name | Description |
index | int The index of the value to return. |
Type | Description |
ByteString | The bytes of the phraseHints at the given index. |
getPhraseHintsCount()
public int getPhraseHintsCount()
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Type | Description |
int | The count of phraseHints. |
getPhraseHintsList()
public ProtocolStringList getPhraseHintsList()
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Type | Description |
ProtocolStringList | A list containing the phraseHints. |
getSampleRateHertz()
public int getSampleRateHertz()
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
int32 sample_rate_hertz = 2;
Type | Description |
int | The sampleRateHertz. |
getSingleUtterance()
public boolean getSingleUtterance()
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool single_utterance = 8;
Type | Description |
boolean | The singleUtterance. |
getSpeechContexts(int index)
public SpeechContext getSpeechContexts(int index)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
Type | Description |
SpeechContext |
getSpeechContextsBuilder(int index)
public SpeechContext.Builder getSpeechContextsBuilder(int index)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
Type | Description |
SpeechContext.Builder |
getSpeechContextsBuilderList()
public List<SpeechContext.Builder> getSpeechContextsBuilderList()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Type | Description |
List<Builder> |
getSpeechContextsCount()
public int getSpeechContextsCount()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Type | Description |
int |
getSpeechContextsList()
public List<SpeechContext> getSpeechContextsList()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Type | Description |
List<SpeechContext> |
getSpeechContextsOrBuilder(int index)
public SpeechContextOrBuilder getSpeechContextsOrBuilder(int index)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
Type | Description |
SpeechContextOrBuilder |
getSpeechContextsOrBuilderList()
public List<? extends SpeechContextOrBuilder> getSpeechContextsOrBuilderList()
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Type | Description |
List<? extends com.google.cloud.dialogflow.v2beta1.SpeechContextOrBuilder> |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Type | Description |
FieldAccessorTable |
isInitialized()
public final boolean isInitialized()
Type | Description |
boolean |
mergeFrom(InputAudioConfig other)
public InputAudioConfig.Builder mergeFrom(InputAudioConfig other)
Name | Description |
other | InputAudioConfig |
Type | Description |
InputAudioConfig.Builder |
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public InputAudioConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Name | Description |
input | CodedInputStream |
extensionRegistry | ExtensionRegistryLite |
Type | Description |
InputAudioConfig.Builder |
Exceptions
Type | Description |
IOException |
mergeFrom(Message other)
public InputAudioConfig.Builder mergeFrom(Message other)
Name | Description |
other | Message |
Type | Description |
InputAudioConfig.Builder |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final InputAudioConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Name | Description |
unknownFields | UnknownFieldSet |
Type | Description |
InputAudioConfig.Builder |
removeSpeechContexts(int index)
public InputAudioConfig.Builder removeSpeechContexts(int index)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
Type | Description |
InputAudioConfig.Builder |
setAudioEncoding(AudioEncoding value)
public InputAudioConfig.Builder setAudioEncoding(AudioEncoding value)
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;
Name | Description |
value | AudioEncoding The audioEncoding to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setAudioEncodingValue(int value)
public InputAudioConfig.Builder setAudioEncodingValue(int value)
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;
Name | Description |
value | int The enum numeric value on the wire for audioEncoding to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setDisableNoSpeechRecognizedEvent(boolean value)
public InputAudioConfig.Builder setDisableNoSpeechRecognizedEvent(boolean value)
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
bool disable_no_speech_recognized_event = 14;
Name | Description |
value | boolean The disableNoSpeechRecognizedEvent to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setEnableWordInfo(boolean value)
public InputAudioConfig.Builder setEnableWordInfo(boolean value)
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool enable_word_info = 13;
Name | Description |
value | boolean The enableWordInfo to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setField(Descriptors.FieldDescriptor field, Object value)
public InputAudioConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
Name | Description |
field | FieldDescriptor |
value | Object |
Type | Description |
InputAudioConfig.Builder |
setLanguageCode(String value)
public InputAudioConfig.Builder setLanguageCode(String value)
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3;
Name | Description |
value | String The languageCode to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setLanguageCodeBytes(ByteString value)
public InputAudioConfig.Builder setLanguageCodeBytes(ByteString value)
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3;
Name | Description |
value | ByteString The bytes for languageCode to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setModel(String value)
public InputAudioConfig.Builder setModel(String value)
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string model = 7;
Name | Description |
value | String The model to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setModelBytes(ByteString value)
public InputAudioConfig.Builder setModelBytes(ByteString value)
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string model = 7;
Name | Description |
value | ByteString The bytes for model to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setModelVariant(SpeechModelVariant value)
public InputAudioConfig.Builder setModelVariant(SpeechModelVariant value)
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Name | Description |
value | SpeechModelVariant The modelVariant to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
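A sketch combining model and variant selection. The "phone_call" model name comes from the Cloud Speech model catalog and is used here as an illustrative choice; verify availability for your language:

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechModelVariant;

public class ModelSelectionExample {
  static InputAudioConfig.Builder phoneCallModel() {
    // Prefer an enhanced variant of the phone_call model; if no enhanced
    // version exists for the language, recognition falls back to the
    // standard version of the specified model.
    return InputAudioConfig.newBuilder()
        .setModel("phone_call")
        .setModelVariant(SpeechModelVariant.USE_ENHANCED);
  }
}
```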
setModelVariantValue(int value)
public InputAudioConfig.Builder setModelVariantValue(int value)
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Name | Description |
value | int The enum numeric value on the wire for modelVariant to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setPhraseHints(int index, String value)
public InputAudioConfig.Builder setPhraseHints(int index, String value)
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Name | Description |
index | int The index to set the value at. |
value | String The phraseHints to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public InputAudioConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Name | Description |
field | FieldDescriptor |
index | int |
value | Object |
Type | Description |
InputAudioConfig.Builder |
setSampleRateHertz(int value)
public InputAudioConfig.Builder setSampleRateHertz(int value)
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
int32 sample_rate_hertz = 2;
Name | Description |
value | int The sampleRateHertz to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
setSingleUtterance(boolean value)
public InputAudioConfig.Builder setSingleUtterance(boolean value)
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool single_utterance = 8;
Name | Description |
value | boolean The singleUtterance to set. |
Type | Description |
InputAudioConfig.Builder | This builder for chaining. |
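A sketch of a config intended for a streaming detect-intent call with single-utterance endpointing (field values are illustrative):

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class SingleUtteranceConfig {
  // Relevant only for streaming methods: with singleUtterance the
  // recognizer stops after one spoken utterance, so the client should
  // close the stream once the detected intent arrives and open a new
  // stream for the next turn.
  static InputAudioConfig streamingConfig() {
    return InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .setSingleUtterance(true)
        .build();
  }
}
```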
setSpeechContexts(int index, SpeechContext value)
public InputAudioConfig.Builder setSpeechContexts(int index, SpeechContext value)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
value | SpeechContext |
Type | Description |
InputAudioConfig.Builder |
setSpeechContexts(int index, SpeechContext.Builder builderForValue)
public InputAudioConfig.Builder setSpeechContexts(int index, SpeechContext.Builder builderForValue)
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Name | Description |
index | int |
builderForValue | SpeechContext.Builder |
Type | Description |
InputAudioConfig.Builder |
setUnknownFields(UnknownFieldSet unknownFields)
public final InputAudioConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Name | Description |
unknownFields | UnknownFieldSet |
Type | Description |
InputAudioConfig.Builder |