Class InputAudioConfig.Builder (4.5.11)

public static final class InputAudioConfig.Builder extends GeneratedMessageV3.Builder<InputAudioConfig.Builder> implements InputAudioConfigOrBuilder

Instructs the speech recognizer on how to process the audio content.

Protobuf type google.cloud.dialogflow.v2beta1.InputAudioConfig
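A minimal usage sketch (not part of the generated reference itself): obtain a builder via InputAudioConfig.newBuilder(), set the required fields documented below, and call build(). The encoding constant, sample rate, and language code are illustrative values.

import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class InputAudioConfigExample {
  public static void main(String[] args) {
    // Illustrative values: 16 kHz linear PCM audio in US English.
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .build();
    System.out.println(config);
  }
}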

Static Methods

getDescriptor()

public static final Descriptors.Descriptor getDescriptor()
Returns
Descriptor

Methods

addAllPhraseHints(Iterable<String> values)

public InputAudioConfig.Builder addAllPhraseHints(Iterable<String> values)

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Parameter
values (Iterable<String>): The phraseHints to add.

Returns
InputAudioConfig.Builder: This builder for chaining.
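Because phrase_hints is deprecated in favor of speech_contexts, a hedged migration sketch is to carry the former hints in a single SpeechContext instead. This assumes SpeechContext exposes the usual generated addPhrases accessor for its repeated phrases field; the phrases themselves are made up.

import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechContext;

public class PhraseHintsMigrationExample {
  public static void main(String[] args) {
    // Old, deprecated style:
    // builder.addAllPhraseHints(java.util.Arrays.asList("order status", "track my package"));

    // Replacement: carry the same hints in one SpeechContext.
    InputAudioConfig config = InputAudioConfig.newBuilder()
        .addSpeechContexts(
            SpeechContext.newBuilder()
                .addPhrases("order status")
                .addPhrases("track my package")
                .build())
        .build();
  }
}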

addAllSpeechContexts(Iterable<? extends SpeechContext> values)

public InputAudioConfig.Builder addAllSpeechContexts(Iterable<? extends SpeechContext> values)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
values (Iterable<? extends com.google.cloud.dialogflow.v2beta1.SpeechContext>)

Returns
InputAudioConfig.Builder
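A short sketch showing addAllSpeechContexts when the contexts already exist as a collection; the phrases are illustrative.

import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechContext;
import java.util.Arrays;
import java.util.List;

public class AddAllSpeechContextsExample {
  public static void main(String[] args) {
    // One context per domain vocabulary, for example.
    List<SpeechContext> contexts = Arrays.asList(
        SpeechContext.newBuilder().addPhrases("premium subscription").build(),
        SpeechContext.newBuilder().addPhrases("refund request").build());

    InputAudioConfig config = InputAudioConfig.newBuilder()
        .addAllSpeechContexts(contexts)
        .build();
  }
}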

addPhraseHints(String value)

public InputAudioConfig.Builder addPhraseHints(String value)

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Parameter
value (String): The phraseHints to add.

Returns
InputAudioConfig.Builder: This builder for chaining.

addPhraseHintsBytes(ByteString value)

public InputAudioConfig.Builder addPhraseHintsBytes(ByteString value)

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Parameter
value (ByteString): The bytes of the phraseHints to add.

Returns
InputAudioConfig.Builder: This builder for chaining.

addRepeatedField(Descriptors.FieldDescriptor field, Object value)

public InputAudioConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
field (FieldDescriptor)
value (Object)

Returns
InputAudioConfig.Builder

Overrides

addSpeechContexts(SpeechContext value)

public InputAudioConfig.Builder addSpeechContexts(SpeechContext value)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
value (SpeechContext)

Returns
InputAudioConfig.Builder

addSpeechContexts(SpeechContext.Builder builderForValue)

public InputAudioConfig.Builder addSpeechContexts(SpeechContext.Builder builderForValue)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
builderForValue (SpeechContext.Builder)

Returns
InputAudioConfig.Builder

addSpeechContexts(int index, SpeechContext value)

public InputAudioConfig.Builder addSpeechContexts(int index, SpeechContext value)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameters
index (int)
value (SpeechContext)

Returns
InputAudioConfig.Builder

addSpeechContexts(int index, SpeechContext.Builder builderForValue)

public InputAudioConfig.Builder addSpeechContexts(int index, SpeechContext.Builder builderForValue)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameters
index (int)
builderForValue (SpeechContext.Builder)

Returns
InputAudioConfig.Builder

addSpeechContextsBuilder()

public SpeechContext.Builder addSpeechContextsBuilder()

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Returns
SpeechContext.Builder

addSpeechContextsBuilder(int index)

public SpeechContext.Builder addSpeechContextsBuilder(int index)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
index (int)

Returns
SpeechContext.Builder
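The addSpeechContextsBuilder variants return the nested SpeechContext.Builder so the new element can be populated in place. A sketch, assuming the usual generated-message behavior that the parent build() picks up the nested builder's state:

import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechContext;

public class SpeechContextsBuilderExample {
  public static void main(String[] args) {
    InputAudioConfig.Builder configBuilder = InputAudioConfig.newBuilder();

    // Appends an empty SpeechContext and returns its builder for in-place editing.
    SpeechContext.Builder contextBuilder = configBuilder.addSpeechContextsBuilder();
    contextBuilder.addPhrases("account balance");

    InputAudioConfig config = configBuilder.build();
    System.out.println(config.getSpeechContextsCount()); // 1
  }
}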

build()

public InputAudioConfig build()
Returns
InputAudioConfig

buildPartial()

public InputAudioConfig buildPartial()
Returns
InputAudioConfig

clear()

public InputAudioConfig.Builder clear()
Returns
InputAudioConfig.Builder

Overrides

clearAudioEncoding()

public InputAudioConfig.Builder clearAudioEncoding()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearDisableNoSpeechRecognizedEvent()

public InputAudioConfig.Builder clearDisableNoSpeechRecognizedEvent()

Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, the NO_SPEECH_RECOGNIZED event is triggered and sent to the Dialogflow agent.

bool disable_no_speech_recognized_event = 14;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearEnableWordInfo()

public InputAudioConfig.Builder clearEnableWordInfo()

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearField(Descriptors.FieldDescriptor field)

public InputAudioConfig.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
field (FieldDescriptor)

Returns
InputAudioConfig.Builder

Overrides

clearLanguageCode()

public InputAudioConfig.Builder clearLanguageCode()

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

string language_code = 3;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearModel()

public InputAudioConfig.Builder clearModel()

Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then Dialogflow auto-selects a model based on the other parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model does not exist for the language, then the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details.

string model = 7;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearModelVariant()

public InputAudioConfig.Builder clearModelVariant()

Which variant of the Speech model to use.

.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearOneof(Descriptors.OneofDescriptor oneof)

public InputAudioConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
oneof (OneofDescriptor)

Returns
InputAudioConfig.Builder

Overrides

clearPhraseHints()

public InputAudioConfig.Builder clearPhraseHints()

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Returns
InputAudioConfig.Builder: This builder for chaining.

clearSampleRateHertz()

public InputAudioConfig.Builder clearSampleRateHertz()

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearSingleUtterance()

public InputAudioConfig.Builder clearSingleUtterance()

If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.

bool single_utterance = 8;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearSpeechContexts()

public InputAudioConfig.Builder clearSpeechContexts()

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Returns
InputAudioConfig.Builder

clone()

public InputAudioConfig.Builder clone()
Returns
InputAudioConfig.Builder

Overrides

getAudioEncoding()

public AudioEncoding getAudioEncoding()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;

Returns
AudioEncoding: The audioEncoding.

getAudioEncodingValue()

public int getAudioEncodingValue()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;

Returns
int: The enum numeric value on the wire for audioEncoding.

getDefaultInstanceForType()

public InputAudioConfig getDefaultInstanceForType()
Returns
InputAudioConfig

getDescriptorForType()

public Descriptors.Descriptor getDescriptorForType()
Returns
Descriptor

Overrides

getDisableNoSpeechRecognizedEvent()

public boolean getDisableNoSpeechRecognizedEvent()

Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, the NO_SPEECH_RECOGNIZED event is triggered and sent to the Dialogflow agent.

bool disable_no_speech_recognized_event = 14;

Returns
boolean: The disableNoSpeechRecognizedEvent.

getEnableWordInfo()

public boolean getEnableWordInfo()

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Returns
boolean: The enableWordInfo.

getLanguageCode()

public String getLanguageCode()

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

string language_code = 3;

Returns
String: The languageCode.

getLanguageCodeBytes()

public ByteString getLanguageCodeBytes()

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

string language_code = 3;

Returns
ByteString: The bytes for languageCode.

getModel()

public String getModel()

Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then Dialogflow auto-selects a model based on the other parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model does not exist for the language, then the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details.

string model = 7;

Returns
String: The model.

getModelBytes()

public ByteString getModelBytes()

Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then Dialogflow auto-selects a model based on the other parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model does not exist for the language, then the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details.

string model = 7;

Returns
ByteString: The bytes for model.

getModelVariant()

public SpeechModelVariant getModelVariant()

Which variant of the Speech model to use.

.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;

Returns
SpeechModelVariant: The modelVariant.

getModelVariantValue()

public int getModelVariantValue()

Which variant of the Speech model to use.

.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;

Returns
int: The enum numeric value on the wire for modelVariant.

getPhraseHints(int index)

public String getPhraseHints(int index)

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Parameter
index (int): The index of the element to return.

Returns
String: The phraseHints at the given index.

getPhraseHintsBytes(int index)

public ByteString getPhraseHintsBytes(int index)

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Parameter
index (int): The index of the value to return.

Returns
ByteString: The bytes of the phraseHints at the given index.

getPhraseHintsCount()

public int getPhraseHintsCount()

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Returns
int: The count of phraseHints.

getPhraseHintsList()

public ProtocolStringList getPhraseHintsList()

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Returns
ProtocolStringList: A list containing the phraseHints.

getSampleRateHertz()

public int getSampleRateHertz()

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Returns
int: The sampleRateHertz.

getSingleUtterance()

public boolean getSingleUtterance()

If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.

bool single_utterance = 8;

Returns
boolean: The singleUtterance.

getSpeechContexts(int index)

public SpeechContext getSpeechContexts(int index)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
index (int)

Returns
SpeechContext

getSpeechContextsBuilder(int index)

public SpeechContext.Builder getSpeechContextsBuilder(int index)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
index (int)

Returns
SpeechContext.Builder

getSpeechContextsBuilderList()

public List<SpeechContext.Builder> getSpeechContextsBuilderList()

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Returns
List<SpeechContext.Builder>

getSpeechContextsCount()

public int getSpeechContextsCount()

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Returns
int

getSpeechContextsList()

public List<SpeechContext> getSpeechContextsList()

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Returns
List<SpeechContext>

getSpeechContextsOrBuilder(int index)

public SpeechContextOrBuilder getSpeechContextsOrBuilder(int index)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
index (int)

Returns
SpeechContextOrBuilder

getSpeechContextsOrBuilderList()

public List<? extends SpeechContextOrBuilder> getSpeechContextsOrBuilderList()

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Returns
List<? extends com.google.cloud.dialogflow.v2beta1.SpeechContextOrBuilder>

internalGetFieldAccessorTable()

protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
FieldAccessorTable

Overrides

isInitialized()

public final boolean isInitialized()
Returns
boolean

Overrides

mergeFrom(InputAudioConfig other)

public InputAudioConfig.Builder mergeFrom(InputAudioConfig other)
Parameter
other (InputAudioConfig)

Returns
InputAudioConfig.Builder
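A sketch of using mergeFrom(InputAudioConfig other) to layer overrides on top of a shared base config. It relies on standard protobuf merge semantics (non-default scalar fields from other overwrite, repeated fields are appended); values are illustrative.

import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class MergeFromExample {
  public static void main(String[] args) {
    InputAudioConfig baseConfig = InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .build();

    // Start from the base config and override only the language.
    InputAudioConfig frenchConfig = InputAudioConfig.newBuilder()
        .mergeFrom(baseConfig)
        .setLanguageCode("fr-FR")
        .build();
  }
}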

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

public InputAudioConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
input (CodedInputStream)
extensionRegistry (ExtensionRegistryLite)

Returns
InputAudioConfig.Builder

Overrides

Exceptions
IOException

mergeFrom(Message other)

public InputAudioConfig.Builder mergeFrom(Message other)
Parameter
other (Message)

Returns
InputAudioConfig.Builder

Overrides

mergeUnknownFields(UnknownFieldSet unknownFields)

public final InputAudioConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
unknownFields (UnknownFieldSet)

Returns
InputAudioConfig.Builder

Overrides

removeSpeechContexts(int index)

public InputAudioConfig.Builder removeSpeechContexts(int index)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameter
index (int)

Returns
InputAudioConfig.Builder

setAudioEncoding(AudioEncoding value)

public InputAudioConfig.Builder setAudioEncoding(AudioEncoding value)

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;

Parameter
value (AudioEncoding): The audioEncoding to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setAudioEncodingValue(int value)

public InputAudioConfig.Builder setAudioEncodingValue(int value)

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1;

Parameter
value (int): The enum numeric value on the wire for audioEncoding to set.

Returns
InputAudioConfig.Builder: This builder for chaining.
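Setting the encoding by enum constant or by its wire number yields the same field value; a small sketch (the FLAC constant is used only as an example):

import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class AudioEncodingValueExample {
  public static void main(String[] args) {
    InputAudioConfig byEnum = InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_FLAC)
        .build();

    InputAudioConfig byNumber = InputAudioConfig.newBuilder()
        .setAudioEncodingValue(AudioEncoding.AUDIO_ENCODING_FLAC.getNumber())
        .build();

    // Both configs carry the same encoding.
    System.out.println(byEnum.getAudioEncoding() == byNumber.getAudioEncoding()); // true
  }
}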

setDisableNoSpeechRecognizedEvent(boolean value)

public InputAudioConfig.Builder setDisableNoSpeechRecognizedEvent(boolean value)

Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, the NO_SPEECH_RECOGNIZED event is triggered and sent to the Dialogflow agent.

bool disable_no_speech_recognized_event = 14;

Parameter
value (boolean): The disableNoSpeechRecognizedEvent to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setEnableWordInfo(boolean value)

public InputAudioConfig.Builder setEnableWordInfo(boolean value)

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Parameter
value (boolean): The enableWordInfo to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setField(Descriptors.FieldDescriptor field, Object value)

public InputAudioConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
field (FieldDescriptor)
value (Object)

Returns
InputAudioConfig.Builder

Overrides

setLanguageCode(String value)

public InputAudioConfig.Builder setLanguageCode(String value)

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

string language_code = 3;

Parameter
value (String): The languageCode to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setLanguageCodeBytes(ByteString value)

public InputAudioConfig.Builder setLanguageCodeBytes(ByteString value)

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

string language_code = 3;

Parameter
value (ByteString): The bytes for languageCode to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setModel(String value)

public InputAudioConfig.Builder setModel(String value)

Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then Dialogflow auto-selects a model based on the other parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model does not exist for the language, then the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details.

string model = 7;

Parameter
value (String): The model to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setModelBytes(ByteString value)

public InputAudioConfig.Builder setModelBytes(ByteString value)

Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then Dialogflow auto-selects a model based on the other parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model does not exist for the language, then the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details.

string model = 7;

Parameter
value (ByteString): The bytes for model to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setModelVariant(SpeechModelVariant value)

public InputAudioConfig.Builder setModelVariant(SpeechModelVariant value)

Which variant of the Speech model to use.

.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;

Parameter
value (SpeechModelVariant): The modelVariant to set.

Returns
InputAudioConfig.Builder: This builder for chaining.
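A sketch combining setModel and setModelVariant to request an enhanced telephony model; the model name "phone_call" is a Cloud Speech model name used here only as an example, and the other values are illustrative.

import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechModelVariant;

public class ModelVariantExample {
  public static void main(String[] args) {
    InputAudioConfig config = InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_MULAW)
        .setSampleRateHertz(8000)
        .setLanguageCode("en-US")
        .setModel("phone_call")
        .setModelVariant(SpeechModelVariant.USE_ENHANCED)
        .build();
  }
}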

setModelVariantValue(int value)

public InputAudioConfig.Builder setModelVariantValue(int value)

Which variant of the Speech model to use.

.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;

Parameter
value (int): The enum numeric value on the wire for modelVariant to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setPhraseHints(int index, String value)

public InputAudioConfig.Builder setPhraseHints(int index, String value)

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.

repeated string phrase_hints = 4 [deprecated = true];

Parameters
index (int): The index to set the value at.
value (String): The phraseHints to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

public InputAudioConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
field (FieldDescriptor)
index (int)
value (Object)

Returns
InputAudioConfig.Builder

Overrides

setSampleRateHertz(int value)

public InputAudioConfig.Builder setSampleRateHertz(int value)

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Parameter
value (int): The sampleRateHertz to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setSingleUtterance(boolean value)

public InputAudioConfig.Builder setSingleUtterance(boolean value)

If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.

bool single_utterance = 8;

Parameter
value (boolean): The singleUtterance to set.

Returns
InputAudioConfig.Builder: This builder for chaining.
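A sketch of a config intended for a streaming request that should stop listening after one detected utterance; the surrounding streaming call is omitted, and the audio values are illustrative.

import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class SingleUtteranceExample {
  public static void main(String[] args) {
    InputAudioConfig config = InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .setSingleUtterance(true)
        .build();
    // This config would typically be placed in the query input of a streaming detect-intent request.
  }
}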

setSpeechContexts(int index, SpeechContext value)

public InputAudioConfig.Builder setSpeechContexts(int index, SpeechContext value)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameters
index (int)
value (SpeechContext)

Returns
InputAudioConfig.Builder

setSpeechContexts(int index, SpeechContext.Builder builderForValue)

public InputAudioConfig.Builder setSpeechContexts(int index, SpeechContext.Builder builderForValue)

Context information to assist speech recognition. See the Cloud Speech documentation for more details.

repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;

Parameters
index (int)
builderForValue (SpeechContext.Builder)

Returns
InputAudioConfig.Builder

setUnknownFields(UnknownFieldSet unknownFields)

public final InputAudioConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
unknownFields (UnknownFieldSet)

Returns
InputAudioConfig.Builder

Overrides