Class InputAudioConfig.Builder (0.13.0)

public static final class InputAudioConfig.Builder extends GeneratedMessageV3.Builder<InputAudioConfig.Builder> implements InputAudioConfigOrBuilder

Instructs the speech recognizer on how to process the audio content.

Protobuf type google.cloud.dialogflow.cx.v3beta1.InputAudioConfig
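
Before the method-by-method reference, a minimal usage sketch of the typical builder flow, assuming the standard protobuf-generated InputAudioConfig.newBuilder() factory and the generated top-level AudioEncoding enum in the same package: obtain a builder, set the required audio_encoding plus any optional fields, and call build(). The encoding and sample rate below are illustrative values, not the only supported combination.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class InputAudioConfigExample {
  public static void main(String[] args) {
    // Build an InputAudioConfig for 16 kHz linear PCM audio.
    // The concrete values here are illustrative, not required by the API.
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16) // the one REQUIRED field
            .setSampleRateHertz(16000)
            .build();

    System.out.println(config);
  }
}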

Static Methods

getDescriptor()

public static final Descriptors.Descriptor getDescriptor()
Returns
Descriptor

Methods

addAllPhraseHints(Iterable<String> values)

public InputAudioConfig.Builder addAllPhraseHints(Iterable<String> values)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
values (Iterable<String>): The phraseHints to add.

Returns
InputAudioConfig.Builder: This builder for chaining.
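
As a sketch of the repeated-field helpers for phrase_hints, the snippet below seeds the list from a plain Java collection and then appends a single entry; the hint strings are made-up domain terms, not values the API requires.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;
import java.util.Arrays;
import java.util.List;

public class PhraseHintsExample {
  public static void main(String[] args) {
    // Hypothetical domain vocabulary the recognizer should favor.
    List<String> hints = Arrays.asList("Dialogflow", "fulfillment", "webhook");

    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .addAllPhraseHints(hints)       // bulk add
            .addPhraseHints("session path") // single add, appended after the bulk entries
            .build();

    System.out.println(config.getPhraseHintsList());
  }
}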

addPhraseHints(String value)

public InputAudioConfig.Builder addPhraseHints(String value)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
value (String): The phraseHints to add.

Returns
InputAudioConfig.Builder: This builder for chaining.

addPhraseHintsBytes(ByteString value)

public InputAudioConfig.Builder addPhraseHintsBytes(ByteString value)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
value (ByteString): The bytes of the phraseHints to add.

Returns
InputAudioConfig.Builder: This builder for chaining.

addRepeatedField(Descriptors.FieldDescriptor field, Object value)

public InputAudioConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
field (FieldDescriptor)
value (Object)

Returns
InputAudioConfig.Builder

Overrides

build()

public InputAudioConfig build()
Returns
InputAudioConfig

buildPartial()

public InputAudioConfig buildPartial()
Returns
InputAudioConfig
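
A brief sketch of the difference between the two factory calls above, following standard protobuf builder semantics: both snapshot the builder's current state, and because proto3 messages like this one have no wire-level required fields, both succeed even when audio_encoding (REQUIRED only at the API annotation level) is still unset.

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class BuildVsBuildPartial {
  public static void main(String[] args) {
    InputAudioConfig.Builder builder = InputAudioConfig.newBuilder().setSampleRateHertz(8000);

    // Both calls produce an immutable message from the builder's current state.
    InputAudioConfig full = builder.build();
    InputAudioConfig partial = builder.buildPartial();

    System.out.println(full.equals(partial)); // true: same snapshot of the builder
  }
}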

clear()

public InputAudioConfig.Builder clear()
Returns
InputAudioConfig.Builder

Overrides

clearAudioEncoding()

public InputAudioConfig.Builder clearAudioEncoding()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
InputAudioConfig.Builder: This builder for chaining.

clearEnableWordInfo()

public InputAudioConfig.Builder clearEnableWordInfo()

Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearField(Descriptors.FieldDescriptor field)

public InputAudioConfig.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
field (FieldDescriptor)

Returns
InputAudioConfig.Builder

Overrides

clearModel()

public InputAudioConfig.Builder clearModel()

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.

string model = 7;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearModelVariant()

public InputAudioConfig.Builder clearModelVariant()

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearOneof(Descriptors.OneofDescriptor oneof)

public InputAudioConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
oneof (OneofDescriptor)

Returns
InputAudioConfig.Builder

Overrides

clearPhraseHints()

public InputAudioConfig.Builder clearPhraseHints()

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearSampleRateHertz()

public InputAudioConfig.Builder clearSampleRateHertz()

Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Returns
InputAudioConfig.Builder: This builder for chaining.

clearSingleUtterance()

public InputAudioConfig.Builder clearSingleUtterance()

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.

bool single_utterance = 8;

Returns
InputAudioConfig.Builder: This builder for chaining.

clone()

public InputAudioConfig.Builder clone()
Returns
InputAudioConfig.Builder

Overrides

getAudioEncoding()

public AudioEncoding getAudioEncoding()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
AudioEncoding: The audioEncoding.

getAudioEncodingValue()

public int getAudioEncodingValue()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
int: The enum numeric value on the wire for audioEncoding.
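
To make the enum/value pairing above concrete, here is a small sketch (field values illustrative) showing that getAudioEncoding() returns the AudioEncoding enum constant while getAudioEncodingValue() returns its numeric wire value.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class AudioEncodingAccessors {
  public static void main(String[] args) {
    InputAudioConfig.Builder builder =
        InputAudioConfig.newBuilder().setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16);

    AudioEncoding asEnum = builder.getAudioEncoding(); // the enum constant
    int asNumber = builder.getAudioEncodingValue();    // its numeric wire value

    // The numeric form is useful when a value may fall outside the known enum range.
    System.out.println(asEnum + " = " + asNumber);
  }
}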

getDefaultInstanceForType()

public InputAudioConfig getDefaultInstanceForType()
Returns
InputAudioConfig

getDescriptorForType()

public Descriptors.Descriptor getDescriptorForType()
Returns
Descriptor

Overrides

getEnableWordInfo()

public boolean getEnableWordInfo()

Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Returns
boolean: The enableWordInfo.

getModel()

public String getModel()

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.

string model = 7;

Returns
String: The model.

getModelBytes()

public ByteString getModelBytes()

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.

string model = 7;

Returns
ByteString: The bytes for model.

getModelVariant()

public SpeechModelVariant getModelVariant()

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Returns
SpeechModelVariant: The modelVariant.

getModelVariantValue()

public int getModelVariantValue()

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Returns
int: The enum numeric value on the wire for modelVariant.

getPhraseHints(int index)

public String getPhraseHints(int index)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
index (int): The index of the element to return.

Returns
String: The phraseHints at the given index.

getPhraseHintsBytes(int index)

public ByteString getPhraseHintsBytes(int index)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
index (int): The index of the value to return.

Returns
ByteString: The bytes of the phraseHints at the given index.

getPhraseHintsCount()

public int getPhraseHintsCount()

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Returns
int: The count of phraseHints.

getPhraseHintsList()

public ProtocolStringList getPhraseHintsList()

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Returns
ProtocolStringList: A list containing the phraseHints.
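
The returned ProtocolStringList implements List<String>, so it can be read back like any other list. A minimal sketch combining the list, count, and indexed accessors above (hint values illustrative):

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;
import com.google.protobuf.ProtocolStringList;

public class ReadPhraseHints {
  public static void main(String[] args) {
    InputAudioConfig.Builder builder =
        InputAudioConfig.newBuilder()
            .addPhraseHints("agent")
            .addPhraseHints("flow");

    ProtocolStringList hints = builder.getPhraseHintsList(); // implements List<String>
    for (int i = 0; i < builder.getPhraseHintsCount(); i++) {
      System.out.println(i + ": " + builder.getPhraseHints(i));
    }
    System.out.println("all: " + hints);
  }
}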

getSampleRateHertz()

public int getSampleRateHertz()

Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Returns
int: The sampleRateHertz.

getSingleUtterance()

public boolean getSingleUtterance()

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.

bool single_utterance = 8;

Returns
boolean: The singleUtterance.

internalGetFieldAccessorTable()

protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
FieldAccessorTable

Overrides

isInitialized()

public final boolean isInitialized()
Returns
boolean

Overrides

mergeFrom(InputAudioConfig other)

public InputAudioConfig.Builder mergeFrom(InputAudioConfig other)
Parameter
other (InputAudioConfig)

Returns
InputAudioConfig.Builder
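
A sketch of merging one config into another under standard protobuf merge semantics: non-default scalar fields in other overwrite the builder's values, while repeated fields such as phrase_hints are appended. The specific values are illustrative.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class MergeExample {
  public static void main(String[] args) {
    InputAudioConfig base =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(8000)
            .addPhraseHints("base hint")
            .build();

    InputAudioConfig overrides =
        InputAudioConfig.newBuilder()
            .setSampleRateHertz(16000)       // non-default scalar: overwrites
            .addPhraseHints("override hint") // repeated field: appended
            .build();

    InputAudioConfig merged = base.toBuilder().mergeFrom(overrides).build();

    System.out.println(merged.getSampleRateHertz()); // 16000
    System.out.println(merged.getPhraseHintsList()); // [base hint, override hint]
  }
}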

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

public InputAudioConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
input (CodedInputStream)
extensionRegistry (ExtensionRegistryLite)

Returns
InputAudioConfig.Builder

Overrides

Exceptions
IOException
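
This overload reads a serialized message from a stream into the builder. The sketch below round-trips a config through its wire bytes; CodedInputStream.newInstance and ExtensionRegistryLite.getEmptyRegistry() are standard protobuf runtime helpers, and the values are illustrative.

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

public class MergeFromStreamExample {
  public static void main(String[] args) throws IOException {
    // Serialize a config to its wire format.
    byte[] bytes =
        InputAudioConfig.newBuilder().setSampleRateHertz(44100).build().toByteArray();

    // Read it back into a fresh builder; throws IOException on malformed input.
    InputAudioConfig.Builder builder = InputAudioConfig.newBuilder();
    builder.mergeFrom(
        CodedInputStream.newInstance(bytes), ExtensionRegistryLite.getEmptyRegistry());

    System.out.println(builder.getSampleRateHertz()); // 44100
  }
}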

mergeFrom(Message other)

public InputAudioConfig.Builder mergeFrom(Message other)
Parameter
other (Message)

Returns
InputAudioConfig.Builder

Overrides

mergeUnknownFields(UnknownFieldSet unknownFields)

public final InputAudioConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
unknownFields (UnknownFieldSet)

Returns
InputAudioConfig.Builder

Overrides

setAudioEncoding(AudioEncoding value)

public InputAudioConfig.Builder setAudioEncoding(AudioEncoding value)

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Parameter
value (AudioEncoding): The audioEncoding to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setAudioEncodingValue(int value)

public InputAudioConfig.Builder setAudioEncodingValue(int value)

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Parameter
value (int): The enum numeric value on the wire for audioEncoding to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setEnableWordInfo(boolean value)

public InputAudioConfig.Builder setEnableWordInfo(boolean value)

Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Parameter
value (boolean): The enableWordInfo to set.

Returns
InputAudioConfig.Builder: This builder for chaining.
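
A sketch of opting in to word-level information. Only the config side is shown here; whether SpeechWordInfo actually comes back depends on using the streaming recognition path described above. The encoding and sample rate are illustrative.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class WordInfoExample {
  public static void main(String[] args) {
    // Ask Speech to return per-word start/end offsets in streaming results.
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(16000)
            .setEnableWordInfo(true)
            .build();

    System.out.println(config.getEnableWordInfo()); // true
  }
}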

setField(Descriptors.FieldDescriptor field, Object value)

public InputAudioConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
field (FieldDescriptor)
value (Object)

Returns
InputAudioConfig.Builder

Overrides

setModel(String value)

public InputAudioConfig.Builder setModel(String value)

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.

string model = 7;

Parameter
value (String): The model to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setModelBytes(ByteString value)

public InputAudioConfig.Builder setModelBytes(ByteString value)

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.

string model = 7;

Parameter
value (ByteString): The bytes for model to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setModelVariant(SpeechModelVariant value)

public InputAudioConfig.Builder setModelVariant(SpeechModelVariant value)

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Parameter
value (SpeechModelVariant): The modelVariant to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setModelVariantValue(int value)

public InputAudioConfig.Builder setModelVariantValue(int value)

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Parameter
value (int): The enum numeric value on the wire for modelVariant to set.

Returns
InputAudioConfig.Builder: This builder for chaining.
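
A sketch that combines the model and model_variant fields above. The model name "phone_call" is used here as an example of a Cloud Speech-to-Text model identifier and may not be available for every language; USE_ENHANCED asks for the enhanced variant when one exists.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;
import com.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant;

public class ModelSelectionExample {
  public static void main(String[] args) {
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(8000)                         // telephony audio is commonly 8 kHz
            .setModel("phone_call")                           // example Speech model name
            .setModelVariant(SpeechModelVariant.USE_ENHANCED) // prefer the enhanced variant if it exists
            .build();

    System.out.println(config.getModel() + " / " + config.getModelVariant());
  }
}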

setPhraseHints(int index, String value)

public InputAudioConfig.Builder setPhraseHints(int index, String value)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameters
index (int): The index to set the value at.
value (String): The phraseHints to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

public InputAudioConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
field (FieldDescriptor)
index (int)
value (Object)

Returns
InputAudioConfig.Builder

Overrides

setSampleRateHertz(int value)

public InputAudioConfig.Builder setSampleRateHertz(int value)

Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Parameter
value (int): The sampleRateHertz to set.

Returns
InputAudioConfig.Builder: This builder for chaining.

setSingleUtterance(boolean value)

public InputAudioConfig.Builder setSingleUtterance(boolean value)

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.

bool single_utterance = 8;

Parameter
value (boolean): The singleUtterance to set.

Returns
InputAudioConfig.Builder: This builder for chaining.
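
The sketch below builds a streaming-oriented config with single_utterance enabled; closing the stream once the detected intent arrives is the caller's responsibility and is outside the scope of this class, so only the config is shown. Values are illustrative.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class SingleUtteranceExample {
  public static void main(String[] args) {
    // For streaming detect-intent calls: stop recognition after one utterance,
    // then close the stream and open a new one for the next turn.
    InputAudioConfig streamingConfig =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(16000)
            .setSingleUtterance(true)
            .build();

    System.out.println(streamingConfig.getSingleUtterance()); // true
  }
}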

setUnknownFields(UnknownFieldSet unknownFields)

public final InputAudioConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
unknownFields (UnknownFieldSet)

Returns
InputAudioConfig.Builder

Overrides