Class InputAudioConfig.Builder (0.40.0)

public static final class InputAudioConfig.Builder extends GeneratedMessageV3.Builder<InputAudioConfig.Builder> implements InputAudioConfigOrBuilder

Instructs the speech recognizer on how to process the audio content.

Protobuf type google.cloud.dialogflow.cx.v3beta1.InputAudioConfig
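
As with other generated protobuf builders, an instance is usually obtained from InputAudioConfig.newBuilder() and finalized with build(). The following is a minimal sketch using methods documented on this page; the AudioEncoding constant is assumed from the AudioEncoding enum in the same v3beta1 package, and the 16000 Hz sample rate and hint string are illustrative only.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class InputAudioConfigBuilderSketch {
  public static void main(String[] args) {
    // Build a config for 16 kHz LINEAR16 audio with a phrase hint and word-level info.
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
            .setSampleRateHertz(16000)
            .addPhraseHints("account balance")  // boosts recognition of this phrase
            .setEnableWordInfo(true)            // request word start/end time offsets
            .build();
    System.out.println(config);
  }
}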

Static Methods

getDescriptor()

public static final Descriptors.Descriptor getDescriptor()
Returns
Type Description
Descriptor

Methods

addAllPhraseHints(Iterable<String> values)

public InputAudioConfig.Builder addAllPhraseHints(Iterable<String> values)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
Name Description
values Iterable<String>

The phraseHints to add.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.
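
A brief sketch of supplying several hints in one call; the hint strings are placeholders for domain vocabulary.

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;
import java.util.Arrays;
import java.util.List;

public class AddAllPhraseHintsSketch {
  static InputAudioConfig.Builder withHints(InputAudioConfig.Builder builder) {
    // Placeholder hints; real values would come from the phrases your callers use.
    List<String> hints = Arrays.asList("wire transfer", "routing number");
    return builder.addAllPhraseHints(hints);
  }
}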

addPhraseHints(String value)

public InputAudioConfig.Builder addPhraseHints(String value)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
Name Description
value String

The phraseHints to add.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

addPhraseHintsBytes(ByteString value)

public InputAudioConfig.Builder addPhraseHintsBytes(ByteString value)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
Name Description
value ByteString

The bytes of the phraseHints to add.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

addRepeatedField(Descriptors.FieldDescriptor field, Object value)

public InputAudioConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name Description
field FieldDescriptor
value Object
Returns
Type Description
InputAudioConfig.Builder
Overrides

build()

public InputAudioConfig build()
Returns
Type Description
InputAudioConfig

buildPartial()

public InputAudioConfig buildPartial()
Returns
Type Description
InputAudioConfig

clear()

public InputAudioConfig.Builder clear()
Returns
Type Description
InputAudioConfig.Builder
Overrides

clearAudioEncoding()

public InputAudioConfig.Builder clearAudioEncoding()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

clearEnableWordInfo()

public InputAudioConfig.Builder clearEnableWordInfo()

Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

clearField(Descriptors.FieldDescriptor field)

public InputAudioConfig.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
Name Description
field FieldDescriptor
Returns
Type Description
InputAudioConfig.Builder
Overrides

clearModel()

public InputAudioConfig.Builder clearModel()

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search (best for very short utterances and commands)

string model = 7;

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

clearModelVariant()

public InputAudioConfig.Builder clearModelVariant()

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

clearOneof(Descriptors.OneofDescriptor oneof)

public InputAudioConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
Name Description
oneof OneofDescriptor
Returns
Type Description
InputAudioConfig.Builder
Overrides

clearPhraseHints()

public InputAudioConfig.Builder clearPhraseHints()

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

clearSampleRateHertz()

public InputAudioConfig.Builder clearSampleRateHertz()

Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

clearSingleUtterance()

public InputAudioConfig.Builder clearSingleUtterance()

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.

bool single_utterance = 8;

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

clone()

public InputAudioConfig.Builder clone()
Returns
Type Description
InputAudioConfig.Builder
Overrides

getAudioEncoding()

public AudioEncoding getAudioEncoding()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
Type Description
AudioEncoding

The audioEncoding.

getAudioEncodingValue()

public int getAudioEncodingValue()

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
Type Description
int

The enum numeric value on the wire for audioEncoding.

getDefaultInstanceForType()

public InputAudioConfig getDefaultInstanceForType()
Returns
Type Description
InputAudioConfig

getDescriptorForType()

public Descriptors.Descriptor getDescriptorForType()
Returns
Type Description
Descriptor
Overrides

getEnableWordInfo()

public boolean getEnableWordInfo()

Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Returns
Type Description
boolean

The enableWordInfo.

getModel()

public String getModel()

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search (best for very short utterances and commands)

string model = 7;

Returns
Type Description
String

The model.

getModelBytes()

public ByteString getModelBytes()

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search (best for very short utterances and commands)

string model = 7;

Returns
Type Description
ByteString

The bytes for model.

getModelVariant()

public SpeechModelVariant getModelVariant()

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Returns
Type Description
SpeechModelVariant

The modelVariant.

getModelVariantValue()

public int getModelVariantValue()

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Returns
Type Description
int

The enum numeric value on the wire for modelVariant.

getPhraseHints(int index)

public String getPhraseHints(int index)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
Name Description
index int

The index of the element to return.

Returns
Type Description
String

The phraseHints at the given index.

getPhraseHintsBytes(int index)

public ByteString getPhraseHintsBytes(int index)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameter
Name Description
index int

The index of the value to return.

Returns
Type Description
ByteString

The bytes of the phraseHints at the given index.

getPhraseHintsCount()

public int getPhraseHintsCount()

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Returns
Type Description
int

The count of phraseHints.

getPhraseHintsList()

public ProtocolStringList getPhraseHintsList()

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Returns
Type Description
ProtocolStringList

A list containing the phraseHints.
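
The returned ProtocolStringList can be read like any java.util.List<String>; a short sketch:

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class ReadPhraseHintsSketch {
  static void printHints(InputAudioConfig.Builder builder) {
    // Iterate the phrase hints currently held by the builder.
    for (String hint : builder.getPhraseHintsList()) {
      System.out.println(hint);
    }
  }
}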

getSampleRateHertz()

public int getSampleRateHertz()

Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Returns
Type Description
int

The sampleRateHertz.

getSingleUtterance()

public boolean getSingleUtterance()

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.

bool single_utterance = 8;

Returns
Type Description
boolean

The singleUtterance.

internalGetFieldAccessorTable()

protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Type Description
FieldAccessorTable
Overrides

isInitialized()

public final boolean isInitialized()
Returns
Type Description
boolean
Overrides

mergeFrom(InputAudioConfig other)

public InputAudioConfig.Builder mergeFrom(InputAudioConfig other)
Parameter
Name Description
other InputAudioConfig
Returns
Type Description
InputAudioConfig.Builder
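
This follows standard protobuf merge semantics: singular fields set in other overwrite the builder's values, and repeated fields such as phrase_hints are appended. A hedged sketch, with both messages assumed to exist in the caller's code:

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class MergeConfigSketch {
  static InputAudioConfig merge(InputAudioConfig defaults, InputAudioConfig overrides) {
    // Fields set in 'overrides' win; phrase hints from both messages are kept.
    return defaults.toBuilder().mergeFrom(overrides).build();
  }
}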

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

public InputAudioConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Name Description
input CodedInputStream
extensionRegistry ExtensionRegistryLite
Returns
Type Description
InputAudioConfig.Builder
Overrides
Exceptions
Type Description
IOException

mergeFrom(Message other)

public InputAudioConfig.Builder mergeFrom(Message other)
Parameter
Name Description
other Message
Returns
Type Description
InputAudioConfig.Builder
Overrides

mergeUnknownFields(UnknownFieldSet unknownFields)

public final InputAudioConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name Description
unknownFields UnknownFieldSet
Returns
Type Description
InputAudioConfig.Builder
Overrides

setAudioEncoding(AudioEncoding value)

public InputAudioConfig.Builder setAudioEncoding(AudioEncoding value)

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Parameter
Name Description
value AudioEncoding

The audioEncoding to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.
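
A minimal sketch; AUDIO_ENCODING_LINEAR_16 is one value of the AudioEncoding enum in this package and is used here only as an example.

import com.google.cloud.dialogflow.cx.v3beta1.AudioEncoding;
import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class SetAudioEncodingSketch {
  static InputAudioConfig.Builder linear16() {
    // Uncompressed 16-bit linear PCM (LINEAR16).
    return InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16);
  }
}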

setAudioEncodingValue(int value)

public InputAudioConfig.Builder setAudioEncodingValue(int value)

Required. Audio encoding of the audio content to process.

.google.cloud.dialogflow.cx.v3beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];

Parameter
Name Description
value int

The enum numeric value on the wire for audioEncoding to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

setEnableWordInfo(boolean value)

public InputAudioConfig.Builder setEnableWordInfo(boolean value)

Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.

bool enable_word_info = 13;

Parameter
Name Description
value boolean

The enableWordInfo to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.
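
A brief sketch of enabling word-level information on an existing builder:

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class EnableWordInfoSketch {
  static InputAudioConfig.Builder withWordInfo(InputAudioConfig.Builder builder) {
    // Ask Speech to include SpeechWordInfo (word start/end offsets) in results.
    return builder.setEnableWordInfo(true);
  }
}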

setField(Descriptors.FieldDescriptor field, Object value)

public InputAudioConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name Description
field FieldDescriptor
value Object
Returns
Type Description
InputAudioConfig.Builder
Overrides

setModel(String value)

public InputAudioConfig.Builder setModel(String value)

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search (best for very short utterances and commands)

string model = 7;

Parameter
Name Description
value String

The model to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.
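
A hedged sketch of pinning the request to a specific Speech model; the model name "phone_call" is taken from the list above, and availability depends on the language and agent settings.

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class SetModelSketch {
  static InputAudioConfig.Builder forTelephony(InputAudioConfig.Builder builder) {
    // "phone_call" is suggested above for telephony audio; leave unset to auto-select.
    return builder.setModel("phone_call");
  }
}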

setModelBytes(ByteString value)

public InputAudioConfig.Builder setModelBytes(ByteString value)

Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If an enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search (best for very short utterances and commands)

string model = 7;

Parameter
Name Description
value ByteString

The bytes for model to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

setModelVariant(SpeechModelVariant value)

public InputAudioConfig.Builder setModelVariant(SpeechModelVariant value)

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Parameter
Name Description
value SpeechModelVariant

The modelVariant to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.
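
A minimal sketch; USE_ENHANCED is one value of the SpeechModelVariant enum in this package, shown only as an example.

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;
import com.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant;

public class SetModelVariantSketch {
  static InputAudioConfig.Builder enhanced(InputAudioConfig.Builder builder) {
    // Prefer the enhanced variant of the selected model when one is available.
    return builder.setModelVariant(SpeechModelVariant.USE_ENHANCED);
  }
}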

setModelVariantValue(int value)

public InputAudioConfig.Builder setModelVariantValue(int value)

Optional. Which variant of the Speech model to use.

.google.cloud.dialogflow.cx.v3beta1.SpeechModelVariant model_variant = 10;

Parameter
Name Description
value int

The enum numeric value on the wire for modelVariant to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

setPhraseHints(int index, String value)

public InputAudioConfig.Builder setPhraseHints(int index, String value)

Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.

See the Cloud Speech documentation for more details.

repeated string phrase_hints = 4;

Parameters
Name Description
index int

The index to set the value at.

value String

The phraseHints to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.

setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

public InputAudioConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
Name Description
field FieldDescriptor
index int
value Object
Returns
Type Description
InputAudioConfig.Builder
Overrides

setSampleRateHertz(int value)

public InputAudioConfig.Builder setSampleRateHertz(int value)

Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.

int32 sample_rate_hertz = 2;

Parameter
Name Description
value int

The sampleRateHertz to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.
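
A brief sketch; 8000 Hz matches typical narrowband telephony audio and is only an illustrative value.

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class SetSampleRateSketch {
  static InputAudioConfig.Builder telephonyRate(InputAudioConfig.Builder builder) {
    // 8 kHz is the usual rate for narrowband telephony audio; match your audio source.
    return builder.setSampleRateHertz(8000);
  }
}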

setSingleUtterance(boolean value)

public InputAudioConfig.Builder setSingleUtterance(boolean value)

Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.

bool single_utterance = 8;

Parameter
Name Description
value boolean

The singleUtterance to set.

Returns
Type Description
InputAudioConfig.Builder

This builder for chaining.
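
A short sketch for streaming use, where the recognizer should stop after one utterance; the surrounding streaming call is assumed and not shown.

import com.google.cloud.dialogflow.cx.v3beta1.InputAudioConfig;

public class SingleUtteranceSketch {
  static InputAudioConfig.Builder oneUtterance(InputAudioConfig.Builder builder) {
    // Stop recognition automatically after the first detected utterance (streaming only).
    return builder.setSingleUtterance(true);
  }
}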

setUnknownFields(UnknownFieldSet unknownFields)

public final InputAudioConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name Description
unknownFields UnknownFieldSet
Returns
Type Description
InputAudioConfig.Builder
Overrides