public static final class InputAudioConfig.Builder extends GeneratedMessageV3.Builder<InputAudioConfig.Builder> implements InputAudioConfigOrBuilder
Instructs the speech recognizer on how to process the audio content.
Protobuf type google.cloud.dialogflow.v2beta1.InputAudioConfig
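A minimal usage sketch (not taken from this page): obtain a builder, populate the three required fields, and call build(). The sample rate and language code values are illustrative assumptions, not API requirements.

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class InputAudioConfigExample {
  public static void main(String[] args) {
    // Populate the REQUIRED fields: audio_encoding, sample_rate_hertz, language_code.
    InputAudioConfig config =
        InputAudioConfig.newBuilder()
            .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16) // raw 16-bit PCM
            .setSampleRateHertz(16000)                                // assumed capture rate
            .setLanguageCode("en-US")                                 // BCP-47 language tag
            .build();
    System.out.println(config);
  }
}
```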
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > InputAudioConfig.Builder
Implements
InputAudioConfigOrBuilder
Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Returns
Type | Description
---|---
Descriptor |
Methods
addAllPhraseHints(Iterable<String> values) (deprecated)
public InputAudioConfig.Builder addAllPhraseHints(Iterable<String> values)
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Parameter
Name | Description
---|---
values | Iterable<String> The phraseHints to add.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
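Since deprecated phrase hints are folded into a single additional SpeechContext, migrating is just a matter of wrapping the old hint strings in one SpeechContext. A sketch, assuming SpeechContext's repeated phrases field and its generated addAllPhrases setter:

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechContext;
import java.util.Arrays;
import java.util.List;

public class PhraseHintsMigration {
  public static void main(String[] args) {
    List<String> legacyHints = Arrays.asList("order status", "account balance");

    InputAudioConfig.Builder builder = InputAudioConfig.newBuilder();
    // Deprecated style: builder.addAllPhraseHints(legacyHints);
    // Preferred style: wrap the same strings in one SpeechContext.
    builder.addSpeechContexts(
        SpeechContext.newBuilder().addAllPhrases(legacyHints).build());
    System.out.println(builder.getSpeechContextsCount()); // 1
  }
}
```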
addAllSpeechContexts(Iterable<? extends SpeechContext> values)
public InputAudioConfig.Builder addAllSpeechContexts(Iterable<? extends SpeechContext> values)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
values | Iterable<? extends com.google.cloud.dialogflow.v2beta1.SpeechContext>
Returns
Type | Description
---|---
InputAudioConfig.Builder |
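The sketch below adds several contexts in one call; the phrases and boost values are arbitrary assumptions, and it presumes SpeechContext's phrases and boost fields as in current releases.

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechContext;
import java.util.Arrays;
import java.util.List;

public class SpeechContextsExample {
  public static void main(String[] args) {
    // Two illustrative contexts; boost values are assumptions for the example.
    List<SpeechContext> contexts = Arrays.asList(
        SpeechContext.newBuilder().addPhrases("transfer funds").setBoost(10f).build(),
        SpeechContext.newBuilder().addPhrases("close account").build());

    InputAudioConfig config = InputAudioConfig.newBuilder()
        .addAllSpeechContexts(contexts)
        .build();
    System.out.println(config.getSpeechContextsCount()); // 2
  }
}
```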
addPhraseHints(String value) (deprecated)
public InputAudioConfig.Builder addPhraseHints(String value)
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Parameter
Name | Description
---|---
value | String The phraseHints to add.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
addPhraseHintsBytes(ByteString value) (deprecated)
public InputAudioConfig.Builder addPhraseHintsBytes(ByteString value)
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Parameter
Name | Description
---|---
value | ByteString The bytes of the phraseHints to add.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public InputAudioConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name | Description
---|---
field | FieldDescriptor
value | Object
Returns
Type | Description
---|---
InputAudioConfig.Builder |
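This generic override comes from the protobuf builder API. The sketch below uses it reflectively to append to the repeated speech_contexts field by descriptor name; this is an illustration of the reflective path, not the usual typed calling style.

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechContext;
import com.google.protobuf.Descriptors;

public class AddRepeatedFieldExample {
  public static void main(String[] args) {
    Descriptors.FieldDescriptor speechContextsField =
        InputAudioConfig.getDescriptor().findFieldByName("speech_contexts");

    InputAudioConfig.Builder builder = InputAudioConfig.newBuilder();
    // Equivalent to builder.addSpeechContexts(...), but driven by the field descriptor.
    builder.addRepeatedField(
        speechContextsField,
        SpeechContext.newBuilder().addPhrases("refund").build());
    System.out.println(builder.getSpeechContextsCount()); // 1
  }
}
```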
addSpeechContexts(SpeechContext value)
public InputAudioConfig.Builder addSpeechContexts(SpeechContext value)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
value | SpeechContext
Returns
Type | Description
---|---
InputAudioConfig.Builder |
addSpeechContexts(SpeechContext.Builder builderForValue)
public InputAudioConfig.Builder addSpeechContexts(SpeechContext.Builder builderForValue)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
builderForValue | SpeechContext.Builder
Returns
Type | Description
---|---
InputAudioConfig.Builder |
addSpeechContexts(int index, SpeechContext value)
public InputAudioConfig.Builder addSpeechContexts(int index, SpeechContext value)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameters
Name | Description
---|---
index | int
value | SpeechContext
Returns
Type | Description
---|---
InputAudioConfig.Builder |
addSpeechContexts(int index, SpeechContext.Builder builderForValue)
public InputAudioConfig.Builder addSpeechContexts(int index, SpeechContext.Builder builderForValue)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameters
Name | Description
---|---
index | int
builderForValue | SpeechContext.Builder
Returns
Type | Description
---|---
InputAudioConfig.Builder |
addSpeechContextsBuilder()
public SpeechContext.Builder addSpeechContextsBuilder()
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
Type | Description
---|---
SpeechContext.Builder |
addSpeechContextsBuilder(int index)
public SpeechContext.Builder addSpeechContextsBuilder(int index)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
index | int
Returns
Type | Description
---|---
SpeechContext.Builder |
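The addSpeechContextsBuilder overloads return a child SpeechContext.Builder that is already attached to this parent builder, so edits made through it flow into the final message without a separate build-and-add step. A minimal sketch, assuming SpeechContext's repeated phrases field:

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class NestedBuilderExample {
  public static void main(String[] args) {
    InputAudioConfig.Builder builder = InputAudioConfig.newBuilder();
    // The returned SpeechContext.Builder is backed by this InputAudioConfig.Builder,
    // so changes are reflected in the parent automatically.
    builder.addSpeechContextsBuilder().addPhrases("loan application");
    System.out.println(builder.build().getSpeechContexts(0).getPhrases(0));
  }
}
```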
build()
public InputAudioConfig build()
Returns
Type | Description
---|---
InputAudioConfig |
buildPartial()
public InputAudioConfig buildPartial()
Returns
Type | Description
---|---
InputAudioConfig |
clear()
public InputAudioConfig.Builder clear()
Returns
Type | Description
---|---
InputAudioConfig.Builder |
clearAudioEncoding()
public InputAudioConfig.Builder clearAudioEncoding()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearBargeInConfig()
public InputAudioConfig.Builder clearBargeInConfig()
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Returns
Type | Description
---|---
InputAudioConfig.Builder |
clearDisableNoSpeechRecognizedEvent()
public InputAudioConfig.Builder clearDisableNoSpeechRecognizedEvent()
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
bool disable_no_speech_recognized_event = 14;
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearEnableAutomaticPunctuation()
public InputAudioConfig.Builder clearEnableAutomaticPunctuation()
Enable automatic punctuation option at the speech backend.
bool enable_automatic_punctuation = 17;
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearEnableWordInfo()
public InputAudioConfig.Builder clearEnableWordInfo()
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool enable_word_info = 13;
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearField(Descriptors.FieldDescriptor field)
public InputAudioConfig.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
Name | Description
---|---
field | FieldDescriptor
Returns
Type | Description
---|---
InputAudioConfig.Builder |
clearLanguageCode()
public InputAudioConfig.Builder clearLanguageCode()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearModel()
public InputAudioConfig.Builder clearModel()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:
- phone_call (best for Agent Assist and telephony)
- latest_short (best for Dialogflow non-telephony)
- command_and_search (best for very short utterances and commands)
string model = 7;
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearModelVariant()
public InputAudioConfig.Builder clearModelVariant()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearOneof(Descriptors.OneofDescriptor oneof)
public InputAudioConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
Name | Description
---|---
oneof | OneofDescriptor
Returns
Type | Description
---|---
InputAudioConfig.Builder |
clearPhraseHints() (deprecated)
public InputAudioConfig.Builder clearPhraseHints()
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearSampleRateHertz()
public InputAudioConfig.Builder clearSampleRateHertz()
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
int32 sample_rate_hertz = 2 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearSingleUtterance()
public InputAudioConfig.Builder clearSingleUtterance()
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool single_utterance = 8;
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
clearSpeechContexts()
public InputAudioConfig.Builder clearSpeechContexts()
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
Type | Description
---|---
InputAudioConfig.Builder |
clone()
public InputAudioConfig.Builder clone()
Returns
Type | Description
---|---
InputAudioConfig.Builder |
getAudioEncoding()
public AudioEncoding getAudioEncoding()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
AudioEncoding | The audioEncoding.
getAudioEncodingValue()
public int getAudioEncodingValue()
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
int | The enum numeric value on the wire for audioEncoding.
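The *Value accessors expose the raw wire number behind the enum, which can matter when a value was written by a newer API version than the generated enum knows about. A small sketch of the two accessors side by side:

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class EncodingValueExample {
  public static void main(String[] args) {
    InputAudioConfig.Builder builder =
        InputAudioConfig.newBuilder().setAudioEncoding(AudioEncoding.AUDIO_ENCODING_FLAC);

    AudioEncoding typed = builder.getAudioEncoding();  // AUDIO_ENCODING_FLAC
    int wireNumber = builder.getAudioEncodingValue();  // the enum's numeric value
    System.out.println(typed + " = " + wireNumber);

    // Setting by number skips the generated enum entirely.
    builder.setAudioEncodingValue(wireNumber);
  }
}
```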
getBargeInConfig()
public BargeInConfig getBargeInConfig()
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Returns
Type | Description
---|---
BargeInConfig | The bargeInConfig.
getBargeInConfigBuilder()
public BargeInConfig.Builder getBargeInConfigBuilder()
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Returns
Type | Description
---|---
BargeInConfig.Builder |
getBargeInConfigOrBuilder()
public BargeInConfigOrBuilder getBargeInConfigOrBuilder()
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Returns
Type | Description
---|---
BargeInConfigOrBuilder |
getDefaultInstanceForType()
public InputAudioConfig getDefaultInstanceForType()
Returns
Type | Description
---|---
InputAudioConfig |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Returns
Type | Description
---|---
Descriptor |
getDisableNoSpeechRecognizedEvent()
public boolean getDisableNoSpeechRecognizedEvent()
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
bool disable_no_speech_recognized_event = 14;
Returns
Type | Description
---|---
boolean | The disableNoSpeechRecognizedEvent.
getEnableAutomaticPunctuation()
public boolean getEnableAutomaticPunctuation()
Enable automatic punctuation option at the speech backend.
bool enable_automatic_punctuation = 17;
Returns
Type | Description
---|---
boolean | The enableAutomaticPunctuation.
getEnableWordInfo()
public boolean getEnableWordInfo()
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool enable_word_info = 13;
Returns
Type | Description
---|---
boolean | The enableWordInfo.
getLanguageCode()
public String getLanguageCode()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
String | The languageCode.
getLanguageCodeBytes()
public ByteString getLanguageCodeBytes()
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
ByteString | The bytes for languageCode.
getModel()
public String getModel()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:
- phone_call (best for Agent Assist and telephony)
- latest_short (best for Dialogflow non-telephony)
- command_and_search (best for very short utterances and commands)
string model = 7;
Returns
Type | Description
---|---
String | The model.
getModelBytes()
public ByteString getModelBytes()
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:
- phone_call (best for Agent Assist and telephony)
- latest_short (best for Dialogflow non-telephony)
- command_and_search (best for very short utterances and commands)
string model = 7;
Returns
Type | Description
---|---
ByteString | The bytes for model.
getModelVariant()
public SpeechModelVariant getModelVariant()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Returns
Type | Description
---|---
SpeechModelVariant | The modelVariant.
getModelVariantValue()
public int getModelVariantValue()
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Returns
Type | Description
---|---
int | The enum numeric value on the wire for modelVariant.
getPhraseHints(int index) (deprecated)
public String getPhraseHints(int index)
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Parameter
Name | Description
---|---
index | int The index of the element to return.
Returns
Type | Description
---|---
String | The phraseHints at the given index.
getPhraseHintsBytes(int index) (deprecated)
public ByteString getPhraseHintsBytes(int index)
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Parameter
Name | Description
---|---
index | int The index of the value to return.
Returns
Type | Description
---|---
ByteString | The bytes of the phraseHints at the given index.
getPhraseHintsCount() (deprecated)
public int getPhraseHintsCount()
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Returns
Type | Description
---|---
int | The count of phraseHints.
getPhraseHintsList() (deprecated)
public ProtocolStringList getPhraseHintsList()
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Returns
Type | Description
---|---
ProtocolStringList | A list containing the phraseHints.
getSampleRateHertz()
public int getSampleRateHertz()
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
int32 sample_rate_hertz = 2 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
---|---
int | The sampleRateHertz.
getSingleUtterance()
public boolean getSingleUtterance()
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool single_utterance = 8;
Returns
Type | Description
---|---
boolean | The singleUtterance.
getSpeechContexts(int index)
public SpeechContext getSpeechContexts(int index)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
index | int
Returns
Type | Description
---|---
SpeechContext |
getSpeechContextsBuilder(int index)
public SpeechContext.Builder getSpeechContextsBuilder(int index)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
index | int
Returns
Type | Description
---|---
SpeechContext.Builder |
getSpeechContextsBuilderList()
public List<SpeechContext.Builder> getSpeechContextsBuilderList()
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
Type | Description
---|---
List<Builder> |
getSpeechContextsCount()
public int getSpeechContextsCount()
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
Type | Description
---|---
int |
getSpeechContextsList()
public List<SpeechContext> getSpeechContextsList()
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
Type | Description
---|---
List<SpeechContext> |
getSpeechContextsOrBuilder(int index)
public SpeechContextOrBuilder getSpeechContextsOrBuilder(int index)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
index | int
Returns
Type | Description
---|---
SpeechContextOrBuilder |
getSpeechContextsOrBuilderList()
public List<? extends SpeechContextOrBuilder> getSpeechContextsOrBuilderList()
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Returns
Type | Description
---|---
List<? extends com.google.cloud.dialogflow.v2beta1.SpeechContextOrBuilder> |
hasBargeInConfig()
public boolean hasBargeInConfig()
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Returns
Type | Description
---|---
boolean | Whether the bargeInConfig field is set.
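barge_in_config is a message field, so it carries explicit presence: hasBargeInConfig() reports whether it has been populated. In the generated protobuf builders, asking for the nested builder typically creates it and marks the field as set; a brief sketch of that behavior (an assumption about generated-code semantics, not stated on this page):

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class BargeInPresenceExample {
  public static void main(String[] args) {
    InputAudioConfig.Builder builder = InputAudioConfig.newBuilder();
    System.out.println(builder.hasBargeInConfig()); // false: field not set yet

    // Requesting the nested builder creates it and marks the field as set.
    builder.getBargeInConfigBuilder();
    System.out.println(builder.hasBargeInConfig()); // true

    builder.clearBargeInConfig();                   // back to unset
    System.out.println(builder.hasBargeInConfig()); // false
  }
}
```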
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Type | Description
---|---
FieldAccessorTable |
isInitialized()
public final boolean isInitialized()
Returns
Type | Description
---|---
boolean |
mergeBargeInConfig(BargeInConfig value)
public InputAudioConfig.Builder mergeBargeInConfig(BargeInConfig value)
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Parameter
Name | Description
---|---
value | BargeInConfig
Returns
Type | Description
---|---
InputAudioConfig.Builder |
mergeFrom(InputAudioConfig other)
public InputAudioConfig.Builder mergeFrom(InputAudioConfig other)
Parameter
Name | Description
---|---
other | InputAudioConfig
Returns
Type | Description
---|---
InputAudioConfig.Builder |
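mergeFrom(InputAudioConfig) overlays the set fields of another message onto this builder, which makes it convenient for applying per-request overrides on top of shared defaults. The default and override values below are assumptions for illustration:

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class MergeDefaultsExample {
  public static void main(String[] args) {
    InputAudioConfig defaults = InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .build();

    // Start from the shared defaults, then merge request-specific overrides.
    InputAudioConfig config = InputAudioConfig.newBuilder(defaults)
        .mergeFrom(InputAudioConfig.newBuilder().setLanguageCode("fr-FR").build())
        .build();
    System.out.println(config.getLanguageCode());    // fr-FR
    System.out.println(config.getSampleRateHertz()); // 16000 (kept from defaults)
  }
}
```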
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public InputAudioConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Name | Description
---|---
input | CodedInputStream
extensionRegistry | ExtensionRegistryLite
Returns
Type | Description
---|---
InputAudioConfig.Builder |
Exceptions
Type | Description
---|---
IOException |
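This override parses serialized bytes directly from a CodedInputStream into the builder. The sketch below simply round-trips a message through its wire form with no extensions registered:

```java
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

public class MergeFromStreamExample {
  public static void main(String[] args) throws IOException {
    byte[] wireBytes = InputAudioConfig.newBuilder()
        .setLanguageCode("en-US")
        .build()
        .toByteArray();

    InputAudioConfig.Builder builder = InputAudioConfig.newBuilder();
    builder.mergeFrom(
        CodedInputStream.newInstance(wireBytes),
        ExtensionRegistryLite.getEmptyRegistry());
    System.out.println(builder.getLanguageCode()); // en-US
  }
}
```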
mergeFrom(Message other)
public InputAudioConfig.Builder mergeFrom(Message other)
Parameter
Name | Description
---|---
other | Message
Returns
Type | Description
---|---
InputAudioConfig.Builder |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final InputAudioConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name | Description
---|---
unknownFields | UnknownFieldSet
Returns
Type | Description
---|---
InputAudioConfig.Builder |
removeSpeechContexts(int index)
public InputAudioConfig.Builder removeSpeechContexts(int index)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameter
Name | Description
---|---
index | int
Returns
Type | Description
---|---
InputAudioConfig.Builder |
setAudioEncoding(AudioEncoding value)
public InputAudioConfig.Builder setAudioEncoding(AudioEncoding value)
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];
Parameter
Name | Description
---|---
value | AudioEncoding The audioEncoding to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setAudioEncodingValue(int value)
public InputAudioConfig.Builder setAudioEncodingValue(int value)
Required. Audio encoding of the audio content to process.
.google.cloud.dialogflow.v2beta1.AudioEncoding audio_encoding = 1 [(.google.api.field_behavior) = REQUIRED];
Parameter
Name | Description
---|---
value | int The enum numeric value on the wire for audioEncoding to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setBargeInConfig(BargeInConfig value)
public InputAudioConfig.Builder setBargeInConfig(BargeInConfig value)
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Parameter
Name | Description
---|---
value | BargeInConfig
Returns
Type | Description
---|---
InputAudioConfig.Builder |
setBargeInConfig(BargeInConfig.Builder builderForValue)
public InputAudioConfig.Builder setBargeInConfig(BargeInConfig.Builder builderForValue)
Configuration of barge-in behavior during the streaming of input audio.
.google.cloud.dialogflow.v2beta1.BargeInConfig barge_in_config = 15;
Parameter
Name | Description
---|---
builderForValue | BargeInConfig.Builder
Returns
Type | Description
---|---
InputAudioConfig.Builder |
setDisableNoSpeechRecognizedEvent(boolean value)
public InputAudioConfig.Builder setDisableNoSpeechRecognizedEvent(boolean value)
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger NO_SPEECH_RECOGNIZED event to Dialogflow agent.
bool disable_no_speech_recognized_event = 14;
Parameter
Name | Description
---|---
value | boolean The disableNoSpeechRecognizedEvent to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setEnableAutomaticPunctuation(boolean value)
public InputAudioConfig.Builder setEnableAutomaticPunctuation(boolean value)
Enable automatic punctuation option at the speech backend.
bool enable_automatic_punctuation = 17;
Parameter
Name | Description
---|---
value | boolean The enableAutomaticPunctuation to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setEnableWordInfo(boolean value)
public InputAudioConfig.Builder setEnableWordInfo(boolean value)
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool enable_word_info = 13;
Parameter
Name | Description
---|---
value | boolean The enableWordInfo to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setField(Descriptors.FieldDescriptor field, Object value)
public InputAudioConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name | Description
---|---
field | FieldDescriptor
value | Object
Returns
Type | Description
---|---
InputAudioConfig.Builder |
setLanguageCode(String value)
public InputAudioConfig.Builder setLanguageCode(String value)
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Parameter
Name | Description
---|---
value | String The languageCode to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setLanguageCodeBytes(ByteString value)
public InputAudioConfig.Builder setLanguageCodeBytes(ByteString value)
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Parameter
Name | Description
---|---
value | ByteString The bytes for languageCode to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setModel(String value)
public InputAudioConfig.Builder setModel(String value)
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:
- phone_call (best for Agent Assist and telephony)
- latest_short (best for Dialogflow non-telephony)
- command_and_search (best for very short utterances and commands)
string model = 7;
Parameter
Name | Description
---|---
value | String The model to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
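Model selection is a plain string set on the builder. The sketch below pairs the telephony-oriented phone_call model (from the list above) with the enhanced variant; the encoding and sample rate are illustrative assumptions for a telephony stream:

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;
import com.google.cloud.dialogflow.v2beta1.SpeechModelVariant;

public class ModelSelectionExample {
  public static void main(String[] args) {
    InputAudioConfig config = InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_MULAW) // typical telephony codec
        .setSampleRateHertz(8000)
        .setLanguageCode("en-US")
        .setModel("phone_call")                               // model name from the list above
        .setModelVariant(SpeechModelVariant.USE_ENHANCED)     // prefer the enhanced model
        .build();
    System.out.println(config.getModel());
  }
}
```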
setModelBytes(ByteString value)
public InputAudioConfig.Builder setModelBytes(ByteString value)
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:
- phone_call (best for Agent Assist and telephony)
- latest_short (best for Dialogflow non-telephony)
- command_and_search (best for very short utterances and commands)
string model = 7;
Parameter
Name | Description
---|---
value | ByteString The bytes for model to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setModelVariant(SpeechModelVariant value)
public InputAudioConfig.Builder setModelVariant(SpeechModelVariant value)
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Parameter
Name | Description
---|---
value | SpeechModelVariant The modelVariant to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setModelVariantValue(int value)
public InputAudioConfig.Builder setModelVariantValue(int value)
Which variant of the Speech model to use.
.google.cloud.dialogflow.v2beta1.SpeechModelVariant model_variant = 10;
Parameter
Name | Description
---|---
value | int The enum numeric value on the wire for modelVariant to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setPhraseHints(int index, String value) (deprecated)
public InputAudioConfig.Builder setPhraseHints(int index, String value)
Deprecated. google.cloud.dialogflow.v2beta1.InputAudioConfig.phrase_hints is deprecated. See google/cloud/dialogflow/v2beta1/audio_config.proto;l=174
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
repeated string phrase_hints = 4 [deprecated = true];
Parameters
Name | Description
---|---
index | int The index to set the value at.
value | String The phraseHints to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public InputAudioConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
Name | Description
---|---
field | FieldDescriptor
index | int
value | Object
Returns
Type | Description
---|---
InputAudioConfig.Builder |
setSampleRateHertz(int value)
public InputAudioConfig.Builder setSampleRateHertz(int value)
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
int32 sample_rate_hertz = 2 [(.google.api.field_behavior) = REQUIRED];
Parameter
Name | Description
---|---
value | int The sampleRateHertz to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
setSingleUtterance(boolean value)
public InputAudioConfig.Builder setSingleUtterance(boolean value)
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool single_utterance = 8;
Parameter
Name | Description
---|---
value | boolean The singleUtterance to set.
Returns
Type | Description
---|---
InputAudioConfig.Builder | This builder for chaining.
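For streaming recognition, single_utterance is often combined with the other streaming-related flags on this builder. A sketch of such a configuration, with illustrative flag values:

```java
import com.google.cloud.dialogflow.v2beta1.AudioEncoding;
import com.google.cloud.dialogflow.v2beta1.InputAudioConfig;

public class StreamingConfigExample {
  public static void main(String[] args) {
    InputAudioConfig streamingConfig = InputAudioConfig.newBuilder()
        .setAudioEncoding(AudioEncoding.AUDIO_ENCODING_LINEAR_16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .setSingleUtterance(true)            // stop after one detected utterance
        .setEnableWordInfo(true)             // request word-level timing
        .setEnableAutomaticPunctuation(true)
        .build();
    System.out.println(streamingConfig.getSingleUtterance());
  }
}
```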
setSpeechContexts(int index, SpeechContext value)
public InputAudioConfig.Builder setSpeechContexts(int index, SpeechContext value)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameters
Name | Description
---|---
index | int
value | SpeechContext
Returns
Type | Description
---|---
InputAudioConfig.Builder |
setSpeechContexts(int index, SpeechContext.Builder builderForValue)
public InputAudioConfig.Builder setSpeechContexts(int index, SpeechContext.Builder builderForValue)
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
repeated .google.cloud.dialogflow.v2beta1.SpeechContext speech_contexts = 11;
Parameters
Name | Description
---|---
index | int
builderForValue | SpeechContext.Builder
Returns
Type | Description
---|---
InputAudioConfig.Builder |
setUnknownFields(UnknownFieldSet unknownFields)
public final InputAudioConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name | Description
---|---
unknownFields | UnknownFieldSet
Returns
Type | Description
---|---
InputAudioConfig.Builder |