public static final class StreamingRecognitionConfig.Builder extends GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder> implements StreamingRecognitionConfigOrBuilder
Provides information to the recognizer that specifies how to process the request.
Protobuf type google.cloud.speech.v1p1beta1.StreamingRecognitionConfig
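A minimal usage sketch of this builder (the RecognitionConfig setters used inside, such as setEncoding and setLanguageCode, belong to the same package but are not documented on this page):

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig;

// Build the required nested RecognitionConfig first, then wrap it in a
// StreamingRecognitionConfig that also asks for interim results.
RecognitionConfig recognitionConfig =
    RecognitionConfig.newBuilder()
        .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .build();

StreamingRecognitionConfig streamingConfig =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(recognitionConfig) // required field
        .setInterimResults(true)      // tentative hypotheses as they arrive
        .build();
```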
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > StreamingRecognitionConfig.Builder

Implements
StreamingRecognitionConfigOrBuilder

Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
| Type | Description |
| --- | --- |
| Descriptor | |
Methods
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public StreamingRecognitionConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
build()
public StreamingRecognitionConfig build()
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig | |
buildPartial()
public StreamingRecognitionConfig buildPartial()
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig | |
clear()
public StreamingRecognitionConfig.Builder clear()
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
clearConfig()
public StreamingRecognitionConfig.Builder clearConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
clearEnableVoiceActivityEvents()
public StreamingRecognitionConfig.Builder clearEnableVoiceActivityEvents()
If true, responses with voice activity speech events will be returned as they are detected.

bool enable_voice_activity_events = 5;

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | This builder for chaining. |
clearField(Descriptors.FieldDescriptor field)
public StreamingRecognitionConfig.Builder clearField(Descriptors.FieldDescriptor field)
| Name | Description |
| --- | --- |
| field | FieldDescriptor |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
clearInterimResults()
public StreamingRecognitionConfig.Builder clearInterimResults()
If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.

bool interim_results = 3;

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | This builder for chaining. |
clearOneof(Descriptors.OneofDescriptor oneof)
public StreamingRecognitionConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
| Name | Description |
| --- | --- |
| oneof | OneofDescriptor |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
clearSingleUtterance()
public StreamingRecognitionConfig.Builder clearSingleUtterance()
If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.
If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.
The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in RecognitionConfig must be set to:
- command_and_search
- phone_call AND additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.

bool single_utterance = 2;

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | This builder for chaining. |
clearVoiceActivityTimeout()
public StreamingRecognitionConfig.Builder clearVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
clone()
public StreamingRecognitionConfig.Builder clone()
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
getConfig()
public RecognitionConfig getConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Type | Description |
| --- | --- |
| RecognitionConfig | The config. |
getConfigBuilder()
public RecognitionConfig.Builder getConfigBuilder()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Type | Description |
| --- | --- |
| RecognitionConfig.Builder | |
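As a sketch (the language-code edit is only an illustration), the nested builder returned here lets you modify the config field in place; changes made through it are reflected when build() is called:

```java
StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();
// getConfigBuilder() creates the nested RecognitionConfig if it is not set yet
// and returns its builder, so the field can be edited without rebuilding it.
builder.getConfigBuilder().setLanguageCode("en-US");
StreamingRecognitionConfig config = builder.build();
```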
getConfigOrBuilder()
public RecognitionConfigOrBuilder getConfigOrBuilder()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Type | Description |
| --- | --- |
| RecognitionConfigOrBuilder | |
getDefaultInstanceForType()
public StreamingRecognitionConfig getDefaultInstanceForType()
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig | |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
| Type | Description |
| --- | --- |
| Descriptor | |
getEnableVoiceActivityEvents()
public boolean getEnableVoiceActivityEvents()
If true, responses with voice activity speech events will be returned as they are detected.

bool enable_voice_activity_events = 5;

| Type | Description |
| --- | --- |
| boolean | The enableVoiceActivityEvents. |
getInterimResults()
public boolean getInterimResults()
If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.

bool interim_results = 3;

| Type | Description |
| --- | --- |
| boolean | The interimResults. |
getSingleUtterance()
public boolean getSingleUtterance()
If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.
If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.
The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in RecognitionConfig must be set to:
- command_and_search
- phone_call AND additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.

bool single_utterance = 2;

| Type | Description |
| --- | --- |
| boolean | The singleUtterance. |
getVoiceActivityTimeout()
public StreamingRecognitionConfig.VoiceActivityTimeout getVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.VoiceActivityTimeout | The voiceActivityTimeout. |
getVoiceActivityTimeoutBuilder()
public StreamingRecognitionConfig.VoiceActivityTimeout.Builder getVoiceActivityTimeoutBuilder()
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.VoiceActivityTimeout.Builder | |
getVoiceActivityTimeoutOrBuilder()
public StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder getVoiceActivityTimeoutOrBuilder()
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder | |
hasConfig()
public boolean hasConfig()
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Type | Description |
| --- | --- |
| boolean | Whether the config field is set. |
hasVoiceActivityTimeout()
public boolean hasVoiceActivityTimeout()
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Type | Description |
| --- | --- |
| boolean | Whether the voiceActivityTimeout field is set. |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
| Type | Description |
| --- | --- |
| FieldAccessorTable | |
isInitialized()
public final boolean isInitialized()
| Type | Description |
| --- | --- |
| boolean | |
mergeConfig(RecognitionConfig value)
public StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Name | Description |
| --- | --- |
| value | RecognitionConfig |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
mergeFrom(StreamingRecognitionConfig other)
public StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)
| Name | Description |
| --- | --- |
| other | StreamingRecognitionConfig |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
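A short sketch of the usual protobuf merge semantics as they apply here (the field values are illustrative): fields set to a non-default value in other overwrite the corresponding fields of this builder, while fields left at their default in other are untouched.

```java
StreamingRecognitionConfig defaults =
    StreamingRecognitionConfig.newBuilder().setInterimResults(true).build();
StreamingRecognitionConfig overrides =
    StreamingRecognitionConfig.newBuilder().setSingleUtterance(true).build();

// interim_results stays true (unset in overrides); single_utterance becomes true.
StreamingRecognitionConfig merged =
    defaults.toBuilder().mergeFrom(overrides).build();
```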
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public StreamingRecognitionConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
| Name | Description |
| --- | --- |
| input | CodedInputStream |
| extensionRegistry | ExtensionRegistryLite |

Returns
| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |

Exceptions
| Type | Description |
| --- | --- |
| IOException | |
mergeFrom(Message other)
public StreamingRecognitionConfig.Builder mergeFrom(Message other)
| Name | Description |
| --- | --- |
| other | Message |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final StreamingRecognitionConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
mergeVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
public StreamingRecognitionConfig.Builder mergeVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Name | Description |
| --- | --- |
| value | StreamingRecognitionConfig.VoiceActivityTimeout |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
setConfig(RecognitionConfig value)
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Name | Description |
| --- | --- |
| value | RecognitionConfig |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
setConfig(RecognitionConfig.Builder builderForValue)
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
Required. Provides information to the recognizer that specifies how to process the request.
.google.cloud.speech.v1p1beta1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
| Name | Description |
| --- | --- |
| builderForValue | RecognitionConfig.Builder |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
setEnableVoiceActivityEvents(boolean value)
public StreamingRecognitionConfig.Builder setEnableVoiceActivityEvents(boolean value)
If true, responses with voice activity speech events will be returned as they are detected.

bool enable_voice_activity_events = 5;

| Name | Description |
| --- | --- |
| value | boolean The enableVoiceActivityEvents to set. |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | This builder for chaining. |
setField(Descriptors.FieldDescriptor field, Object value)
public StreamingRecognitionConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
setInterimResults(boolean value)
public StreamingRecognitionConfig.Builder setInterimResults(boolean value)
If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.

bool interim_results = 3;

| Name | Description |
| --- | --- |
| value | boolean The interimResults to set. |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | This builder for chaining. |
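For context, a hedged sketch of how interim results are typically consumed on the response side; response stands for a StreamingRecognizeResponse received from the streaming call, which is not documented on this page:

```java
// With interim_results enabled, results may arrive with isFinal == false
// before the final hypothesis for the same stretch of audio.
for (StreamingRecognitionResult result : response.getResultsList()) {
  if (result.getAlternativesCount() == 0) {
    continue; // skip results without a transcript alternative
  }
  String transcript = result.getAlternatives(0).getTranscript();
  System.out.println((result.getIsFinal() ? "final: " : "interim: ") + transcript);
}
```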
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public StreamingRecognitionConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| index | int |
| value | Object |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
setSingleUtterance(boolean value)
public StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.
If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.
The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in RecognitionConfig must be set to:
- command_and_search
- phone_call AND additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.

bool single_utterance = 2;

| Name | Description |
| --- | --- |
| value | boolean The singleUtterance to set. |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | This builder for chaining. |
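A sketch of a configuration that satisfies the model constraint described above, using command_and_search, one of the models listed as compatible with single_utterance:

```java
StreamingRecognitionConfig config =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(
            RecognitionConfig.newBuilder()
                .setLanguageCode("en-US")
                .setModel("command_and_search")) // a model that supports single_utterance
        .setSingleUtterance(true)
        .build();
```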
setUnknownFields(UnknownFieldSet unknownFields)
public final StreamingRecognitionConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
public StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout value)
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Name | Description |
| --- | --- |
| value | StreamingRecognitionConfig.VoiceActivityTimeout |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout.Builder builderForValue)
public StreamingRecognitionConfig.Builder setVoiceActivityTimeout(StreamingRecognitionConfig.VoiceActivityTimeout.Builder builderForValue)
If set, the server will automatically close the stream after the specified
duration has elapsed after the last VOICE_ACTIVITY speech event has been
sent. The field voice_activity_events
must also be set to true.
.google.cloud.speech.v1p1beta1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;
| Name | Description |
| --- | --- |
| builderForValue | StreamingRecognitionConfig.VoiceActivityTimeout.Builder |

| Type | Description |
| --- | --- |
| StreamingRecognitionConfig.Builder | |
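Finally, a sketch combining the two voice-activity fields; it assumes VoiceActivityTimeout exposes a speech_end_timeout Duration field (that nested message is not documented on this page, so treat the field name as an assumption):

```java
import com.google.protobuf.Duration;

StreamingRecognitionConfig config =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(RecognitionConfig.newBuilder().setLanguageCode("en-US"))
        .setEnableVoiceActivityEvents(true) // must be true for the timeout to apply
        .setVoiceActivityTimeout(
            StreamingRecognitionConfig.VoiceActivityTimeout.newBuilder()
                // assumed field: close the stream 5s after speech stops
                .setSpeechEndTimeout(Duration.newBuilder().setSeconds(5)))
        .build();
```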