public static final class StreamingRecognitionConfig.Builder extends GeneratedMessageV3.Builder<StreamingRecognitionConfig.Builder> implements StreamingRecognitionConfigOrBuilder
Provides information to the recognizer that specifies how to process the
request.
Protobuf type google.cloud.speech.v1.StreamingRecognitionConfig
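The following is a minimal sketch of typical builder usage: the nested RecognitionConfig is built first and then wrapped in a StreamingRecognitionConfig. The encoding, sample rate, and language code shown are illustrative values, not requirements.

import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;

// Build the nested RecognitionConfig first, then wrap it in a
// StreamingRecognitionConfig together with streaming-specific flags.
RecognitionConfig recognitionConfig =
    RecognitionConfig.newBuilder()
        .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .build();

StreamingRecognitionConfig streamingConfig =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(recognitionConfig)
        .setInterimResults(true)
        .build();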
Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Returns
Methods
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public StreamingRecognitionConfig.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
Returns
Overrides
build()
public StreamingRecognitionConfig build()
Returns
buildPartial()
public StreamingRecognitionConfig buildPartial()
Returns
clear()
public StreamingRecognitionConfig.Builder clear()
Returns
Overrides
clearConfig()
public StreamingRecognitionConfig.Builder clearConfig()
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
clearField(Descriptors.FieldDescriptor field)
public StreamingRecognitionConfig.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
Returns
Overrides
clearInterimResults()
public StreamingRecognitionConfig.Builder clearInterimResults()
If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.
bool interim_results = 3;
Returns
clearOneof(Descriptors.OneofDescriptor oneof)
public StreamingRecognitionConfig.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
Returns
Overrides
clearSingleUtterance()
public StreamingRecognitionConfig.Builder clearSingleUtterance()
If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.
If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.
The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in RecognitionConfig must be set to:
- command_and_search
- phone_call AND additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.
bool single_utterance = 2;
Returns
clone()
public StreamingRecognitionConfig.Builder clone()
Returns
Overrides
getConfig()
public RecognitionConfig getConfig()
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
getConfigBuilder()
public RecognitionConfig.Builder getConfigBuilder()
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
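As a hedged sketch of getConfigBuilder(), the nested builder it returns is backed by the parent builder, so the config message can be edited in place; the field values below are illustrative.

StreamingRecognitionConfig.Builder builder =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(RecognitionConfig.newBuilder().setLanguageCode("en-US"));

// Changes made through the nested builder are reflected when build()
// is called on the parent builder.
builder.getConfigBuilder().setSampleRateHertz(8000);

StreamingRecognitionConfig config = builder.build();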
getConfigOrBuilder()
public RecognitionConfigOrBuilder getConfigOrBuilder()
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
getDefaultInstanceForType()
public StreamingRecognitionConfig getDefaultInstanceForType()
Returns
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Returns
Overrides
getInterimResults()
public boolean getInterimResults()
If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.
bool interim_results = 3;
Returns
Type | Description
boolean | The interimResults.
getSingleUtterance()
public boolean getSingleUtterance()
If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.
If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.
The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in RecognitionConfig must be set to:
- command_and_search
- phone_call AND additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.
bool single_utterance = 2;
Returns
Type | Description
boolean | The singleUtterance.
hasConfig()
public boolean hasConfig()
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Returns
Type | Description
boolean | Whether the config field is set.
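A small sketch of how hasConfig() might be used to guard access to the required config field; the language code used here is only illustrative.

StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();

// config is a required message field; hasConfig() reports whether it has
// been set on this builder before getConfig() is read.
if (!builder.hasConfig()) {
  builder.setConfig(
      RecognitionConfig.newBuilder().setLanguageCode("en-US").build());
}
RecognitionConfig current = builder.getConfig();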
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Overrides
isInitialized()
public final boolean isInitialized()
Returns
Overrides
mergeConfig(RecognitionConfig value)
public StreamingRecognitionConfig.Builder mergeConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Parameter
Returns
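A minimal sketch of mergeConfig(): following standard protobuf builder merge semantics, the supplied message is combined with any config already set on the builder rather than replacing it outright; the field values are illustrative.

StreamingRecognitionConfig.Builder builder =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(RecognitionConfig.newBuilder().setLanguageCode("en-US"));

// Merge in additional settings; fields set to non-default values in the
// argument overwrite existing values, while unset fields are left alone.
builder.mergeConfig(
    RecognitionConfig.newBuilder().setSampleRateHertz(16000).build());

// The merged config keeps the language code and gains the sample rate.
RecognitionConfig merged = builder.getConfig();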
mergeFrom(StreamingRecognitionConfig other)
public StreamingRecognitionConfig.Builder mergeFrom(StreamingRecognitionConfig other)
Parameter
Returns
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public StreamingRecognitionConfig.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Overrides
Exceptions
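A hedged sketch of the CodedInputStream overload, assuming a byte array holding a previously serialized StreamingRecognitionConfig; the helper method name is hypothetical.

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

StreamingRecognitionConfig parseConfig(byte[] serialized) throws IOException {
  StreamingRecognitionConfig.Builder builder = StreamingRecognitionConfig.newBuilder();
  // mergeFrom reads the wire-format bytes; the empty registry suffices here
  // because this message type declares no extensions.
  builder.mergeFrom(
      CodedInputStream.newInstance(serialized),
      ExtensionRegistryLite.getEmptyRegistry());
  return builder.build();
}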
mergeFrom(Message other)
public StreamingRecognitionConfig.Builder mergeFrom(Message other)
Parameter
Returns
Overrides
mergeUnknownFields(UnknownFieldSet unknownFields)
public final StreamingRecognitionConfig.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
Returns
Overrides
setConfig(RecognitionConfig value)
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig value)
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Parameter
Returns
setConfig(RecognitionConfig.Builder builderForValue)
public StreamingRecognitionConfig.Builder setConfig(RecognitionConfig.Builder builderForValue)
Required. Provides information to the recognizer that specifies how to
process the request.
.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];
Parameter
Returns
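A brief sketch contrasting the two setConfig overloads: the builder overload accepts a RecognitionConfig.Builder directly, so the nested message does not need to be built first. The values shown are illustrative.

// Message overload: pass an already built RecognitionConfig.
StreamingRecognitionConfig a =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(
            RecognitionConfig.newBuilder().setLanguageCode("en-US").build())
        .build();

// Builder overload: pass the nested builder itself; it is built for you.
StreamingRecognitionConfig b =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(RecognitionConfig.newBuilder().setLanguageCode("en-US"))
        .build();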
setField(Descriptors.FieldDescriptor field, Object value)
public StreamingRecognitionConfig.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
Returns
Overrides
setInterimResults(boolean value)
public StreamingRecognitionConfig.Builder setInterimResults(boolean value)
If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.
bool interim_results = 3;
Parameter
Name | Description
value | boolean. The interimResults to set.
Returns
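A minimal sketch of enabling interim results on a streaming config; whether partial hypotheses are useful depends on the application (for example, live captioning), and the other field values are illustrative.

StreamingRecognitionConfig streamingConfig =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(
            RecognitionConfig.newBuilder()
                .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                .setSampleRateHertz(16000)
                .setLanguageCode("en-US"))
        // Request tentative hypotheses; results with is_final=false may be
        // returned before the final is_final=true result.
        .setInterimResults(true)
        .build();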
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public StreamingRecognitionConfig.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
Returns
Overrides
setSingleUtterance(boolean value)
public StreamingRecognitionConfig.Builder setSingleUtterance(boolean value)
If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.
If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.
The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in RecognitionConfig must be set to:
- command_and_search
- phone_call AND additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.
bool single_utterance = 2;
Parameter
Name | Description
value | boolean. The singleUtterance to set.
Returns
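A hedged sketch of single-utterance mode: per the field description above, single_utterance requires one of the supported models, so this example sets the command_and_search model; the remaining values are illustrative.

StreamingRecognitionConfig streamingConfig =
    StreamingRecognitionConfig.newBuilder()
        .setConfig(
            RecognitionConfig.newBuilder()
                .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                .setSampleRateHertz(16000)
                .setLanguageCode("en-US")
                // single_utterance is only supported with certain models,
                // such as command_and_search.
                .setModel("command_and_search"))
        // Stop after the first detected utterance; an END_OF_SINGLE_UTTERANCE
        // event is returned when the speaker pauses or stops.
        .setSingleUtterance(true)
        .build();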
setUnknownFields(UnknownFieldSet unknownFields)
public final StreamingRecognitionConfig.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
Returns
Overrides