Interface StreamingRecognitionConfigOrBuilder (4.22.0)

public interface StreamingRecognitionConfigOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder

Methods

getConfig()

public abstract RecognitionConfig getConfig()

Required. Provides information to the recognizer that specifies how to process the request.

.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
Type: RecognitionConfig
Description: The config.
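As a short sketch of how this accessor is typically used (assuming the google-cloud-speech client library is on the classpath), a StreamingRecognitionConfig can be built with its required RecognitionConfig and then read back through the OrBuilder view; both the built message and its Builder implement StreamingRecognitionConfigOrBuilder:

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfigOrBuilder;

public class ConfigExample {
  public static void main(String[] args) {
    RecognitionConfig recognitionConfig =
        RecognitionConfig.newBuilder()
            .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .build();

    // The Builder itself satisfies StreamingRecognitionConfigOrBuilder,
    // so getConfig()/hasConfig() can be called before build().
    StreamingRecognitionConfigOrBuilder streaming =
        StreamingRecognitionConfig.newBuilder().setConfig(recognitionConfig);

    System.out.println(streaming.hasConfig());                   // true
    System.out.println(streaming.getConfig().getLanguageCode()); // en-US
  }
}
```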

getConfigOrBuilder()

public abstract RecognitionConfigOrBuilder getConfigOrBuilder()

Required. Provides information to the recognizer that specifies how to process the request.

.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
Type: RecognitionConfigOrBuilder

getEnableVoiceActivityEvents()

public abstract boolean getEnableVoiceActivityEvents()

If true, responses with voice activity speech events will be returned as they are detected.

bool enable_voice_activity_events = 5;

Returns
Type: boolean
Description: The enableVoiceActivityEvents.

getInterimResults()

public abstract boolean getInterimResults()

If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.

bool interim_results = 3;

Returns
Type: boolean
Description: The interimResults.
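A minimal sketch of setting this flag (assuming the google-cloud-speech library is on the classpath):

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;

public class InterimResultsExample {
  public static void main(String[] args) {
    StreamingRecognitionConfig config =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(RecognitionConfig.newBuilder().setLanguageCode("en-US").build())
            .setInterimResults(true) // tentative hypotheses will arrive with is_final=false
            .build();

    System.out.println(config.getInterimResults()); // true
  }
}
```

When this flag is enabled, clients typically inspect the is_final flag on each StreamingRecognitionResult to distinguish tentative hypotheses from final transcripts.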

getSingleUtterance()

public abstract boolean getSingleUtterance()

If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.

If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.

The single_utterance field can only be used with specified models; otherwise an error is thrown. The model field in RecognitionConfig must be set to one of:

  • command_and_search
  • phone_call, with the additional field useEnhanced set to true
  • left undefined, in which case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.

bool single_utterance = 2;

Returns
Type: boolean
Description: The singleUtterance.
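The model requirement above can be satisfied by picking a supported model explicitly rather than relying on auto-selection; a hedged sketch, assuming the google-cloud-speech library is available:

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;

public class SingleUtteranceExample {
  public static void main(String[] args) {
    // single_utterance requires one of the supported models; here we choose
    // command_and_search explicitly so the combination is always valid.
    StreamingRecognitionConfig config =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(
                RecognitionConfig.newBuilder()
                    .setLanguageCode("en-US")
                    .setModel("command_and_search")
                    .build())
            .setSingleUtterance(true)
            .build();

    System.out.println(config.getSingleUtterance()); // true
  }
}
```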

getVoiceActivityTimeout()

public abstract StreamingRecognitionConfig.VoiceActivityTimeout getVoiceActivityTimeout()

If set, the server will automatically close the stream once the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field enable_voice_activity_events must also be set to true.

.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;

Returns
Type: StreamingRecognitionConfig.VoiceActivityTimeout
Description: The voiceActivityTimeout.
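Because the timeout only takes effect together with voice activity events, the two fields are typically set side by side. A sketch, assuming the google-cloud-speech library and its VoiceActivityTimeout message (with speech_start_timeout and speech_end_timeout sub-fields) are available:

```java
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.protobuf.Duration;

public class VoiceActivityTimeoutExample {
  public static void main(String[] args) {
    StreamingRecognitionConfig config =
        StreamingRecognitionConfig.newBuilder()
            .setConfig(RecognitionConfig.newBuilder().setLanguageCode("en-US").build())
            // Timeouts take effect only when voice activity events are enabled.
            .setEnableVoiceActivityEvents(true)
            .setVoiceActivityTimeout(
                StreamingRecognitionConfig.VoiceActivityTimeout.newBuilder()
                    .setSpeechStartTimeout(Duration.newBuilder().setSeconds(5).build())
                    .setSpeechEndTimeout(Duration.newBuilder().setSeconds(2).build())
                    .build())
            .build();

    System.out.println(config.hasVoiceActivityTimeout()); // true
  }
}
```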

getVoiceActivityTimeoutOrBuilder()

public abstract StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder getVoiceActivityTimeoutOrBuilder()

If set, the server will automatically close the stream once the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field enable_voice_activity_events must also be set to true.

.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;

Returns
Type: StreamingRecognitionConfig.VoiceActivityTimeoutOrBuilder

hasConfig()

public abstract boolean hasConfig()

Required. Provides information to the recognizer that specifies how to process the request.

.google.cloud.speech.v1.RecognitionConfig config = 1 [(.google.api.field_behavior) = REQUIRED];

Returns
Type: boolean
Description: Whether the config field is set.

hasVoiceActivityTimeout()

public abstract boolean hasVoiceActivityTimeout()

If set, the server will automatically close the stream once the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field enable_voice_activity_events must also be set to true.

.google.cloud.speech.v1.StreamingRecognitionConfig.VoiceActivityTimeout voice_activity_timeout = 6;

Returns
Type: boolean
Description: Whether the voiceActivityTimeout field is set.