Class StreamingRecognitionResult.Builder (0.44.0)

public static final class StreamingRecognitionResult.Builder extends GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder> implements StreamingRecognitionResultOrBuilder

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance.

While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a transcript value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the is_final value is set to true and processing continues for the next transcript.

If StreamingDetectIntentRequest.query_input.audio.config.single_utterance was true, and the recognizer has completed processing audio, the message_type value is set to END_OF_SINGLE_UTTERANCE and the following (last) result contains the last finalized transcript.

The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results.

In the following example, single utterance is enabled. If single utterance were not enabled, result 7 would not occur.

Num  transcript               message_type             is_final
1    "tube"                   TRANSCRIPT               false
2    "to be a"                TRANSCRIPT               false
3    "to be"                  TRANSCRIPT               false
4    "to be or not to be"     TRANSCRIPT               true
5    "that's"                 TRANSCRIPT               false
6    "that is"                TRANSCRIPT               false
7    unset                    END_OF_SINGLE_UTTERANCE  unset
8    " that is the question"  TRANSCRIPT               true

Concatenating the finalized transcripts with is_final set to true, the complete utterance becomes "to be or not to be that is the question".
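The concatenation rule above can be sketched in plain Java. The `StreamingResult` record below is a hypothetical stand-in for `StreamingRecognitionResult`, keeping only the `transcript` and `isFinal` fields needed for this step:

```java
import java.util.List;

public class ConcatFinalTranscripts {
    // Hypothetical stand-in for StreamingRecognitionResult: only the
    // fields needed to assemble the complete utterance.
    record StreamingResult(String transcript, boolean isFinal) {}

    // Concatenate the transcript values of finalized results, in order.
    // Interim (is_final = false) hypotheses are discarded.
    static String completeUtterance(List<StreamingResult> results) {
        StringBuilder utterance = new StringBuilder();
        for (StreamingResult r : results) {
            if (r.isFinal()) {
                utterance.append(r.transcript());
            }
        }
        return utterance.toString();
    }

    public static void main(String[] args) {
        // The example series from the table above (interim results included).
        List<StreamingResult> series = List.of(
            new StreamingResult("tube", false),
            new StreamingResult("to be a", false),
            new StreamingResult("to be", false),
            new StreamingResult("to be or not to be", true),
            new StreamingResult("that's", false),
            new StreamingResult("that is", false),
            new StreamingResult(" that is the question", true));
        System.out.println(completeUtterance(series));
        // Prints: to be or not to be that is the question
    }
}
```

Note that the leading space in result 8 is part of the transcript itself, which is why simple concatenation yields a correctly spaced sentence.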

Protobuf type google.cloud.dialogflow.cx.v3.StreamingRecognitionResult

Static Methods

getDescriptor()

public static final Descriptors.Descriptor getDescriptor()
Returns
Type Description
Descriptor

Methods

addAllSpeechWordInfo(Iterable<? extends SpeechWordInfo> values)

public StreamingRecognitionResult.Builder addAllSpeechWordInfo(Iterable<? extends SpeechWordInfo> values)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
values Iterable<? extends com.google.cloud.dialogflow.cx.v3.SpeechWordInfo>
Returns
Type Description
StreamingRecognitionResult.Builder

addRepeatedField(Descriptors.FieldDescriptor field, Object value)

public StreamingRecognitionResult.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name Description
field FieldDescriptor
value Object
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

addSpeechWordInfo(SpeechWordInfo value)

public StreamingRecognitionResult.Builder addSpeechWordInfo(SpeechWordInfo value)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
value SpeechWordInfo
Returns
Type Description
StreamingRecognitionResult.Builder

addSpeechWordInfo(SpeechWordInfo.Builder builderForValue)

public StreamingRecognitionResult.Builder addSpeechWordInfo(SpeechWordInfo.Builder builderForValue)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
builderForValue SpeechWordInfo.Builder
Returns
Type Description
StreamingRecognitionResult.Builder

addSpeechWordInfo(int index, SpeechWordInfo value)

public StreamingRecognitionResult.Builder addSpeechWordInfo(int index, SpeechWordInfo value)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameters
Name Description
index int
value SpeechWordInfo
Returns
Type Description
StreamingRecognitionResult.Builder

addSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)

public StreamingRecognitionResult.Builder addSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameters
Name Description
index int
builderForValue SpeechWordInfo.Builder
Returns
Type Description
StreamingRecognitionResult.Builder

addSpeechWordInfoBuilder()

public SpeechWordInfo.Builder addSpeechWordInfoBuilder()

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Returns
Type Description
SpeechWordInfo.Builder

addSpeechWordInfoBuilder(int index)

public SpeechWordInfo.Builder addSpeechWordInfoBuilder(int index)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
index int
Returns
Type Description
SpeechWordInfo.Builder

build()

public StreamingRecognitionResult build()
Returns
Type Description
StreamingRecognitionResult

buildPartial()

public StreamingRecognitionResult buildPartial()
Returns
Type Description
StreamingRecognitionResult

clear()

public StreamingRecognitionResult.Builder clear()
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

clearConfidence()

public StreamingRecognitionResult.Builder clearConfidence()

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.

float confidence = 4;

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

clearField(Descriptors.FieldDescriptor field)

public StreamingRecognitionResult.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
Name Description
field FieldDescriptor
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

clearIsFinal()

public StreamingRecognitionResult.Builder clearIsFinal()

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.

bool is_final = 3;

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

clearLanguageCode()

public StreamingRecognitionResult.Builder clearLanguageCode()

Detected language code for the transcript.

string language_code = 10;

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

clearMessageType()

public StreamingRecognitionResult.Builder clearMessageType()

Type of the result message.

.google.cloud.dialogflow.cx.v3.StreamingRecognitionResult.MessageType message_type = 1;

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

clearOneof(Descriptors.OneofDescriptor oneof)

public StreamingRecognitionResult.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
Name Description
oneof OneofDescriptor
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

clearSpeechEndOffset()

public StreamingRecognitionResult.Builder clearSpeechEndOffset()

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Returns
Type Description
StreamingRecognitionResult.Builder

clearSpeechWordInfo()

public StreamingRecognitionResult.Builder clearSpeechWordInfo()

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Returns
Type Description
StreamingRecognitionResult.Builder

clearStability()

public StreamingRecognitionResult.Builder clearStability()

An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result:

  • If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for TRANSCRIPT results with is_final = false.
  • Otherwise, the value is in (0.0, 1.0] where 0.0 means completely unstable and 1.0 means completely stable.

float stability = 6;

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

clearTranscript()

public StreamingRecognitionResult.Builder clearTranscript()

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

string transcript = 2;

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

clone()

public StreamingRecognitionResult.Builder clone()
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

getConfidence()

public float getConfidence()

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.

float confidence = 4;

Returns
Type Description
float

The confidence.
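Because 0.0 is documented as a sentinel for "confidence was not set", callers generally should not treat it as a genuine low score. A minimal sketch of one way to make that distinction explicit (the helper name is hypothetical, not part of the API):

```java
import java.util.OptionalDouble;

public class ConfidenceSentinel {
    // Hypothetical helper: the proto uses 0.0 as a sentinel meaning
    // "confidence was not set", so map it to an empty OptionalDouble
    // instead of letting it masquerade as a real score.
    static OptionalDouble confidenceIfSet(float confidence) {
        return confidence == 0.0f
            ? OptionalDouble.empty()
            : OptionalDouble.of(confidence);
    }

    public static void main(String[] args) {
        System.out.println(confidenceIfSet(0.0f).isPresent());  // false: not set
        System.out.println(confidenceIfSet(0.87f).isPresent()); // true
    }
}
```

Since the field is typically only provided when is_final is true, interim results will usually map to the empty case.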

getDefaultInstanceForType()

public StreamingRecognitionResult getDefaultInstanceForType()
Returns
Type Description
StreamingRecognitionResult

getDescriptorForType()

public Descriptors.Descriptor getDescriptorForType()
Returns
Type Description
Descriptor
Overrides

getIsFinal()

public boolean getIsFinal()

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.

bool is_final = 3;

Returns
Type Description
boolean

The isFinal.

getLanguageCode()

public String getLanguageCode()

Detected language code for the transcript.

string language_code = 10;

Returns
Type Description
String

The languageCode.

getLanguageCodeBytes()

public ByteString getLanguageCodeBytes()

Detected language code for the transcript.

string language_code = 10;

Returns
Type Description
ByteString

The bytes for languageCode.

getMessageType()

public StreamingRecognitionResult.MessageType getMessageType()

Type of the result message.

.google.cloud.dialogflow.cx.v3.StreamingRecognitionResult.MessageType message_type = 1;

Returns
Type Description
StreamingRecognitionResult.MessageType

The messageType.

getMessageTypeValue()

public int getMessageTypeValue()

Type of the result message.

.google.cloud.dialogflow.cx.v3.StreamingRecognitionResult.MessageType message_type = 1;

Returns
Type Description
int

The enum numeric value on the wire for messageType.

getSpeechEndOffset()

public Duration getSpeechEndOffset()

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Returns
Type Description
Duration

The speechEndOffset.
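The returned offset is a protobuf `Duration`, i.e. a (seconds, nanos) pair. A sketch of converting that pair to milliseconds for aligning the result against an audio timeline (the helper is illustrative; the protobuf-java-util library's `Durations.toMillis` performs the same conversion on a real `Duration`):

```java
public class OffsetMillis {
    // Convert a protobuf-style (seconds, nanos) duration to milliseconds.
    // Nanos are truncated below millisecond precision.
    static long toMillis(long seconds, int nanos) {
        return seconds * 1_000L + nanos / 1_000_000;
    }

    public static void main(String[] args) {
        // e.g. a speech_end_offset of 2s 350ms into the audio stream:
        System.out.println(toMillis(2, 350_000_000)); // 2350
    }
}
```

Check `hasSpeechEndOffset()` before reading the field, since it is only populated for message_type = TRANSCRIPT.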

getSpeechEndOffsetBuilder()

public Duration.Builder getSpeechEndOffsetBuilder()

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Returns
Type Description
Builder

getSpeechEndOffsetOrBuilder()

public DurationOrBuilder getSpeechEndOffsetOrBuilder()

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Returns
Type Description
DurationOrBuilder

getSpeechWordInfo(int index)

public SpeechWordInfo getSpeechWordInfo(int index)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
index int
Returns
Type Description
SpeechWordInfo

getSpeechWordInfoBuilder(int index)

public SpeechWordInfo.Builder getSpeechWordInfoBuilder(int index)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
index int
Returns
Type Description
SpeechWordInfo.Builder

getSpeechWordInfoBuilderList()

public List<SpeechWordInfo.Builder> getSpeechWordInfoBuilderList()

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Returns
Type Description
List<Builder>

getSpeechWordInfoCount()

public int getSpeechWordInfoCount()

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Returns
Type Description
int

getSpeechWordInfoList()

public List<SpeechWordInfo> getSpeechWordInfoList()

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Returns
Type Description
List<SpeechWordInfo>

getSpeechWordInfoOrBuilder(int index)

public SpeechWordInfoOrBuilder getSpeechWordInfoOrBuilder(int index)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
index int
Returns
Type Description
SpeechWordInfoOrBuilder

getSpeechWordInfoOrBuilderList()

public List<? extends SpeechWordInfoOrBuilder> getSpeechWordInfoOrBuilderList()

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Returns
Type Description
List<? extends com.google.cloud.dialogflow.cx.v3.SpeechWordInfoOrBuilder>

getStability()

public float getStability()

An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result:

  • If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for TRANSCRIPT results with is_final = false.
  • Otherwise, the value is in (0.0, 1.0] where 0.0 means completely unstable and 1.0 means completely stable.

float stability = 6;

Returns
Type Description
float

The stability.
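A minimal sketch of how a client might interpret stability, following the documented semantics (0.0 means not computed; stability is only provided for interim TRANSCRIPT results). The 0.8 threshold and the helper itself are arbitrary illustrations, not part of the API:

```java
public class StabilityCheck {
    // Hypothetical helper reflecting the field's documented semantics:
    // stability is only meaningful on interim results (is_final = false),
    // and 0.0 means it was not computed; otherwise it lies in (0.0, 1.0].
    static String describeStability(float stability, boolean isFinal) {
        if (isFinal || stability == 0.0f) {
            return "not computed";
        }
        // 0.8 is an arbitrary cutoff chosen for illustration only.
        return stability >= 0.8f ? "likely stable" : "may still change";
    }

    public static void main(String[] args) {
        System.out.println(describeStability(0.9f, false)); // likely stable
        System.out.println(describeStability(0.3f, false)); // may still change
        System.out.println(describeStability(0.9f, true));  // not computed
    }
}
```

A client might use such a check to decide whether an interim transcript is worth displaying before the finalized result arrives.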

getTranscript()

public String getTranscript()

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

string transcript = 2;

Returns
Type Description
String

The transcript.

getTranscriptBytes()

public ByteString getTranscriptBytes()

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

string transcript = 2;

Returns
Type Description
ByteString

The bytes for transcript.

hasSpeechEndOffset()

public boolean hasSpeechEndOffset()

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Returns
Type Description
boolean

Whether the speechEndOffset field is set.

internalGetFieldAccessorTable()

protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Type Description
FieldAccessorTable
Overrides

isInitialized()

public final boolean isInitialized()
Returns
Type Description
boolean
Overrides

mergeFrom(StreamingRecognitionResult other)

public StreamingRecognitionResult.Builder mergeFrom(StreamingRecognitionResult other)
Parameter
Name Description
other StreamingRecognitionResult
Returns
Type Description
StreamingRecognitionResult.Builder

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

public StreamingRecognitionResult.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Name Description
input CodedInputStream
extensionRegistry ExtensionRegistryLite
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides
Exceptions
Type Description
IOException

mergeFrom(Message other)

public StreamingRecognitionResult.Builder mergeFrom(Message other)
Parameter
Name Description
other Message
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

mergeSpeechEndOffset(Duration value)

public StreamingRecognitionResult.Builder mergeSpeechEndOffset(Duration value)

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Parameter
Name Description
value Duration
Returns
Type Description
StreamingRecognitionResult.Builder

mergeUnknownFields(UnknownFieldSet unknownFields)

public final StreamingRecognitionResult.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name Description
unknownFields UnknownFieldSet
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

removeSpeechWordInfo(int index)

public StreamingRecognitionResult.Builder removeSpeechWordInfo(int index)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameter
Name Description
index int
Returns
Type Description
StreamingRecognitionResult.Builder

setConfidence(float value)

public StreamingRecognitionResult.Builder setConfidence(float value)

The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.

float confidence = 4;

Parameter
Name Description
value float

The confidence to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setField(Descriptors.FieldDescriptor field, Object value)

public StreamingRecognitionResult.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name Description
field FieldDescriptor
value Object
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

setIsFinal(boolean value)

public StreamingRecognitionResult.Builder setIsFinal(boolean value)

If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.

bool is_final = 3;

Parameter
Name Description
value boolean

The isFinal to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setLanguageCode(String value)

public StreamingRecognitionResult.Builder setLanguageCode(String value)

Detected language code for the transcript.

string language_code = 10;

Parameter
Name Description
value String

The languageCode to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setLanguageCodeBytes(ByteString value)

public StreamingRecognitionResult.Builder setLanguageCodeBytes(ByteString value)

Detected language code for the transcript.

string language_code = 10;

Parameter
Name Description
value ByteString

The bytes for languageCode to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setMessageType(StreamingRecognitionResult.MessageType value)

public StreamingRecognitionResult.Builder setMessageType(StreamingRecognitionResult.MessageType value)

Type of the result message.

.google.cloud.dialogflow.cx.v3.StreamingRecognitionResult.MessageType message_type = 1;

Parameter
Name Description
value StreamingRecognitionResult.MessageType

The messageType to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setMessageTypeValue(int value)

public StreamingRecognitionResult.Builder setMessageTypeValue(int value)

Type of the result message.

.google.cloud.dialogflow.cx.v3.StreamingRecognitionResult.MessageType message_type = 1;

Parameter
Name Description
value int

The enum numeric value on the wire for messageType to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

public StreamingRecognitionResult.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
Name Description
field FieldDescriptor
index int
value Object
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides

setSpeechEndOffset(Duration value)

public StreamingRecognitionResult.Builder setSpeechEndOffset(Duration value)

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Parameter
Name Description
value Duration
Returns
Type Description
StreamingRecognitionResult.Builder

setSpeechEndOffset(Duration.Builder builderForValue)

public StreamingRecognitionResult.Builder setSpeechEndOffset(Duration.Builder builderForValue)

Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.

.google.protobuf.Duration speech_end_offset = 8;

Parameter
Name Description
builderForValue Builder
Returns
Type Description
StreamingRecognitionResult.Builder

setSpeechWordInfo(int index, SpeechWordInfo value)

public StreamingRecognitionResult.Builder setSpeechWordInfo(int index, SpeechWordInfo value)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameters
Name Description
index int
value SpeechWordInfo
Returns
Type Description
StreamingRecognitionResult.Builder

setSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)

public StreamingRecognitionResult.Builder setSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)

Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.

repeated .google.cloud.dialogflow.cx.v3.SpeechWordInfo speech_word_info = 7;

Parameters
Name Description
index int
builderForValue SpeechWordInfo.Builder
Returns
Type Description
StreamingRecognitionResult.Builder

setStability(float value)

public StreamingRecognitionResult.Builder setStability(float value)

An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result:

  • If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for TRANSCRIPT results with is_final = false.
  • Otherwise, the value is in (0.0, 1.0] where 0.0 means completely unstable and 1.0 means completely stable.

float stability = 6;

Parameter
Name Description
value float

The stability to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setTranscript(String value)

public StreamingRecognitionResult.Builder setTranscript(String value)

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

string transcript = 2;

Parameter
Name Description
value String

The transcript to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setTranscriptBytes(ByteString value)

public StreamingRecognitionResult.Builder setTranscriptBytes(ByteString value)

Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.

string transcript = 2;

Parameter
Name Description
value ByteString

The bytes for transcript to set.

Returns
Type Description
StreamingRecognitionResult.Builder

This builder for chaining.

setUnknownFields(UnknownFieldSet unknownFields)

public final StreamingRecognitionResult.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name Description
unknownFields UnknownFieldSet
Returns
Type Description
StreamingRecognitionResult.Builder
Overrides