public static final class StreamingRecognitionResult.Builder extends GeneratedMessageV3.Builder<StreamingRecognitionResult.Builder> implements StreamingRecognitionResultOrBuilder
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance.
While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a transcript value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the is_final value is set to true and processing continues for the next transcript.
If StreamingDetectIntentRequest.query_input.audio_config.single_utterance was true and the recognizer has completed processing audio, the message_type value is set to END_OF_SINGLE_UTTERANCE and the following (last) result contains the last finalized transcript.
The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results.
In the following example, single utterance is enabled. If single utterance were not enabled, result 7 would not occur.
Num | transcript | message_type | is_final |
---|---|---|---|
1 | "tube" | TRANSCRIPT | false |
2 | "to be a" | TRANSCRIPT | false |
3 | "to be" | TRANSCRIPT | false |
4 | "to be or not to be" | TRANSCRIPT | true |
5 | "that's" | TRANSCRIPT | false |
6 | "that is" | TRANSCRIPT | false |
7 | unset | END_OF_SINGLE_UTTERANCE | unset |
8 | " that is the question" | TRANSCRIPT | true |
Concatenating the finalized transcripts (those with is_final set to true), the complete utterance becomes "to be or not to be that is the question".
Protobuf type google.cloud.dialogflow.v2.StreamingRecognitionResult
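The concatenation rule described above can be sketched in plain Java. The `Result` record below is a hypothetical stand-in for the generated StreamingRecognitionResult message (only the `transcript` and `is_final` fields are modeled), so the logic can be shown without the Dialogflow dependency:

```java
import java.util.List;

public class ConcatFinal {
    // Hypothetical stand-in for StreamingRecognitionResult; the real class
    // is generated by protoc in com.google.cloud.dialogflow.v2.
    record Result(String transcript, boolean isFinal) {}

    // Concatenate the transcripts of all finalized results in order,
    // mirroring the documented rule for recovering the full utterance.
    static String completeUtterance(List<Result> results) {
        StringBuilder sb = new StringBuilder();
        for (Result r : results) {
            if (r.isFinal()) {
                sb.append(r.transcript());
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The series from the example table (result 7, END_OF_SINGLE_UTTERANCE,
        // carries no transcript and is omitted here).
        List<Result> series = List.of(
            new Result("tube", false),
            new Result("to be a", false),
            new Result("to be", false),
            new Result("to be or not to be", true),
            new Result("that's", false),
            new Result("that is", false),
            new Result(" that is the question", true));
        System.out.println(completeUtterance(series));
        // prints: to be or not to be that is the question
    }
}
```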
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > StreamingRecognitionResult.Builder
Implements
StreamingRecognitionResultOrBuilder
Methods
addAllSpeechWordInfo(Iterable<? extends SpeechWordInfo> values)
public StreamingRecognitionResult.Builder addAllSpeechWordInfo(Iterable<? extends SpeechWordInfo> values)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
values | Iterable<? extends com.google.cloud.dialogflow.v2.SpeechWordInfo> |
Type | Description |
StreamingRecognitionResult.Builder |
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public StreamingRecognitionResult.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Name | Description |
field | FieldDescriptor |
value | Object |
Type | Description |
StreamingRecognitionResult.Builder |
addSpeechWordInfo(SpeechWordInfo value)
public StreamingRecognitionResult.Builder addSpeechWordInfo(SpeechWordInfo value)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
value | SpeechWordInfo |
Type | Description |
StreamingRecognitionResult.Builder |
addSpeechWordInfo(SpeechWordInfo.Builder builderForValue)
public StreamingRecognitionResult.Builder addSpeechWordInfo(SpeechWordInfo.Builder builderForValue)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
builderForValue | SpeechWordInfo.Builder |
Type | Description |
StreamingRecognitionResult.Builder |
addSpeechWordInfo(int index, SpeechWordInfo value)
public StreamingRecognitionResult.Builder addSpeechWordInfo(int index, SpeechWordInfo value)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
value | SpeechWordInfo |
Type | Description |
StreamingRecognitionResult.Builder |
addSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)
public StreamingRecognitionResult.Builder addSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
builderForValue | SpeechWordInfo.Builder |
Type | Description |
StreamingRecognitionResult.Builder |
addSpeechWordInfoBuilder()
public SpeechWordInfo.Builder addSpeechWordInfoBuilder()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Type | Description |
SpeechWordInfo.Builder |
addSpeechWordInfoBuilder(int index)
public SpeechWordInfo.Builder addSpeechWordInfoBuilder(int index)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
Type | Description |
SpeechWordInfo.Builder |
build()
public StreamingRecognitionResult build()
Type | Description |
StreamingRecognitionResult |
buildPartial()
public StreamingRecognitionResult buildPartial()
Type | Description |
StreamingRecognitionResult |
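The builder follows the standard protobuf chaining style: setters return the builder, and build() snapshots an immutable message. Since the generated classes require the google-cloud-dialogflow dependency, the sketch below uses a hypothetical hand-written `Result`/`Builder` pair to illustrate the same call shape; the real builder additionally validates required state in build() versus buildPartial():

```java
import java.util.ArrayList;
import java.util.List;

public class BuilderSketch {
    // Hypothetical immutable message stand-in; the real
    // StreamingRecognitionResult is generated by protoc.
    record Result(String transcript, boolean isFinal, List<String> words) {}

    // Minimal builder mirroring StreamingRecognitionResult.Builder's
    // chaining style: each setter returns `this`, build() copies state.
    static class Builder {
        private String transcript = "";
        private boolean isFinal;
        private final List<String> words = new ArrayList<>();

        Builder setTranscript(String t) { this.transcript = t; return this; }
        Builder setIsFinal(boolean f) { this.isFinal = f; return this; }
        Builder addWord(String w) { this.words.add(w); return this; }

        Result build() {
            // Defensive copy so later builder mutations cannot leak
            // into an already-built message.
            return new Result(transcript, isFinal, List.copyOf(words));
        }
    }

    public static void main(String[] args) {
        Result r = new Builder()
            .setTranscript("to be or not to be")
            .setIsFinal(true)
            .addWord("to").addWord("be")
            .build();
        System.out.println(r.transcript() + " final=" + r.isFinal());
        // prints: to be or not to be final=true
    }
}
```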
clear()
public StreamingRecognitionResult.Builder clear()
Type | Description |
StreamingRecognitionResult.Builder |
clearConfidence()
public StreamingRecognitionResult.Builder clearConfidence()
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true, and you should not rely on it being accurate or even set.
float confidence = 4;
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
clearField(Descriptors.FieldDescriptor field)
public StreamingRecognitionResult.Builder clearField(Descriptors.FieldDescriptor field)
Name | Description |
field | FieldDescriptor |
Type | Description |
StreamingRecognitionResult.Builder |
clearIsFinal()
public StreamingRecognitionResult.Builder clearIsFinal()
If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
bool is_final = 3;
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
clearLanguageCode()
public StreamingRecognitionResult.Builder clearLanguageCode()
Detected language code for the transcript.
string language_code = 10;
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
clearMessageType()
public StreamingRecognitionResult.Builder clearMessageType()
Type of the result message.
.google.cloud.dialogflow.v2.StreamingRecognitionResult.MessageType message_type = 1;
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
clearOneof(Descriptors.OneofDescriptor oneof)
public StreamingRecognitionResult.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Name | Description |
oneof | OneofDescriptor |
Type | Description |
StreamingRecognitionResult.Builder |
clearSpeechEndOffset()
public StreamingRecognitionResult.Builder clearSpeechEndOffset()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Type | Description |
StreamingRecognitionResult.Builder |
clearSpeechWordInfo()
public StreamingRecognitionResult.Builder clearSpeechWordInfo()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Type | Description |
StreamingRecognitionResult.Builder |
clearTranscript()
public StreamingRecognitionResult.Builder clearTranscript()
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
string transcript = 2;
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
clone()
public StreamingRecognitionResult.Builder clone()
Type | Description |
StreamingRecognitionResult.Builder |
getConfidence()
public float getConfidence()
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true, and you should not rely on it being accurate or even set.
float confidence = 4;
Type | Description |
float | The confidence. |
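Because 0.0 doubles as the "not set" sentinel, code consuming this field should treat zero confidence as absent rather than as a very low score. A minimal sketch of that gating logic, using a hypothetical `Result` record in place of the generated message (the real accessors are getIsFinal() and getConfidence()):

```java
public class ConfidenceGate {
    // Hypothetical stand-in for the two relevant fields of
    // StreamingRecognitionResult.
    record Result(boolean isFinal, float confidence) {}

    // Accept a finalized transcript only when confidence was actually
    // set (> 0.0, since 0.0 means "not set") and clears the threshold.
    static boolean acceptTranscript(Result r, float threshold) {
        return r.isFinal() && r.confidence() > 0.0f && r.confidence() >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(acceptTranscript(new Result(true, 0.92f), 0.5f));
        // prints: true
        System.out.println(acceptTranscript(new Result(true, 0.0f), 0.0f));
        // prints: false (0.0 is the "not set" sentinel)
    }
}
```

Note the doc's caveat still applies: confidence may be missing or inaccurate even on final results, so such a gate should degrade gracefully rather than hard-fail.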
getDefaultInstanceForType()
public StreamingRecognitionResult getDefaultInstanceForType()
Type | Description |
StreamingRecognitionResult |
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Type | Description |
Descriptor |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Type | Description |
Descriptor |
getIsFinal()
public boolean getIsFinal()
If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
bool is_final = 3;
Type | Description |
boolean | The isFinal. |
getLanguageCode()
public String getLanguageCode()
Detected language code for the transcript.
string language_code = 10;
Type | Description |
String | The languageCode. |
getLanguageCodeBytes()
public ByteString getLanguageCodeBytes()
Detected language code for the transcript.
string language_code = 10;
Type | Description |
ByteString | The bytes for languageCode. |
getMessageType()
public StreamingRecognitionResult.MessageType getMessageType()
Type of the result message.
.google.cloud.dialogflow.v2.StreamingRecognitionResult.MessageType message_type = 1;
Type | Description |
StreamingRecognitionResult.MessageType | The messageType. |
getMessageTypeValue()
public int getMessageTypeValue()
Type of the result message.
.google.cloud.dialogflow.v2.StreamingRecognitionResult.MessageType message_type = 1;
Type | Description |
int | The enum numeric value on the wire for messageType. |
getSpeechEndOffset()
public Duration getSpeechEndOffset()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Type | Description |
Duration | The speechEndOffset. |
getSpeechEndOffsetBuilder()
public Duration.Builder getSpeechEndOffsetBuilder()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Type | Description |
Builder |
getSpeechEndOffsetOrBuilder()
public DurationOrBuilder getSpeechEndOffsetOrBuilder()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Type | Description |
DurationOrBuilder |
getSpeechWordInfo(int index)
public SpeechWordInfo getSpeechWordInfo(int index)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
Type | Description |
SpeechWordInfo |
getSpeechWordInfoBuilder(int index)
public SpeechWordInfo.Builder getSpeechWordInfoBuilder(int index)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
Type | Description |
SpeechWordInfo.Builder |
getSpeechWordInfoBuilderList()
public List<SpeechWordInfo.Builder> getSpeechWordInfoBuilderList()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Type | Description |
List<Builder> |
getSpeechWordInfoCount()
public int getSpeechWordInfoCount()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Type | Description |
int |
getSpeechWordInfoList()
public List<SpeechWordInfo> getSpeechWordInfoList()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Type | Description |
List<SpeechWordInfo> |
getSpeechWordInfoOrBuilder(int index)
public SpeechWordInfoOrBuilder getSpeechWordInfoOrBuilder(int index)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
Type | Description |
SpeechWordInfoOrBuilder |
getSpeechWordInfoOrBuilderList()
public List<? extends SpeechWordInfoOrBuilder> getSpeechWordInfoOrBuilderList()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Type | Description |
List<? extends com.google.cloud.dialogflow.v2.SpeechWordInfoOrBuilder> |
getTranscript()
public String getTranscript()
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
string transcript = 2;
Type | Description |
String | The transcript. |
getTranscriptBytes()
public ByteString getTranscriptBytes()
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
string transcript = 2;
Type | Description |
ByteString | The bytes for transcript. |
hasSpeechEndOffset()
public boolean hasSpeechEndOffset()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Type | Description |
boolean | Whether the speechEndOffset field is set. |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Type | Description |
FieldAccessorTable |
isInitialized()
public final boolean isInitialized()
Type | Description |
boolean |
mergeFrom(StreamingRecognitionResult other)
public StreamingRecognitionResult.Builder mergeFrom(StreamingRecognitionResult other)
Name | Description |
other | StreamingRecognitionResult |
Type | Description |
StreamingRecognitionResult.Builder |
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public StreamingRecognitionResult.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Name | Description |
input | CodedInputStream |
extensionRegistry | ExtensionRegistryLite |
Type | Description |
StreamingRecognitionResult.Builder |
Exceptions
Type | Description |
IOException |
mergeFrom(Message other)
public StreamingRecognitionResult.Builder mergeFrom(Message other)
Name | Description |
other | Message |
Type | Description |
StreamingRecognitionResult.Builder |
mergeSpeechEndOffset(Duration value)
public StreamingRecognitionResult.Builder mergeSpeechEndOffset(Duration value)
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Name | Description |
value | Duration |
Type | Description |
StreamingRecognitionResult.Builder |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final StreamingRecognitionResult.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Name | Description |
unknownFields | UnknownFieldSet |
Type | Description |
StreamingRecognitionResult.Builder |
removeSpeechWordInfo(int index)
public StreamingRecognitionResult.Builder removeSpeechWordInfo(int index)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
Type | Description |
StreamingRecognitionResult.Builder |
setConfidence(float value)
public StreamingRecognitionResult.Builder setConfidence(float value)
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true, and you should not rely on it being accurate or even set.
float confidence = 4;
Name | Description |
value | float The confidence to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setField(Descriptors.FieldDescriptor field, Object value)
public StreamingRecognitionResult.Builder setField(Descriptors.FieldDescriptor field, Object value)
Name | Description |
field | FieldDescriptor |
value | Object |
Type | Description |
StreamingRecognitionResult.Builder |
setIsFinal(boolean value)
public StreamingRecognitionResult.Builder setIsFinal(boolean value)
If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
bool is_final = 3;
Name | Description |
value | boolean The isFinal to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setLanguageCode(String value)
public StreamingRecognitionResult.Builder setLanguageCode(String value)
Detected language code for the transcript.
string language_code = 10;
Name | Description |
value | String The languageCode to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setLanguageCodeBytes(ByteString value)
public StreamingRecognitionResult.Builder setLanguageCodeBytes(ByteString value)
Detected language code for the transcript.
string language_code = 10;
Name | Description |
value | ByteString The bytes for languageCode to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setMessageType(StreamingRecognitionResult.MessageType value)
public StreamingRecognitionResult.Builder setMessageType(StreamingRecognitionResult.MessageType value)
Type of the result message.
.google.cloud.dialogflow.v2.StreamingRecognitionResult.MessageType message_type = 1;
Name | Description |
value | StreamingRecognitionResult.MessageType The messageType to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setMessageTypeValue(int value)
public StreamingRecognitionResult.Builder setMessageTypeValue(int value)
Type of the result message.
.google.cloud.dialogflow.v2.StreamingRecognitionResult.MessageType message_type = 1;
Name | Description |
value | int The enum numeric value on the wire for messageType to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public StreamingRecognitionResult.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Name | Description |
field | FieldDescriptor |
index | int |
value | Object |
Type | Description |
StreamingRecognitionResult.Builder |
setSpeechEndOffset(Duration value)
public StreamingRecognitionResult.Builder setSpeechEndOffset(Duration value)
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Name | Description |
value | Duration |
Type | Description |
StreamingRecognitionResult.Builder |
setSpeechEndOffset(Duration.Builder builderForValue)
public StreamingRecognitionResult.Builder setSpeechEndOffset(Duration.Builder builderForValue)
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Name | Description |
builderForValue | Builder |
Type | Description |
StreamingRecognitionResult.Builder |
setSpeechWordInfo(int index, SpeechWordInfo value)
public StreamingRecognitionResult.Builder setSpeechWordInfo(int index, SpeechWordInfo value)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
value | SpeechWordInfo |
Type | Description |
StreamingRecognitionResult.Builder |
setSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)
public StreamingRecognitionResult.Builder setSpeechWordInfo(int index, SpeechWordInfo.Builder builderForValue)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and InputAudioConfig.enable_word_info is set.
repeated .google.cloud.dialogflow.v2.SpeechWordInfo speech_word_info = 7;
Name | Description |
index | int |
builderForValue | SpeechWordInfo.Builder |
Type | Description |
StreamingRecognitionResult.Builder |
setTranscript(String value)
public StreamingRecognitionResult.Builder setTranscript(String value)
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
string transcript = 2;
Name | Description |
value | String The transcript to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setTranscriptBytes(ByteString value)
public StreamingRecognitionResult.Builder setTranscriptBytes(ByteString value)
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
string transcript = 2;
Name | Description |
value | ByteString The bytes for transcript to set. |
Type | Description |
StreamingRecognitionResult.Builder | This builder for chaining. |
setUnknownFields(UnknownFieldSet unknownFields)
public final StreamingRecognitionResult.Builder setUnknownFields(UnknownFieldSet unknownFields)
Name | Description |
unknownFields | UnknownFieldSet |
Type | Description |
StreamingRecognitionResult.Builder |