public interface StreamingRecognitionResultOrBuilder extends MessageOrBuilder
Implements
MessageOrBuilder
Methods
getConfidence()
public abstract float getConfidence()
The Speech confidence between 0.0 and 1.0 for the current portion of audio.
A higher number indicates an estimated greater likelihood that the
recognized words are correct. The default of 0.0 is a sentinel value
indicating that confidence was not set.
This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.
float confidence = 4;
Returns

| Type | Description |
| --- | --- |
| float | The confidence. |
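A minimal sketch of how a caller might handle the 0.0 sentinel; the helper name and fallback behavior below are illustrative assumptions, not part of the generated API.

```java
import com.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResultOrBuilder;

public class ConfidenceExample {

  /** Reads the confidence, falling back when the 0.0 "not set" sentinel is reported. */
  static float confidenceOrFallback(StreamingRecognitionResultOrBuilder result, float fallback) {
    float confidence = result.getConfidence();
    // Confidence is typically only populated on final results, and 0.0 means "not set".
    if (!result.getIsFinal() || confidence == 0.0f) {
      return fallback;
    }
    return confidence;
  }
}
```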
getIsFinal()
public abstract boolean getIsFinal()
If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
bool is_final = 3;
Returns

| Type | Description |
| --- | --- |
| boolean | The isFinal. |
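As a hypothetical illustration of consuming is_final in a result handler (the onRecognitionResult method and the commit/preview callbacks are assumptions, not part of this interface):

```java
import com.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResultOrBuilder;

public class FinalityExample {

  void onRecognitionResult(StreamingRecognitionResultOrBuilder result) {
    if (result.getIsFinal()) {
      // The recognizer will not revise this hypothesis, so it is safe to commit.
      commitTranscript(result.getTranscript());
    } else {
      // Interim hypothesis; later results may replace it.
      showPreview(result.getTranscript());
    }
  }

  void commitTranscript(String text) { /* application-specific */ }

  void showPreview(String text) { /* application-specific */ }
}
```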
getLanguageCode()
public abstract String getLanguageCode()
Detected language code for the transcript.
string language_code = 10;
Returns

| Type | Description |
| --- | --- |
| String | The languageCode. |
getLanguageCodeBytes()
public abstract ByteString getLanguageCodeBytes()
Detected language code for the transcript.
string language_code = 10;
Returns

| Type | Description |
| --- | --- |
| ByteString | The bytes for languageCode. |
getMessageType()
public abstract StreamingRecognitionResult.MessageType getMessageType()
Type of the result message.
.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResult.MessageType message_type = 1;
Returns

| Type | Description |
| --- | --- |
| StreamingRecognitionResult.MessageType | The messageType. |
getMessageTypeValue()
public abstract int getMessageTypeValue()
Type of the result message.
.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResult.MessageType message_type = 1;
Returns

| Type | Description |
| --- | --- |
| int | The enum numeric value on the wire for messageType. |
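A sketch of dispatching on the message type; only TRANSCRIPT is referenced because it is the value documented above, and the fallback branch reads the raw wire value for anything else. The handler method itself is an illustrative assumption.

```java
import com.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResultOrBuilder;

public class MessageTypeExample {

  void handle(StreamingRecognitionResultOrBuilder result) {
    switch (result.getMessageType()) {
      case TRANSCRIPT:
        // Transcript-bearing results carry transcript, confidence, offsets, etc.
        System.out.println("Transcript: " + result.getTranscript());
        break;
      default:
        // For values this client version does not recognize, the wire value is still available.
        System.out.println("Other message type, wire value = " + result.getMessageTypeValue());
        break;
    }
  }
}
```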
getSpeechEndOffset()
public abstract Duration getSpeechEndOffset()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Returns

| Type | Description |
| --- | --- |
| Duration | The speechEndOffset. |
getSpeechEndOffsetOrBuilder()
public abstract DurationOrBuilder getSpeechEndOffsetOrBuilder()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Returns

| Type | Description |
| --- | --- |
| DurationOrBuilder | |
getSpeechWordInfo(int index)
public abstract SpeechWordInfo getSpeechWordInfo(int index)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
repeated .google.cloud.dialogflow.cx.v3beta1.SpeechWordInfo speech_word_info = 7;
Parameter

| Name | Description |
| --- | --- |
| index | int |

Returns

| Type | Description |
| --- | --- |
| SpeechWordInfo | |
getSpeechWordInfoCount()
public abstract int getSpeechWordInfoCount()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
repeated .google.cloud.dialogflow.cx.v3beta1.SpeechWordInfo speech_word_info = 7;
Returns

| Type | Description |
| --- | --- |
| int | |
getSpeechWordInfoList()
public abstract List<SpeechWordInfo> getSpeechWordInfoList()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
repeated .google.cloud.dialogflow.cx.v3beta1.SpeechWordInfo speech_word_info = 7;
Returns

| Type | Description |
| --- | --- |
| List<SpeechWordInfo> | |
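A brief sketch of iterating the word-level results; it assumes SpeechWordInfo exposes getWord() and getConfidence() accessors per its proto definition, which should be verified against that class's own reference page.

```java
import com.google.cloud.dialogflow.cx.v3beta1.SpeechWordInfo;
import com.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResultOrBuilder;

public class WordInfoExample {

  void printWords(StreamingRecognitionResultOrBuilder result) {
    // Empty unless message_type = TRANSCRIPT and word info was enabled in InputAudioConfig.
    for (SpeechWordInfo word : result.getSpeechWordInfoList()) {
      System.out.printf("%s (confidence %.2f)%n", word.getWord(), word.getConfidence());
    }
  }
}
```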
getSpeechWordInfoOrBuilder(int index)
public abstract SpeechWordInfoOrBuilder getSpeechWordInfoOrBuilder(int index)
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
repeated .google.cloud.dialogflow.cx.v3beta1.SpeechWordInfo speech_word_info = 7;
Parameter

| Name | Description |
| --- | --- |
| index | int |

Returns

| Type | Description |
| --- | --- |
| SpeechWordInfoOrBuilder | |
getSpeechWordInfoOrBuilderList()
public abstract List<? extends SpeechWordInfoOrBuilder> getSpeechWordInfoOrBuilderList()
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
repeated .google.cloud.dialogflow.cx.v3beta1.SpeechWordInfo speech_word_info = 7;
Returns

| Type | Description |
| --- | --- |
| List<? extends com.google.cloud.dialogflow.cx.v3beta1.SpeechWordInfoOrBuilder> | |
getStability()
public abstract float getStability()
An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result:

- If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for TRANSCRIPT results with is_final = false.
- Otherwise, the value is in (0.0, 1.0] where 0.0 means completely unstable and 1.0 means completely stable.
float stability = 6;
Returns

| Type | Description |
| --- | --- |
| float | The stability. |
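A hedged sketch of one way to use stability to gate live captions; the threshold value and the choice to treat an uncomputed (0.0) stability as displayable are illustrative assumptions, not guidance from the API.

```java
import com.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResultOrBuilder;

public class StabilityExample {

  // Threshold chosen for illustration only; tune for your application.
  private static final float STABILITY_THRESHOLD = 0.8f;

  /** Decides whether an interim hypothesis is stable enough to display as a live caption. */
  static boolean stableEnoughToShow(StreamingRecognitionResultOrBuilder result) {
    if (result.getIsFinal()) {
      return true; // Final results are not revised further.
    }
    float stability = result.getStability();
    // 0.0 means stability was not computed; this sketch treats it as "unknown", not "unstable".
    return stability == 0.0f || stability >= STABILITY_THRESHOLD;
  }
}
```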
getTranscript()
public abstract String getTranscript()
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
string transcript = 2;
Returns

| Type | Description |
| --- | --- |
| String | The transcript. |
getTranscriptBytes()
public abstract ByteString getTranscriptBytes()
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
string transcript = 2;
Returns

| Type | Description |
| --- | --- |
| ByteString | The bytes for transcript. |
hasSpeechEndOffset()
public abstract boolean hasSpeechEndOffset()
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
.google.protobuf.Duration speech_end_offset = 8;
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the speechEndOffset field is set. |
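A small sketch that checks presence before reading the offset and converts the protobuf Duration to milliseconds; the helper name and the -1 sentinel for an unset field are assumptions.

```java
import com.google.cloud.dialogflow.cx.v3beta1.StreamingRecognitionResultOrBuilder;
import com.google.protobuf.Duration;

public class SpeechEndOffsetExample {

  /** Returns the end-of-speech offset in milliseconds, or -1 if the field is not set. */
  static long speechEndOffsetMillis(StreamingRecognitionResultOrBuilder result) {
    if (!result.hasSpeechEndOffset()) {
      return -1L; // Only populated for message_type = TRANSCRIPT.
    }
    Duration offset = result.getSpeechEndOffset();
    return offset.getSeconds() * 1000L + offset.getNanos() / 1_000_000L;
  }
}
```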