Interface StreamingTranslateSpeechRequestOrBuilder (0.9.3)

public interface StreamingTranslateSpeechRequestOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder

Methods

getAudioContent()

public abstract ByteString getAudioContent()

The audio data to be translated. Sequential chunks of audio data are sent in sequential StreamingTranslateSpeechRequest messages. The first StreamingTranslateSpeechRequest message must not contain audio_content data and all subsequent StreamingTranslateSpeechRequest messages must contain audio_content data. The audio bytes must be encoded as specified in StreamingTranslateSpeechConfig. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64).

bytes audio_content = 2;

Returns

Type | Description
ByteString | The audioContent.
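The sequencing rule above (first message carries only streaming_config, later messages carry only audio_content) can be sketched with the generated builders. This is a minimal illustration, not a complete streaming client; the encoding and language codes shown are assumed example values.

```java
import com.google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechConfig;
import com.google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechRequest;
import com.google.cloud.mediatranslation.v1beta1.TranslateSpeechConfig;
import com.google.protobuf.ByteString;

public class RequestSequenceSketch {
  public static void main(String[] args) {
    // First request: streaming_config only, no audio_content.
    StreamingTranslateSpeechRequest first =
        StreamingTranslateSpeechRequest.newBuilder()
            .setStreamingConfig(
                StreamingTranslateSpeechConfig.newBuilder()
                    .setAudioConfig(
                        TranslateSpeechConfig.newBuilder()
                            .setAudioEncoding("linear16")       // assumed example values
                            .setSourceLanguageCode("en-US")
                            .setTargetLanguageCode("fr-FR")))
            .build();

    // All subsequent requests: raw audio bytes only.
    // bytes fields carry a pure binary representation, so the chunk is
    // passed as-is -- do not base64-encode it.
    byte[] chunk = new byte[3200]; // e.g. one chunk read from the audio source
    StreamingTranslateSpeechRequest next =
        StreamingTranslateSpeechRequest.newBuilder()
            .setAudioContent(ByteString.copyFrom(chunk))
            .build();
  }
}
```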

getStreamingConfig()

public abstract StreamingTranslateSpeechConfig getStreamingConfig()

Provides information to the recognizer that specifies how to process the request. The first StreamingTranslateSpeechRequest message must contain a streaming_config message.

.google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechConfig streaming_config = 1;

Returns

Type | Description
StreamingTranslateSpeechConfig | The streamingConfig.

getStreamingConfigOrBuilder()

public abstract StreamingTranslateSpeechConfigOrBuilder getStreamingConfigOrBuilder()

Provides information to the recognizer that specifies how to process the request. The first StreamingTranslateSpeechRequest message must contain a streaming_config message.

.google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechConfig streaming_config = 1;

Returns

Type | Description
StreamingTranslateSpeechConfigOrBuilder |

getStreamingRequestCase()

public abstract StreamingTranslateSpeechRequest.StreamingRequestCase getStreamingRequestCase()
Returns

Type | Description
StreamingTranslateSpeechRequest.StreamingRequestCase |
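Since streaming_config and audio_content belong to a oneof, getStreamingRequestCase() tells you which (if either) is set. A sketch of dispatching on the case, following the standard naming pattern for proto-generated oneof case enums; handleConfig and handleAudio are hypothetical helpers, not part of the API:

```java
import com.google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechRequestOrBuilder;

static void dispatch(StreamingTranslateSpeechRequestOrBuilder request) {
  switch (request.getStreamingRequestCase()) {
    case STREAMING_CONFIG:
      // Expected only on the first message of the stream.
      handleConfig(request.getStreamingConfig());   // hypothetical helper
      break;
    case AUDIO_CONTENT:
      handleAudio(request.getAudioContent());       // hypothetical helper
      break;
    case STREAMINGREQUEST_NOT_SET:
      // Neither field of the oneof was populated.
      break;
  }
}
```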

hasAudioContent()

public abstract boolean hasAudioContent()

The audio data to be translated. Sequential chunks of audio data are sent in sequential StreamingTranslateSpeechRequest messages. The first StreamingTranslateSpeechRequest message must not contain audio_content data and all subsequent StreamingTranslateSpeechRequest messages must contain audio_content data. The audio bytes must be encoded as specified in StreamingTranslateSpeechConfig. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64).

bytes audio_content = 2;

Returns

Type | Description
boolean | Whether the audioContent field is set.

hasStreamingConfig()

public abstract boolean hasStreamingConfig()

Provides information to the recognizer that specifies how to process the request. The first StreamingTranslateSpeechRequest message must contain a streaming_config message.

.google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechConfig streaming_config = 1;

Returns

Type | Description
boolean | Whether the streamingConfig field is set.
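The hasStreamingConfig() and hasAudioContent() presence checks are a natural fit for enforcing the first-message rule described above. A minimal validation sketch, assuming a server-side handler; the exception type and message are illustrative:

```java
import com.google.cloud.mediatranslation.v1beta1.StreamingTranslateSpeechRequestOrBuilder;

static void validateFirstMessage(StreamingTranslateSpeechRequestOrBuilder first) {
  // The first StreamingTranslateSpeechRequest must contain a streaming_config
  // message and must not contain audio_content data.
  if (!first.hasStreamingConfig() || first.hasAudioContent()) {
    throw new IllegalArgumentException(
        "first StreamingTranslateSpeechRequest must contain streaming_config only");
  }
}
```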