Media Translation V1beta1 API - Class Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig (v0.8.2)

Reference documentation and code samples for the Media Translation V1beta1 API class Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.

Config used for streaming translation.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#audio_config

def audio_config() -> ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig
Returns
  • (::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig)

#audio_config=

def audio_config=(value) -> ::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig
Parameter
  • value (::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig)
Returns
  • (::Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig)
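
As a usage sketch, a StreamingTranslateSpeechConfig is typically built around a TranslateSpeechConfig assigned to #audio_config. The field names on TranslateSpeechConfig (audio_encoding, source_language_code, target_language_code, sample_rate_hertz) come from that companion class's reference; the concrete values below are illustrative assumptions.

    require "google/cloud/media_translation/v1beta1"

    # Illustrative values: adjust the encoding, sample rate, and language pair
    # to match the audio actually being streamed.
    audio_config = Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig.new(
      audio_encoding:       "linear16",
      source_language_code: "en-US",
      target_language_code: "es-ES",
      sample_rate_hertz:    16_000
    )

    streaming_config = Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.new(
      audio_config:     audio_config,
      single_utterance: false   # continuous translation until the stream closes
    )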

#single_utterance

def single_utterance() -> ::Boolean
Returns
  • (::Boolean) — Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or the maximum time limit is reached. It may return multiple StreamingTranslateSpeechResults, each with the is_final flag set to true.

    If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. When the client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming way, overwrite the previous text if is_final of the previous response was false, or append to it if is_final of the previous response was true. A sketch of this flow appears after this description.
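
A minimal sketch of the single-utterance flow, assuming the SpeechTranslationService::Client class, its streaming_translate_speech call, and the response fields (speech_event_type, result.text_translation_result) described in this API's companion reference pages; the audio source is a placeholder, and the send/receive interleaving is simplified.

    require "google/cloud/media_translation/v1beta1"

    client = Google::Cloud::MediaTranslation::V1beta1::SpeechTranslationService::Client.new

    streaming_config = Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechConfig.new(
      audio_config: Google::Cloud::MediaTranslation::V1beta1::TranslateSpeechConfig.new(
        audio_encoding: "linear16", source_language_code: "en-US",
        target_language_code: "es-ES", sample_rate_hertz: 16_000
      ),
      single_utterance: true
    )

    audio_chunks = []      # placeholder: an enumerable of binary audio strings
    stop_sending = false   # flipped once END_OF_SINGLE_UTTERANCE is received

    # The first request carries the streaming config; subsequent requests carry audio.
    # Simplified: the client consumes this enumerator while responses are read below.
    requests = Enumerator.new do |y|
      y << Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechRequest.new(
        streaming_config: streaming_config
      )
      audio_chunks.each do |chunk|
        break if stop_sending
        y << Google::Cloud::MediaTranslation::V1beta1::StreamingTranslateSpeechRequest.new(
          audio_content: chunk
        )
      end
    end

    client.streaming_translate_speech(requests).each do |response|
      # Stop sending audio once the event fires, but keep reading remaining responses.
      stop_sending = true if response.speech_event_type == :END_OF_SINGLE_UTTERANCE
      if response.result&.text_translation_result
        puts response.result.text_translation_result.translation
      end
    end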

#single_utterance=

def single_utterance=(value) -> ::Boolean
Parameter
  • value (::Boolean) — Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or the maximum time limit is reached. It may return multiple StreamingTranslateSpeechResults, each with the is_final flag set to true.

    If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. When the client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming way, overwrite the previous text if is_final of the previous response was false, or append to it if is_final of the previous response was true.

Returns
  • (::Boolean) — Optional. If false or omitted, the system performs continuous translation (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or the maximum time limit is reached. It may return multiple StreamingTranslateSpeechResults, each with the is_final flag set to true.

    If true, the speech translator detects a single spoken utterance. When it detects that the user has paused or stopped speaking, it returns an END_OF_SINGLE_UTTERANCE event and ceases translation. When the client receives the END_OF_SINGLE_UTTERANCE event, it should stop sending requests but keep receiving the remaining responses until the stream is terminated. To construct the complete sentence in a streaming way, overwrite the previous text if is_final of the previous response was false, or append to it if is_final of the previous response was true. A sketch of assembling the sentence this way appears below.
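
To make the overwrite-or-append rule above concrete, the sketch below assembles the complete sentence from a stream of responses. It assumes each response exposes result.text_translation_result with translation and is_final fields, as described in this API's StreamingTranslateSpeechResult reference.

    responses = []      # stand-in for the enumerable returned by the streaming call
    segments   = []
    prev_final = true   # treat the first result as starting a new segment

    responses.each do |response|
      result = response.result&.text_translation_result
      next unless result

      if prev_final
        segments << result.translation    # previous result was final: append a new segment
      else
        segments[-1] = result.translation # previous result was not final: overwrite the last one
      end
      prev_final = result.is_final
    end

    puts segments.join(" ")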