Reference documentation and code samples for the Cloud Text-to-Speech V1beta1 API class Google::Cloud::TextToSpeech::V1beta1::SynthesizeSpeechResponse.
The message returned to the client by the SynthesizeSpeech
method.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
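Example
A SynthesizeSpeechResponse is typically obtained from Google::Cloud::TextToSpeech::V1beta1::TextToSpeech::Client#synthesize_speech. A minimal sketch, assuming the google-cloud-text_to_speech gem is installed and default application credentials are configured; the input text and voice settings are illustrative placeholders:

```ruby
require "google/cloud/text_to_speech/v1beta1"

# Build a V1beta1 client; credentials are resolved from the environment.
client = Google::Cloud::TextToSpeech::V1beta1::TextToSpeech::Client.new

# synthesize_speech returns a SynthesizeSpeechResponse.
response = client.synthesize_speech(
  input:        { text: "Hello, world!" },
  voice:        { language_code: "en-US" },
  audio_config: { audio_encoding: :MP3 }
)

puts response.audio_content.bytesize
```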
Methods
#audio_config
def audio_config() -> ::Google::Cloud::TextToSpeech::V1beta1::AudioConfig
Returns
- (::Google::Cloud::TextToSpeech::V1beta1::AudioConfig) — The audio metadata of audio_content.
#audio_config=
def audio_config=(value) -> ::Google::Cloud::TextToSpeech::V1beta1::AudioConfig
Parameter
- value (::Google::Cloud::TextToSpeech::V1beta1::AudioConfig) — The audio metadata of audio_content.
Returns
- (::Google::Cloud::TextToSpeech::V1beta1::AudioConfig) — The audio metadata of audio_content.
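The returned audio_config describes the audio that was actually produced. A brief sketch of reading this metadata, assuming response is a SynthesizeSpeechResponse from an earlier synthesize_speech call:

```ruby
# The echoed metadata describes the bytes in response.audio_content.
config = response.audio_config
puts config.audio_encoding    # e.g. :MP3
puts config.sample_rate_hertz # sample rate of the returned audio
```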
#audio_content
def audio_content() -> ::String
Returns
- (::String) — The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, we include the WAV header. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
#audio_content=
def audio_content=(value) -> ::String
Parameter
- value (::String) — The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, we include the WAV header. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
Returns
- (::String) — The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, we include the WAV header. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
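Because the bytes already include any container header implied by the requested encoding, they can be written to disk as-is. A minimal sketch, assuming response was produced by a request with audio_encoding: :LINEAR16 (the output file name is illustrative):

```ruby
# LINEAR16 responses include a WAV header, so the bytes form a complete file.
File.binwrite("output.wav", response.audio_content)

# For an :MP3 request the same approach yields a complete MP3 file:
# File.binwrite("output.mp3", response.audio_content)
```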
#timepoints
def timepoints() -> ::Array<::Google::Cloud::TextToSpeech::V1beta1::Timepoint>
Returns
- (::Array<::Google::Cloud::TextToSpeech::V1beta1::Timepoint>) — A link between a position in the original request input and a corresponding time in the output audio. It's only supported via <mark> of SSML input.
#timepoints=
def timepoints=(value) -> ::Array<::Google::Cloud::TextToSpeech::V1beta1::Timepoint>
Parameter
- value (::Array<::Google::Cloud::TextToSpeech::V1beta1::Timepoint>) — A link between a position in the original request input and a corresponding time in the output audio. It's only supported via <mark> of SSML input.
Returns
- (::Array<::Google::Cloud::TextToSpeech::V1beta1::Timepoint>) — A link between a position in the original request input and a corresponding time in the output audio. It's only supported via <mark> of SSML input.
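Timepoints are only populated when the request enables SSML_MARK time pointing and the SSML input contains <mark> tags. A sketch of an end-to-end request, assuming the google-cloud-text_to_speech gem; the mark name and voice settings are illustrative:

```ruby
require "google/cloud/text_to_speech/v1beta1"

client = Google::Cloud::TextToSpeech::V1beta1::TextToSpeech::Client.new

ssml = '<speak>Hello <mark name="greeting_end"/> world</speak>'

# Timepoints are returned only when SSML_MARK time pointing is requested
# and the input contains <mark> tags.
response = client.synthesize_speech(
  input:                { ssml: ssml },
  voice:                { language_code: "en-US" },
  audio_config:         { audio_encoding: :MP3 },
  enable_time_pointing: [:SSML_MARK]
)

response.timepoints.each do |tp|
  puts "#{tp.mark_name} occurs at #{tp.time_seconds} seconds"
end
```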