Class SynthesizeSpeechResponse (1.0.0-beta04)

public sealed class SynthesizeSpeechResponse : IMessage<SynthesizeSpeechResponse>, IEquatable<SynthesizeSpeechResponse>, IDeepCloneable<SynthesizeSpeechResponse>, IBufferMessage, IMessage

The message returned to the client by the SynthesizeSpeech method.

Inheritance

Object > SynthesizeSpeechResponse

Namespace

Google.Cloud.TextToSpeech.V1Beta1

Assembly

Google.Cloud.TextToSpeech.V1Beta1.dll

Constructors

SynthesizeSpeechResponse()

public SynthesizeSpeechResponse()

SynthesizeSpeechResponse(SynthesizeSpeechResponse)

public SynthesizeSpeechResponse(SynthesizeSpeechResponse other)
Parameter

Name   Description
other  SynthesizeSpeechResponse

Properties

AudioConfig

public AudioConfig AudioConfig { get; set; }

The audio metadata of audio_content.

Property Value

Type: AudioConfig

AudioContent

public ByteString AudioContent { get; set; }

The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, a WAV header is included. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.

Property Value

Type: ByteString
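As a sketch of typical usage (a minimal example, assuming default credentials and using request types from the wider client library that are not documented on this page), the returned bytes can be written straight to a file, since the container header is already included for encodings such as MP3 and LINEAR16/WAV:

```csharp
using System.IO;
using Google.Cloud.TextToSpeech.V1Beta1;

TextToSpeechClient client = TextToSpeechClient.Create();
SynthesizeSpeechResponse response = client.SynthesizeSpeech(new SynthesizeSpeechRequest
{
    Input = new SynthesisInput { Text = "Hello, world!" },
    Voice = new VoiceSelectionParams { LanguageCode = "en-US" },
    AudioConfig = new AudioConfig { AudioEncoding = AudioEncoding.Linear16 }
});

// AudioContent already includes the WAV header for LINEAR16,
// so the bytes can be written to disk as-is.
using (FileStream output = File.Create("output.wav"))
{
    response.AudioContent.WriteTo(output);
}
```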

Timepoints

public RepeatedField<Timepoint> Timepoints { get; }

A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported for SSML input containing <mark> tags.

Property Value

Type: RepeatedField<Timepoint>
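A sketch of populating and reading this field (assuming the v1beta1 request's EnableTimePointing field and the Timepoint members MarkName and TimeSeconds, which come from the wider library rather than this page). Timepoints are only returned when the request both uses SSML <mark> tags and opts in to time pointing:

```csharp
using System;
using Google.Cloud.TextToSpeech.V1Beta1;

TextToSpeechClient client = TextToSpeechClient.Create();
SynthesizeSpeechRequest request = new SynthesizeSpeechRequest
{
    Input = new SynthesisInput
    {
        Ssml = "<speak>Hello <mark name=\"middle\"/> world</speak>"
    },
    Voice = new VoiceSelectionParams { LanguageCode = "en-US" },
    AudioConfig = new AudioConfig { AudioEncoding = AudioEncoding.Mp3 }
};
// Ask the service to report a timepoint for each SSML <mark>.
request.EnableTimePointing.Add(SynthesizeSpeechRequest.Types.TimepointType.SsmlMark);

SynthesizeSpeechResponse response = client.SynthesizeSpeech(request);
foreach (Timepoint timepoint in response.Timepoints)
{
    Console.WriteLine($"{timepoint.MarkName}: {timepoint.TimeSeconds}s");
}
```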