Google Cloud Text-to-Speech v1beta1 API - Class SynthesizeSpeechResponse (2.0.0-beta05)

public sealed class SynthesizeSpeechResponse : IMessage<SynthesizeSpeechResponse>, IEquatable<SynthesizeSpeechResponse>, IDeepCloneable<SynthesizeSpeechResponse>, IBufferMessage, IMessage

Reference documentation and code samples for the Google Cloud Text-to-Speech v1beta1 API class SynthesizeSpeechResponse.

The message returned to the client by the SynthesizeSpeech method.
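The response is typically obtained by calling TextToSpeechClient.SynthesizeSpeech. The following is a minimal sketch; the input text, voice, and encoding values are illustrative.

using Google.Cloud.TextToSpeech.V1Beta1;

// Create a client and synthesize speech; the call returns a SynthesizeSpeechResponse
// carrying the encoded audio and, optionally, timepoints.
TextToSpeechClient client = TextToSpeechClient.Create();
SynthesizeSpeechResponse response = client.SynthesizeSpeech(new SynthesizeSpeechRequest
{
    Input = new SynthesisInput { Text = "Hello, world!" },
    Voice = new VoiceSelectionParams { LanguageCode = "en-US" },
    AudioConfig = new AudioConfig { AudioEncoding = AudioEncoding.Mp3 }
});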

Inheritance

object > SynthesizeSpeechResponse

Namespace

Google.Cloud.TextToSpeech.V1Beta1

Assembly

Google.Cloud.TextToSpeech.V1Beta1.dll

Constructors

SynthesizeSpeechResponse()

public SynthesizeSpeechResponse()

SynthesizeSpeechResponse(SynthesizeSpeechResponse)

public SynthesizeSpeechResponse(SynthesizeSpeechResponse other)
Parameter
Name: other
Type: SynthesizeSpeechResponse

Properties

AudioConfig

public AudioConfig AudioConfig { get; set; }

The audio metadata of audio_content.

Property Value
Type: AudioConfig
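A minimal sketch of reading this metadata, assuming response is a SynthesizeSpeechResponse obtained as in the sample above:

// Inspect the metadata describing the returned audio bytes.
AudioConfig metadata = response.AudioConfig;
Console.WriteLine($"Encoding: {metadata.AudioEncoding}, sample rate: {metadata.SampleRateHertz} Hz");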

AudioContent

public ByteString AudioContent { get; set; }

The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (for example, MP3 or OGG_OPUS). For LINEAR16 audio, the WAV header is included. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.

Property Value
Type: ByteString
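A sketch of saving the audio to disk, assuming response was obtained as in the first sample and LINEAR16 output was requested; the file name is illustrative.

using System.IO;

// Write the encoded audio bytes directly to a file. The extension should match
// the encoding requested in AudioConfig (WAV here, since LINEAR16 output includes a WAV header).
using (FileStream output = File.Create("output.wav"))
{
    response.AudioContent.WriteTo(output);
}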

Timepoints

public RepeatedField<Timepoint> Timepoints { get; }

A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported for <mark> tags in SSML input.

Property Value
Type: RepeatedField<Timepoint>
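Timepoints are only populated when the request asks for them. The sketch below assumes the v1beta1 request's EnableTimePointing field and TimepointType.SsmlMark value, and reuses the client from the first sample; the SSML text and mark name are illustrative.

// Request timepoints for SSML <mark> tags, then read the mark names and times.
SynthesizeSpeechResponse response = client.SynthesizeSpeech(new SynthesizeSpeechRequest
{
    Input = new SynthesisInput { Ssml = "<speak><mark name=\"greeting\"/>Hello, world!</speak>" },
    Voice = new VoiceSelectionParams { LanguageCode = "en-US" },
    AudioConfig = new AudioConfig { AudioEncoding = AudioEncoding.Mp3 },
    EnableTimePointing = { SynthesizeSpeechRequest.Types.TimepointType.SsmlMark }
});
foreach (Timepoint timepoint in response.Timepoints)
{
    Console.WriteLine($"Mark '{timepoint.MarkName}' at {timepoint.TimeSeconds:0.000} s");
}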