public static final class SynthesizeSpeechResponse.Builder extends GeneratedMessageV3.Builder<SynthesizeSpeechResponse.Builder> implements SynthesizeSpeechResponseOrBuilder
The message returned to the client by the SynthesizeSpeech method.
Protobuf type google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse
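As a sketch of typical builder usage (assuming the `google-cloud-texttospeech` protobuf classes are on the classpath; in practice the service, not the caller, populates this response), a message can be assembled field by field and finalized with build():

```java
import com.google.cloud.texttospeech.v1beta1.AudioConfig;
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;
import com.google.cloud.texttospeech.v1beta1.Timepoint;
import com.google.protobuf.ByteString;

public class BuilderSketch {
    public static void main(String[] args) {
        // Assemble a response message locally; all values here are stand-ins.
        SynthesizeSpeechResponse response = SynthesizeSpeechResponse.newBuilder()
                .setAudioContent(ByteString.copyFromUtf8("fake-audio-bytes"))
                .addTimepoints(Timepoint.newBuilder()
                        .setMarkName("m1")
                        .setTimeSeconds(0.25))
                .setAudioConfig(AudioConfig.newBuilder())
                .build();
        System.out.println(response.getTimepointsCount()); // 1
    }
}
```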
Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Returns
Methods
addAllTimepoints(Iterable<? extends Timepoint> values)
public SynthesizeSpeechResponse.Builder addAllTimepoints(Iterable<? extends Timepoint> values)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Name | Description |
values | Iterable<? extends com.google.cloud.texttospeech.v1beta1.Timepoint> |
Returns
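A hedged sketch of adding a batch of timepoints at once (the mark names and times here are hypothetical; a real response maps SSML <mark> names to audio offsets):

```java
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;
import com.google.cloud.texttospeech.v1beta1.Timepoint;
import java.util.Arrays;
import java.util.List;

public class AddAllTimepointsSketch {
    public static void main(String[] args) {
        // Build a batch of Timepoint messages, then append them in one call.
        List<Timepoint> points = Arrays.asList(
                Timepoint.newBuilder().setMarkName("word1").setTimeSeconds(0.10).build(),
                Timepoint.newBuilder().setMarkName("word2").setTimeSeconds(0.85).build());

        SynthesizeSpeechResponse response = SynthesizeSpeechResponse.newBuilder()
                .addAllTimepoints(points)
                .build();
        System.out.println(response.getTimepointsCount()); // 2
    }
}
```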
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public SynthesizeSpeechResponse.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
Returns
Overrides
addTimepoints(Timepoint value)
public SynthesizeSpeechResponse.Builder addTimepoints(Timepoint value)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Returns
addTimepoints(Timepoint.Builder builderForValue)
public SynthesizeSpeechResponse.Builder addTimepoints(Timepoint.Builder builderForValue)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Returns
addTimepoints(int index, Timepoint value)
public SynthesizeSpeechResponse.Builder addTimepoints(int index, Timepoint value)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameters
Returns
addTimepoints(int index, Timepoint.Builder builderForValue)
public SynthesizeSpeechResponse.Builder addTimepoints(int index, Timepoint.Builder builderForValue)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameters
Returns
addTimepointsBuilder()
public Timepoint.Builder addTimepointsBuilder()
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Returns
addTimepointsBuilder(int index)
public Timepoint.Builder addTimepointsBuilder(int index)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Returns
build()
public SynthesizeSpeechResponse build()
Returns
buildPartial()
public SynthesizeSpeechResponse buildPartial()
Returns
clear()
public SynthesizeSpeechResponse.Builder clear()
Returns
Overrides
clearAudioConfig()
public SynthesizeSpeechResponse.Builder clearAudioConfig()
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Returns
clearAudioContent()
public SynthesizeSpeechResponse.Builder clearAudioContent()
The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, the WAV header is included. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.
bytes audio_content = 1;
Returns
clearField(Descriptors.FieldDescriptor field)
public SynthesizeSpeechResponse.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
Returns
Overrides
clearOneof(Descriptors.OneofDescriptor oneof)
public SynthesizeSpeechResponse.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
Returns
Overrides
clearTimepoints()
public SynthesizeSpeechResponse.Builder clearTimepoints()
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Returns
clone()
public SynthesizeSpeechResponse.Builder clone()
Returns
Overrides
getAudioConfig()
public AudioConfig getAudioConfig()
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Returns
getAudioConfigBuilder()
public AudioConfig.Builder getAudioConfigBuilder()
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Returns
getAudioConfigOrBuilder()
public AudioConfigOrBuilder getAudioConfigOrBuilder()
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Returns
getAudioContent()
public ByteString getAudioContent()
The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, the WAV header is included. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.
bytes audio_content = 1;
Returns
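Because the container header (MP3, OGG_OPUS) or WAV header (LINEAR16) is already included in audio_content, the bytes can be written to disk as-is. A minimal sketch, using a stand-in ByteString in place of a real response's getAudioContent():

```java
import com.google.protobuf.ByteString;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SaveAudioSketch {
    public static void main(String[] args) throws IOException {
        // Stand-in for response.getAudioContent(); real bytes come from the API.
        ByteString audioContent = ByteString.copyFromUtf8("fake-mp3-bytes");

        // No extra framing needed: write the bytes directly to a playable file.
        Path out = Files.createTempFile("output", ".mp3");
        try (OutputStream os = Files.newOutputStream(out)) {
            audioContent.writeTo(os);
        }
        System.out.println(Files.size(out) == audioContent.size()); // true
    }
}
```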
getDefaultInstanceForType()
public SynthesizeSpeechResponse getDefaultInstanceForType()
Returns
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Returns
Overrides
getTimepoints(int index)
public Timepoint getTimepoints(int index)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Returns
getTimepointsBuilder(int index)
public Timepoint.Builder getTimepointsBuilder(int index)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Returns
getTimepointsBuilderList()
public List<Timepoint.Builder> getTimepointsBuilderList()
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Returns
getTimepointsCount()
public int getTimepointsCount()
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Returns
getTimepointsList()
public List<Timepoint> getTimepointsList()
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Returns
getTimepointsOrBuilder(int index)
public TimepointOrBuilder getTimepointsOrBuilder(int index)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Returns
getTimepointsOrBuilderList()
public List<? extends TimepointOrBuilder> getTimepointsOrBuilderList()
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Returns
Type | Description |
List<? extends com.google.cloud.texttospeech.v1beta1.TimepointOrBuilder> | |
hasAudioConfig()
public boolean hasAudioConfig()
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Returns
Type | Description |
boolean | Whether the audioConfig field is set. |
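Since audio_config is a singular message field, getAudioConfig() returns the default instance when the field is unset; hasAudioConfig() distinguishes the two cases. A small sketch:

```java
import com.google.cloud.texttospeech.v1beta1.AudioConfig;
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;

public class HasAudioConfigSketch {
    public static void main(String[] args) {
        SynthesizeSpeechResponse.Builder builder = SynthesizeSpeechResponse.newBuilder();
        // Unset: getAudioConfig() would yield AudioConfig.getDefaultInstance().
        System.out.println(builder.hasAudioConfig()); // false

        builder.setAudioConfig(AudioConfig.newBuilder());
        System.out.println(builder.hasAudioConfig()); // true
    }
}
```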
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Overrides
isInitialized()
public final boolean isInitialized()
Returns
Overrides
mergeAudioConfig(AudioConfig value)
public SynthesizeSpeechResponse.Builder mergeAudioConfig(AudioConfig value)
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Parameter
Returns
mergeFrom(SynthesizeSpeechResponse other)
public SynthesizeSpeechResponse.Builder mergeFrom(SynthesizeSpeechResponse other)
Parameter
Returns
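Standard protobuf merge semantics apply here: repeated fields such as timepoints are concatenated, while singular fields set on other overwrite the builder's values. A sketch with two locally built responses:

```java
import com.google.cloud.texttospeech.v1beta1.SynthesizeSpeechResponse;
import com.google.cloud.texttospeech.v1beta1.Timepoint;

public class MergeFromSketch {
    public static void main(String[] args) {
        SynthesizeSpeechResponse a = SynthesizeSpeechResponse.newBuilder()
                .addTimepoints(Timepoint.newBuilder().setMarkName("a"))
                .build();
        SynthesizeSpeechResponse b = SynthesizeSpeechResponse.newBuilder()
                .addTimepoints(Timepoint.newBuilder().setMarkName("b"))
                .build();

        // Repeated fields from b are appended after those already in a.
        SynthesizeSpeechResponse merged = a.toBuilder().mergeFrom(b).build();
        System.out.println(merged.getTimepointsCount()); // 2
    }
}
```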
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public SynthesizeSpeechResponse.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Returns
Overrides
Exceptions
mergeFrom(Message other)
public SynthesizeSpeechResponse.Builder mergeFrom(Message other)
Parameter
Returns
Overrides
mergeUnknownFields(UnknownFieldSet unknownFields)
public final SynthesizeSpeechResponse.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
Returns
Overrides
removeTimepoints(int index)
public SynthesizeSpeechResponse.Builder removeTimepoints(int index)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameter
Returns
setAudioConfig(AudioConfig value)
public SynthesizeSpeechResponse.Builder setAudioConfig(AudioConfig value)
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Parameter
Returns
setAudioConfig(AudioConfig.Builder builderForValue)
public SynthesizeSpeechResponse.Builder setAudioConfig(AudioConfig.Builder builderForValue)
The audio metadata of audio_content.
.google.cloud.texttospeech.v1beta1.AudioConfig audio_config = 4;
Parameter
Returns
setAudioContent(ByteString value)
public SynthesizeSpeechResponse.Builder setAudioContent(ByteString value)
The audio data bytes encoded as specified in the request, including the header for encodings that are wrapped in containers (e.g. MP3, OGG_OPUS). For LINEAR16 audio, the WAV header is included. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.
bytes audio_content = 1;
Parameter
Name | Description |
value | ByteString. The audioContent to set. |
Returns
setField(Descriptors.FieldDescriptor field, Object value)
public SynthesizeSpeechResponse.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
Returns
Overrides
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public SynthesizeSpeechResponse.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
Returns
Overrides
setTimepoints(int index, Timepoint value)
public SynthesizeSpeechResponse.Builder setTimepoints(int index, Timepoint value)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameters
Returns
setTimepoints(int index, Timepoint.Builder builderForValue)
public SynthesizeSpeechResponse.Builder setTimepoints(int index, Timepoint.Builder builderForValue)
A link between a position in the original request input and a corresponding time in the output audio. Timepoints are only supported via the <mark> tag in SSML input.
repeated .google.cloud.texttospeech.v1beta1.Timepoint timepoints = 2;
Parameters
Returns
setUnknownFields(UnknownFieldSet unknownFields)
public final SynthesizeSpeechResponse.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
Returns
Overrides