Cloud Text-to-Speech basics

Cloud Text-to-Speech allows developers to create natural-sounding, synthetic human speech as playable audio. You can use the audio data files you create using Cloud Text-to-Speech to power your applications or augment media like videos or audio recordings (in compliance with the Google Cloud Platform Terms of Service including compliance with all applicable law).

Cloud Text-to-Speech converts text or Speech Synthesis Markup Language (SSML) input into audio data like MP3 or LINEAR16 (the encoding used in WAV files).

This document is a guide to the fundamental concepts of using Cloud Text-to-Speech. Before diving into the API itself, review the quickstarts.

Basic example

Cloud Text-to-Speech is ideal for any application that plays audio of human speech to users. It allows you to convert arbitrary strings, words, and sentences into the sound of a person speaking those words.

Imagine that you have a voice assistant app that provides natural language feedback to your users as playable audio files. Your app might take an action and then provide human speech as feedback to the user.

For example, your app may want to report that it successfully added an event to the user's calendar. Your app constructs a response string to report the success to the user, something like "I've added the event to your calendar."

With Cloud Text-to-Speech, you can convert that response string to actual human speech to play back to the user, similar to the example provided below.


Example 1. Audio file generated from Cloud Text-to-Speech

To create an audio file like example 1, you send a request to Cloud Text-to-Speech like the following code snippet.

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  --data '{
  "input": {
    "text": "I'\''ve added the event to your calendar."
  },
  "voice": {
    "languageCode": "en-GB",
    "name": "en-GB-Standard-A",
    "ssmlGender": "FEMALE"
  },
  "audioConfig": {
    "audioEncoding": "MP3"
  }
}' "https://texttospeech.googleapis.com/v1/text:synthesize"

Speech synthesis

The process of translating text input into audio data is called synthesis and the output of synthesis is called synthetic speech. Cloud Text-to-Speech takes two types of input: raw text or SSML-formatted data (discussed below). To create a new audio file, you call the synthesize endpoint of the API.

The speech synthesis process generates raw audio data as a base64-encoded string. You must decode the base64-encoded string into an audio file before an application can play it. Most platforms and operating systems have tools for decoding base64 text into playable media files.
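As a minimal sketch of this decoding step, the snippet below extracts the base64-encoded `audioContent` field from a synthesize response and writes it out as a playable file. The response JSON here is a hypothetical stub with truncated data; a real response comes from the `v1/text:synthesize` endpoint.

```python
import base64
import json

# Stub of a synthesize response; real audioContent is much longer.
response_json = '{"audioContent": "SUQzBAAAAAAA"}'

# Decode the base64 string into raw MP3 bytes.
audio_bytes = base64.b64decode(json.loads(response_json)["audioContent"])

# Write the decoded bytes to a file that any MP3 player can open.
with open("output.mp3", "wb") as out:
    out.write(audio_bytes)
```

On the command line you can achieve the same thing with your platform's base64 decoding tool.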

To learn more about synthesis, review the quickstart or the Creating Voice Audio Files page.

Voices

Cloud Text-to-Speech creates raw audio data of natural, human speech. That is, it creates audio that sounds like a person talking. When you send a synthesis request to Cloud Text-to-Speech, you must specify a voice that 'speaks' the words.

Cloud Text-to-Speech has a wide selection of custom voices available for you to use. The voices differ by language, gender, and accent (for some languages). For example, you can create audio that mimics the sound of a female English speaker with a British accent, like example 1, above. You can also convert the same text into a different voice, say a male English speaker with an Australian accent.


Example 2. Audio file generated with en-AU speaker

To see the complete list of the available voices, see Supported Voices.
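The voice switch described above amounts to changing only the `voice` block of the request while the input text and audio settings stay the same. The sketch below shows two hypothetical request bodies; the name `en-AU-Standard-B` is an assumed example following the documented naming pattern, so check Supported Voices for real voice names.

```python
# Shared parts of the request: same text, same output encoding.
base_request = {
    "input": {"text": "I've added the event to your calendar."},
    "audioConfig": {"audioEncoding": "MP3"},
}

# Voice from example 1: female British English.
british_female = dict(base_request, voice={
    "languageCode": "en-GB",
    "name": "en-GB-Standard-A",
    "ssmlGender": "FEMALE",
})

# Voice from example 2: male Australian English (assumed voice name).
australian_male = dict(base_request, voice={
    "languageCode": "en-AU",
    "name": "en-AU-Standard-B",
    "ssmlGender": "MALE",
})
```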

WaveNet voices

In addition to traditional synthetic voices, Cloud Text-to-Speech also provides premium, WaveNet-generated voices. Users find WaveNet-generated voices warmer and more human-like than other synthetic voices.

The key difference in a WaveNet voice is the WaveNet model used to generate it. WaveNet models have been trained using raw audio samples of actual humans speaking. As a result, these models generate synthetic speech with more human-like emphasis and inflection on syllables, phonemes, and words.

Compare the following two samples of synthetic speech.


Example 3. Audio file generated with a standard voice


Example 4. Audio file generated with a WaveNet voice

To learn more about the benefits of WaveNet-generated voices, see WaveNet and Other Synthetic Voices.

Other audio output settings

Besides the voice, you can also configure other aspects of the audio data output created by speech synthesis. Cloud Text-to-Speech supports configuring the speaking rate, pitch, volume gain, and sample rate (in hertz).

Review the AudioConfig reference for more information.
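As an illustration, an `audioConfig` block exercising these settings might look like the sketch below. The specific values are assumptions for demonstration; consult the AudioConfig reference for the exact fields and their allowed ranges.

```python
# Hypothetical audioConfig showing the tunable output settings.
audio_config = {
    "audioEncoding": "LINEAR16",  # uncompressed, as used in WAV files
    "speakingRate": 0.85,         # relative rate; 1.0 is the normal speed
    "pitch": -2.0,                # semitone adjustment; 0.0 is the default
    "volumeGainDb": 3.0,          # gain in dB relative to normal volume
    "sampleRateHertz": 16000,     # output sample rate in hertz
}
```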

Speech Synthesis Markup Language (SSML) support

You can enhance the synthetic speech produced by Cloud Text-to-Speech by marking up the text using Speech Synthesis Markup Language (SSML). SSML enables you to insert pauses, acronym pronunciations, or other additional details into the audio data created by Cloud Text-to-Speech. Cloud Text-to-Speech supports a subset of the available SSML elements.

For example, you can ensure that the synthetic speech correctly pronounces ordinal numbers by providing Cloud Text-to-Speech with SSML input that marks ordinal numbers as such.


Example 5. Audio file generated from plain text input


Example 6. Audio file generated from SSML input
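The contrast between examples 5 and 6 comes down to the request input: plain text goes in `input.text`, while SSML goes in `input.ssml`. The sketch below uses SSML's `say-as` element with `interpret-as="ordinal"` to mark a number as an ordinal; the sentence itself is a made-up example.

```python
# Plain text input: the number "3" may be read as a cardinal.
plain_input = {"text": "The meeting is on March 3."}

# SSML input: mark "3" as an ordinal so it is read as "third".
ssml = (
    "<speak>"
    'The meeting is on March <say-as interpret-as="ordinal">3</say-as>.'
    "</speak>"
)
ssml_input = {"ssml": ssml}
```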

To learn more about how to synthesize speech from SSML, see Creating Voice Audio Files.
