Using your deployed custom voice model

Before you begin

  1. You supplied Google with a Google Cloud project ID in order to access the Custom Voice feature. For that project, make sure that billing is enabled, that the Text-to-Speech API and the AutoML API are enabled, and that you have installed and initialized the Google Cloud CLI.

  2. Grant the AutoML Predictor role to your Google Account on your project. For instructions, see Grant a single role.
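
If you prefer to grant the role from the command line, the gcloud command below is one way to do it. This is a sketch rather than part of the official setup steps; PROJECT_ID and YOUR_ACCOUNT_EMAIL are placeholders for your project ID and the email address of your Google Account.

# Grant the AutoML Predictor role (roles/automl.predictor) to your Google Account
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="user:YOUR_ACCOUNT_EMAIL" \
--role="roles/automl.predictor"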

Using the command line

HTTP method and URL:

POST https://texttospeech.googleapis.com/v1beta1/text:synthesize

Request JSON body:

{
  "input":{
    "text":"Android is a mobile operating system developed by Google, based on the Linux kernel and designed primarily for touchscreen mobile devices such as smartphones and tablets."
  },
  "voice":{
    "custom_voice":{
      "reportedUsage":"REALTIME",
      "model":"projects/{project_id}/locations/us-central1/models/{model_id}",
     }
  },
  "audioConfig":{
    "audioEncoding":"LINEAR16"
  }
}

Replace {project_id} and {model_id} in the request body with your project ID and the ID of your custom voice model. Save the request body in a file called request.json and execute the following command, replacing PROJECT_ID with your project ID:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth print-access-token) \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://texttospeech.googleapis.com/v1beta1/text:synthesize
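
The response is a JSON object whose audioContent field contains the synthesized audio as a base64-encoded string. The sketch below shows one way to turn it into a playable file; it assumes the jq tool is available, that you saved the response by appending > synthesize-output.json to the command above, and it names the decoded file output.wav:

# Extract the base64 audio from the saved response and decode it to a WAV file
jq -r '.audioContent' synthesize-output.json | base64 --decode > output.wav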

Using the Python client library

Download the client library and run the following commands:

pip install texttospeech-custom-voice-beta-v1beta1-py.tar.gz
pip install --upgrade protobuf
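
To confirm that the beta client is installed, you can try importing it; this quick check is not part of the official instructions:

python -c "from google.cloud import texttospeech_v1beta1; print('ok')"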

Sample Python code

"""Synthesize custom voice from the input string of text or ssml.
"""
from google.cloud import texttospeech_v1beta1

# Instantiate a client
client = texttospeech_v1beta1.TextToSpeechClient()

# Set the text input to be synthesized
synthesis_input = texttospeech_v1beta1.types.SynthesisInput(text="Hello, World!")

# Build the voice request, select the language code ("en-US"), and specify
# the custom voice model.
custom_voice = texttospeech_v1beta1.types.CustomVoiceParams(
    reported_usage=texttospeech_v1beta1.enums.CustomVoiceParams.ReportedUsage.REALTIME,
    model='projects/{project_id}/locations/us-central1/models/{model_id}')
voice = texttospeech_v1beta1.types.VoiceSelectionParams(
    language_code='en-US',
    custom_voice=custom_voice)

# Select the type of audio file you want returned
audio_config = texttospeech_v1beta1.types.AudioConfig(
    audio_encoding=texttospeech_v1beta1.enums.AudioEncoding.LINEAR16)

# Perform the text-to-speech request on the text input with the selected
# voice parameters and audio file type
response = client.synthesize_speech(synthesis_input, voice, audio_config)

# The response's audio_content is binary.
with open('output.wav', 'wb') as out:
    # Write the response to the output file.
    out.write(response.audio_content)
    print('Audio content written to file "output.wav"')