Synthesize speech with bidirectional streaming
This document walks you through the process of synthesizing audio using bidirectional streaming.
Bidirectional streaming lets you send text input and receive audio data simultaneously. This means that you can start synthesizing speech before the complete input text has been sent, which reduces latency and enables real-time interactions. Voice assistants and interactive games use bidirectional streaming to create more dynamic and responsive applications.
To learn more about the fundamental concepts in Text-to-Speech, read Text-to-Speech Basics.
Before you begin
Before you can send a request to the Text-to-Speech API, you must complete the following actions. See the before you begin page for details.
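In summary, you need to:

- Enable Text-to-Speech on a Google Cloud project and make sure that billing is enabled for it.
- Install the Google Cloud CLI and sign in to the gcloud CLI.

After signing in, initialize the Google Cloud CLI by running the following command:

```bash
gcloud init
```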
Synthesize speech with bidirectional streaming
Install the client library
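For Python, before installing the library, make sure you've prepared your environment for Python development. Then install the client library with pip:

```bash
pip install --upgrade google-cloud-texttospeech
```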
Send a stream of text and receive a stream of audio
The API accepts a stream of requests of type `StreamingSynthesizeRequest`, each of which contains either a `StreamingSynthesisInput` or a `StreamingSynthesizeConfig`.

Before sending a stream of `StreamingSynthesizeRequest` messages with `StreamingSynthesisInput`, which provides the text input, send exactly one `StreamingSynthesizeRequest` with a `StreamingSynthesizeConfig`.
Streaming Text-to-Speech is only compatible with Chirp 3: HD voices.
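For Python, before running the example, make sure you've prepared your environment for Python development. The following sample sends a single configuration request followed by a stream of text requests, and prints the size of each audio chunk returned by the API:

```python
#!/usr/bin/env python
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Google Cloud Text-To-Speech API streaming sample application.

Example usage:
    python streaming_tts_quickstart.py
"""


def run_streaming_tts_quickstart():
    """Synthesizes speech from a stream of input text."""
    from google.cloud import texttospeech

    client = texttospeech.TextToSpeechClient()

    # See https://cloud.google.com/text-to-speech/docs/voices for all voices.
    streaming_config = texttospeech.StreamingSynthesizeConfig(
        voice=texttospeech.VoiceSelectionParams(
            name="en-US-Chirp3-HD-Charon",
            language_code="en-US",
        )
    )

    # Set the config for your stream. The first request must contain your
    # config, and then each subsequent request must contain text.
    config_request = texttospeech.StreamingSynthesizeRequest(
        streaming_config=streaming_config
    )

    text_iterator = [
        "Hello there. ",
        "How are you ",
        "today? It's ",
        "such nice weather outside.",
    ]

    # Request generator. Consider using Gemini or another LLM with output
    # streaming as a generator.
    def request_generator():
        yield config_request
        for text in text_iterator:
            yield texttospeech.StreamingSynthesizeRequest(
                input=texttospeech.StreamingSynthesisInput(text=text)
            )

    streaming_responses = client.streaming_synthesize(request_generator())

    for response in streaming_responses:
        print(f"Audio content size in bytes is: {len(response.audio_content)}")


if __name__ == "__main__":
    run_streaming_tts_quickstart()
```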
Clean up
To avoid unnecessary Google Cloud Platform charges, use the Google Cloud console to delete your project if you don't need it.
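Alternatively, if you prefer the command line, you can delete a project with the gcloud CLI. PROJECT_ID below is a placeholder for your own project ID:

```bash
# Replace PROJECT_ID with the ID of the project you want to delete.
gcloud projects delete PROJECT_ID
```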