# Transcribe a multi-lingual file in Cloud Storage (beta)
Transcribe an audio file stored in Cloud Storage that includes more than one language.
Explore further
---------------
For detailed documentation that includes this code sample, see the following:

- [Enable language recognition in Speech-to-Text](/speech-to-text/docs/enable-language-recognition-speech-to-text)
Code sample
-----------
### Java

To learn how to install and use the client library for Speech-to-Text, see
[Speech-to-Text client libraries](/speech-to-text/docs/client-libraries).

For more information, see the
[Speech-to-Text Java API reference documentation](/java/docs/reference/google-cloud-speech/latest/overview).

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    /**
     * Transcribe a remote audio file with multi-language recognition
     *
     * @param gcsUri the path to the remote audio file
     */
    public static void transcribeMultiLanguageGcs(String gcsUri) throws Exception {
      try (SpeechClient speechClient = SpeechClient.create()) {

        ArrayList<String> languageList = new ArrayList<>();
        languageList.add("es-ES");
        languageList.add("en-US");

        // Configure request to enable multiple languages
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(AudioEncoding.LINEAR16)
                .setSampleRateHertz(16000)
                .setLanguageCode("ja-JP")
                .addAllAlternativeLanguageCodes(languageList)
                .build();

        // Set the remote path for the audio file
        RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();

        // Use non-blocking call for getting file transcription
        OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
            speechClient.longRunningRecognizeAsync(config, audio);

        while (!response.isDone()) {
          System.out.println("Waiting for response...");
          Thread.sleep(10000);
        }

        for (SpeechRecognitionResult result : response.get().getResultsList()) {

          // There can be several alternative transcripts for a given chunk of speech.
          // Just use the first (most likely) one here.
          SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);

          // Print out the result
          System.out.printf("Transcript : %s\n\n", alternative.getTranscript());
        }
      }
    }

### Node.js

To learn how to install and use the client library for Speech-to-Text, see
[Speech-to-Text client libraries](/speech-to-text/docs/client-libraries).

For more information, see the
[Speech-to-Text Node.js API reference documentation](/nodejs/docs/reference/speech/latest).

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    // Imports the Google Cloud client library
    const speech = require('@google-cloud/speech').v1p1beta1;

    // Creates a client
    const client = new speech.SpeechClient();

    /**
     * TODO(developer): Uncomment the following line before running the sample.
     */
    // const gcsUri = 'gs://bucket/audio.wav';  // path to a GCS audio file

    const config = {
      encoding: 'LINEAR16',
      sampleRateHertz: 44100,
      languageCode: 'en-US',
      alternativeLanguageCodes: ['es-ES', 'en-US'],
    };

    const audio = {
      uri: gcsUri,
    };

    const request = {
      config: config,
      audio: audio,
    };

    const [operation] = await client.longRunningRecognize(request);
    const [response] = await operation.promise();
    const transcription = response.results
      .map(result => result.alternatives[0].transcript)
      .join('\n');
    console.log(`Transcription: ${transcription}`);

### Python

To learn how to install and use the client library for Speech-to-Text, see
[Speech-to-Text client libraries](/speech-to-text/docs/client-libraries).

For more information, see the
[Speech-to-Text Python API reference documentation](/python/docs/reference/speech/latest).

To authenticate to Speech-to-Text, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).
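As a side note, the sketch below is not part of the official sample; it only illustrates the Application Default Credentials step. It resolves ADC explicitly with `google.auth.default()`, which can be useful to confirm which credentials and project will be used. Passing the credentials to the client is optional, since `speech.SpeechClient()` performs the same lookup when none are supplied.

    # Minimal sketch (not from the official sample): inspect the Application
    # Default Credentials that the client library will pick up.
    import google.auth
    from google.cloud import speech_v1p1beta1 as speech

    # Resolves ADC, for example credentials created with
    # `gcloud auth application-default login`. Raises DefaultCredentialsError
    # if no credentials can be found.
    credentials, project_id = google.auth.default()
    print(f"Using project: {project_id}")

    # Passing credentials explicitly is optional; speech.SpeechClient() alone
    # performs the same ADC lookup.
    client = speech.SpeechClient(credentials=credentials)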
    from google.cloud import speech_v1p1beta1 as speech


    def transcribe_file_with_multilanguage_gcs(audio_uri: str) -> str:
        """Transcribe a remote audio file with multi-language recognition
        Args:
            audio_uri (str): The Google Cloud Storage path to an audio file.
                E.g., gs://[BUCKET]/[FILE]
        Returns:
            str: The generated transcript from the audio file provided.
        """

        client = speech.SpeechClient()

        first_language = "es-ES"
        alternate_languages = ["en-US", "fr-FR"]

        # Configure request to enable multiple languages
        recognition_config = speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
            sample_rate_hertz=44100,
            language_code=first_language,
            alternative_language_codes=alternate_languages,
        )

        # Set the remote path for the audio file
        audio = speech.RecognitionAudio(uri=audio_uri)

        # Use non-blocking call for getting file transcription
        response = client.long_running_recognize(
            config=recognition_config, audio=audio
        ).result(timeout=300)

        transcript_builder = []
        for i, result in enumerate(response.results):
            alternative = result.alternatives[0]
            transcript_builder.append("-" * 20 + "\n")
            transcript_builder.append(f"First alternative of result {i}: {alternative}")
            transcript_builder.append(f"Transcript: {alternative.transcript} \n")

        transcript = "".join(transcript_builder)
        print(transcript)

        return transcript
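For context, here is one way the Python function above might be invoked. This is a hedged usage sketch, not part of the original sample: the bucket and object names are placeholders, and the file is assumed to match the FLAC encoding and 44100 Hz sample rate set in the recognition config.

    # Hypothetical usage; assumes transcribe_file_with_multilanguage_gcs from the
    # sample above is defined in the same module. Replace the placeholder URI
    # with a real Cloud Storage object before running.
    if __name__ == "__main__":
        transcript = transcribe_file_with_multilanguage_gcs(
            "gs://your-bucket/your-multilingual-audio.flac"
        )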
What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=speech).