Transcribe speech to text by using client libraries

This page shows you how to send a speech recognition request to Speech-to-Text in your favorite programming language by using the Google Cloud client libraries.

Speech-to-Text makes it easy to integrate Google speech recognition technology into developer applications. You can send audio data to the Speech-to-Text API, which then returns a text transcription of that audio file. For more information about the service, see Speech-to-Text basics.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Speech-to-Text APIs.

    Enable the APIs

  5. Make sure that you have the following role or roles on the project: Cloud Speech Administrator

    Check for the roles

    1. In the Google Cloud console, go to the IAM page.

      Go to IAM
    2. Select the project.
    3. In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.

    4. For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles.

    Grant the roles

    1. In the Google Cloud console, go to the IAM page.

      Go to IAM
    2. Select the project.
    3. Click Grant access.
    4. In the New principals field, enter your user identifier. This is typically the email address for a Google Account.

    5. In the Select a role list, select a role.
    6. To grant additional roles, click Add another role and add each additional role.
    7. Click Save.
  6. Install the Google Cloud CLI.
  7. To initialize the gcloud CLI, run the following command:

    gcloud init
  8. Client libraries can use Application Default Credentials to easily authenticate with Google APIs and send requests to those APIs. With Application Default Credentials, you can test your application locally and deploy it without changing the underlying code. For more information, see Authenticate for using client libraries. A quick way to check that these credentials are working is sketched after this list.

  9. If you're using a local shell, then create local authentication credentials for your user account:

    gcloud auth application-default login

    You don't need to do this if you're using Cloud Shell.
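
To confirm that Application Default Credentials are available before calling Speech-to-Text, you can load them with the google-auth library, which the Speech-to-Text client library already depends on. This check is optional and not part of the quickstart:

    import google.auth

    # Load whatever Application Default Credentials are available, for example the
    # credentials created by `gcloud auth application-default login`.
    # project_id can be None if no default project is configured.
    credentials, project_id = google.auth.default()
    print(f"Application Default Credentials found for project: {project_id}")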

In addition, make sure that you have installed the client libraries.

Make an audio transcription request

Use the following code to send a Recognize request to the Speech-to-Text API.

Java

      // Imports the Google Cloud client library
      import com.google.api.gax.longrunning.OperationFuture;
      import com.google.cloud.speech.v2.AutoDetectDecodingConfig;
      import com.google.cloud.speech.v2.CreateRecognizerRequest;
      import com.google.cloud.speech.v2.OperationMetadata;
      import com.google.cloud.speech.v2.RecognitionConfig;
      import com.google.cloud.speech.v2.RecognizeRequest;
      import com.google.cloud.speech.v2.RecognizeResponse;
      import com.google.cloud.speech.v2.Recognizer;
      import com.google.cloud.speech.v2.SpeechClient;
      import com.google.cloud.speech.v2.SpeechRecognitionAlternative;
      import com.google.cloud.speech.v2.SpeechRecognitionResult;
      import com.google.protobuf.ByteString;
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.util.List;
      import java.util.concurrent.ExecutionException;
      
      public class QuickstartSampleV2 {
      
        public static void main(String[] args) throws IOException, ExecutionException,
            InterruptedException {
          String projectId = "my-project-id";
          String filePath = "path/to/audioFile.raw";
          String recognizerId = "my-recognizer-id";
          quickstartSampleV2(projectId, filePath, recognizerId);
        }
      
        public static void quickstartSampleV2(String projectId, String filePath, String recognizerId)
            throws IOException, ExecutionException, InterruptedException {
      
          // Initialize client that will be used to send requests. This client only needs to be created
          // once, and can be reused for multiple requests. After completing all of your requests, call
          // the "close" method on the client to safely clean up any remaining background resources.
          try (SpeechClient speechClient = SpeechClient.create()) {
            Path path = Paths.get(filePath);
            byte[] data = Files.readAllBytes(path);
            ByteString audioBytes = ByteString.copyFrom(data);
      
            String parent = String.format("projects/%s/locations/global", projectId);
      
            // First, create a recognizer
            Recognizer recognizer = Recognizer.newBuilder()
                .setModel("latest_long")
                .addLanguageCodes("en-US")
                .build();
      
            CreateRecognizerRequest createRecognizerRequest = CreateRecognizerRequest.newBuilder()
                .setParent(parent)
                .setRecognizerId(recognizerId)
                .setRecognizer(recognizer)
                .build();
      
            OperationFuture<Recognizer, OperationMetadata> operationFuture =
                speechClient.createRecognizerAsync(createRecognizerRequest);
            recognizer = operationFuture.get();
      
            // Next, create the transcription request
            RecognitionConfig recognitionConfig = RecognitionConfig.newBuilder()
                .setAutoDecodingConfig(AutoDetectDecodingConfig.newBuilder().build())
                .build();
      
            RecognizeRequest request = RecognizeRequest.newBuilder()
                .setConfig(recognitionConfig)
                .setRecognizer(recognizer.getName())
                .setContent(audioBytes)
                .build();
      
            RecognizeResponse response = speechClient.recognize(request);
            List<SpeechRecognitionResult> results = response.getResultsList();
      
            for (SpeechRecognitionResult result : results) {
              // There can be several alternative transcripts for a given chunk of speech. Just use the
              // first (most likely) one here.
              if (result.getAlternativesCount() > 0) {
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcription: %s%n", alternative.getTranscript());
              }
            }
          }
        }
      }

Python

      import os
      
      from google.cloud.speech_v2 import SpeechClient
      from google.cloud.speech_v2.types import cloud_speech
      
      PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")
      
      
      def quickstart_v2(audio_file: str) -> cloud_speech.RecognizeResponse:
          """Transcribe an audio file.
          Args:
              audio_file (str): Path to the local audio file to be transcribed.
          Returns:
              cloud_speech.RecognizeResponse: The response from the recognize request, containing
              the transcription results
          """
          # Reads a file as bytes
          with open(audio_file, "rb") as f:
              audio_content = f.read()
      
          # Instantiates a client
          client = SpeechClient()
      
          config = cloud_speech.RecognitionConfig(
              auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
              language_codes=["en-US"],
              model="long",
          )
      
          request = cloud_speech.RecognizeRequest(
              recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
              config=config,
              content=audio_content,
          )
      
          # Transcribes the audio into text
          response = client.recognize(request=request)
      
          for result in response.results:
              print(f"Transcript: {result.alternatives[0].transcript}")
      
          return response
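

      # Hypothetical way to run this sample directly; the audio path is a placeholder
      # and GOOGLE_CLOUD_PROJECT must be set in the environment.
      if __name__ == "__main__":
          quickstart_v2("path/to/audio_file.wav")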
      
      

You have sent your first request to Speech-to-Text.

Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.

  1. Optional: Revoke the authentication credentials that you created, and delete the local credential file.

    gcloud auth application-default revoke
  2. Optional: Revoke credentials from the gcloud CLI.

    gcloud auth revoke
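
If you created the recognizer from the Java sample and want to remove just that resource rather than the whole project, you can delete it with the client library. The following is a minimal sketch, not part of the official quickstart; it assumes the same placeholder project ID and recognizer ID used in the Java sample:

    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    client = SpeechClient()

    # Deleting a recognizer is a long-running operation; result() waits for it to finish.
    operation = client.delete_recognizer(
        request=cloud_speech.DeleteRecognizerRequest(
            name="projects/my-project-id/locations/global/recognizers/my-recognizer-id"
        )
    )
    operation.result()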

Console

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

gcloud

Delete a Google Cloud project:

  gcloud projects delete PROJECT_ID

What's next