Improve transcription results with model adaptation

Overview

You can use the model adaptation feature to help Speech-to-Text recognize specific words or phrases more frequently than other alternatives. For example, suppose that your audio data often includes the word "weather". When Speech-to-Text encounters the word "weather", you want it to transcribe the word as "weather" more often than "whether". In this case, you can use model adaptation to bias Speech-to-Text toward recognizing "weather".

Model adaptation is particularly helpful in the following use cases:

  • Improving the accuracy of words and phrases that occur frequently in your audio data. For example, you can alert the recognition model to voice commands that your users typically say.

  • Expanding the vocabulary of words recognized by Speech-to-Text. Speech-to-Text includes a very large vocabulary. However, if your audio data often contains words that are rare in general language use (such as proper names or domain-specific words), you can add them using model adaptation.

  • Improving the accuracy of speech transcription when the supplied audio contains noise or is not very clear.

Optionally, you can fine-tune the biasing of the recognition model using the model adaptation boost feature.

Improve recognition of words and phrases

To increase the probability that Speech-to-Text recognizes the word "weather" when it transcribes your audio data, you can pass the single word "weather" in a PhraseSet object in a SpeechAdaptation resource.

When you provide a multi-word phrase, Speech-to-Text is more likely to recognize those words in sequence. Providing a phrase also boosts the probability of recognizing portions of the phrase, including individual words. See the content limits page for limits on the number and size of these phrases.
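The full samples later on this page pass phrases as plain dicts. As a minimal data-only sketch in that same style (the multi-word phrase here is a hypothetical example), a phrase set biasing toward "weather" might look like:

```python
# Minimal sketch: a phrase set biasing recognition toward "weather",
# using the same plain-dict style as the full samples later on this page.
phrase_set = {
    "phrases": [
        {"value": "weather"},              # a single word
        {"value": "what is the weather"},  # a multi-word phrase (hypothetical)
    ]
}

values = [p["value"] for p in phrase_set["phrases"]]
print(values)
```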

Improve recognition using classes

Classes represent common concepts that occur in natural language, such as monetary units and calendar dates. A class helps you improve transcription accuracy for large groups of words that map to a common concept but don't always include identical words or phrases.

For example, suppose that your audio data includes recordings of people saying their street address. You might have an audio recording of someone saying "My house is 123 Main Street, the fourth house on the left". In this case, you want Speech-to-Text to recognize the first sequence of numerals ("123") as an address rather than as an ordinal number ("one-hundred twenty-third"). However, not all people live at "123 Main Street", and it's impractical to list every possible street address in a PhraseSet resource. Instead, you can use a class to indicate that a street number should be recognized no matter what the number actually is. In this example, Speech-to-Text could then more accurately transcribe phrases like "123 Main Street" and "987 Grand Boulevard" because they are both recognized as address numbers.

Class tokens

To use a class in model adaptation, include a class token in the phrases field of a PhraseSet resource. See the list of supported class tokens to find which tokens are available for your language. For example, to improve the transcription of address numbers from your source audio, provide the value $ADDRESSNUM as a phrase in a PhraseSet.

You can use classes either as standalone items in the phrases array or embed one or more class tokens in longer multi-word phrases. For example, you can indicate an address number in a larger phrase by including the class token in a string: ["my address is $ADDRESSNUM"]. However, this phrase doesn't help in cases where the audio contains a similar but not identical phrase, such as "I am at 123 Main Street". To help recognize similar phrases, additionally include the class token on its own: ["my address is $ADDRESSNUM", "$ADDRESSNUM"]. If you use an invalid or malformed class token, Speech-to-Text ignores the token without triggering an error but still uses the rest of the phrase for context.
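As a data-only sketch (mirroring the dict style of the samples below), the recommended phrases array combines the embedded token with the standalone token:

```python
# Phrases that use the prebuilt $ADDRESSNUM class token: one embedded in a
# longer phrase, plus the standalone token to catch similar phrasings.
phrases = [
    {"value": "my address is $ADDRESSNUM"},
    {"value": "$ADDRESSNUM"},
]

assert all("$ADDRESSNUM" in p["value"] for p in phrases)
```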

Custom classes

You can also create your own CustomClass, a class composed of your own custom list of related items or values. For example, the audio data that you want to transcribe might include the name of any one of hundreds of regional restaurants. Restaurant names are relatively rare in general speech and therefore less likely to be chosen as the "correct" answer by the recognition model. Using a custom class, you can bias the recognition model toward correctly identifying these names when they appear in your audio.

To use a custom class, create a CustomClass resource that includes each restaurant name as a ClassItem. Custom classes function in the same way as the prebuilt class tokens. A phrase can include both prebuilt class tokens and custom classes.
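A sketch of the restaurant example in the same inline-dict style used by the code samples below; the class name and the restaurant names here are hypothetical:

```python
# Hypothetical custom class listing regional restaurant names.
custom_class_name = "restaurant-names"
custom_class = {
    "name": custom_class_name,
    "items": [
        {"value": "Mukilteo Cafe"},
        {"value": "Patisserie Chanson"},
    ],
}

# A phrase can reference the inline custom class by its name, the same way
# it would use a prebuilt class token (the boost value is illustrative).
phrase_set = {"phrases": [{"value": custom_class_name, "boost": 20}]}
```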

Fine-tune transcription results using boost

By default, model adaptation has a relatively small effect, especially for one-word phrases. The model adaptation boost feature lets you increase the recognition model bias by assigning more weight to some phrases than others. We recommend that you implement boost if all of the following are true:

  1. You have already implemented model adaptation.
  2. You want to further adjust the strength of the model adaptation effect on your transcription results. To see whether the boost feature is available for your language, see the language support page.

For example, you have many recordings of people asking about the "fare to get into the county fair", with the word "fair" occurring more frequently than "fare". In this case, you can use model adaptation to increase the probability of the model recognizing both "fair" and "fare" by adding them as phrases in a PhraseSet resource. This tells Speech-to-Text to recognize "fair" and "fare" more often than, for example, "hare" or "lair".

However, "fair" should be recognized more often than "fare" because it appears more frequently in the audio. You might have already transcribed your audio using the Speech-to-Text API and found a high number of errors in recognizing the correct word ("fair"). In this case, you might want to use the boost feature to assign a higher boost value to "fair" than to "fare". The higher weighted value assigned to "fair" biases the Speech-to-Text API toward picking "fair" more frequently than "fare". Without boost values, the recognition model recognizes "fair" and "fare" with equal probability.

Boost basics

When you use boost, you assign a weighted value to phrase items in a PhraseSet resource. Speech-to-Text refers to this weighted value when selecting a possible transcription for words in your audio data. The higher the value, the higher the likelihood that Speech-to-Text chooses that word or phrase from the possible alternatives.

If you assign a boost value to a multi-word phrase, boost is applied to the entire phrase and only the entire phrase. For example, you want to assign a boost value to the phrase "My favorite exhibit at the American Museum of Natural History is the blue whale". If you add that phrase to a phrase object and assign a boost value, the recognition model is more likely to recognize that phrase in its entirety, word-for-word.

If you don't get the results you want by boosting a multi-word phrase, we suggest that you add all bigrams (2 words, in order) that make up the phrase as additional phrase items and assign a boost value to each. Continuing the preceding example, you could investigate adding additional bigrams and n-grams (more than two words), such as "my favorite", "my favorite exhibit", "favorite exhibit", "my favorite exhibit at the American Museum of Natural History", "American Museum of Natural History", and "blue whale". The Speech-to-Text recognition model is then more likely to recognize related phrases in your audio that contain parts of the original boosted phrase but don't match it word-for-word.
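The bigram expansion described above is mechanical, so it can be generated with a small helper (a convenience sketch, not part of the API; the boost value is illustrative):

```python
def bigrams(phrase: str) -> list[str]:
    """Return every ordered two-word sequence in a phrase."""
    words = phrase.split()
    return [" ".join(words[i : i + 2]) for i in range(len(words) - 1)]


# Expand a long phrase into bigram entries to boost alongside it.
long_phrase = "my favorite exhibit is the blue whale"
extra_phrases = [{"value": b, "boost": 5} for b in bigrams(long_phrase)]
print([p["value"] for p in extra_phrases])
# → ['my favorite', 'favorite exhibit', 'exhibit is', 'is the', 'the blue', 'blue whale']
```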

Set boost values

Boost values must be a float value greater than 0. The practical maximum for boost values is 20. For best results, experiment with your transcription results by adjusting your boost values up or down until you get accurate transcriptions.
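Because boost must be a float greater than 0 with a practical ceiling of 20, a small guard before building the request can catch mistakes early while you experiment (a convenience sketch, not part of the API):

```python
def validate_boost(boost: float) -> float:
    """Check that a boost value is in the practical range (0, 20]."""
    if boost <= 0:
        raise ValueError(f"boost must be greater than 0, got {boost}")
    if boost > 20:
        raise ValueError(f"boost exceeds the practical maximum of 20: {boost}")
    return float(boost)


# Example: weight "fair" more heavily than "fare" (values are illustrative).
phrases = [
    {"value": "fair", "boost": validate_boost(15)},
    {"value": "fare", "boost": validate_boost(5)},
]
```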

Higher boost values can result in fewer false negatives, which are cases where the word or phrase occurred in the audio but wasn't correctly recognized by Speech-to-Text. However, boost can also increase the likelihood of false positives; that is, cases where the word or phrase appears in the transcription even though it didn't occur in the audio.

Example use case using model adaptation

The following example walks you through the process of using model adaptation to transcribe a recording of someone saying "The word is fare". In this example, without speech adaptation, Speech-to-Text identifies the word "fair". With speech adaptation, Speech-to-Text can identify the word "fare" instead.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Speech-to-Text APIs.

    Enable the APIs

  5. Make sure that you have the following role or roles on the project: Cloud Speech Administrator

    Check for the roles

    1. In the Google Cloud console, go to the IAM page.

      Go to IAM
    2. Select the project.
    3. In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.

    4. For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles.

    Grant the roles

    1. In the Google Cloud console, go to the IAM page.

      Go to IAM
    2. Select the project.
    3. Click Grant access.
    4. In the New principals field, enter your user identifier. This is typically the email address for a Google Account.

    5. In the Select a role list, select a role.
    6. To grant additional roles, click Add another role and add each additional role.
    7. Click Save.
    8. Install the Google Cloud CLI.
    9. To initialize the gcloud CLI, run the following command:

      gcloud init
    10. Client libraries can use Application Default Credentials to easily authenticate with Google APIs and send requests to those APIs. With Application Default Credentials, you can test your application locally and deploy it without changing the underlying code. For more information, see Authenticate for using client libraries.

      11. If you're using a local shell, then create local authentication credentials for your user account:

        gcloud auth application-default login

        You don't need to do this if you're using Cloud Shell.

Also make sure that you have installed the client library.

Improve transcription using a PhraseSet

      1. The following example builds a PhraseSet containing the phrase "fare" and adds it as an inline_phrase_set in a recognition request:

      Python

      import os
      
      from google.cloud.speech_v2 import SpeechClient
      from google.cloud.speech_v2.types import cloud_speech
      
      PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")
      
      
      def adaptation_v2_inline_phrase_set(audio_file: str) -> cloud_speech.RecognizeResponse:
          """Enhances speech recognition accuracy using an inline phrase set.
          The inline custom phrase set helps the recognizer produce more accurate transcriptions for specific terms.
          Phrases are given a boost to increase their chances of being recognized correctly.
          Args:
              audio_file (str): Path to the local audio file to be transcribed.
          Returns:
              cloud_speech.RecognizeResponse: The full response object which includes the transcription results.
          """
      
          # Instantiates a client
          client = SpeechClient()
      
          # Reads a file as bytes
          with open(audio_file, "rb") as f:
              audio_content = f.read()
      
          # Build inline phrase set to produce a more accurate transcript
          phrase_set = cloud_speech.PhraseSet(
              phrases=[{"value": "fare", "boost": 10}, {"value": "word", "boost": 20}]
          )
          adaptation = cloud_speech.SpeechAdaptation(
              phrase_sets=[
                  cloud_speech.SpeechAdaptation.AdaptationPhraseSet(
                      inline_phrase_set=phrase_set
                  )
              ]
          )
          config = cloud_speech.RecognitionConfig(
              auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
              adaptation=adaptation,
              language_codes=["en-US"],
              model="short",
          )
      
          # Prepare the request which includes specifying the recognizer, configuration, and the audio content
          request = cloud_speech.RecognizeRequest(
              recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
              config=config,
              content=audio_content,
          )
      
          # Transcribes the audio into text
          response = client.recognize(request=request)
      
          for result in response.results:
              print(f"Transcript: {result.alternatives[0].transcript}")
      
          return response
      
      
      2. This example creates a PhraseSet resource with the same phrase and then references that resource in a recognition request:

      Python

      import os
      
      from google.cloud.speech_v2 import SpeechClient
      from google.cloud.speech_v2.types import cloud_speech
      
      PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")
      
      
      def adaptation_v2_phrase_set_reference(
          audio_file: str,
          phrase_set_id: str,
      ) -> cloud_speech.RecognizeResponse:
          """Transcribe audio files using a PhraseSet.
          Args:
              audio_file (str): Path to the local audio file to be transcribed.
              phrase_set_id (str): The unique ID of the PhraseSet to use.
          Returns:
              cloud_speech.RecognizeResponse: The full response object which includes the transcription results.
          """
      
          # Instantiates a client
          client = SpeechClient()
      
          # Reads a file as bytes
          with open(audio_file, "rb") as f:
              audio_content = f.read()
      
          # Creating operation of creating the PhraseSet on the cloud.
          operation = client.create_phrase_set(
              parent=f"projects/{PROJECT_ID}/locations/global",
              phrase_set_id=phrase_set_id,
              phrase_set=cloud_speech.PhraseSet(phrases=[{"value": "fare", "boost": 10}]),
          )
          phrase_set = operation.result()
      
          # Add a reference of the PhraseSet into the recognition request
          adaptation = cloud_speech.SpeechAdaptation(
              phrase_sets=[
                  cloud_speech.SpeechAdaptation.AdaptationPhraseSet(
                      phrase_set=phrase_set.name
                  )
              ]
          )
      
          # Automatically detect audio encoding. Use "short" model for short utterances.
          config = cloud_speech.RecognitionConfig(
              auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
              adaptation=adaptation,
              language_codes=["en-US"],
              model="short",
          )
          #  Prepare the request which includes specifying the recognizer, configuration, and the audio content
          request = cloud_speech.RecognizeRequest(
              recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
              config=config,
              content=audio_content,
          )
          # Transcribes the audio into text
          response = client.recognize(request=request)
      
          for result in response.results:
              print(f"Transcript: {result.alternatives[0].transcript}")
      
          return response
      
      

Improve transcription using a CustomClass

      1. The following example builds a CustomClass containing the item "fare" and the name "fare". It then references the CustomClass within an inline_phrase_set in a recognition request:

      Python

      import os
      
      from google.cloud.speech_v2 import SpeechClient
      from google.cloud.speech_v2.types import cloud_speech
      
      PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")
      
      
      def adaptation_v2_inline_custom_class(
          audio_file: str,
      ) -> cloud_speech.RecognizeResponse:
          """Transcribe audio file using inline custom class.
          The inline custom class helps the recognizer produce more accurate transcriptions for specific terms.
          Args:
              audio_file (str): Path to the local audio file to be transcribed.
          Returns:
              cloud_speech.RecognizeResponse: The response object which includes the transcription results.
          """
          # Instantiates a client
          client = SpeechClient()
      
          # Reads a file as bytes
          with open(audio_file, "rb") as f:
              audio_content = f.read()
      
          # Define an inline custom class to enhance recognition accuracy with specific items like "fare" etc.
          custom_class_name = "your-class-name"
          custom_class = cloud_speech.CustomClass(
              name=custom_class_name,
              items=[{"value": "fare"}],
          )
      
          # Build inline phrase set to produce a more accurate transcript
          phrase_set = cloud_speech.PhraseSet(
              phrases=[{"value": custom_class_name, "boost": 20}]
          )
          adaptation = cloud_speech.SpeechAdaptation(
              phrase_sets=[
                  cloud_speech.SpeechAdaptation.AdaptationPhraseSet(
                      inline_phrase_set=phrase_set
                  )
              ],
              custom_classes=[custom_class],
          )
          config = cloud_speech.RecognitionConfig(
              auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
              adaptation=adaptation,
              language_codes=["en-US"],
              model="short",
          )
      
          # Prepare the request which includes specifying the recognizer, configuration, and the audio content
          request = cloud_speech.RecognizeRequest(
              recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
              config=config,
              content=audio_content,
          )
      
          # Transcribes the audio into text
          response = client.recognize(request=request)
      
          for result in response.results:
              print(f"Transcript: {result.alternatives[0].transcript}")
      
          return response
      
      
      2. This example creates a CustomClass resource with the same item, then creates a PhraseSet resource with a phrase that references the CustomClass resource name. It then references the PhraseSet resource in a recognition request:

      Python

      import os
      
      from google.cloud.speech_v2 import SpeechClient
      from google.cloud.speech_v2.types import cloud_speech
      
      PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")
      
      
      def adaptation_v2_custom_class_reference(
          audio_file: str, phrase_set_id: str, custom_class_id: str
      ) -> cloud_speech.RecognizeResponse:
          """Transcribe audio file using a custom class.
          Args:
              audio_file (str): Path to the local audio file to be transcribed.
              phrase_set_id (str): The unique ID of the phrase set to use.
              custom_class_id (str): The unique ID of the custom class to use.
          Returns:
              cloud_speech.RecognizeResponse: The full response object which includes the transcription results.
          """
          # Instantiates a speech client
          client = SpeechClient()
      
          # Reads a file as bytes
          with open(audio_file, "rb") as f:
              audio_content = f.read()
      
          # Create a custom class to improve recognition accuracy for specific terms
          custom_class = cloud_speech.CustomClass(items=[{"value": "fare"}])
          operation = client.create_custom_class(
              parent=f"projects/{PROJECT_ID}/locations/global",
              custom_class_id=custom_class_id,
              custom_class=custom_class,
          )
          custom_class = operation.result()
      
          # Create a persistent PhraseSet to reference in a recognition request
          created_phrase_set = cloud_speech.PhraseSet(
              phrases=[
                  {
                      "value": f"${{{custom_class.name}}}",
                      "boost": 20,
                  },  # Using custom class reference
              ]
          )
          operation = client.create_phrase_set(
              parent=f"projects/{PROJECT_ID}/locations/global",
              phrase_set_id=phrase_set_id,
              phrase_set=created_phrase_set,
          )
          phrase_set = operation.result()
      
          # Add a reference of the PhraseSet into the recognition request
          adaptation = cloud_speech.SpeechAdaptation(
              phrase_sets=[
                  cloud_speech.SpeechAdaptation.AdaptationPhraseSet(
                      phrase_set=phrase_set.name
                  )
              ]
          )
          # Automatically detect the audio's encoding with short audio model
          config = cloud_speech.RecognitionConfig(
              auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
              adaptation=adaptation,
              language_codes=["en-US"],
              model="short",
          )
      
          # Prepare the request with the recognizer, configuration, and audio content
          request = cloud_speech.RecognizeRequest(
              recognizer=f"projects/{PROJECT_ID}/locations/global/recognizers/_",
              config=config,
              content=audio_content,
          )
      
          # Transcribes the audio into text
          response = client.recognize(request=request)
      
          for result in response.results:
              print(f"Transcript: {result.alternatives[0].transcript}")
      
          return response
      
      

Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.

      1. Optional: Revoke the authentication credentials that you created, and delete the local credential file.

        gcloud auth application-default revoke
      2. Optional: Revoke credentials from the gcloud CLI.

        gcloud auth revoke

Console

1. In the Google Cloud console, go to the Manage resources page.

  Go to Manage resources

2. In the project list, select the project that you want to delete, and then click Delete.
3. In the dialog, type the project ID, and then click Shut down to delete the project.

gcloud

      Delete a Google Cloud project:

      gcloud projects delete PROJECT_ID

What's next