Cloud Speech-to-Text V2 API - Class Google::Cloud::Speech::V2::Recognizer (v0.1.0)

Reference documentation and code samples for the Cloud Speech-to-Text V2 API class Google::Cloud::Speech::V2::Recognizer.

A Recognizer message. Stores recognition configuration and metadata.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#annotations

def annotations() -> ::Google::Protobuf::Map{::String => ::String}
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — Allows users to store small amounts of arbitrary data. Keys and values must each be 63 characters or less. At most 100 annotations are allowed.

#annotations=

def annotations=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
  • value (::Google::Protobuf::Map{::String => ::String}) — Allows users to store small amounts of arbitrary data. Keys and values must each be 63 characters or less. At most 100 annotations are allowed.
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — Allows users to store small amounts of arbitrary data. Keys and values must each be 63 characters or less. At most 100 annotations are allowed.
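
The size constraints above can be checked client-side before sending a request. A minimal sketch in plain Ruby (the `valid_annotations?` helper is hypothetical and not part of the gem):

```ruby
# Hypothetical client-side check for the documented annotation limits:
# at most 100 entries, and each key and value at most 63 characters.
def valid_annotations?(annotations)
  return false if annotations.size > 100
  annotations.all? { |k, v| k.length <= 63 && v.length <= 63 }
end

valid_annotations?("env" => "prod", "team" => "speech")  # => true
valid_annotations?("k" * 64 => "key-too-long")           # => false
```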

#create_time

def create_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)

#default_recognition_config

def default_recognition_config() -> ::Google::Cloud::Speech::V2::RecognitionConfig
Returns
  • (::Google::Cloud::Speech::V2::RecognitionConfig)

#default_recognition_config=

def default_recognition_config=(value) -> ::Google::Cloud::Speech::V2::RecognitionConfig
Parameter
  • value (::Google::Cloud::Speech::V2::RecognitionConfig)
Returns
  • (::Google::Cloud::Speech::V2::RecognitionConfig)

#delete_time

def delete_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)

#display_name

def display_name() -> ::String
Returns
  • (::String) — User-settable, human-readable name for the Recognizer. Must be 63 characters or less.

#display_name=

def display_name=(value) -> ::String
Parameter
  • value (::String) — User-settable, human-readable name for the Recognizer. Must be 63 characters or less.
Returns
  • (::String) — User-settable, human-readable name for the Recognizer. Must be 63 characters or less.

#etag

def etag() -> ::String
Returns
  • (::String) — Output only. This checksum is computed by the server based on the value of other fields. This may be sent on update, undelete, and delete requests to ensure the client has an up-to-date value before proceeding.

#expire_time

def expire_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)

#kms_key_name

def kms_key_name() -> ::String
Returns
  • (::String) — Output only. The KMS key name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}.

#kms_key_version_name

def kms_key_version_name() -> ::String
Returns
  • (::String) — Output only. The KMS key version name with which the Recognizer is encrypted. The expected format is projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}.

#language_codes

def language_codes() -> ::Array<::String>
Returns
  • (::Array<::String>) — Required. The language of the supplied audio as a BCP-47 language tag.

    Supported languages:

    • en-US

    • en-GB

    • fr-FR

    If additional languages are provided, the recognition result will contain the transcription in the most likely language detected, and will include the language tag of that language. When you create or update a Recognizer, these values are stored in normalized BCP-47 form. For example, "en-us" is stored as "en-US".

#language_codes=

def language_codes=(value) -> ::Array<::String>
Parameter
  • value (::Array<::String>) — Required. The language of the supplied audio as a BCP-47 language tag.

    Supported languages:

    • en-US

    • en-GB

    • fr-FR

    If additional languages are provided, the recognition result will contain the transcription in the most likely language detected, and will include the language tag of that language. When you create or update a Recognizer, these values are stored in normalized BCP-47 form. For example, "en-us" is stored as "en-US".

Returns
  • (::Array<::String>) — Required. The language of the supplied audio as a BCP-47 language tag.

    Supported languages:

    • en-US

    • en-GB

    • fr-FR

    If additional languages are provided, the recognition result will contain the transcription in the most likely language detected, and will include the language tag of that language. When you create or update a Recognizer, these values are stored in normalized BCP-47 form. For example, "en-us" is stored as "en-US".
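
The normalization described above ("en-us" stored as "en-US") can be mimicked locally for simple language-region tags. This is an illustrative sketch only; the service performs the canonical normalization, and the helper is hypothetical and does not handle script or extension subtags:

```ruby
# Normalize simple language-region BCP-47 tags the way the docs describe:
# lowercase language subtag, uppercase region subtag (e.g. "en-us" -> "en-US").
def normalize_bcp47(tag)
  language, region = tag.split("-", 2)
  region ? "#{language.downcase}-#{region.upcase}" : language.downcase
end

normalize_bcp47("en-us")  # => "en-US"
normalize_bcp47("FR-fr")  # => "fr-FR"
```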

#model

def model() -> ::String
Returns
  • (::String) — Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

    Supported models:

    • latest_long

    Best for long-form content such as media or conversation.

    • latest_short

    Best for short-form content such as commands or single-shot directed speech. When using this model, the service will stop transcribing audio after the first utterance is detected and completed.

    When using this model, SEPARATE_RECOGNITION_PER_CHANNEL is not supported; multi-channel audio is accepted, but only the first channel will be processed and transcribed.

#model=

def model=(value) -> ::String
Parameter
  • value (::String) — Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

    Supported models:

    • latest_long

    Best for long-form content such as media or conversation.

    • latest_short

    Best for short-form content such as commands or single-shot directed speech. When using this model, the service will stop transcribing audio after the first utterance is detected and completed.

    When using this model, SEPARATE_RECOGNITION_PER_CHANNEL is not supported; multi-channel audio is accepted, but only the first channel will be processed and transcribed.

Returns
  • (::String) — Required. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

    Supported models:

    • latest_long

    Best for long-form content such as media or conversation.

    • latest_short

    Best for short-form content such as commands or single-shot directed speech. When using this model, the service will stop transcribing audio after the first utterance is detected and completed.

    When using this model, SEPARATE_RECOGNITION_PER_CHANNEL is not supported; multi-channel audio is accepted, but only the first channel will be processed and transcribed.
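
The guidance above can be expressed as a rule of thumb. A hypothetical helper (not part of the gem; the 30-second threshold is an assumption, not part of the API) that maps it to a model name:

```ruby
# Hypothetical helper mapping the documented guidance to a model name:
# short, single-utterance audio -> "latest_short"; everything else -> "latest_long".
# The 30-second threshold below is an assumption, not part of the API.
def pick_model(duration_seconds:, single_utterance: false)
  (single_utterance || duration_seconds < 30) ? "latest_short" : "latest_long"
end

pick_model(duration_seconds: 5, single_utterance: true)  # => "latest_short"
pick_model(duration_seconds: 600)                        # => "latest_long"
```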

#name

def name() -> ::String
Returns
  • (::String) — Output only. The resource name of the Recognizer. Format: projects/{project}/locations/{location}/recognizers/{recognizer}.
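
The resource name format above can be unpacked with a small regex. A sketch in plain Ruby (the `parse_recognizer_name` helper is hypothetical, not part of the gem):

```ruby
# Hypothetical parser for the documented resource name format:
# projects/{project}/locations/{location}/recognizers/{recognizer}
RECOGNIZER_NAME =
  %r{\Aprojects/(?<project>[^/]+)/locations/(?<location>[^/]+)/recognizers/(?<recognizer>[^/]+)\z}

def parse_recognizer_name(name)
  m = RECOGNIZER_NAME.match(name) or return nil
  { project: m[:project], location: m[:location], recognizer: m[:recognizer] }
end

parse_recognizer_name("projects/my-proj/locations/us-central1/recognizers/rec-1")
# => { project: "my-proj", location: "us-central1", recognizer: "rec-1" }
```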

#reconciling

def reconciling() -> ::Boolean
Returns
  • (::Boolean) — Output only. Whether this Recognizer is in the process of being updated.

#state

def state() -> ::Google::Cloud::Speech::V2::Recognizer::State
Returns
  • (::Google::Cloud::Speech::V2::Recognizer::State)

#uid

def uid() -> ::String
Returns
  • (::String) — Output only. System-assigned unique identifier for the Recognizer.

#update_time

def update_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp)