Class SpeechClient (2.16.2)

SpeechClient(*, credentials: Optional[google.auth.credentials.Credentials] = None, transport: Optional[Union[str, google.cloud.speech_v1p1beta1.services.speech.transports.base.SpeechTransport]] = None, client_options: Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>)

Service that implements Google Cloud Speech API.

Inheritance

builtins.object > SpeechClient

Properties

transport

Returns the transport used by the client instance.

Returns
SpeechTransport: The transport used by the client instance.

Methods

SpeechClient

SpeechClient(*, credentials: Optional[google.auth.credentials.Credentials] = None, transport: Optional[Union[str, google.cloud.speech_v1p1beta1.services.speech.transports.base.SpeechTransport]] = None, client_options: Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>)

Instantiates the speech client.

Parameters
credentials (Optional[google.auth.credentials.Credentials])

The authorization credentials to attach to requests. These credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment.

transport (Union[str, SpeechTransport])

The transport to use. If set to None, a transport is chosen automatically.

client_options (google.api_core.client_options.ClientOptions)

Custom options for the client. These take no effect if a transport instance is provided.

(1) The api_endpoint property can be used to override the default endpoint provided by the client. The GOOGLE_API_USE_MTLS_ENDPOINT environment variable can also be used to override the endpoint: "always" (always use the default mTLS endpoint), "never" (always use the default regular endpoint), or "auto" (switch to the default mTLS endpoint if a client certificate is present; this is the default value). The api_endpoint property takes precedence if provided.

(2) If the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable is "true", the client_cert_source property can be used to provide a client certificate for mutual TLS transport. If it is not provided, the default SSL client certificate is used if present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not set, no client certificate is used.

A usage sketch follows the Exceptions table below.

client_info (google.api_core.gapic_v1.client_info.ClientInfo)

The client info used to send a user-agent string along with API requests. If None, default info is used. Generally, you only need to set this if you're developing your own client library.

Exceptions
google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport creation failed for any reason.
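
A minimal sketch of passing client_options at construction; the regional endpoint value shown is only a placeholder and assumes your workload is served from that region:

from google.api_core.client_options import ClientOptions
from google.cloud import speech_v1p1beta1

# Point the client at an alternative endpoint instead of the global
# default; the endpoint value here is a placeholder.
options = ClientOptions(api_endpoint="us-central1-speech.googleapis.com")
client = speech_v1p1beta1.SpeechClient(client_options=options)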

__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.
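
Because __exit__ releases the transport's resources, the client can be used as a context manager (generated clients also implement __enter__). A minimal sketch, with placeholder audio bytes:

from google.cloud import speech_v1p1beta1

# The underlying transport (e.g. the gRPC channel) is closed
# automatically when the block exits.
with speech_v1p1beta1.SpeechClient() as client:
    config = speech_v1p1beta1.RecognitionConfig(language_code="en-US")
    audio = speech_v1p1beta1.RecognitionAudio(content=b"...")  # placeholder bytes
    response = client.recognize(config=config, audio=audio)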

common_billing_account_path

common_billing_account_path(billing_account: str)

Returns a fully-qualified billing_account string.

common_folder_path

common_folder_path(folder: str)

Returns a fully-qualified folder string.

common_location_path

common_location_path(project: str, location: str)

Returns a fully-qualified location string.

common_organization_path

common_organization_path(organization: str)

Returns a fully-qualified organization string.

common_project_path

common_project_path(project: str)

Returns a fully-qualified project string.

custom_class_path

custom_class_path(project: str, location: str, custom_class: str)

Returns a fully-qualified custom_class string.
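
As an illustration of the path helpers above, a sketch assuming the standard resource pattern projects/{project}/locations/{location}/customClasses/{custom_class}; the project, location, and class names are placeholders:

from google.cloud import speech_v1p1beta1

# Path helpers are classmethods; no client instance is needed.
name = speech_v1p1beta1.SpeechClient.custom_class_path(
    "my-project", "global", "my-class"
)
print(name)  # projects/my-project/locations/global/customClasses/my-class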

from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameter
filename (str)

The path to the service account private key json file.

Returns
SpeechClient: The constructed client.

from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

Parameter
info (dict)

The service account private key info.

Returns
SpeechClient: The constructed client.

from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameter
filename (str)

The path to the service account private key json file.

Returns
SpeechClient: The constructed client.
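
A short sketch of the credential constructors above; "key.json" is a placeholder path to a service account key:

import json

from google.cloud import speech_v1p1beta1

# From a key file on disk (from_service_account_json behaves the same
# as from_service_account_file):
client = speech_v1p1beta1.SpeechClient.from_service_account_file("key.json")

# From an already-parsed dict, e.g. a key fetched from a secret store:
with open("key.json") as f:
    info = json.load(f)
client = speech_v1p1beta1.SpeechClient.from_service_account_info(info)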

get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(
    client_options: Optional[google.api_core.client_options.ClientOptions] = None,
)

Return the API endpoint and client cert source for mutual TLS.

The client cert source is determined in the following order: (1) if the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable is not "true", the client cert source is None; (2) if client_options.client_cert_source is provided, use the provided one; if the default client cert source exists, use the default one; otherwise the client cert source is None.

The API endpoint is determined in the following order: (1) if client_options.api_endpoint is provided, use the provided one; (2) if the GOOGLE_API_USE_MTLS_ENDPOINT environment variable is "always", use the default mTLS endpoint; if the environment variable is "never", use the default API endpoint; otherwise, if a client cert source exists, use the default mTLS endpoint, else use the default API endpoint.

More details can be found at https://google.aip.dev/auth/4114.

Parameter
client_options (google.api_core.client_options.ClientOptions)

Custom options for the client. Only the api_endpoint and client_cert_source properties may be used in this method.

Exceptions
google.auth.exceptions.MutualTLSChannelError: If any errors happen.

Returns
Tuple[str, Callable[[], Tuple[bytes, bytes]]]: The API endpoint and the client cert source to use.
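
A minimal sketch of calling this classmethod directly; the values noted in the comments assume no mTLS-related environment variables are set:

from google.cloud import speech_v1p1beta1

endpoint, cert_source = speech_v1p1beta1.SpeechClient.get_mtls_endpoint_and_cert_source()
print(endpoint)     # the default API endpoint, e.g. "speech.googleapis.com"
print(cert_source)  # None unless a client certificate source is configured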

long_running_recognize

long_running_recognize(request: Optional[Union[google.cloud.speech_v1p1beta1.types.cloud_speech.LongRunningRecognizeRequest, dict]] = None, *, config: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionConfig] = None, audio: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionAudio] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the how-to at https://cloud.google.com/speech-to-text/docs/async-recognize.

# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import speech_v1p1beta1

def sample_long_running_recognize():
    # Create a client
    client = speech_v1p1beta1.SpeechClient()

    # Initialize request argument(s)
    config = speech_v1p1beta1.RecognitionConfig()
    config.language_code = "language_code_value"

    audio = speech_v1p1beta1.RecognitionAudio()
    audio.content = b'content_blob'

    request = speech_v1p1beta1.LongRunningRecognizeRequest(
        config=config,
        audio=audio,
    )

    # Make the request
    operation = client.long_running_recognize(request=request)

    print("Waiting for operation to complete...")

    response = operation.result()

    # Handle the response
    print(response)
Parameters
request (Union[google.cloud.speech_v1p1beta1.types.LongRunningRecognizeRequest, dict])

The request object. The top-level message sent by the client for the LongRunningRecognize method.

config (google.cloud.speech_v1p1beta1.types.RecognitionConfig)

Required. Provides information to the recognizer that specifies how to process the request. This corresponds to the config field on the request instance; if request is provided, this should not be set.

audio (google.cloud.speech_v1p1beta1.types.RecognitionAudio)

Required. The audio data to be recognized. This corresponds to the audio field on the request instance; if request is provided, this should not be set.

retry (google.api_core.retry.Retry)

Designation of what errors, if any, should be retried.

timeout (float)

The timeout for this request.

metadata (Sequence[Tuple[str, str]])

Strings which should be sent along with the request as metadata.

Returns
google.api_core.operation.Operation: An object representing a long-running operation. The result type for the operation will be LongRunningRecognizeResponse, the only message returned to the client by the LongRunningRecognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages, and is included in the result.response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
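
Because config and audio are flattened fields of the request, they can be passed directly instead of building a LongRunningRecognizeRequest (do not combine the two styles in one call). A sketch, with a placeholder Cloud Storage URI:

from google.cloud import speech_v1p1beta1

client = speech_v1p1beta1.SpeechClient()

config = speech_v1p1beta1.RecognitionConfig(language_code="en-US")
# gs://my-bucket/my-audio.flac is a placeholder; long audio is usually
# referenced by URI rather than sent inline.
audio = speech_v1p1beta1.RecognitionAudio(uri="gs://my-bucket/my-audio.flac")

operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=300)  # blocks until the operation completes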

parse_common_billing_account_path

parse_common_billing_account_path(path: str)

Parse a billing_account path into its component segments.

parse_common_folder_path

parse_common_folder_path(path: str)

Parse a folder path into its component segments.

parse_common_location_path

parse_common_location_path(path: str)

Parse a location path into its component segments.

parse_common_organization_path

parse_common_organization_path(path: str)

Parse an organization path into its component segments.

parse_common_project_path

parse_common_project_path(path: str)

Parse a project path into its component segments.

parse_custom_class_path

parse_custom_class_path(path: str)

Parses a custom_class path into its component segments.

parse_phrase_set_path

parse_phrase_set_path(path: str)

Parses a phrase_set path into its component segments.

phrase_set_path

phrase_set_path(project: str, location: str, phrase_set: str)

Returns a fully-qualified phrase_set string.
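
To illustrate the parse_* helpers above, a sketch that round-trips a phrase set name, assuming the standard pattern projects/{project}/locations/{location}/phraseSets/{phrase_set} (names are placeholders):

from google.cloud import speech_v1p1beta1

path = speech_v1p1beta1.SpeechClient.phrase_set_path("my-project", "global", "my-set")

# Each parse_* helper inverts its *_path counterpart, returning the
# path's named segments as a dict.
segments = speech_v1p1beta1.SpeechClient.parse_phrase_set_path(path)
print(segments)  # {'project': 'my-project', 'location': 'global', 'phrase_set': 'my-set'}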

recognize

recognize(request: Optional[Union[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognizeRequest, dict]] = None, *, config: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionConfig] = None, audio: Optional[google.cloud.speech_v1p1beta1.types.cloud_speech.RecognitionAudio] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Performs synchronous speech recognition: receive results after all audio has been sent and processed.

# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import speech_v1p1beta1

def sample_recognize():
    # Create a client
    client = speech_v1p1beta1.SpeechClient()

    # Initialize request argument(s)
    config = speech_v1p1beta1.RecognitionConfig()
    config.language_code = "language_code_value"

    audio = speech_v1p1beta1.RecognitionAudio()
    audio.content = b'content_blob'

    request = speech_v1p1beta1.RecognizeRequest(
        config=config,
        audio=audio,
    )

    # Make the request
    response = client.recognize(request=request)

    # Handle the response
    print(response)
Parameters
request (Union[google.cloud.speech_v1p1beta1.types.RecognizeRequest, dict])

The request object. The top-level message sent by the client for the Recognize method.

config (google.cloud.speech_v1p1beta1.types.RecognitionConfig)

Required. Provides information to the recognizer that specifies how to process the request. This corresponds to the config field on the request instance; if request is provided, this should not be set.

audio (google.cloud.speech_v1p1beta1.types.RecognitionAudio)

Required. The audio data to be recognized. This corresponds to the audio field on the request instance; if request is provided, this should not be set.

retry (google.api_core.retry.Retry)

Designation of what errors, if any, should be retried.

timeout (float)

The timeout for this request.

metadata (Sequence[Tuple[str, str]])

Strings which should be sent along with the request as metadata.

Returns
google.cloud.speech_v1p1beta1.types.RecognizeResponse: The only message returned to the client by the Recognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages.
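
A sketch of consuming the response; the Cloud Storage URI is a placeholder:

from google.cloud import speech_v1p1beta1

client = speech_v1p1beta1.SpeechClient()
config = speech_v1p1beta1.RecognitionConfig(language_code="en-US")
audio = speech_v1p1beta1.RecognitionAudio(uri="gs://my-bucket/short-clip.wav")  # placeholder

response = client.recognize(config=config, audio=audio)

# Each result carries one or more alternatives; the first alternative
# is the most likely transcript.
for result in response.results:
    best = result.alternatives[0]
    print(f"{best.confidence:.2f}: {best.transcript}")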

streaming_recognize

streaming_recognize(requests: Optional[Iterator[google.cloud.speech_v1p1beta1.types.cloud_speech.StreamingRecognizeRequest]] = None, *, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Performs bidirectional streaming speech recognition: receive results while sending audio. This method is only available via the gRPC API (not REST).

# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import speech_v1p1beta1

def sample_streaming_recognize():
    # Create a client
    client = speech_v1p1beta1.SpeechClient()

    # Initialize request argument(s)
    streaming_config = speech_v1p1beta1.StreamingRecognitionConfig()
    streaming_config.config.language_code = "language_code_value"

    request = speech_v1p1beta1.StreamingRecognizeRequest(
        streaming_config=streaming_config,
    )

    # This method expects an iterator which contains
    # 'speech_v1p1beta1.StreamingRecognizeRequest' objects
    # Here we create a generator that yields a single `request` for
    # demonstrative purposes.
    requests = [request]

    def request_generator():
        for request in requests:
            yield request

    # Make the request
    stream = client.streaming_recognize(requests=request_generator())

    # Handle the response
    for response in stream:
        print(response)
Parameters
requests (Iterator[google.cloud.speech_v1p1beta1.types.StreamingRecognizeRequest])

The request object iterator. The top-level message sent by the client for the StreamingRecognize method. Multiple StreamingRecognizeRequest messages are sent. The first message must contain a streaming_config message and must not contain audio_content. All subsequent messages must contain audio_content and must not contain a streaming_config message.

retry (google.api_core.retry.Retry)

Designation of what errors, if any, should be retried.

timeout (float)

The timeout for this request.

metadata (Sequence[Tuple[str, str]])

Strings which should be sent along with the request as metadata.

Returns
Iterable[google.cloud.speech_v1p1beta1.types.StreamingRecognizeResponse]: StreamingRecognizeResponse is the only message returned to the client by StreamingRecognize. A series of zero or more StreamingRecognizeResponse messages are streamed back to the client. If there is no recognizable audio, and single_utterance is set to false, then no messages are streamed back to the client.

Here's an example of a series of StreamingRecognizeResponses that might be returned while processing audio:

1. results { alternatives { transcript: "tube" } stability: 0.01 }
2. results { alternatives { transcript: "to be a" } stability: 0.01 }
3. results { alternatives { transcript: "to be" } stability: 0.9 }
   results { alternatives { transcript: " or not to be" } stability: 0.01 }
4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 }
   alternatives { transcript: "to bee or not to bee" } is_final: true }
5. results { alternatives { transcript: " that's" } stability: 0.01 }
6. results { alternatives { transcript: " that is" } stability: 0.9 }
   results { alternatives { transcript: " the question" } stability: 0.01 }
7. results { alternatives { transcript: " that is the question" confidence: 0.98 }
   alternatives { transcript: " that was the question" } is_final: true }

Notes:
- Only two of the above responses (#4 and #7) contain final results; they are indicated by is_final: true. Concatenating these together generates the full transcript: "to be or not to be that is the question".
- The others contain interim results. #3 and #6 contain two interim results: the first portion has a high stability and is less likely to change; the second portion has a low stability and is very likely to change. A UI designer might choose to show only high-stability results.
- The specific stability and confidence values shown above are only for illustrative purposes. Actual values may vary.
- In each response, only one of these fields will be set: error, speech_event_type, or one or more (repeated) results.
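
Since the first request must carry streaming_config and every later request must carry audio_content, a request generator typically yields the config message first and then one message per audio chunk. A minimal sketch; audio_chunks stands in for any iterable of raw audio byte strings:

from google.cloud import speech_v1p1beta1

def request_stream(audio_chunks):
    # First message: configuration only, no audio_content.
    config = speech_v1p1beta1.RecognitionConfig(language_code="en-US")
    streaming_config = speech_v1p1beta1.StreamingRecognitionConfig(config=config)
    yield speech_v1p1beta1.StreamingRecognizeRequest(streaming_config=streaming_config)
    # Subsequent messages: audio only, no streaming_config.
    for chunk in audio_chunks:
        yield speech_v1p1beta1.StreamingRecognizeRequest(audio_content=chunk)

client = speech_v1p1beta1.SpeechClient()
audio_chunks = [b"..."]  # placeholder; e.g. successive reads from a microphone
for response in client.streaming_recognize(requests=request_stream(audio_chunks)):
    for result in response.results:
        if result.is_final:
            print(result.alternatives[0].transcript)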