PredictionServiceAsyncClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Union[str, google.cloud.automl_v1.services.prediction_service.transports.base.PredictionServiceTransport] = 'grpc_asyncio', client_options: typing.Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>)
AutoML Prediction API.
On any input that is documented to expect a string parameter in snake_case or dash-case, either of those cases is accepted.
Properties
transport
Returns the transport used by the client instance.
Returns

Type | Description
---|---
PredictionServiceTransport | The transport used by the client instance.
Methods
PredictionServiceAsyncClient
PredictionServiceAsyncClient(*, credentials: typing.Optional[google.auth.credentials.Credentials] = None, transport: typing.Union[str, google.cloud.automl_v1.services.prediction_service.transports.base.PredictionServiceTransport] = 'grpc_asyncio', client_options: typing.Optional[google.api_core.client_options.ClientOptions] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = <google.api_core.gapic_v1.client_info.ClientInfo object>)
Instantiates the prediction service client.
Parameters

Name | Description
---|---
credentials | Optional[google.auth.credentials.Credentials]. The authorization credentials to attach to requests. These credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment.
transport | Union[str, PredictionServiceTransport]. The transport to use. If set to None, a transport is chosen automatically.
client_options | ClientOptions. Custom options for the client. It won't take effect if a transport instance is provided.
Exceptions

Type | Description
---|---
google.auth.exceptions.MutualTLSChannelError | If mutual TLS transport creation failed for any reason.
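A minimal sketch of instantiating the async client with custom client options follows; the regional endpoint value is an assumed example and should be checked against the AutoML endpoint for your dataset's location.

# Sketch: construct the async client with explicit client options.
# "eu-automl.googleapis.com" is an assumed example of a regional endpoint.
from google.api_core.client_options import ClientOptions
from google.cloud import automl_v1


def make_client() -> automl_v1.PredictionServiceAsyncClient:
    options = ClientOptions(api_endpoint="eu-automl.googleapis.com")
    return automl_v1.PredictionServiceAsyncClient(client_options=options)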
batch_predict
batch_predict(
request: typing.Optional[
typing.Union[
google.cloud.automl_v1.types.prediction_service.BatchPredictRequest, dict
]
] = None,
*,
name: typing.Optional[str] = None,
input_config: typing.Optional[
google.cloud.automl_v1.types.io.BatchPredictInputConfig
] = None,
output_config: typing.Optional[
google.cloud.automl_v1.types.io.BatchPredictOutputConfig
] = None,
params: typing.Optional[typing.MutableMapping[str, str]] = None,
retry: typing.Union[
google.api_core.retry_async.AsyncRetry,
google.api_core.gapic_v1.method._MethodDefault,
] = _MethodDefault._DEFAULT_VALUE,
timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.api_core.operation_async.AsyncOperation
Perform a batch prediction. Unlike the online xref_Predict, the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. Users can poll the operation result via the GetOperation (google.longrunning.Operations.GetOperation) method. Once the operation is done, an xref_BatchPredictResult is returned in the response (google.longrunning.Operation.response) field.
Available for the following ML scenarios:
- AutoML Vision Classification
- AutoML Vision Object Detection
- AutoML Video Intelligence Classification
- AutoML Video Intelligence Object Tracking
- AutoML Natural Language Classification
- AutoML Natural Language Entity Extraction
- AutoML Natural Language Sentiment Analysis
- AutoML Tables
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import automl_v1


async def sample_batch_predict():
    # Create a client
    client = automl_v1.PredictionServiceAsyncClient()

    # Initialize request argument(s)
    input_config = automl_v1.BatchPredictInputConfig()
    input_config.gcs_source.input_uris = ['input_uris_value1', 'input_uris_value2']

    output_config = automl_v1.BatchPredictOutputConfig()
    output_config.gcs_destination.output_uri_prefix = "output_uri_prefix_value"

    request = automl_v1.BatchPredictRequest(
        name="name_value",
        input_config=input_config,
        output_config=output_config,
    )

    # Make the request; awaiting the call returns an AsyncOperation
    operation = await client.batch_predict(request=request)

    print("Waiting for operation to complete...")

    # AsyncOperation.result() is a coroutine, so it must be awaited as well
    response = await operation.result()

    # Handle the response
    print(response)
Parameters

Name | Description
---|---
request | Optional[Union[google.cloud.automl_v1.types.BatchPredictRequest, dict]]. The request object. Request message for PredictionService.BatchPredict.
name | str. Required. Name of the model requested to serve the batch prediction. This corresponds to the name field on the request instance; if request is provided, this should not be set.
input_config | BatchPredictInputConfig. Required. The input configuration for batch prediction. This corresponds to the input_config field on the request instance; if request is provided, this should not be set.
output_config | BatchPredictOutputConfig. Required. The configuration specifying where output predictions should be written. This corresponds to the output_config field on the request instance; if request is provided, this should not be set.
params | MutableMapping[str, str]. Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.
retry | google.api_core.retry_async.AsyncRetry. Designation of what errors, if any, should be retried.
timeout | float. The timeout for this request.
metadata | Sequence[Tuple[str, str]]. Strings which should be sent along with the request as metadata.
Returns

Type | Description
---|---
google.api_core.operation_async.AsyncOperation | An object representing a long-running operation. The result type for the operation will be BatchPredictResult, the result of the Batch Predict. This message is returned in the response (google.longrunning.Operation.response) field of the operation returned by PredictionService.BatchPredict.
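As a sketch under the assumptions noted in the comments, the same call can be made with the flattened name/input_config/output_config keyword arguments instead of a prebuilt request object; the model resource name and Cloud Storage URIs are placeholders.

# Sketch: batch_predict via flattened arguments instead of a request object.
# The model resource name and gs:// URIs are placeholder values.
from google.cloud import automl_v1


async def run_batch_predict_flattened():
    client = automl_v1.PredictionServiceAsyncClient()

    input_config = automl_v1.BatchPredictInputConfig(
        gcs_source=automl_v1.GcsSource(input_uris=["gs://my-bucket/input.csv"])
    )
    output_config = automl_v1.BatchPredictOutputConfig(
        gcs_destination=automl_v1.GcsDestination(output_uri_prefix="gs://my-bucket/output/")
    )

    # Awaiting the call returns an AsyncOperation that is polled via result()
    operation = await client.batch_predict(
        name="projects/my-project/locations/us-central1/models/my-model-id",
        input_config=input_config,
        output_config=output_config,
    )
    response = await operation.result()
    print(response)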
common_billing_account_path
common_billing_account_path(billing_account: str) -> str
Returns a fully-qualified billing_account string.
common_folder_path
common_folder_path(folder: str) -> str
Returns a fully-qualified folder string.
common_location_path
common_location_path(project: str, location: str) -> str
Returns a fully-qualified location string.
common_organization_path
common_organization_path(organization: str) -> str
Returns a fully-qualified organization string.
common_project_path
common_project_path(project: str) -> str
Returns a fully-qualified project string.
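The common_*_path helpers above only format resource name strings; a minimal sketch of their output, with placeholder project and location values:

# Sketch: the common_*_path helpers return formatted resource names.
# "my-project" and "us-central1" are placeholder values.
from google.cloud import automl_v1

client_cls = automl_v1.PredictionServiceAsyncClient
print(client_cls.common_project_path("my-project"))
# projects/my-project
print(client_cls.common_location_path("my-project", "us-central1"))
# projects/my-project/locations/us-central1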
from_service_account_file
from_service_account_file(filename: str, *args, **kwargs)
Creates an instance of this client using the provided credentials file.
Parameter

Name | Description
---|---
filename | str. The path to the service account private key json file.

Returns

Type | Description
---|---
PredictionServiceAsyncClient | The constructed client.
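A minimal sketch of constructing the client from a key file; the path shown is a placeholder.

# Sketch: build the client from a service account key file (placeholder path).
from google.cloud import automl_v1

client = automl_v1.PredictionServiceAsyncClient.from_service_account_file(
    "/path/to/service-account.json"
)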
from_service_account_info
from_service_account_info(info: dict, *args, **kwargs)
Creates an instance of this client using the provided credentials info.
Parameter

Name | Description
---|---
info | dict. The service account private key info.

Returns

Type | Description
---|---
PredictionServiceAsyncClient | The constructed client.
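A minimal sketch, assuming the service account key has already been loaded into a dict (for example from a secret store); the file path shown is a placeholder.

# Sketch: build the client from already-parsed service account info.
import json

from google.cloud import automl_v1

with open("/path/to/service-account.json") as fh:  # placeholder path
    info = json.load(fh)

client = automl_v1.PredictionServiceAsyncClient.from_service_account_info(info)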
from_service_account_json
from_service_account_json(filename: str, *args, **kwargs)
Creates an instance of this client using the provided credentials file.
Parameter

Name | Description
---|---
filename | str. The path to the service account private key json file.

Returns

Type | Description
---|---
PredictionServiceAsyncClient | The constructed client.
get_mtls_endpoint_and_cert_source
get_mtls_endpoint_and_cert_source(
client_options: typing.Optional[
google.api_core.client_options.ClientOptions
] = None,
)
Return the API endpoint and client cert source for mutual TLS.
The client cert source is determined in the following order:
(1) if the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable is not "true", the client cert source is None.
(2) if client_options.client_cert_source is provided, use the provided one; if the default client cert source exists, use the default one; otherwise the client cert source is None.
The API endpoint is determined in the following order:
(1) if client_options.api_endpoint is provided, use the provided one.
(2) if the GOOGLE_API_USE_MTLS_ENDPOINT environment variable is "always", use the default mTLS endpoint; if the environment variable is "never", use the default API endpoint; otherwise, if the client cert source exists, use the default mTLS endpoint, otherwise use the default API endpoint.
More details can be found at https://google.aip.dev/auth/4114.
Parameter

Name | Description
---|---
client_options | google.api_core.client_options.ClientOptions. Custom options for the client. Only the api_endpoint and client_cert_source properties may be used in this method.
Exceptions

Type | Description
---|---
google.auth.exceptions.MutualTLSChannelError | If any errors happen.

Returns

Type | Description
---|---
Tuple[str, Callable[[], Tuple[bytes, bytes]]] | Returns the API endpoint and the client cert source to use.
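A minimal sketch of resolving the effective endpoint before building a client; the printed values depend on the environment variables described above.

# Sketch: resolve the endpoint and client cert source for mutual TLS.
from google.api_core.client_options import ClientOptions
from google.cloud import automl_v1

api_endpoint, cert_source = (
    automl_v1.PredictionServiceAsyncClient.get_mtls_endpoint_and_cert_source(
        ClientOptions()
    )
)
print(api_endpoint)          # e.g. "automl.googleapis.com" or the mTLS endpoint
print(cert_source is None)   # True unless a client certificate is configured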
get_transport_class
get_transport_class() -> (
typing.Type[
google.cloud.automl_v1.services.prediction_service.transports.base.PredictionServiceTransport
]
)
Returns an appropriate transport class.
Parameter

Name | Description
---|---
label | typing.Optional[str]. The name of the desired transport. If none is provided, then the first transport in the registry is used.
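A small sketch of looking up a transport class by label; "grpc_asyncio" is assumed to be the label registered for the async transport.

# Sketch: look up the transport class registered under a given label.
from google.cloud import automl_v1

transport_cls = automl_v1.PredictionServiceAsyncClient.get_transport_class("grpc_asyncio")
print(transport_cls.__name__)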
model_path
model_path(project: str, location: str, model: str) -> str
Returns a fully-qualified model string.
parse_common_billing_account_path
parse_common_billing_account_path(path: str) -> typing.Dict[str, str]
Parse a billing_account path into its component segments.
parse_common_folder_path
parse_common_folder_path(path: str) -> typing.Dict[str, str]
Parse a folder path into its component segments.
parse_common_location_path
parse_common_location_path(path: str) -> typing.Dict[str, str]
Parse a location path into its component segments.
parse_common_organization_path
parse_common_organization_path(path: str) -> typing.Dict[str, str]
Parse an organization path into its component segments.
parse_common_project_path
parse_common_project_path(path: str) -> typing.Dict[str, str]
Parse a project path into its component segments.
parse_model_path
parse_model_path(path: str) -> typing.Dict[str, str]
Parses a model path into its component segments.
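A minimal sketch of the round trip between a model resource name and its segments; the project, location and model id values are placeholders.

# Sketch: build a model resource name and parse it back into segments.
from google.cloud import automl_v1

client_cls = automl_v1.PredictionServiceAsyncClient
name = client_cls.model_path("my-project", "us-central1", "my-model-id")
# projects/my-project/locations/us-central1/models/my-model-id
segments = client_cls.parse_model_path(name)
# {'project': 'my-project', 'location': 'us-central1', 'model': 'my-model-id'}
print(name, segments)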
predict
predict(
request: typing.Optional[
typing.Union[
google.cloud.automl_v1.types.prediction_service.PredictRequest, dict
]
] = None,
*,
name: typing.Optional[str] = None,
payload: typing.Optional[
google.cloud.automl_v1.types.data_items.ExamplePayload
] = None,
params: typing.Optional[typing.MutableMapping[str, str]] = None,
retry: typing.Union[
google.api_core.retry_async.AsyncRetry,
google.api_core.gapic_v1.method._MethodDefault,
] = _MethodDefault._DEFAULT_VALUE,
timeout: typing.Union[float, object] = _MethodDefault._DEFAULT_VALUE,
metadata: typing.Sequence[typing.Tuple[str, str]] = ()
) -> google.cloud.automl_v1.types.prediction_service.PredictResponse
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:

AutoML Vision Classification
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection
- An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification
- A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Natural Language Entity Extraction
- A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format with size up to 20MB.

AutoML Natural Language Sentiment Analysis
- A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Translation
- A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables
- A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.cloud import automl_v1


async def sample_predict():
    # Create a client
    client = automl_v1.PredictionServiceAsyncClient()

    # Initialize request argument(s)
    payload = automl_v1.ExamplePayload()
    payload.image.image_bytes = b'image_bytes_blob'

    request = automl_v1.PredictRequest(
        name="name_value",
        payload=payload,
    )

    # Make the request
    response = await client.predict(request=request)

    # Handle the response
    print(response)
Parameters

Name | Description
---|---
request | Optional[Union[google.cloud.automl_v1.types.PredictRequest, dict]]. The request object. Request message for PredictionService.Predict.
name | str. Required. Name of the model requested to serve the prediction. This corresponds to the name field on the request instance; if request is provided, this should not be set.
payload | ExamplePayload. Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. This corresponds to the payload field on the request instance; if request is provided, this should not be set.
params | MutableMapping[str, str]. Additional domain-specific parameters; any string must be up to 25000 characters long.
retry | google.api_core.retry_async.AsyncRetry. Designation of what errors, if any, should be retried.
timeout | float. The timeout for this request.
metadata | Sequence[Tuple[str, str]]. Strings which should be sent along with the request as metadata.
Returns

Type | Description
---|---
google.cloud.automl_v1.types.PredictResponse | Response message for PredictionService.Predict.
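As a sketch under the assumptions noted in the comments, predict can also be called with the flattened name/payload/params arguments; the model name is a placeholder, and score_threshold is an assumed example of a domain-specific parameter.

# Sketch: online prediction via flattened arguments.
# The model resource name is a placeholder; "score_threshold" is an assumed
# example of a domain-specific parameter.
from google.cloud import automl_v1


async def run_predict_flattened():
    client = automl_v1.PredictionServiceAsyncClient()

    payload = automl_v1.ExamplePayload(
        text_snippet=automl_v1.TextSnippet(
            content="A sentence to classify.", mime_type="text/plain"
        )
    )

    response = await client.predict(
        name="projects/my-project/locations/us-central1/models/my-model-id",
        payload=payload,
        params={"score_threshold": "0.5"},
    )
    for annotation in response.payload:
        print(annotation.display_name)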