Workflows connectors define the built-in functions that can be used to access other Google Cloud products within a workflow.
This page provides an overview of this connector. There is no need to import or load connector libraries in a workflow; connectors work out of the box when used in a call step.
AI Platform Training & Prediction API
An API to enable creating and using machine learning models. To learn more, see the AI Platform Training & Prediction API documentation.
AI Platform Training & Prediction connector sample
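The sample itself was not preserved in this extract. As an illustrative sketch only (the project ID, model name, and instance values below are placeholders), a workflow that calls this connector's predict function might look like:

```yaml
# Hypothetical workflow calling the AI Platform Prediction connector.
# PROJECT_ID, my_model, and the instance values are placeholders.
main:
  steps:
    - predictValues:
        call: googleapis.ml.v1.projects.predict
        args:
          name: projects/PROJECT_ID/models/my_model
          body:
            instances:
              - [0.0, 1.1, 2.2]
        result: predictResponse
    - returnResult:
        return: ${predictResponse}
```

The connector handles authentication and request formatting; the body field carries the same JSON structure described in the predict section below.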
Module: googleapis.ml.v1.projects
Functions | |
---|---|
explain |
Performs explanation on the data in the request.
AI Explanations implements a custom explain verb on top of an HTTP POST method. The URL is described in Google API HTTP annotation syntax:
The name field in the URL identifies the model and, optionally, a specific version of the model. Example specifying both model and version:
Example specifying only a model. The default version for that model is used:
This page describes the format of the explanation request body and of the response body. For a code sample showing how to send an explanation request, see the guide to using feature attributions.

Request body details

TensorFlow

The request body contains data with the following structure (JSON representation):
The instances field is required, and must contain the list of instances to get explanations for. The structure of each element of the instances list is determined by your model's input definition. Instances can include named inputs (as objects) or can contain only unlabeled values.
Not all data includes named inputs. Some instances are simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists. Below are some examples of request bodies.

CSV data with each row encoded as a string value:

{"instances": ["1.0,true,\"x\"", "-2.0,false,\"y\""]}

Plain text:

{"instances": ["the quick brown fox", "the lazy dog"]}

Sentences encoded as lists of words (vectors of strings):

{"instances": [["the","quick","brown"], ["the","lazy","dog"], ...]}

Floating point scalar values:

{"instances": [0.0, 1.1, 2.2]}

Vectors of integers:

{"instances": [[0, 1, 2], [3, 4, 5], ...]}

Tensors (in this case, two-dimensional tensors):

{"instances": [[[0, 1, 2], [3, 4, 5]], ...]}

Images, which can be represented different ways. In this encoding scheme the first two dimensions represent the rows and columns of the image, and the third dimension contains lists (vectors) of the R, G, and B values for each pixel:

{"instances": [[[[138, 30, 66], [130, 20, 56], ...], [[126, 38, 61], [122, 24, 57], ...], ...], ...]}

Data encoding

JSON strings must be encoded as UTF-8. To send binary data, you must base64-encode the data and mark it as binary. To mark a JSON string as binary, replace it with a JSON object with a single attribute named b64 that contains the base64-encoded string:

{"b64": "..."}

The following example shows two serialized tf.Example strings, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]}

The following example shows two JPEG image byte strings, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "ASa8asdf"}, {"b64": "JLK7ljk3"}]}

Multiple input tensors

Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, use the names of JSON name/value pairs to identify the input tensors.

For a graph with input tensor aliases "tag" (string) and "image" (base64-encoded string):

{"instances": [{"tag": "beach", "image": {"b64": "ASa8asdf"}}, {"tag": "car", "image": {"b64": "JLK7ljk3"}}]}

For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints):

{"instances": [{"tag": "beach", "image": [[[138, 30, 66], [130, 20, 56], ...], [[126, 38, 61], [122, 24, 57], ...], ...]}, {"tag": "car", "image": [[[255, 0, 102], [255, 0, 97], ...], [[254, 1, 101], [254, 2, 93], ...], ...]}, ...]}

Explanation metadata

When using AI Explanations, you need to indicate which of your tensors correspond to your actual features and output probabilities or predictions. You do this by adding a file named explanation_metadata.json to your model directory.

To make this process easier, assign a name to the tensors in your TensorFlow graph by setting each tensor's name parameter before training your model.

The file contents should match this schema:
Configure visualization settings

When you get explanations on image data with the integrated gradients or XRAI methods, you can configure visualization settings for your results by including them in your explanation metadata. To configure your visualization settings, include the visualization configuration within the input object you want to visualize:
Details on visualization settings

All of the visualization settings are optional, so you can specify all, some, or none of these values.
See example configurations and output images.
Response body details

Responses are very similar to requests. If the call is successful, the response body contains one explanations entry per instance in the request body, given in the same order:
If explanation fails for any instance, the response body contains no explanations. Instead, it contains a single error entry:
The explanations field contains the list of explanations. On error, the error field contains a message describing the problem.

Even though there is one explanation per instance, the format of an explanation is not directly related to the format of an instance. Explanations take whatever format is specified in the outputs collection defined in the model. The collection of explanations is returned in a JSON list. Each member of the list can be a simple value, a list, or a JSON object of any complexity. If your model has more than one output tensor, each explanation will be a JSON object containing a name/value pair for each output. The names identify the output aliases in the graph.

Response body examples

The following explanations response is for an individual feature attribution on tabular data. It is part of the example notebook for tabular data. The notebook demonstrates how to parse explanations responses and plot the attributions data. Feature attributions appear within the explanations entries:
Individual attribution values

This table shows more details about the features that correspond to these example attribution values. Positive attribution values increase the predicted value by that amount, and negative attribution values decrease the predicted value.

Learn more by trying the example notebook for tabular data.

API specification

The following section describes the specification of the explain method. |
getConfig |
Get the service account information associated with your project. You need this information in order to grant the service account permissions for the Google Cloud Storage location where you put your model training code for training the model with Google Cloud Machine Learning. |
predict |
Performs online prediction on the data in the request.
AI Platform Prediction implements a custom predict verb on top of an HTTP POST method. The URL is described in Google API HTTP annotation syntax:
The name field in the URL identifies the model and, optionally, a specific version of the model. Example specifying both model and version:
Example specifying only a model. The default version for that model is used:
This page describes the format of the prediction request body and of the response body. For a code sample showing how to send a prediction request, see the guide to requesting online predictions.

Request body details

TensorFlow

The request body contains data with the following structure (JSON representation):
The instances field is required, and must contain the list of instances to get predictions for. The structure of each element of the instances list is determined by your model's input definition. Instances can include named inputs (as objects) or can contain only unlabeled values.
Not all data includes named inputs. Some instances are simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists. Below are some examples of request bodies.

CSV data with each row encoded as a string value:

{"instances": ["1.0,true,\"x\"", "-2.0,false,\"y\""]}

Plain text:

{"instances": ["the quick brown fox", "the lazy dog"]}

Sentences encoded as lists of words (vectors of strings):

{"instances": [["the","quick","brown"], ["the","lazy","dog"], ...]}

Floating point scalar values:

{"instances": [0.0, 1.1, 2.2]}

Vectors of integers:

{"instances": [[0, 1, 2], [3, 4, 5], ...]}

Tensors (in this case, two-dimensional tensors):

{"instances": [[[0, 1, 2], [3, 4, 5]], ...]}

Images, which can be represented different ways. In this encoding scheme the first two dimensions represent the rows and columns of the image, and the third dimension contains lists (vectors) of the R, G, and B values for each pixel:

{"instances": [[[[138, 30, 66], [130, 20, 56], ...], [[126, 38, 61], [122, 24, 57], ...], ...], ...]}

Data encoding

JSON strings must be encoded as UTF-8. To send binary data, you must base64-encode the data and mark it as binary. To mark a JSON string as binary, replace it with a JSON object with a single attribute named b64 that contains the base64-encoded string:

{"b64": "..."}

The following example shows two serialized tf.Example strings, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]}

The following example shows two JPEG image byte strings, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "ASa8asdf"}, {"b64": "JLK7ljk3"}]}

Multiple input tensors

Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, use the names of JSON name/value pairs to identify the input tensors.

For a graph with input tensor aliases "tag" (string) and "image" (base64-encoded string):

{"instances": [{"tag": "beach", "image": {"b64": "ASa8asdf"}}, {"tag": "car", "image": {"b64": "JLK7ljk3"}}]}

For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints):

{"instances": [{"tag": "beach", "image": [[[138, 30, 66], [130, 20, 56], ...], [[126, 38, 61], [122, 24, 57], ...], ...]}, {"tag": "car", "image": [[[255, 0, 102], [255, 0, 97], ...], [[254, 1, 101], [254, 2, 93], ...], ...]}, ...]}

scikit-learn

The request body contains data with the following structure (JSON representation):
The instances field is required, and must contain the list of instances to get predictions for.
The dimension of input instances must match what your model expects. For example, if your model requires three features, then the length of each input instance must be 3.

XGBoost

The request body contains data with the following structure (JSON representation):
The instances field is required, and must contain the list of instances to get predictions for.
The dimension of input instances must match what your model expects. For example, if your model requires three features, then the length of each input instance must be 3.

AI Platform Prediction does not support sparse representation of input instances for XGBoost. The Online Prediction service interprets zeros and NaNs differently, so you can use NaN to represent a missing feature value.

The following example represents a prediction request with a single input instance, where the value of the first feature is 0.0, the value of the second feature is 1.1, and the value of the third feature is missing:
Custom prediction routine
The request body contains data with the following structure (JSON representation):
The instances field is required. You may optionally provide any other valid JSON key-value pairs.
AI Platform Prediction parses the JSON and provides these fields to the predict method of your Predictor class.

How to structure the list of instances

The structure of each element of the instances list is determined by the predict method of your Predictor class.
Not all data includes named inputs. Some instances are simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists. Below are some examples of request bodies.

CSV data with each row encoded as a string value:

{"instances": ["1.0,true,\"x\"", "-2.0,false,\"y\""]}

Plain text:

{"instances": ["the quick brown fox", "the lazy dog"]}

Sentences encoded as lists of words (vectors of strings):

{"instances": [["the","quick","brown"], ["the","lazy","dog"], ...]}

Floating point scalar values:

{"instances": [0.0, 1.1, 2.2]}

Vectors of integers:

{"instances": [[0, 1, 2], [3, 4, 5], ...]}

Tensors (in this case, two-dimensional tensors):

{"instances": [[[0, 1, 2], [3, 4, 5]], ...]}

Images, which can be represented different ways. In this encoding scheme the first two dimensions represent the rows and columns of the image, and the third dimension contains lists (vectors) of the R, G, and B values for each pixel:

{"instances": [[[[138, 30, 66], [130, 20, 56], ...], [[126, 38, 61], [122, 24, 57], ...], ...], ...]}

Multiple input tensors

Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, use the names of JSON name/value pairs to identify the input tensors.

For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints):

{"instances": [{"tag": "beach", "image": [[[138, 30, 66], [130, 20, 56], ...], [[126, 38, 61], [122, 24, 57], ...], ...]}, {"tag": "car", "image": [[[255, 0, 102], [255, 0, 97], ...], [[254, 1, 101], [254, 2, 93], ...], ...]}, ...]}

Response body details

Responses are very similar to requests. If the call is successful, the response body contains one prediction entry per instance in the request body, given in the same order:
If prediction fails for any instance, the response body contains no predictions. Instead, it contains a single error entry:
The predictions field contains the list of predictions. On error, the error field contains a message describing the problem.

Even though there is one prediction per instance, the format of a prediction is not directly related to the format of an instance. Predictions take whatever format is specified in the outputs collection defined in the model. The collection of predictions is returned in a JSON list. Each member of the list can be a simple value, a list, or a JSON object of any complexity. If your model has more than one output tensor, each prediction will be a JSON object containing a name/value pair for each output. The names identify the output aliases in the graph.

Response body examples

TensorFlow

The following examples show some possible responses:
scikit-learn

The following examples show some possible responses:
XGBoost

The following examples show some possible responses:
Custom prediction routine

The following examples show some possible responses:
API specification

The following section describes the specification of the predict method. |
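The explain function in this module can be invoked the same way as predict. A minimal sketch, assuming a deployed model version with explanations enabled (the project ID, model name, version, and instance values are placeholders):

```yaml
- explainValues:
    call: googleapis.ml.v1.projects.explain
    args:
      name: projects/PROJECT_ID/models/my_model/versions/v1
      body:
        instances:
          - [0.0, 1.1, 2.2]
    result: explainResponse
```

The body follows the explanation request format described above; the result variable receives the parsed response body.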
Module: googleapis.ml.v1.projects.jobs
Functions | |
---|---|
cancel |
Cancels a running job. |
create |
Creates a training or a batch prediction job. |
get |
Describes a job. |
getIamPolicy |
Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
list |
Lists the jobs in the project. If there are no jobs that match the request parameters, the list request returns an empty response body: {}. |
patch |
Updates a specific job resource. Currently the only supported fields to
update are labels . |
setIamPolicy |
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND , INVALID_ARGUMENT , and
PERMISSION_DENIED errors. |
testIamPermissions |
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning. |
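The create function above takes a Job resource as its request body. A sketch of submitting a training job, assuming placeholder values for the project, job ID, bucket, trainer package, and runtime version:

```yaml
- createTrainingJob:
    call: googleapis.ml.v1.projects.jobs.create
    args:
      parent: projects/PROJECT_ID
      body:
        jobId: my_training_job
        trainingInput:
          scaleTier: BASIC
          region: us-central1
          pythonModule: trainer.task
          packageUris:
            - gs://my-bucket/trainer-0.1.tar.gz
          runtimeVersion: "2.11"
    result: trainingJob
```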
Module: googleapis.ml.v1.projects.locations
Functions | |
---|---|
get |
Get the complete list of CMLE capabilities in a location, along with their location-specific properties. |
list |
List all locations that provide at least one type of CMLE capability. |
Module: googleapis.ml.v1.projects.models
Functions | |
---|---|
create |
Creates a model which will later contain one or more versions. You must add at least one version before you can request predictions from the model. Add versions by calling projects.models.versions.create. |
delete |
Deletes a model. You can only delete a model if there are no versions in it. You can delete versions by calling projects.models.versions.delete. |
get |
Gets information about a model, including its name, the description (if set), and the default version (if at least one version of the model has been deployed). |
getIamPolicy |
Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set. |
list |
Lists the models in a project. Each project can contain multiple models, and each model can have multiple versions. If there are no models that match the request parameters, the list request returns an empty response body: {}. |
patch |
Updates a specific model resource. Currently the only supported fields
to update are description and default_version.name . |
setIamPolicy |
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND , INVALID_ARGUMENT , and
PERMISSION_DENIED errors. |
testIamPermissions |
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning. |
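As a sketch of the create function in this module (the project ID, model name, and region are placeholders), creating an empty model that can later receive versions might look like:

```yaml
- createModel:
    call: googleapis.ml.v1.projects.models.create
    args:
      parent: projects/PROJECT_ID
      body:
        name: my_model
        regions:
          - us-central1
    result: model
```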
Module: googleapis.ml.v1.projects.models.versions
Functions | |
---|---|
create |
Creates a new version of a model from a trained TensorFlow model. If the version created in the cloud by this call is the first deployed version of the specified model, it will be made the default version of the model. When you add a version to a model that already has one or more versions, the default version does not automatically change. If you want a new version to be the default, you must call projects.models.versions.setDefault. |
delete |
Deletes a model version. Each model can have multiple versions deployed and in use at any given time. Use this method to remove a single version. Note: You cannot delete the version that is set as the default version of the model unless it is the only remaining version. |
get |
Gets information about a model version. Models can have multiple versions. You can call projects.models.versions.list to get the same information that this method returns for all of the versions of a model. |
list |
Gets basic information about all the versions of a model. If you expect that a model has many versions, or if you need to handle only a limited number of results at a time, you can request that the list be retrieved in batches (called pages). If there are no versions that match the request parameters, the list request returns an empty response body: {}. |
patch |
Updates the specified Version resource. Currently the only update-able
fields are description , requestLoggingConfig ,
autoScaling.minNodes , and manualScaling.nodes . |
setDefault |
Designates a version to be the default for the model. The default version is used for prediction requests made against the model that don't specify a version. The first version to be created for a model is automatically set as the default. You must make any subsequent changes to the default version setting manually using this method. |
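A sketch of deploying a version with this module's create function, assuming a TensorFlow SavedModel exported to Cloud Storage (all names, URIs, and version strings are placeholders):

```yaml
- createVersion:
    call: googleapis.ml.v1.projects.models.versions.create
    args:
      parent: projects/PROJECT_ID/models/my_model
      body:
        name: v1
        deploymentUri: gs://my-bucket/model/
        runtimeVersion: "2.11"
        framework: TENSORFLOW
    result: modelVersion
```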
Module: googleapis.ml.v1.projects.operations
Functions | |
---|---|
cancel |
Starts asynchronous cancellation on a long-running operation. The server
makes a best effort to cancel the operation, but success is not
guaranteed. If the server doesn't support this method, it returns
google.rpc.Code.UNIMPLEMENTED . Clients can use Operations.GetOperation
or other methods to check whether the cancellation succeeded or whether
the operation completed despite cancellation. On successful
cancellation, the operation is not deleted; instead, it becomes an
operation with an Operation.error value with a google.rpc.Status.code of
1, corresponding to Code.CANCELLED . |
get |
Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service. |
list |
Lists operations that match the specified filter in the request. If the
server doesn't support this method, it returns UNIMPLEMENTED . |
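Connector calls generally wait for long-running operations to complete before returning. If you still need to inspect an operation directly, a sketch of the get function (the project ID and operation ID are placeholders):

```yaml
- getOperation:
    call: googleapis.ml.v1.projects.operations.get
    args:
      name: projects/PROJECT_ID/operations/OPERATION_ID
    result: operation
```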