AI Platform Training & Prediction API Connector Overview

The Workflows connector defines the built-in functions that can be used to access other Google Cloud products within a workflow.

This page provides an overview of the individual connector. There is no need to import or load connector libraries in a workflow—connectors work out of the box when used in a call step.

AI Platform Training & Prediction API

An API to enable creating and using machine learning models. To learn more, see the AI Platform Training & Prediction API documentation.

AI Platform Training & Prediction connector sample

YAML

# This workflow expects the following items to be provided through the input argument for execution:
#   - projectID (string)
#     - The user project ID.
#
# Expected successful output: "SUCCESS"

main:
  params: [args]
  steps:
    - init:
        assign:
          - project_id: ${args.projectID}
    - list_jobs:
        call: googleapis.ml.v1.projects.jobs.list
        args:
          parent: ${"projects/" + project_id}
        result: jobs
    - list_locations:
        call: googleapis.ml.v1.projects.locations.list
        args:
          parent: ${"projects/" + project_id}
        result: locations
    - the_end:
        return: "SUCCESS"

JSON

{
  "main": {
    "params": [
      "args"
    ],
    "steps": [
      {
        "init": {
          "assign": [
            {
              "project_id": "${args.projectID}"
            }
          ]
        }
      },
      {
        "list_jobs": {
          "call": "googleapis.ml.v1.projects.jobs.list",
          "args": {
            "parent": "${\"projects/\" + project_id}"
          },
          "result": "jobs"
        }
      },
      {
        "list_locations": {
          "call": "googleapis.ml.v1.projects.locations.list",
          "args": {
            "parent": "${\"projects/\" + project_id}"
          },
          "result": "locations"
        }
      },
      {
        "the_end": {
          "return": "SUCCESS"
        }
      }
    ]
  }
}
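
To run the sample, pass the project ID as a runtime argument when you execute the workflow. The following sketch is one way to start an execution from Python with the Workflows Executions client library; the workflow name ml-connector-sample and the location used here are placeholders, not part of the sample above.

import json

from google.cloud.workflows import executions_v1

# Placeholder identifiers; replace with your own project, location, and workflow name.
project = "YOUR_PROJECT_ID"
location = "us-central1"
workflow = "ml-connector-sample"

client = executions_v1.ExecutionsClient()
parent = client.workflow_path(project, location, workflow)

# The workflow reads args.projectID in its init step.
execution = executions_v1.Execution(argument=json.dumps({"projectID": project}))
response = client.create_execution(parent=parent, execution=execution)
print("Started execution:", response.name)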

Module: googleapis.ml.v1.projects

Functions
explain Performs explanation on the data in the request.

AI Explanations implements a custom explain verb on top of an HTTP POST method. The explain method performs explanation on the data in the request.

The URL is described in Google API HTTP annotation syntax:

POST https://ml.googleapis.com/v1/{name=projects/**}:explain

The name parameter is required. It must contain the name of your model and, optionally, a version. If you specify a model without a version, the default version for that model is used.

Example specifying both model and version:

POST https://ml.googleapis.com/v1/projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME/versions/YOUR_VERSION_NAME:explain

Example specifying only a model. The default version for that model is used:

POST https://ml.googleapis.com/v1/projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME:explain

This page describes the format of the explanation request body and of the response body. For a code sample showing how to send an explanation request, see the guide to using feature attributions.
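
As one way to call this endpoint from Python (the guide mentioned above shows the recommended client libraries), the following sketch sends an explanation request over an authorized HTTP session from the google-auth library. The project, model, and instance values are placeholders:

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Placeholder identifiers for illustration only.
PROJECT_ID = "YOUR_PROJECT_ID"
MODEL_NAME = "YOUR_MODEL_NAME"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# No version is specified, so the model's default version is used.
url = f"https://ml.googleapis.com/v1/projects/{PROJECT_ID}/models/{MODEL_NAME}:explain"
body = {"instances": [[0.0, 1.1, 2.2]]}

response = session.post(url, json=body)
response.raise_for_status()
print(response.json())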

Request body details

TensorFlow

The request body contains data with the following structure (JSON representation):

{
  "instances": [
    <value>|<simple/nested list>|<object>,
    ...
  ]
}

The instances[] object is required, and must contain the list of instances to get explanations for.

The structure of each element of the instances list is determined by your model's input definition. Instances can include named inputs (as objects) or can contain only unlabeled values.

Not all data includes named inputs. Some instances are simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists.

Below are some examples of request bodies.

CSV data with each row encoded as a string value:

{"instances": ["1.0,true,\\"x\\"", "-2.0,false,\\"y\\""]}

Plain text:

{"instances": ["the quick brown fox", "the lazy dog"]}

Sentences encoded as lists of words (vectors of strings):

{
  "instances": [
    ["the","quick","brown"],
    ["the","lazy","dog"],
    ...
  ]
}

Floating point scalar values:

{"instances": [0.0, 1.1, 2.2]}

Vectors of integers:

{
  "instances": [
    [0, 1, 2],
    [3, 4, 5],
    ...
  ]
}

Tensors (in this case, two-dimensional tensors):

{
  "instances": [
    [
      [0, 1, 2],
      [3, 4, 5]
    ],
    ...
  ]
}

Images, which can be represented in different ways. In this encoding scheme, the first two dimensions represent the rows and columns of the image, and the third dimension contains lists (vectors) of the R, G, and B values for each pixel:

{
  "instances": [
    [
      [
        [138, 30, 66],
        [130, 20, 56],
        ...
      ],
      [
        [126, 38, 61],
        [122, 24, 57],
        ...
      ],
      ...
    ],
    ...
  ]
}

Data encoding

JSON strings must be encoded as UTF-8. To send binary data, you must base64-encode the data and mark it as binary. To mark a JSON string as binary, replace it with a JSON object with a single attribute named b64:

{"b64": "..."} 

The following example shows two serialized tf.Examples instances, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]}

The following example shows two JPEG image byte strings, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "ASa8asdf"}, {"b64": "JLK7ljk3"}]}

Multiple input tensors

Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, use the names of JSON name/value pairs to identify the input tensors.

For a graph with input tensor aliases "tag" (string) and "image" (base64-encoded string):

{
  "instances": [
    {
      "tag": "beach",
      "image": {"b64": "ASa8asdf"}
    },
    {
      "tag": "car",
      "image": {"b64": "JLK7ljk3"}
    }
  ]
}

For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints):

{
  "instances": [
    {
      "tag": "beach",
      "image": [
        [
          [138, 30, 66],
          [130, 20, 56],
          ...
        ],
        [
          [126, 38, 61],
          [122, 24, 57],
          ...
        ],
        ...
      ]
    },
    {
      "tag": "car",
      "image": [
        [
          [255, 0, 102],
          [255, 0, 97],
          ...
        ],
        [
          [254, 1, 101],
          [254, 2, 93],
          ...
        ],
        ...
      ]
    },
    ...
  ]
}

Explanation metadata

When using AI Explanations, you need to indicate which of your tensors correspond to your actual features and output probabilities or predictions. You do this by adding a file named explanation_metadata.json to your SavedModel folder before deploying the model to AI Explanations.

To make this process easier, assign a name to the tensors in your TensorFlow graph. Before training your model, set the name property in either raw tensors or Keras layers:

from tensorflow.keras.layers import Dense
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')

The file contents should match this schema:

{
  "inputs": {
    string <input feature key>: {
      "input_tensor_name": string,
      "input_baselines": [
        number,
      ],
      "modality": string
    }
    ...
  },
  "outputs": {
    string <output value key>:  {
      "output_tensor_name": string
    },
    ...
  },
  "framework": string
}

Fields

output value key and input feature key

Any unique name. The system outputs a dictionary with the attribution scores for a given feature listed under this key.
input_tensor_name

string

Required. The name of the tensor containing the inputs that the model's prediction should be attributed to. Format the name as name:0. For example, aux_output:0.

input_baselines

<integer>|<simple list>

Optional. The value of the baseline, or "uninformative" example, for this particular feature. Consider using the average value or 0.
The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, it is broadcast to the same shape as the encoded tensor.
If you supply multiple baselines, the system averages the attributions across the baselines. For example, you might supply both a fully black and a fully white image as baselines to compare the resulting attributions.

modality

string

Optional. Can be set to image if the input tensor is an image. In that case, the system will return a graphical representation of the attributions.

The tensor specified by input_tensor_name should be:

  • For color images: A dense 4-D tensor of dtype float32 and shape [batch_size, height, width, 3] whose elements are RGB color values of pixels normalized to the range [0, 1].
  • For grayscale images: A dense 3-D tensor of dtype float32 and shape [batch_size, height, width] whose elements are grayscale values of pixels normalized to the range [0, 1].
output_tensor_name

string

Required. The name of the tensor containing the outputs to be explained (the model outputs for which attributions are computed).

framework

string

Required. Must be set to tensorflow.
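
As an illustration only, the following sketch writes an explanation_metadata.json file that follows this schema. The input and output tensor names and the baseline are hypothetical and must be replaced with the names from your own graph:

import json

# Hypothetical tensor names and baseline; adjust to match your SavedModel.
metadata = {
    "inputs": {
        "data": {
            "input_tensor_name": "dense_input:0",
            "input_baselines": [0],
        }
    },
    "outputs": {
        "duration": {
            "output_tensor_name": "aux_output:0",
        }
    },
    "framework": "tensorflow",
}

# Place the file in the SavedModel directory before deploying the model.
with open("explanation_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)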

Configure visualization settings

When you get explanations on image data with the integrated gradients or XRAI methods, you can configure visualization settings for your results by including them in your explanation_metadata.json file. Configuring these settings is optional.

To configure your visualization settings, include the visualization config within the input object you want to visualize:

{
  "inputs": {
    string <input feature key>: {
      "input_tensor_name": string,
      "input_baselines": [
        number,
      ],
      "modality": string
      "visualization": {
        "type": string,
        "polarity": string,
        "clip_below_percentile": number,
        "clip_above_percentile": number,
        "color_map": string,
        "overlay_type": string
      }
    }
    ...
  },
  "outputs": {
    string <output value key>:  {
      "output_tensor_name": string
    },
    ...
  },
  "framework": string
}

Details on visualization settings

All of the visualization settings are optional, so you can specify all, some, or none of these values.

"visualization": {
  "type": string,
  "polarity": string,
  "clip_below_percentile": number,
  "clip_above_percentile": number,
  "color_map": string,
  "overlay_type": string
}

See example configurations and output images.

Fields

type

string

Optional. The type of visualization. Valid values are outlines or pixels. For integrated gradients, you can use either setting. The default setting is outlines.
For XRAI, pixels is the default setting. outlines is not recommended for XRAI.

polarity

string

Optional. The directionality of the attribution values displayed. Valid values are positive, negative, or both. Defaults to positive.

clip_below_percentile

number

Optional. Excludes attributions below the specified percentile. Valid value is a decimal in the range [0, 100].

clip_above_percentile

number

Optional. Excludes attributions above the specified percentile. Valid value is a decimal in the range [0, 100]. Must be larger than clip_below_percentile.

color_map

string

Optional. Valid values are red_green, pink_green, and viridis. Defaults to pink_green for integrated gradients and viridis for XRAI.

overlay_type

string

Optional. The type of overlay, which controls how the attributions are displayed over the original input image.
Valid values are none, grayscale, original, and mask_black.

  • none: The attributions are displayed alone over a black image, without being overlaid onto the input image.
  • grayscale: The attributions are overlaid onto a grayscaled version of the input image.
  • original: The attributions are overlaid onto the original input image.
  • mask_black: The attributions are used as a mask to emphasize predictive parts of the image. The opacity of the pixels in the original image corresponds to the intensity of the attributions for the corresponding pixel.

Defaults to original for integrated gradients and grayscale for XRAI.

See example images showing each overlay_type.
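
For example, a visualization block for an XRAI explanation might be assembled in Python as follows before being merged into your explanation_metadata.json; all of these values are illustrative, not recommendations:

# Illustrative visualization settings for an image input (values are examples only).
visualization = {
    "type": "pixels",            # default for XRAI
    "polarity": "positive",
    "clip_below_percentile": 70,
    "clip_above_percentile": 99.9,
    "color_map": "viridis",
    "overlay_type": "grayscale",
}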

Response body details

Responses are very similar to requests.

If the call is successful, the response body contains one explanations entry per instance in the request body, given in the same order:

{
  "explanations": [
    {
      object
    }
  ]
}

If explanation fails for any instance, the response body contains no explanations. Instead, it contains a single error entry:

{
  "error": string
}

The explanations[] object contains the list of explanations, one for each instance in the request.

On error, the error string contains a message describing the problem. The error is returned instead of an explanations list if an error occurred while processing any instance.

Even though there is one explanation per instance, the format of an explanation is not directly related to the format of an instance. Explanations take whatever format is specified in the outputs collection defined in the model. The collection of explanations is returned in a JSON list. Each member of the list can be a simple value, a list, or a JSON object of any complexity. If your model has more than one output tensor, each explanation will be a JSON object containing a name/value pair for each output. The names identify the output aliases in the graph.

Response body examples

The following explanations response is for an individual feature attribution on tabular data. It is part of the example notebook for tabular data. The notebook demonstrates how to parse explanations responses and plot the attributions data.

Feature attributions appear within the attributions_by_label object:

{
 "explanations": [
  {
   "attributions_by_label": [
    {
     "approx_error": 0.001017811509478243,
     "attributions": {
      "data": [
       -0.0,
       1.501250445842743,
       4.4058547498107075,
       0.016078486742916454,
       -0.03749384209513669,
       -0.0,
       -0.2621846305120581,
       -0.0,
       -0.0,
       0.0
      ]
      ...
     },
     "baseline_score": 14.049912452697754,
     "example_score": 19.667699813842773,
     "label_index": 0,
     "output_name": "duration"
    }
    ...
   ]
  }
 ]
}
  • approx_error is an approximation error for the feature attributions. Feature attributions are based on an approximation of Shapley values. Learn more about the approximation error.
  • The attributions object contains a key-value pair for each input feature you requested explanations for.
    • For each input feature, the key is the same as the input feature key you set in your explanation_metadata.json file. In this example, it's "data".
    • The values are the attributions for each feature. The shape of the attributions matches the input tensor. In this example, it is a list of scalar values. Learn more about these attribution values.
  • The baseline_score is the model output for the baseline you set. Depending on your model, you can set the baseline to zero, a random value, or median values. Learn more about selecting baselines.
  • The example_score is the prediction for the input instance you provided. In this example, example_score is the predicted duration of a rideshare bike trip in minutes. In general, the example_score for any given instance is the prediction for that instance.
  • The label_index is the index in the output tensor that is being explained.
  • The output_name is the same as the output feature key you set in your explanation_metadata.json file. In this example, it's "duration".

Individual attribution values

This table shows more details about the features that correspond to these example attribution values. Positive attribution values increase the predicted value by that amount, and negative attribution values decrease the predicted value. The euclidean distance between the start and end locations of the bike trip had the strongest effect on the predicted bike trip duration (19.667 minutes), increasing it by 4.498.

Feature name Feature value Attribution value
start_hr 19 0
weekday 1 -0.0661425
euclidean 3920.76 4.49809
temp 52.5 0.0564195
dew_point 38.8 -0.072438
wdsp 0 0
max_temp 64.8 -0.226125
fog 0 -0
prcp 0.06 -0
rain_drizzle 0 0

Learn more by trying the example notebook for tabular data.
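
A minimal sketch of pulling these values out of a parsed response follows; it assumes the response has already been deserialized into a Python dictionary named response, as in the tabular example above:

# `response` is assumed to be the parsed JSON explanations response shown above.
for explanation in response["explanations"]:
    for attribution in explanation["attributions_by_label"]:
        print("output:", attribution["output_name"])
        print("baseline_score:", attribution["baseline_score"])
        print("example_score:", attribution["example_score"])
        # One attribution entry per input feature key (here, "data").
        for feature_key, values in attribution["attributions"].items():
            print(feature_key, values)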

API specification

The following section describes the specification of the explain method as defined in the AI Platform Training and Prediction API discovery document. Refer to the previous sections of this document for detailed information about the method.

getConfig Get the service account information associated with your project. You need this information in order to grant the service account permissions for the Google Cloud Storage location where you put your model training code for training the model with Google Cloud Machine Learning.
predict Performs online prediction on the data in the request.

AI Platform Prediction implements a custom predict verb on top of an HTTP POST method. The predict method performs prediction on the data in the request.

The URL is described in Google API HTTP annotation syntax:

POST https://ml.googleapis.com/v1/{name=projects/**}:predict

The name parameter is required. It must contain the name of your model and, optionally, a version. If you specify a model without a version, the default version for that model is used.

Example specifying both model and version:

POST https://ml.googleapis.com/v1/projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME/versions/YOUR_VERSION_NAME:predict

Example specifying only a model. The default version for that model is used:

POST https://ml.googleapis.com/v1/projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME:predict

This page describes the format of the prediction request body and of the response body. For a code sample showing how to send a prediction request, see the guide to requesting online predictions.
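
As one way to send such a request from Python, the following sketch uses the Google API Python client's discovery interface; the project and model names are placeholders:

from googleapiclient import discovery

# Placeholder identifiers for illustration only.
PROJECT_ID = "YOUR_PROJECT_ID"
MODEL_NAME = "YOUR_MODEL_NAME"

service = discovery.build("ml", "v1")
name = f"projects/{PROJECT_ID}/models/{MODEL_NAME}"

# The model's default version is used because no version is specified.
response = (
    service.projects()
    .predict(name=name, body={"instances": [[0.0, 1.1, 2.2]]})
    .execute()
)

if "error" in response:
    raise RuntimeError(response["error"])
print(response["predictions"])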

Request body details

TensorFlow

The request body contains data with the following structure (JSON representation):

{
  "instances": [
    <value>|<simple/nested list>|<object>,
    ...
  ]
}

The instances[] object is required, and must contain the list of instances to get predictions for.

The structure of each element of the instances list is determined by your model's input definition. Instances can include named inputs (as objects) or can contain only unlabeled values.

Not all data includes named inputs. Some instances are simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists.

Below are some examples of request bodies.

CSV data with each row encoded as a string value:

{"instances": ["1.0,true,\\"x\\"", "-2.0,false,\\"y\\""]}

Plain text:

{"instances": ["the quick brown fox", "the lazy dog"]}

Sentences encoded as lists of words (vectors of strings):

{
  "instances": [
    ["the","quick","brown"],
    ["the","lazy","dog"],
    ...
  ]
}

Floating point scalar values:

{"instances": [0.0, 1.1, 2.2]}

Vectors of integers:

{
  "instances": [
    [0, 1, 2],
    [3, 4, 5],
    ...
  ]
}

Tensors (in this case, two-dimensional tensors):

{
  "instances": [
    [
      [0, 1, 2],
      [3, 4, 5]
    ],
    ...
  ]
}

Images, which can be represented in different ways. In this encoding scheme, the first two dimensions represent the rows and columns of the image, and the third dimension contains lists (vectors) of the R, G, and B values for each pixel:

{
  "instances": [
    [
      [
        [138, 30, 66],
        [130, 20, 56],
        ...
      ],
      [
        [126, 38, 61],
        [122, 24, 57],
        ...
      ],
      ...
    ],
    ...
  ]
}

Data encoding

JSON strings must be encoded as UTF-8. To send binary data, you must base64-encode the data and mark it as binary. To mark a JSON string as binary, replace it with a JSON object with a single attribute named b64:

{"b64": "..."} 

The following example shows two serialized tf.Examples instances, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]}

The following example shows two JPEG image byte strings, requiring base64 encoding (fake data, for illustrative purposes only):

{"instances": [{"b64": "ASa8asdf"}, {"b64": "JLK7ljk3"}]}

Multiple input tensors

Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, use the names of JSON name/value pairs to identify the input tensors.

For a graph with input tensor aliases "tag" (string) and "image" (base64-encoded string):

{
  "instances": [
    {
      "tag": "beach",
      "image": {"b64": "ASa8asdf"}
    },
    {
      "tag": "car",
      "image": {"b64": "JLK7ljk3"}
    }
  ]
}

For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints):

{
  "instances": [
    {
      "tag": "beach",
      "image": [
        [
          [138, 30, 66],
          [130, 20, 56],
          ...
        ],
        [
          [126, 38, 61],
          [122, 24, 57],
          ...
        ],
        ...
      ]
    },
    {
      "tag": "car",
      "image": [
        [
          [255, 0, 102],
          [255, 0, 97],
          ...
        ],
        [
          [254, 1, 101],
          [254, 2, 93],
          ...
        ],
        ...
      ]
    },
    ...
  ]
}

scikit-learn

The request body contains data with the following structure (JSON representation):

{
  "instances": [
    <simple list>,
    ...
  ]
}

The instances[] object is required, and must contain the list of instances to get predictions for. In the following example, each input instance is a list of floats:

{
  "instances": [
    [0.0, 1.1, 2.2],
    [3.3, 4.4, 5.5],
    ...
  ]
}

The dimension of input instances must match what your model expects. For example, if your model requires three features, then the length of each input instance must be 3.
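
If your features live in a NumPy array, one way to build this payload is to convert each row to a plain Python list; the array below is a stand-in for your own data:

import numpy as np

# Stand-in feature matrix with three features per instance.
X = np.array([[0.0, 1.1, 2.2], [3.3, 4.4, 5.5]])

request_body = {"instances": X.tolist()}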

XGBoost

The request body contains data with the following structure (JSON representation):

{
  "instances": [
    <simple list>,
    ...
  ]
}

The instances[] object is required, and must contain the list of instances to get predictions for. In the following example, each input instance is a list of floats:

{
  "instances": [
    [0.0, 1.1, 2.2],
    [3.3, 4.4, 5.5],
    ...
  ]
}

The dimension of input instances must match what your model expects. For example, if your model requires three features, then the length of each input instance must be 3.

AI Platform Prediction does not support sparse representation of input instances for XGBoost.

The Online Prediction service interprets zeros and NaNs differently. If the value of a feature is zero, use 0.0 in the corresponding input. If the value of a feature is missing, use NaN in the corresponding input.

The following example represents a prediction request with a single input instance, where the value of the first feature is 0.0, the value of the second feature is 1.1, and the value of the third feature is missing:

{"instances": [[0.0, 1.1, NaN]]}

Custom prediction routine

The request body contains data with the following structure (JSON representation):

{
  "instances": [
    <value>|<simple/nested list>|<object>,
    ...
  ],
  "<other-key>": <value>|<simple/nested list>|<object>,
  ...
}

The instances[] object is required, and must contain the list of instances to get predictions for.

You may optionally provide any other valid JSON key-value pairs. AI Platform Prediction parses the JSON and provides these fields to the predict method of your Predictor class as entries in the **kwargs dictionary.
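
For context, here is a minimal sketch of a Predictor whose predict method reads such an extra key; the probabilities key, the pickle artifact name, and the model interface are hypothetical, not part of any required contract beyond predict and from_path:

import os
import pickle


class MyPredictor(object):
    """Sketch of a custom prediction routine Predictor."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # Extra top-level JSON keys from the request arrive in **kwargs.
        if kwargs.get("probabilities"):
            return self._model.predict_proba(instances).tolist()
        return self._model.predict(instances).tolist()

    @classmethod
    def from_path(cls, model_dir):
        # Hypothetical artifact name; adjust to your exported model.
        with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
            model = pickle.load(f)
        return cls(model)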

How to structure the list of instances

The structure of each element of the instances list is determined by the predict method of your Predictor class. Instances can include named inputs (as objects) or can contain only unlabeled values.

Not all data includes named inputs. Some instances are simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists.

Below are some examples of request bodies.

CSV data with each row encoded as a string value:

{"instances": ["1.0,true,\\"x\\"", "-2.0,false,\\"y\\""]}

Plain text:

{"instances": ["the quick brown fox", "the lazy dog"]}

Sentences encoded as lists of words (vectors of strings):

{
  "instances": [
    ["the","quick","brown"],
    ["the","lazy","dog"],
    ...
  ]
}

Floating point scalar values:

{"instances": [0.0, 1.1, 2.2]}

Vectors of integers:

{
  "instances": [
    [0, 1, 2],
    [3, 4, 5],
    ...
  ]
}

Tensors (in this case, two-dimensional tensors):

{
  "instances": [
    [
      [0, 1, 2],
      [3, 4, 5]
    ],
    ...
  ]
}

Images, which can be represented in different ways. In this encoding scheme, the first two dimensions represent the rows and columns of the image, and the third dimension contains lists (vectors) of the R, G, and B values for each pixel:

{
  "instances": [
    [
      [
        [138, 30, 66],
        [130, 20, 56],
        ...
      ],
      [
        [126, 38, 61],
        [122, 24, 57],
        ...
      ],
      ...
    ],
    ...
  ]
}

Multiple input tensors

Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, use the names of JSON name/value pairs to identify the input tensors.

For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints):

{
  "instances": [
    {
      "tag": "beach",
      "image": [
        [
          [138, 30, 66],
          [130, 20, 56],
          ...
        ],
        [
          [126, 38, 61],
          [122, 24, 57],
          ...
        ],
        ...
      ]
    },
    {
      "tag": "car",
      "image": [
        [
          [255, 0, 102],
          [255, 0, 97],
          ...
        ],
        [
          [254, 1, 101],
          [254, 2, 93],
          ...
        ],
        ...
      ]
    },
    ...
  ]
}

Response body details

Responses are very similar to requests.

If the call is successful, the response body contains one prediction entry per instance in the request body, given in the same order:

{
  "predictions": [
    {
      object
    }
  ]
}

If prediction fails for any instance, the response body contains no predictions. Instead, it contains a single error entry:

{
  "error": string
}

The predictions[] object contains the list of predictions, one for each instance in the request. For a custom prediction routine (beta), predictions contains the return value of the predict method of your Predictor class, serialized as JSON.

On error, the error string contains a message describing the problem. The error is returned instead of a prediction list if an error occurred while processing any instance.

Even though there is one prediction per instance, the format of a prediction is not directly related to the format of an instance. Predictions take whatever format is specified in the outputs collection defined in the model. The collection of predictions is returned in a JSON list. Each member of the list can be a simple value, a list, or a JSON object of any complexity. If your model has more than one output tensor, each prediction will be a JSON object containing a name/value pair for each output. The names identify the output aliases in the graph.
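
As a small illustration, the following sketch walks a parsed response whose predictions contain the output aliases label and scores, as in the examples below; it assumes the response body has already been deserialized into a dictionary named response:

# `response` is assumed to be the parsed JSON response body.
if "error" in response:
    raise RuntimeError(response["error"])

for prediction in response["predictions"]:
    # With multiple output tensors, each prediction is keyed by output alias.
    print(prediction["label"], prediction["scores"])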

Response body examples

TensorFlow

The following examples show some possible responses:

  • A simple set of predictions for three input instances, where each prediction is an integer value:

    {"predictions": [5, 4, 3]}
    
  • A more complex set of predictions, each containing two named values that correspond to output tensors, named label and scores respectively. The value of label is the predicted category ("car" or "beach") and scores contains a list of probabilities for that instance across the possible categories.

    {
      "predictions": [
        {
          "label": "beach",
          "scores": [0.1, 0.9]
        },
        {
          "label": "car",
          "scores": [0.75, 0.25]
        }
      ]
    }
    
  • A response when there is an error processing an input instance:

    {"error": "Divide by zero"}
    

scikit-learn

The following examples show some possible responses:

  • A simple set of predictions for three input instances, where each prediction is an integer value:

    {"predictions": [5, 4, 3]}
    
  • A response when there is an error processing an input instance:

    {"error": "Divide by zero"}
    

XGBoost

The following examples show some possible responses:

  • A simple set of predictions for three input instances, where each prediction is an integer value:

    {"predictions": [5, 4, 3]}
    
  • A response when there is an error processing an input instance:

    {"error": "Divide by zero"}
    

Custom prediction routine

The following examples show some possible responses:

  • A simple set of predictions for three input instances, where each prediction is an integer value:

    {"predictions": [5, 4, 3]}
    
  • A more complex set of predictions, each containing two named values that correspond to output tensors, named label and scores respectively. The value of label is the predicted category ("car" or "beach") and scores contains a list of probabilities for that instance across the possible categories.

    {
      "predictions": [
        {
          "label": "beach",
          "scores": [0.1, 0.9]
        },
        {
          "label": "car",
          "scores": [0.75, 0.25]
        }
      ]
    }
    
  • A response when there is an error processing an input instance:

    {"error": "Divide by zero"}
    

API specification

The following section describes the specification of the predict method as defined in the AI Platform Training and Prediction API discovery document. Refer to the previous sections of this document for detailed information about the method.

Module: googleapis.ml.v1.projects.jobs

Functions
cancel Cancels a running job.
create Creates a training or a batch prediction job.
get Describes a job.
getIamPolicy Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.
list Lists the jobs in the project. If there are no jobs that match the request parameters, the list request returns an empty response body: {}.
patch Updates a specific job resource. Currently the only supported fields to update are labels.
setIamPolicy Sets the access control policy on the specified resource. Replaces any existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors.
testIamPermissions Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error. Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning.

Module: googleapis.ml.v1.projects.locations

Functions
get Get the complete list of CMLE capabilities in a location, along with their location-specific properties.
list List all locations that provide at least one type of CMLE capability.

Module: googleapis.ml.v1.projects.models

Functions
create Creates a model which will later contain one or more versions. You must add at least one version before you can request predictions from the model. Add versions by calling projects.models.versions.create.
delete Deletes a model. You can only delete a model if there are no versions in it. You can delete versions by calling projects.models.versions.delete.
get Gets information about a model, including its name, the description (if set), and the default version (if at least one version of the model has been deployed).
getIamPolicy Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.
list Lists the models in a project. Each project can contain multiple models, and each model can have multiple versions. If there are no models that match the request parameters, the list request returns an empty response body: {}.
patch Updates a specific model resource. Currently the only supported fields to update are description and default_version.name.
setIamPolicy Sets the access control policy on the specified resource. Replaces any existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors.
testIamPermissions Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error. Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning.

Module: googleapis.ml.v1.projects.models.versions

Functions
create Creates a new version of a model from a trained TensorFlow model. If the version created in the cloud by this call is the first deployed version of the specified model, it will be made the default version of the model. When you add a version to a model that already has one or more versions, the default version does not automatically change. If you want a new version to be the default, you must call projects.models.versions.setDefault.
delete Deletes a model version. Each model can have multiple versions deployed and in use at any given time. Use this method to remove a single version. Note: You cannot delete the version that is set as the default version of the model unless it is the only remaining version.
get Gets information about a model version. Models can have multiple versions. You can call projects.models.versions.list to get the same information that this method returns for all of the versions of a model.
list Gets basic information about all the versions of a model. If you expect that a model has many versions, or if you need to handle only a limited number of results at a time, you can request that the list be retrieved in batches (called pages). If there are no versions that match the request parameters, the list request returns an empty response body: {}.
patch Updates the specified Version resource. Currently the only update-able fields are description, requestLoggingConfig, autoScaling.minNodes, and manualScaling.nodes.
setDefault Designates a version to be the default for the model. The default version is used for prediction requests made against the model that don't specify a version. The first version to be created for a model is automatically set as the default. You must make any subsequent changes to the default version setting manually using this method.

Module: googleapis.ml.v1.projects.operations

Functions
cancel Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns google.rpc.Code.UNIMPLEMENTED. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED.
get Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
list Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns UNIMPLEMENTED.