Online predictions

This page describes how to get online (single, low-latency) predictions from AutoML Tables.


After you have created (trained) a model, you can deploy the model and request online (real-time) predictions. Online predictions accept one row of data and provide a predicted result based on your model for that data. You use online predictions when you need a prediction as input for your business logic flow.

Before you can request an online prediction, you must deploy your model. Deployed models incur charges. When you are finished making online predictions, you can undeploy your model to avoid further deployment charges. Learn more.

Models must be retrained periodically so that they can continue to serve predictions. For predictions without feature importance, a model must be retrained every two years. For predictions with feature importance, a model must be retrained every six months.

Getting an online prediction


Generally, you use online predictions to get predictions from within your business applications. However, you can use AutoML Tables in the console to test your data format or your model with a specific set of input.

  1. Visit the AutoML Tables page in the Google Cloud console.

    Go to the AutoML Tables page

  2. Select Models and select the model that you want to use.

  3. Select the Test & Use tab and click Online prediction.

  4. If your model is not yet deployed, deploy it now by clicking Deploy model.

    Your model must be deployed to use online predictions. Deploying your model incurs costs. For more information, see the pricing page.

  5. Provide your input values in the text boxes provided.

    Alternatively, you can select JSON Code View to provide your input values in JSON format.

  6. If you want to see how each feature impacted the prediction, select Generate feature importance.

    The Google Cloud console truncates the local feature importance values for readability. If you need an exact value, use the Cloud AutoML API to make the prediction request.

    For information about feature importance, see Local feature importance.

  7. Click Predict to get your prediction.

    AutoML Tables predict page

    For information about interpreting your prediction results, see Interpreting your prediction results. For information about local feature importance, see Local feature importance.

  8. (Optional) If you do not plan to request more online predictions, you can undeploy your model to avoid deployment charges by clicking Undeploy model.


You request a prediction for a set of values by creating your JSON object with your feature values, and then using the model.predict method to get the prediction.

The values must include exactly the columns you used in training, in the same order that is shown when you click the included columns link on the model's Evaluate tab.

If you want to reorder the values, you can optionally include a set of column spec IDs in the order of the values. You can get the column spec IDs from your model object; they are found in the TablesModelMetadata.inputFeatureColumnSpecs field.
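As a sketch of that pairing (the IDs below are made up; real IDs come from your model's TablesModelMetadata.inputFeatureColumnSpecs), deriving both lists from one mapping keeps each value aligned with its column spec ID:

```python
# Hypothetical column spec IDs mapped to feature values, in any order.
# Real IDs come from TablesModelMetadata.inputFeatureColumnSpecs.
features = {
    "3162175180654575616": 25,       # age (Numeric)
    "3162175180654575617": "blue",   # color (Categorical)
    "3162175180654575618": True,     # subscribed (Boolean)
}

# Building both lists from the same dict guarantees that index i of
# column_spec_ids corresponds to index i of values.
column_spec_ids = list(features.keys())
values = list(features.values())

request_body = {
    "payload": {"row": {"values": values, "columnSpecIds": column_spec_ids}}
}
```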

The data type of each value (feature) in the Row object depends on the AutoML Tables data type of the feature. For a list of accepted data types by AutoML Tables data type, see Row object format.
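For instance, a hypothetical model trained on a Numeric, a Categorical, and a Timestamp column (in that order) would send the numeric feature as a JSON number and the other two as JSON strings:

```python
import json

# Hypothetical feature values for a model trained on three columns:
# income (Numeric), color (Categorical), signup_time (Timestamp).
body = {
    "payload": {
        "row": {
            # Numeric -> JSON number; Categorical and Timestamp -> JSON string.
            "values": [42000.0, "blue", "2019-05-21T09:00:00Z"]
        }
    }
}
print(json.dumps(body, indent=2))
```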

  1. If you haven't deployed your model yet, deploy it now. Learn more.

  2. Request the prediction.

    Before using any of the request data, make the following replacements:

    • endpoint: automl.googleapis.com for the global location, and eu-automl.googleapis.com for the EU region.
    • project-id: your Google Cloud project ID.
    • location: the location for the resource: us-central1 for Global or eu for the European Union.
    • model-id: the ID of the model. For example, TBL543.
    • valueN: the values for each column, in the correct order.

    HTTP method and URL:

    POST https://endpoint/v1beta1/projects/project-id/locations/location/models/model-id:predict

    Request JSON body:

    {
      "payload": {
        "row": {
          "values": [value1, value2, ...]
        }
      }
    }

    To send your request, choose one of these options:


    curl (Linux, macOS, or Cloud Shell)

    Save the request body in a file called request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://endpoint/v1beta1/projects/project-id/locations/location/models/model-id:predict"

    PowerShell (Windows)

    Save the request body in a file called request.json, and execute the following command:

    $cred = gcloud auth application-default print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://endpoint/v1beta1/projects/project-id/locations/location/models/model-id:predict" | Select-Object -Expand Content

    To include local feature importance results, include the feature_importance parameter. For more information about local feature importance, see Local feature importance.

  3. View your results.

    For a classification model, you should see output similar to the following example. Note that two results are returned, each with a confidence estimate (score). The confidence estimate is between 0 and 1, and shows how likely the model thinks this is the correct prediction value. For more information about how to use the confidence estimate, see Interpreting your prediction results.

     "payload": [
       {
         "tables": {
           "score": 0.11210235,
           "value": "1"
         }
       },
       {
         "tables": {
           "score": 0.8878976,
           "value": "2"
         }
       }
     ]
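    A common pattern is to act on the highest-scoring class only when its score clears a threshold suited to your use case. A minimal sketch over scores like those in the sample response (the 0.5 threshold is an arbitrary example, not a recommendation):

```python
# Scores as in the example classification response.
payload = [
    {"tables": {"score": 0.11210235, "value": "1"}},
    {"tables": {"score": 0.8878976, "value": "2"}},
]

# Pick the class with the highest confidence estimate.
best = max(payload, key=lambda p: p["tables"]["score"])

# Hypothetical threshold; tune it per use case (see Interpreting your
# prediction results). Below the threshold, treat the result as "no call".
THRESHOLD = 0.5
prediction = best["tables"]["value"] if best["tables"]["score"] >= THRESHOLD else None
print(prediction)  # -> 2
```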

    For a regression model, the results include a prediction value and a prediction interval. The prediction interval provides a range that includes the true value 95% of the time (based on the data that the model was trained on). Note that the predicted value might not be centered in the interval (it might even fall outside the interval), because the prediction interval is centered around the median, whereas the predicted value is the expected value (or mean).

     "payload": [
       {
         "tables": {
           "value": 207.18209838867188,
           "predictionInterval": {
             "start": 29.712770462036133,
             "end": 937.42041015625
           }
         }
       }
     ]
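    Checking the point prediction against its interval (values taken from the sample regression response) makes the distinction concrete; a wide interval signals a less certain prediction:

```python
# Fields as in the example regression response.
tables = {
    "value": 207.18209838867188,
    "predictionInterval": {"start": 29.712770462036133, "end": 937.42041015625},
}

point = tables["value"]
low = tables["predictionInterval"]["start"]
high = tables["predictionInterval"]["end"]

# A wide interval relative to the predicted value indicates low certainty.
width = high - low
# The point prediction (the mean) usually, but not always, falls inside
# the interval, which is centered around the median.
inside = low <= point <= high
print(point, inside, round(width, 1))
```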

    For information about local feature importance results, see Local feature importance.

  4. (Optional) If you are finished requesting online predictions, you can undeploy your model to avoid deployment charges. Learn more.


Java

If your resources are located in the EU region, you must explicitly set the endpoint. Learn more.

import com.google.cloud.automl.v1beta1.AnnotationPayload;
import com.google.cloud.automl.v1beta1.ExamplePayload;
import com.google.cloud.automl.v1beta1.ModelName;
import com.google.cloud.automl.v1beta1.PredictRequest;
import com.google.cloud.automl.v1beta1.PredictResponse;
import com.google.cloud.automl.v1beta1.PredictionServiceClient;
import com.google.cloud.automl.v1beta1.Row;
import com.google.cloud.automl.v1beta1.TablesAnnotation;
import com.google.protobuf.Value;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class TablesPredict {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    // Values should match the input expected by your model.
    List<Value> values = new ArrayList<>();
    // values.add(Value.newBuilder().setBoolValue(true).build());
    // values.add(Value.newBuilder().setNumberValue(10).build());
    // values.add(Value.newBuilder().setStringValue("YOUR_STRING").build());
    predict(projectId, modelId, values);
  }

  static void predict(String projectId, String modelId, List<Value> values) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (PredictionServiceClient client = PredictionServiceClient.create()) {
      // Get the full path of the model.
      ModelName name = ModelName.of(projectId, "us-central1", modelId);
      Row row = Row.newBuilder().addAllValues(values).build();
      ExamplePayload payload = ExamplePayload.newBuilder().setRow(row).build();

      // Feature importance gives you visibility into how the features in a specific prediction
      // request informed the resulting prediction. For more info, see Local feature importance.
      PredictRequest request =
          PredictRequest.newBuilder()
              .setName(name.toString())
              .setPayload(payload)
              .putParams("feature_importance", "true")
              .build();

      PredictResponse response = client.predict(request);

      System.out.println("Prediction results:");
      for (AnnotationPayload annotationPayload : response.getPayloadList()) {
        TablesAnnotation tablesAnnotation = annotationPayload.getTables();
        System.out.format(
            "Classification label: %s%n", tablesAnnotation.getValue().getStringValue());
        System.out.format("Classification score: %.3f%n", tablesAnnotation.getScore());
        // Get features of top importance
        tablesAnnotation
            .getTablesModelColumnInfoList()
            .forEach(
                info ->
                    System.out.format(
                        "\tColumn: %s - Importance: %.2f%n",
                        info.getColumnDisplayName(), info.getFeatureImportance()));
      }
    }
  }
}

Node.js

If your resources are located in the EU region, you must explicitly set the endpoint. Learn more.

/**
 * Demonstrates using the AutoML client to request prediction from
 * automl tables using csv.
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = '[PROJECT_ID]' e.g., "my-gcloud-project";
// const computeRegion = '[REGION_NAME]' e.g., "us-central1";
// const modelId = '[MODEL_ID]' e.g., "TBL000000000000";
// const inputs = [{ numberValue: 1 }, { stringValue: 'value' }, { stringValue: 'value2' } ...]

const automl = require('@google-cloud/automl');

// Create client for prediction service.
const automlClient = new automl.v1beta1.PredictionServiceClient();

// Get the full path of the model.
const modelFullId = automlClient.modelPath(projectId, computeRegion, modelId);

async function predict() {
  // Set the payload by giving the row values.
  const payload = {
    row: {
      values: inputs,
    },
  };

  // Params holds additional domain-specific parameters.
  // feature_importance requests local feature importance with the prediction.
  const [response] = await automlClient.predict({
    name: modelFullId,
    payload: payload,
    params: {feature_importance: true},
  });
  console.log('Prediction results:');

  for (const result of response.payload) {
    console.log(`Predicted class name: ${result.displayName}`);
    console.log(`Predicted class score: ${result.tables.score}`);

    // Get features of top importance
    const featureList = result.tables.tablesModelColumnInfo.map(
      columnInfo => {
        return {
          importance: columnInfo.featureImportance,
          displayName: columnInfo.columnDisplayName,
        };
      }
    );
    // Sort features by their importance, highest importance first
    featureList.sort((a, b) => {
      return b.importance - a.importance;
    });

    // Print top 10 important features
    console.log('Features of top importance');
    console.log(featureList.slice(0, 10));
  }
}

predict();


The client library for AutoML Tables includes additional Python methods that simplify using the AutoML Tables API. These methods refer to datasets and models by name instead of id. Your dataset and model names must be unique. For more information, see the Client reference.

Python

If your resources are located in the EU region, you must explicitly set the endpoint. Learn more.

# TODO(developer): Uncomment and set the following variables
# project_id = 'PROJECT_ID_HERE'
# compute_region = 'COMPUTE_REGION_HERE'
# model_display_name = 'MODEL_DISPLAY_NAME_HERE'
# inputs = {'value': 3, ...}

from google.cloud import automl_v1beta1 as automl

client = automl.TablesClient(project=project_id, region=compute_region)

if feature_importance:
    response = client.predict(
        model_display_name=model_display_name,
        inputs=inputs,
        feature_importance=True,
    )
else:
    response = client.predict(
        model_display_name=model_display_name, inputs=inputs
    )

print("Prediction results:")
for result in response.payload:
    print(
        "Predicted class name: {}".format(result.tables.value)
    )
    print("Predicted class score: {}".format(result.tables.score))

    if feature_importance:
        # get features of top importance
        feat_list = [
            (column.feature_importance, column.column_display_name)
            for column in result.tables.tables_model_column_info
        ]
        feat_list.sort(reverse=True)
        if len(feat_list) < 10:
            feat_to_show = len(feat_list)
        else:
            feat_to_show = 10

        print("Features of top importance:")
        for feat in feat_list[:feat_to_show]:
            print(feat)

What's next