Method: projects.locations.models.batchPredict

Perform a batch prediction and return the ID of a long-running operation. You can check the status of the operation by using the operations.get method. When the operation has completed, operations.get returns a BatchPredictResult in the operation's response field.

See Making batch predictions for more details.
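
As a rough illustration of this flow, the sketch below starts a batch prediction over the REST endpoint and then polls the resulting operation. It is only a sketch: it assumes an OAuth access token is already available (see Authorization Scopes below), the model name and Cloud Storage URIs are placeholders, and the inputConfig/outputConfig field names follow the JSON representations later on this page.

import time
import requests

# Placeholder values -- substitute your own model resource name and token.
MODEL_NAME = "projects/my-project/locations/us-central1/models/TBL0000000000"
ACCESS_TOKEN = "ya29.0000"  # OAuth 2.0 token with the cloud-platform scope
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Start the batch prediction; the response is a long-running Operation.
response = requests.post(
    f"https://automl.googleapis.com/v1beta1/{MODEL_NAME}:batchPredict",
    headers=HEADERS,
    json={
        "inputConfig": {"gcsSource": {"inputUris": ["gs://my-bucket/input.csv"]}},
        "outputConfig": {"gcsDestination": {"outputUriPrefix": "gs://my-bucket/out/"}},
    },
)
response.raise_for_status()
operation = response.json()

# Poll operations.get until done; the BatchPredictResult is then available
# in the operation's "response" field.
while not operation.get("done"):
    time.sleep(30)
    operation = requests.get(
        f"https://automl.googleapis.com/v1beta1/{operation['name']}",
        headers=HEADERS,
    ).json()

print(operation.get("response"))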

HTTP request

POST https://automl.googleapis.com/v1beta1/{name}:batchPredict

Path parameters

Parameters
name

string

Name of the model requested to serve the batch prediction.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.predict

Request body

The request body contains data with the following structure:

JSON representation
{
  "inputConfig": {
    object (BatchPredictInputConfig)
  },
  "outputConfig": {
    object (BatchPredictOutputConfig)
  },
  "params": {
    string: string,
    ...
  }
}
Fields
inputConfig

object (BatchPredictInputConfig)

Required. The input configuration for batch prediction.

outputConfig

object (BatchPredictOutputConfig)

Required. The configuration specifying where output predictions should be written.

params

map (key: string, value: string)

Additional domain-specific parameters for the predictions. Any string value must be no longer than 25,000 characters.

For the fields you can set, see Making batch predictions.
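
For illustration only, a request body using BigQuery for both input and output might be built as below. The URIs are placeholders, the field names follow the JSON representation above, and the params entry is only an example of a string-keyed, string-valued parameter (see Making batch predictions for the parameters your model type supports).

import json

# Placeholder URIs; field names follow the JSON representation above.
request_body = {
    "inputConfig": {
        "bigquerySource": {"inputUri": "bq://my-project.my_dataset.my_table"}
    },
    "outputConfig": {
        "bigqueryDestination": {"outputUri": "bq://my-project"}
    },
    # Example parameter entry only: every value is a string of at most
    # 25,000 characters.
    "params": {"example_param": "value"},
}

print(json.dumps(request_body, indent=2))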

Response body

If successful, the response body contains an instance of Operation.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
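
One way to obtain a token with this scope is through Application Default Credentials with the google-auth Python library; a minimal sketch, assuming credentials are already configured in the environment:

import google.auth
from google.auth.transport.requests import Request

# Load Application Default Credentials restricted to the cloud-platform scope.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# Refreshing populates an access token usable in an Authorization header.
credentials.refresh(Request())
print(credentials.token)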

BatchPredictInputConfig

Input configuration for the models.batchPredict action.

See Making batch predictions for more information.

You can provide prediction data to AutoML Tables in two ways:

  • The URI of a BigQuery table. The size of the data in the BigQuery table must be 100 GB or less.

    The column names must match the column names used to train the model, and the data in each column must match the data type used to train the model. The columns can be in any order.

  • One or more URIs of CSV files in Google Cloud Storage that contain the rows to make predictions for. Each file must be 10 GB or less in size, and the total size of all files must be 100 GB or less.

    The first file specified must have a header row with column names. The column names must match the column names used to train the model, and the data in each column must match the data type used to train the model. The columns can be in any order.

    If the first row of a subsequent file is identical to the original header row, that row is also treated as a header. All other rows contain values for the corresponding columns.

JSON representation
{

  // Union field source can be only one of the following:
  "gcsSource": {
    object (GcsSource)
  },
  "bigquerySource": {
    object (BigQuerySource)
  }
  // End of list of possible types for union field source.

  // Union field auxiliary_source can be only one of the following:
  "gcsAuxiliarySource": {
    object (GcsSource)
  },
  "bigqueryAuxiliarySource": {
    object (BigQuerySource)
  }
  // End of list of possible types for union field auxiliary_source.
}
Fields
Union field source. Required. The source of the input. source can be only one of the following:
gcsSource

object (GcsSource)

The Google Cloud Storage location for the input content.

bigquerySource

object (BigQuerySource)

The BigQuery location for the input content.

Union field auxiliary_source. Optional. Side inputs that help in prediction. For Tables with the FORECASTING prediction_type: historical data rows are required here, even if they are identical to the rows on which the model was trained. The historical rows must have non-NULL target column values. auxiliary_source can be only one of the following:

gcsAuxiliarySource

object (GcsSource)

The Google Cloud Storage location for the input content.

bigqueryAuxiliarySource

object (BigQuerySource)

The BigQuery location for the input content.
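
To make the source shapes concrete, the sketch below builds each form of inputConfig as a plain dictionary. All URIs are placeholders; the field names follow the JSON representation above.

# CSV files in Cloud Storage: each file must be 10 GB or less, all files
# together 100 GB or less, and the first file listed carries the header row.
gcs_input_config = {
    "gcsSource": {
        "inputUris": [
            "gs://my-bucket/predict/part-1.csv",
            "gs://my-bucket/predict/part-2.csv",
        ]
    }
}

# A BigQuery table of 100 GB or less.
bigquery_input_config = {
    "bigquerySource": {"inputUri": "bq://my-project.my_dataset.predict_rows"}
}

# For a model with the FORECASTING prediction type, historical rows with
# non-NULL target values are supplied through the auxiliary source.
forecasting_input_config = {
    "gcsSource": {"inputUris": ["gs://my-bucket/predict/future_rows.csv"]},
    "gcsAuxiliarySource": {"inputUris": ["gs://my-bucket/predict/history_rows.csv"]},
}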

BatchPredictOutputConfig

Output configuration for models.batchPredict action.

See Making batch predictions for more information.

You can configure the output location for prediction results from AutoML Tables in two ways:

  • The URI of a BigQuery project. In the given project, a new dataset is created with the name "prediction_<model-display-name>_<timestamp>", where <model-display-name> is the display name of the model made compatible with BigQuery dataset naming, and <timestamp> is in YYYY_MM_DDThh_mm_ss_sssZ format.

    Two tables are created in the dataset: predictions and errors.

    Each column name in the predictions table is the display name of the corresponding input column, followed by the target column with a name in the format "predicted_<targetColumnSpec displayName>".

    The input feature columns contain the respective values of successfully predicted rows, and the target column contains an ARRAY of AnnotationPayload values represented as STRUCTs, containing TablesAnnotation values.

    The errors table contains rows for which the prediction failed. Each column name in the errors table is in the format "errors_<targetColumnSpec displayName>". The column contains a google.rpc.Status value represented as a STRUCT, with the status code and message.

  • The URI of a Google Cloud Storage bucket where the prediction results are stored. AutoML Tables creates a directory in the specified bucket. The name of the directory is in the format "prediction-<model-display-name>-<timestamp>", where <model-display-name> is the display name of the model and <timestamp> is in YYYY-MM-DDThh:mm:ss.sssZ ISO 8601 format.

    AutoML Tables creates files in the directory named tables_1.csv, tables_2.csv, ..., tables_N.csv, where N depends on the total number of rows with successful predictions.

    If prediction failed for any rows, AutoML Tables also creates files named errors_1.csv, errors_2.csv, ..., errors_N.csv, where N depends on the total number of rows for which prediction failed. The error files contain the google.rpc.Status code and message values.

JSON representation
{

  // Union field destination can be only one of the following:
  "gcsDestination": {
    object (GcsDestination)
  },
  "bigqueryDestination": {
    object (BigQueryDestination)
  }
  // End of list of possible types for union field destination.
}
Fields
Union field destination. Required. The destination of the output. destination can be only one of the following:
gcsDestination

object (GcsDestination)

The Google Cloud Storage location of the directory where the output is to be written to.

bigqueryDestination

object (BigQueryDestination)

The BigQuery location where the output is to be written to.
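
The two destination shapes, again sketched with placeholder URIs: the BigQuery form names a project (the prediction_<model-display-name>_<timestamp> dataset is created there), and the Cloud Storage form names a bucket path under which the prediction-<model-display-name>-<timestamp> directory is created.

# Write the predictions and errors tables to a new dataset in this project.
bigquery_output_config = {
    "bigqueryDestination": {"outputUri": "bq://my-project"}
}

# Write tables_*.csv and errors_*.csv files under this Cloud Storage prefix.
gcs_output_config = {
    "gcsDestination": {"outputUriPrefix": "gs://my-bucket/predictions/"}
}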
