Package google.cloud.automl.v1beta1

AutoMl

AutoML Server API.

The resource names are assigned by the server. The server never reuses names that it has created after the resources with those names are deleted.

An ID of a resource is the last element of the item's resource name. For example, for the resource name projects/{project_id}/locations/{location_id}/datasets/{dataset_id}, the ID of the item is {dataset_id}.

Currently the only supported location_id is "us-central1".

On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.

CreateDataset

rpc CreateDataset(CreateDatasetRequest) returns (Dataset)

Creates a dataset.
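
For illustration, a minimal sketch using the google-cloud-automl Python client (pre-2.0 surface); the project ID and display name are hypothetical placeholders, not values from this reference:

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
# Parent follows the documented form projects/{project_id}/locations/{location_id}.
parent = "projects/my-project/locations/us-central1"

# CreateDataset returns the Dataset directly (not a long-running operation).
dataset = client.create_dataset(parent, {
    "display_name": "my_entity_dataset",
    "text_extraction_dataset_metadata": {},
})
print(dataset.name)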

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

CreateModel

rpc CreateModel(CreateModelRequest) returns (Operation)

Creates a model. Returns a Model in the response field when it completes. When you create a model, several model evaluations are created for it: a global evaluation, and one evaluation for each annotation spec.
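
As a hedged sketch with the google-cloud-automl Python client (identifiers are placeholders), CreateModel returns a long-running operation whose result() blocks until the trained Model is available in the response field:

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"

operation = client.create_model(parent, {
    "display_name": "my_entity_model",
    "dataset_id": "TEN1234567890",  # placeholder dataset ID
    "text_extraction_model_metadata": {},
})
model = operation.result()  # the Model from the operation's response field
print(model.name)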

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteDataset

rpc DeleteDataset(DeleteDatasetRequest) returns (Operation)

Deletes a dataset and all of its contents. Returns an empty response in the response field when it completes, and delete_details in the metadata field.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeleteModel

rpc DeleteModel(DeleteModelRequest) returns (Operation)

Deletes a model. Returns google.protobuf.Empty in the response field when it completes, and delete_details in the metadata field.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

DeployModel

rpc DeployModel(DeployModelRequest) returns (Operation)

Deploys a model. If a model is already deployed, deploying it with the same parameters has no effect. Deploying with different parameters updates the deployed model without pausing the model's availability.

Returns an empty response in the response field when it completes.
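
A minimal sketch, assuming the google-cloud-automl Python client; the model resource name is a placeholder:

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
model_name = "projects/my-project/locations/us-central1/models/TEN123"

operation = client.deploy_model(model_name)
operation.result()  # empty response once deployment completes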

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ExportData

rpc ExportData(ExportDataRequest) returns (Operation)

Exports a dataset's data to the provided output location. Returns an empty response in the response field when it completes.
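
A hedged sketch, assuming the google-cloud-automl Python client; the dataset name and bucket path are placeholders:

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
dataset_name = "projects/my-project/locations/us-central1/datasets/TEN123"

# An OutputConfig with a gcs_destination, as documented under OutputConfig below.
output_config = {"gcs_destination": {"output_uri_prefix": "gs://my-bucket/export/"}}
operation = client.export_data(dataset_name, output_config)
operation.result()  # empty response once the export completes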

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetAnnotationSpec

rpc GetAnnotationSpec(GetAnnotationSpecRequest) returns (AnnotationSpec)

Gets an annotation spec.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetDataset

rpc GetDataset(GetDatasetRequest) returns (Dataset)

Gets a dataset.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetModel

rpc GetModel(GetModelRequest) returns (Model)

Gets a model.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

GetModelEvaluation

rpc GetModelEvaluation(GetModelEvaluationRequest) returns (ModelEvaluation)

Gets a model evaluation.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ImportData

rpc ImportData(ImportDataRequest) returns (Operation)

Imports data into a dataset.

For more information, see Importing items into a dataset.
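
A hedged sketch, assuming the google-cloud-automl Python client; the dataset name and CSV path are placeholders. The CSV structure is described under InputConfig below:

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
dataset_name = "projects/my-project/locations/us-central1/datasets/TEN123"

input_config = {"gcs_source": {"input_uris": ["gs://my-bucket/train.csv"]}}
operation = client.import_data(dataset_name, input_config)
operation.result()  # empty response once the import completes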

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListDatasets

rpc ListDatasets(ListDatasetsRequest) returns (ListDatasetsResponse)

Lists datasets in a project.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListModelEvaluations

rpc ListModelEvaluations(ListModelEvaluationsRequest) returns (ListModelEvaluationsResponse)

Lists model evaluations.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

ListModels

rpc ListModels(ListModelsRequest) returns (ListModelsResponse)

Lists models.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UndeployModel

rpc UndeployModel(UndeployModelRequest) returns (Operation)

Undeploys a model. If the model is not deployed, this method has no effect.

Returns an empty response in the response field when it completes.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

UpdateDataset

rpc UpdateDataset(UpdateDatasetRequest) returns (Dataset)

Updates a dataset.

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

PredictionService

AutoML Prediction API.

On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.

BatchPredict

rpc BatchPredict(BatchPredictRequest) returns (Operation)

Performs a batch prediction and returns the ID of a long-running operation. When the operation has completed, call GetOperation to retrieve the BatchPredictResult from its response field.

Only available for AutoML Natural Language Entity Extraction.
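
A hedged sketch, assuming the google-cloud-automl Python client; resource names and bucket paths are placeholders:

from google.cloud import automl_v1beta1 as automl

prediction_client = automl.PredictionServiceClient()
model_name = "projects/my-project/locations/us-central1/models/TEN123"

input_config = {"gcs_source": {"input_uris": ["gs://my-bucket/batch_input.jsonl"]}}
output_config = {"gcs_destination": {"output_uri_prefix": "gs://my-bucket/output/"}}

operation = prediction_client.batch_predict(
    model_name, input_config, output_config, params={})
result = operation.result()  # a BatchPredictResult once the operation completes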

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

Predict

rpc Predict(PredictRequest) returns (PredictResponse)

Performs an online prediction. The prediction result is returned directly in the response.

Provide UTF-8 NFC encoded content in the TextSnippet field, up to the following maximum lengths:

  • Classification - 60,000 characters.

  • Entity Extraction - 30,000 characters.

  • Sentiment Analysis - 500 characters.
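
A hedged sketch of an online call for an entity extraction model, assuming the google-cloud-automl Python client; the model name and text are placeholders:

from google.cloud import automl_v1beta1 as automl

prediction_client = automl.PredictionServiceClient()
model_name = "projects/my-project/locations/us-central1/models/TEN123"

payload = {"text_snippet": {"content": "This dog is good.",
                            "mime_type": "text/plain"}}
response = prediction_client.predict(model_name, payload)

for annotation in response.payload:
    segment = annotation.text_extraction.text_segment
    print(annotation.display_name,
          annotation.text_extraction.score,
          segment.start_offset, segment.end_offset)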

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

AnnotationPayload

Contains annotation information that is relevant to AutoML.

Fields
annotation_spec_id

string

Output only. The resource ID of the annotation spec that this annotation pertains to. The annotation spec comes from either an ancestor dataset, or the dataset that was used to train the model in use.

display_name

string

Output only. The value of display_name when the model was trained. Because this field returns a value at model training time, the value can differ for models trained from the same dataset, since the model owner may have updated the display_name between the two trainings.

Union field detail. Output only. Additional information about the annotation specific to the AutoML domain. detail can be only one of the following:
classification

ClassificationAnnotation

Annotation details for classification predictions.

text_extraction

TextExtractionAnnotation

Annotation details for text extraction.

text_sentiment

TextSentimentAnnotation

Annotation details for text sentiment.

AnnotationSpec

A definition of an annotation.

Fields
name

string

Output only. Resource name of the annotation spec. Form:

'projects/{project_id}/locations/{location_id}/datasets/{dataset_id}/annotationSpecs/{annotation_spec_id}'

display_name

string

Required. The name of the annotation spec to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9.

example_count

int32

Output only. The number of examples in the parent dataset labeled by the annotation spec.

BatchPredictInputConfig

Input configuration for BatchPredict Action.

Only available for AutoML Natural Language Entity Extraction

See Preparing your training data for more information.

The input format depends on the dataset_metadata of the Dataset into which the import is happening. Unless specified otherwise, the input source must be a gcs_source. If a file with identical content (even under a different GCS_FILE_PATH) is mentioned multiple times, its labels, bounding boxes, and so on are appended. The same file should always be provided with the same ML_USE and GCS_FILE_PATH; if it is not, these values are selected nondeterministically from the given ones.

The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this section. The formats are:

One or more CSV files, with each line in the format:

ML_USE,GCS_FILE_PATH
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
  • GCS_FILE_PATH - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text or documents for model training.

After the training data set has been determined from the TRAIN and UNASSIGNED CSV files, the training data is divided into train and validation data sets: 70% for training and 30% for validation.

For example:

TRAIN,gs://folder/file1.jsonl
VALIDATE,gs://folder/file2.jsonl
TEST,gs://folder/file3.jsonl

For a single call to the BatchPredict method, you can only use either in-line JSONL files, or JSONL files that reference documents.

In-line JSONL files

In-line .JSONL files contain, per line, a JSON document that wraps a text_snippet field followed by one or more annotations fields, which have display_name and text_extraction fields to describe the entity from the text snippet. Multiple JSON documents can be separated using line breaks (\n).

The supplied text must be annotated exhaustively. For example, if you include the text "horse", but do not label it as "animal", then "horse" is assumed to not be an "animal".

Any given text snippet content must be 30,000 characters or fewer, and must be UTF-8 NFC encoded. ASCII is accepted because it is valid UTF-8 NFC.

For example:

{
  "text_snippet": {
    "content": "dog car cat"
  },
  "annotations": [
     {
       "display_name": "animal",
       "text_extraction": {
         "text_segment": {"start_offset": 0, "end_offset": 2}
       }
     },
     {
       "display_name": "vehicle",
       "text_extraction": {
         "text_segment": {"start_offset": 4, "end_offset": 6}
       }
     },
     {
       "display_name": "animal",
       "text_extraction": {
         "text_segment": {"start_offset": 8, "end_offset": 10}
       }
     }
  ]
}\n
{
   "text_snippet": {
     "content": "This dog is good."
   },
   "annotations": [
      {
        "display_name": "animal",
        "text_extraction": {
          "text_segment": {"start_offset": 5, "end_offset": 7}
        }
      }
   ]
}
JSONL Files that reference documents

.JSONL files contain, per line, a JSON document that wraps an input_config field that contains the path to a source document. Multiple JSON documents can be separated using line breaks (\n).

For example:

{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ]
      }
    }
  }
}\n
{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ]
      }
    }
  }
}
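
Either format can be generated with any JSON library. Below is a hedged Python sketch that writes the in-line variant shown above; for prediction input, each line typically carries only the text_snippet, while the annotations fields shown earlier apply to training imports. The file name and texts are placeholders:

import json

snippets = ["dog car cat", "This dog is good."]

with open("batch_input.jsonl", "w", encoding="utf-8") as f:
    for text in snippets:
        # One JSON document per line, wrapping a text_snippet field.
        f.write(json.dumps({"text_snippet": {"content": text}}) + "\n")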

Errors:

If any of the provided CSV files can't be parsed, or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.

See Analyzing entities for more information.

Fields
gcs_source

GcsSource

The Google Cloud Storage location for the input content.

BatchPredictOperationMetadata

Details of BatchPredict operation.

Fields
input_config

BatchPredictInputConfig

Output only. The input config that was given upon starting this batch predict operation.

output_info

BatchPredictOutputInfo

Output only. Information further describing this batch predict's output.

BatchPredictOutputInfo

Further describes this batch predict's output. Supplements BatchPredictOutputConfig.

Fields
Union field output_location. The output location into which prediction output is written. output_location can be only one of the following:
gcs_output_directory

string

The full path of the Google Cloud Storage directory created, into which the prediction output is written.

bigquery_output_dataset

string

The path of the BigQuery dataset created, in bq://projectId.bqDatasetId format, into which the prediction output is written.

BatchPredictOutputConfig

Output configuration for BatchPredict action.

AutoML Natural Language creates a directory specified in the gcsDestination. The name of the directory is "prediction-<model-display-name>-<timestamp-of-prediction-call>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format.

AutoML Natural Language creates files named text_extraction_n.jsonl in the new directory, where "n" is a number from 1 to the number of annotation files.

The contents of each .JSONL file depends on whether the input was in-line text, or references to documents.

  • If the input was in-line text, then each .JSONL file contains, per line, a JSON document that wraps the "id" of the text snippet supplied in the request, followed by a list of zero or more annotations with the entity analysis in the text_extraction field. A single text snippet is listed only once with all of its annotations, and its annotations are never split across files.

  • If the input referenced documents, then each .JSONL file contains, per line, a JSON representation of a proto that wraps the document proto given in the request, followed by its OCR-ed representation in the form of a text snippet, and finally a list of zero or more AnnotationPayload protos (called annotations), which have the text_extraction detail populated and refer, via their indices, to the OCR-ed text snippet. A single document (and its text snippet) is listed only once with all of its annotations, and its annotations are never split across files.

If prediction for any text snippet failed (partially or completely), then additional files errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl are created (N depends on the total number of failed predictions). These files contain a JSON representation of a proto that wraps either the text snippet's "id" (for in-line input) or the document proto (for document input), followed by exactly one google.rpc.Status containing only code and message.

Fields
gcs_destination

GcsDestination

The Google Cloud Storage location of the directory where the output is to be written to.

BatchPredictRequest

Request message for PredictionService.BatchPredict.

Fields
name

string

Name of the model requested to serve the batch prediction.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.predict

input_config

BatchPredictInputConfig

Required. The input configuration for batch prediction.

output_config

BatchPredictOutputConfig

Required. The configuration specifying where output predictions should be written.

params

map<string, string>

Additional domain-specific parameters for the predictions, any string must be up to 25000 characters long.

See Analyzing entities for more details.

BatchPredictResult

Result of the batch predict. This message is returned in the response field of the operation returned by PredictionService.BatchPredict.

ClassificationAnnotation

Contains annotation details specific to classification.

Fields
score

float

Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence that the annotation is positive. If a user approves an annotation as negative or positive, the score value remains unchanged. If a user creates an annotation, the score is 0 for negative or 1 for positive.

ClassificationEvaluationMetrics

Model evaluation metrics for classification problems. For information on the prediction type, see BatchPredictRequest.params.

Fields
au_prc

float

Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.

base_au_prc
(deprecated)

float

Output only. The Area Under Precision-Recall Curve metric based on priors. Micro-averaged for the overall evaluation. Deprecated.

au_roc

float

Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.

log_loss

float

Output only. The Log Loss metric.

confidence_metrics_entry[]

ConfidenceMetricsEntry

Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of position_threshold, but from these no aggregated metrics are computed.

confusion_matrix

ConfusionMatrix

Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where number of labels is no more than 10. Only set for model level evaluation, not for evaluation per label.

annotation_spec_id[]

string

Output only. The annotation spec ids used for this evaluation.

ConfidenceMetricsEntry

Metrics for a single confidence threshold.

Fields
confidence_threshold

float

Output only. Metrics are computed with an assumption that the model never returns predictions with score lower than this value.

position_threshold

int32

Output only. Metrics are computed with the assumption that the model always returns at most this many predictions (ordered by score, descending), all of which still need to meet the confidence_threshold.

recall

float

Output only. Recall (True Positive Rate) for the given confidence threshold.

precision

float

Output only. Precision for the given confidence threshold.

false_positive_rate

float

Output only. False Positive Rate for the given confidence threshold.

f1_score

float

Output only. The harmonic mean of recall and precision.

recall_at1

float

Output only. The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each example.

precision_at1

float

Output only. The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each example.

false_positive_rate_at1

float

Output only. The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each example.

f1_score_at1

float

Output only. The harmonic mean of recall_at1 and precision_at1.

true_positive_count

int64

Output only. The number of model created labels that match a ground truth label.

false_positive_count

int64

Output only. The number of model created labels that do not match a ground truth label.

false_negative_count

int64

Output only. The number of ground truth labels that are not matched by a model created label.

true_negative_count

int64

Output only. The number of labels that were not created by the model but that, had they been created, would not have matched a ground truth label.

ConfusionMatrix

Confusion matrix of the model running the classification.

Fields
annotation_spec_id[]

string

Output only. IDs of the annotation specs used in the confusion matrix.

row[]

Row

Output only. Rows in the confusion matrix. The number of rows is equal to the size of annotation_spec_id. row[i].value[j] is the number of examples that have ground truth of the annotation_spec_id[i] and are predicted as annotation_spec_id[j] by the model being evaluated.

Row

Output only. A row in the confusion matrix.

Fields
example_count[]

int32

Output only. Value of the specific cell in the confusion matrix. The number of values each row has (i.e. the length of the row) is equal to the length of the annotation_spec_id field or, if that one is not populated, length of the display_name field.
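
To make the row[i].value[j] semantics concrete, here is a small self-contained Python sketch with made-up counts for three hypothetical annotation specs (none of these values come from the API):

annotation_spec_id = ["1", "2", "3"]  # hypothetical spec IDs
rows = [                              # rows[i][j]: ground truth i, predicted j
    [40, 3, 2],
    [5, 30, 1],
    [0, 4, 28],
]

# Diagonal cells are correct predictions; a per-spec recall is the
# diagonal count divided by the row total.
for i, spec in enumerate(annotation_spec_id):
    recall = rows[i][i] / sum(rows[i])
    print("recall for spec", spec, "=", round(recall, 2))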

ClassificationType

Type of the classification problem.

Enums
CLASSIFICATION_TYPE_UNSPECIFIED An un-set value of this enum.
MULTICLASS At most one label is allowed per example.
MULTILABEL Multiple labels are allowed for one example.

CreateDatasetRequest

Request message for AutoMl.CreateDataset.

Fields
parent

string

The resource name of the project to create the dataset for.

Authorization requires the following Google IAM permission on the specified resource parent:

  • automl.datasets.create

dataset

Dataset

The dataset to create.

CreateModelOperationMetadata

Details of CreateModel operation.

CreateModelRequest

Request message for AutoMl.CreateModel.

Fields
parent

string

Resource name of the parent project where the model is being created.

Authorization requires the following Google IAM permission on the specified resource parent:

  • automl.models.create

model

Model

The model to create.

Dataset

A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.

Fields
name

string

Output only. The resource name of the dataset. Form: projects/{project_id}/locations/{location_id}/datasets/{dataset_id}

display_name

string

Required. The name of the dataset to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9.

description

string

User-provided description of the dataset. The description can be up to 25000 characters long.

example_count

int32

Output only. The number of examples in the dataset.

create_time

Timestamp

Output only. Timestamp when this dataset was created.

etag

string

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

Union field dataset_metadata. Required. The dataset metadata that is specific to the problem type. dataset_metadata can be only one of the following:
text_classification_dataset_metadata

TextClassificationDatasetMetadata

Metadata for a dataset used for text classification.

text_extraction_dataset_metadata

TextExtractionDatasetMetadata

Metadata for a dataset used for text extraction.

text_sentiment_dataset_metadata

TextSentimentDatasetMetadata

Metadata for a dataset used for text sentiment.

DeleteDatasetRequest

Request message for AutoMl.DeleteDataset.

Fields
name

string

The resource name of the dataset to delete.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.datasets.delete

DeleteModelRequest

Request message for AutoMl.DeleteModel.

Fields
name

string

Resource name of the model being deleted.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.delete

DeleteOperationMetadata

Details of operations that perform deletes of any entities.

DeployModelOperationMetadata

Details of DeployModel operation.

DeployModelRequest

Request message for AutoMl.DeployModel.

Fields
name

string

Resource name of the model to deploy.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.deploy

Document

A structured text document, e.g. a PDF.

Fields
input_config

DocumentInputConfig

An input config specifying the content of the document.

DocumentInputConfig

Input configuration of a Document.

Fields
gcs_source

GcsSource

The Google Cloud Storage location of the document file. Only a single path should be given. Max supported size: 512MB. Supported extensions: .PDF.

ExamplePayload

Example data used for training or prediction.

Fields
Union field payload. Required. Input only. The example data. payload can be only one of the following:
text_snippet

TextSnippet

Example text.

document

Document

Example document.

row

Row

Example relational table row.

ExportDataOperationMetadata

Details of ExportData operation.

Fields
output_info

ExportDataOutputInfo

Output only. Information further describing this export data's output.

ExportDataOutputInfo

Further describes this export data's output. Supplements OutputConfig.

Fields
Union field output_location. The output location to which the exported data is written. output_location can be only one of the following:
gcs_output_directory

string

The full path of the Google Cloud Storage directory created, into which the exported data is written.

bigquery_output_dataset

string

The path of the BigQuery dataset created, in bq://projectId.bqDatasetId format, into which the exported data is written.

ExportDataRequest

Request message for AutoMl.ExportData.

Fields
name

string

Required. The resource name of the dataset.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.datasets.export

output_config

OutputConfig

Required. The desired output location.

ExportEvaluatedExamplesOperationMetadata

Details of EvaluatedExamples operation.

Fields
output_info

ExportEvaluatedExamplesOutputInfo

Output only. Information further describing the output of this evaluated examples export.

ExportEvaluatedExamplesOutputInfo

Further describes the output of the evaluated examples export. Supplements ExportEvaluatedExamplesOutputConfig.

Fields
bigquery_output_dataset

string

The path of the BigQuery dataset created, in bq://projectId.bqDatasetId format, into which the output of export evaluated examples is written.

ExportModelOperationMetadata

Details of ExportModel operation.

Fields
output_info

ExportModelOutputInfo

Output only. Information further describing the output of this model export.

ExportModelOutputInfo

Further describes the output of model export. Supplements ModelExportOutputConfig.

Fields
gcs_output_directory

string

The full path of the Google Cloud Storage directory created, into which the model will be exported.

GcsDestination

The Google Cloud Storage location where the output is to be written to.

Fields
output_uri_prefix

string

Required. Google Cloud Storage URI to the output directory, up to 2000 characters long. Accepted form: a prefix path, e.g. gs://bucket/directory. The requesting user must have write permission to the bucket. The directory is created if it doesn't exist.

GcsSource

The Google Cloud Storage location for the input content.

Fields
input_uris[]

string

Required. Google Cloud Storage URIs to input files, up to 2000 characters long. Accepted form: a full object path, e.g. gs://bucket/directory/object.csv.

GetAnnotationSpecRequest

Request message for AutoMl.GetAnnotationSpec.

Fields
name

string

The resource name of the annotation spec to retrieve.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.annotationSpecs.get

GetDatasetRequest

Request message for AutoMl.GetDataset.

Fields
name

string

The resource name of the dataset to retrieve.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.datasets.get

GetModelEvaluationRequest

Request message for AutoMl.GetModelEvaluation.

Fields
name

string

Resource name for the model evaluation.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.modelEvaluations.get

GetModelRequest

Request message for AutoMl.GetModel.

Fields
name

string

Resource name of the model.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.get

ImportDataOperationMetadata

Details of ImportData operation.

ImportDataRequest

Request message for AutoMl.ImportData.

Fields
name

string

Required. Dataset name. Dataset must already exist. All imported annotations and examples will be added.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.datasets.import

input_config

InputConfig

Required. The desired input location and its domain specific semantics, if any.

InputConfig

Input configuration for ImportData Action.

The input format depends on the dataset_metadata of the Dataset into which the import is happening. Unless specified otherwise, the input source must be a gcs_source. Additionally, any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an "example" file (that is, image, video, etc.) with identical content (even under a different GCS_FILE_PATH) is mentioned multiple times, its labels, bounding boxes, and so on are appended. The same file should always be provided with the same ML_USE and GCS_FILE_PATH; if it is not, these values are selected nondeterministically from the given ones.

The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this section. The formats are:

AutoML Natural Language

Entity Extraction

See Preparing your training data for more information.

One or more CSV files, with each line in the format:

ML_USE,GCS_FILE_PATH

  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
  • GCS_FILE_PATH - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text or documents for model training.

After the training data set has been determined from the TRAIN and UNASSIGNED CSV files, the training data is divided into train and validation data sets: 70% for training and 30% for validation.

For example:

TRAIN,gs://folder/file1.jsonl
VALIDATE,gs://folder/file2.jsonl
TEST,gs://folder/file3.jsonl

In-line JSONL files

In-line .JSONL files contain, per line, a JSON document that wraps a text_snippet field followed by one or more annotations fields, which have display_name and text_extraction fields to describe the entity from the text snippet. Multiple JSON documents can be separated using line breaks (\n).

The supplied text must be annotated exhaustively. For example, if you include the text "horse", but do not label it as "animal", then "horse" is assumed to not be an "animal".

Any given text snippet content must be 30,000 characters or fewer, and must be UTF-8 NFC encoded. ASCII is accepted because it is valid UTF-8 NFC.

For example:

{
  "text_snippet": {
    "content": "dog car cat"
  },
  "annotations": [
     {
       "display_name": "animal",
       "text_extraction": {
         "text_segment": {"start_offset": 0, "end_offset": 2}
      }
     },
     {
      "display_name": "vehicle",
       "text_extraction": {
         "text_segment": {"start_offset": 4, "end_offset": 6}
       }
     },
     {
       "display_name": "animal",
       "text_extraction": {
         "text_segment": {"start_offset": 8, "end_offset": 10}
       }
     }
 ]
}\n
{
   "text_snippet": {
     "content": "This dog is good."
   },
   "annotations": [
      {
        "display_name": "animal",
        "text_extraction": {
          "text_segment": {"start_offset": 5, "end_offset": 7}
        }
      }
   ]
}

JSONL files that reference documents

.JSONL files contain, per line, a JSON document that wraps an input_config field that contains the path to a source PDF document. Multiple JSON documents can be separated using line breaks (\n).

For example:

{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ]
      }
    }
  }
}\n
{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ]
      }
    }
  }
}

In-line JSONL files with PDF layout information

Note: You can only annotate PDF files using the UI. The format described below applies to annotated PDF files exported using the UI or exportData.

In-line .JSONL files for PDF documents contain, per line, a JSON document that wraps a document field that provides the textual content of the PDF document and the layout information.

For example:

{
  "document": {
    "document_text": {
      "content": "dog car cat"
    },
    "layout": [
      {
        "text_segment": {
          "start_offset": 0,
          "end_offset": 11
        },
        "page_number": 1,
        "bounding_poly": {
          "normalized_vertices": [
            {"x": 0.1, "y": 0.1},
            {"x": 0.1, "y": 0.3},
            {"x": 0.3, "y": 0.3},
            {"x": 0.3, "y": 0.1}
          ]
        },
        "text_segment_type": "TOKEN"
      }
    ],
    "document_dimensions": {
      "width": 8.27,
      "height": 11.69,
      "unit": "INCH"
    },
    "page_count": 3
  },
  "annotations": [
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 0, "end_offset": 3}
      }
    },
    {
      "display_name": "vehicle",
      "text_extraction": {
        "text_segment": {"start_offset": 4, "end_offset": 7}
      }
    },
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 8, "end_offset": 11}
      }
    }
  ]
}

Classification

See Preparing your training data for more information.

CSV file(s) with each line in format:

ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,...

TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, i.e. prefixed by "gs://", it is treated as a GCS_FILE_PATH; otherwise, if the content is enclosed within double quotes (""), it is treated as a TEXT_SNIPPET. In the GCS_FILE_PATH case, the path must lead to a .txt file with UTF-8 encoding, for example "gs://folder/content.txt", and the content in it is extracted as a text snippet. In the TEXT_SNIPPET case, the column content, excluding quotes, is treated as the text snippet to import. In both cases, the text snippet/file size must be within 128kB. A maximum of 100 unique labels is allowed per CSV row.

Sample rows:

TRAIN,"They have bad food and very rude",RudeService,BadFood
TRAIN,gs://folder/content.txt,SlowService
TEST,"Typically always bad service there.",RudeService
VALIDATE,"Stomach ache to go.",BadFood

Sentiment Analysis

See Preparing your training data for more information.

CSV file(s) with each line in format:

ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT

TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH; otherwise it is treated as a TEXT_SNIPPET. In the GCS_FILE_PATH case, the path must lead to a .txt file with UTF-8 encoding, for example "gs://folder/content.txt", and the content in it is extracted as a text snippet. In the TEXT_SNIPPET case, the column content itself is treated as the text snippet to import. In both cases, the text snippet must be up to 500 characters long.

Sample rows:

TRAIN,"@freewrytin this is way too good for your product",2
TRAIN,"I need this product so bad",3
TEST,"Thank you for this product.",4
VALIDATE,gs://folder/content.txt,2

Input field definitions:
ML_USE
("TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED") Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when user has no preference.
GCS_FILE_PATH
A path to file on Google Cloud Storage, e.g. "gs://folder/image1.png".
LABEL
A display name of an object on an image, video, etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. For each label an AnnotationSpec is created whose display_name becomes the label; AnnotationSpecs are returned in predictions.
TEXT_SNIPPET
The content of a text snippet, UTF-8 encoded, enclosed within double quotes ("").
DOCUMENT
A field that provides the textual content of the document and its layout information.
SENTIMENT
An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment - a higher value means a more positive sentiment. All the values are completely relative, i.e. neither 0 needs to mean a negative or neutral sentiment nor sentiment_max a positive one - it is only required that 0 is the least positive sentiment in the data and sentiment_max the most positive one. The SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range is used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be large.

Errors: If any of the provided CSV files can't be parsed, or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.

Fields
params

map<string, string>

Additional domain-specific parameters describing the semantic of the imported data, any string must be up to 25000 characters long.

  • For Tables: schema_inference_version - (integer) Required. The version of the algorithm that should be used for the initial inference of the schema (columns' DataTypes) of the table the data is being imported into. Allowed values: "1".

Union field source. The source of the input. source can be only one of the following:
gcs_source

GcsSource

The Google Cloud Storage location for the input content. In ImportData, the gcs_source points to a CSV file with the structure described above.

bigquery_source

BigQuerySource

The BigQuery location for the input content.

ListDatasetsRequest

Request message for AutoMl.ListDatasets.

Fields
parent

string

The resource name of the project from which to list datasets.

Authorization requires the following Google IAM permission on the specified resource parent:

  • automl.datasets.list

filter

string

An expression for filtering the results of the request.

  • dataset_metadata: test for existence of metadata.

  • display_name: =, !=, and regex(). Uses re2 syntax.

Some examples of using the filter are:

  • text_extraction_dataset_metadata:* --> The dataset has text_extraction_dataset_metadata.

  • regex(display_name, "^A") --> The dataset's display name starts with "A".

page_size

int32

Requested page size. Server may return fewer results than requested. If unspecified, server will pick a default size.

page_token

string

A token identifying a page of results for the server to return. Typically obtained via ListDatasetsResponse.next_page_token of the previous AutoMl.ListDatasets call.

ListDatasetsResponse

Response message for AutoMl.ListDatasets.

Fields
datasets[]

Dataset

The datasets read.

next_page_token

string

A token to retrieve the next page of results. Pass it to ListDatasetsRequest.page_token to obtain that page.
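
In practice the generated clients follow next_page_token for you. A hedged sketch with the google-cloud-automl Python client, whose list_datasets call returns an iterator that fetches subsequent pages transparently (the parent is a placeholder):

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"

# Iterating pulls additional pages via page_token as needed.
for dataset in client.list_datasets(parent):
    print(dataset.name, dataset.display_name, dataset.example_count)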

ListModelEvaluationsRequest

Request message for AutoMl.ListModelEvaluations.

Fields
parent

string

Resource name of the model to list the model evaluations for. If modelId is set as "-", this will list model evaluations from across all models of the parent location.

Authorization requires the following Google IAM permission on the specified resource parent:

  • automl.modelEvaluations.list

filter

string

An expression for filtering the results of the request.

  • annotation_spec_id - for =, != or existence. See example below for the last.

Some examples of using the filter are:

  • annotation_spec_id!=4 --> The model evaluation was done for annotation spec with ID different than 4.
  • NOT annotation_spec_id:* --> The model evaluation was done for aggregate of all annotation specs.

page_size

int32

Requested page size.

page_token

string

A token identifying a page of results for the server to return. Typically obtained via ListModelEvaluationsResponse.next_page_token of the previous AutoMl.ListModelEvaluations call.

ListModelEvaluationsResponse

Response message for AutoMl.ListModelEvaluations.

Fields
model_evaluation[]

ModelEvaluation

List of model evaluations in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass it to the ListModelEvaluationsRequest.page_token field of a new AutoMl.ListModelEvaluations request to obtain that page.

ListModelsRequest

Request message for AutoMl.ListModels.

Fields
parent

string

Resource name of the project, from which to list the models.

Authorization requires the following Google IAM permission on the specified resource parent:

  • automl.models.list

filter

string

An expression for filtering the results of the request.

  • model_metadata: test for existence of metadata.

  • dataset_id: = or != a dataset ID.

  • display_name: =, !=, and regex(). Uses re2 syntax.

Some examples of using the filter are:

  • text_extraction_model_metadata:* --> The model has text_extraction_model_metadata.

  • dataset_id=5 --> The model was created from a dataset with an ID of 5.

  • regex(display_name, "^A") --> The model's display name starts with "A".

page_size

int32

Requested page size.

page_token

string

A token identifying a page of results for the server to return. Typically obtained via ListModelsResponse.next_page_token of the previous AutoMl.ListModels call.

ListModelsResponse

Response message for AutoMl.ListModels.

Fields
model[]

Model

List of models in the requested page.

next_page_token

string

A token to retrieve the next page of results. Pass it to ListModelsRequest.page_token to obtain that page.
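
A hedged sketch of a filtered ListModels call; the filter string follows the filter field documented above, and filter_ is how the pre-2.0 Python client exposed that parameter (the parent is a placeholder):

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"

# Only models that have text_extraction_model_metadata.
for model in client.list_models(parent, filter_="text_extraction_model_metadata:*"):
    print(model.name, model.deployment_state)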

Model

API proto representing a trained machine learning model.

Fields
name

string

Output only. Resource name of the model. Format: projects/{project_id}/locations/{location_id}/models/{model_id}

display_name

string

Required. The name of the model to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. It must start with a letter.

dataset_id

string

Required. The resource ID of the dataset used to create the model. The dataset must come from the same ancestor project and location.

create_time

Timestamp

Output only. Timestamp when the model training finished and can be used for prediction.

update_time

Timestamp

Output only. Timestamp when this model was last updated.

deployment_state

DeploymentState

Output only. Deployment state of the model. A model can only serve prediction requests after it gets deployed.

Union field model_metadata. Required. The model metadata that is specific to the problem type. Must match the metadata type of the dataset used to train the model. model_metadata can be only one of the following:
text_classification_model_metadata

TextClassificationModelMetadata

Metadata for text classification models.

text_extraction_model_metadata

TextExtractionModelMetadata

Metadata for text extraction models.

text_sentiment_model_metadata

TextSentimentModelMetadata

Metadata for text sentiment models.

DeploymentState

Deployment state of the model.

Enums
DEPLOYMENT_STATE_UNSPECIFIED Should not be used, an un-set enum has this value by default.
DEPLOYED Model is deployed.
UNDEPLOYED Model is not deployed.

ModelEvaluation

Evaluation results of a model.

Fields
name

string

Output only. Resource name of the model evaluation. Format:

projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}

annotation_spec_id

string

Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.

NOTE: Currently there is no way to obtain the display_name of the annotation spec from its ID. To see the display_names, review the model evaluations in the AutoML UI.

display_name

string

Output only. The value of display_name at the moment the model was trained. Because this field returns a value at model training time, the values for different models trained from the same dataset may differ, since display names could have been changed between the two trainings.

create_time

Timestamp

Output only. Timestamp when this model evaluation was created.

evaluated_example_count

int32

Output only. The number of examples used for model evaluation, i.e. for which ground truth from time of model creation is compared against the predicted annotations created by the model. For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the annotation_spec_id.

Union field metrics. Output only. Problem type specific evaluation metrics. metrics can be only one of the following:
classification_evaluation_metrics

ClassificationEvaluationMetrics

Evaluation metrics for classification models.

text_sentiment_evaluation_metrics

TextSentimentEvaluationMetrics

Evaluation metrics for text sentiment models.

text_extraction_evaluation_metrics

TextExtractionEvaluationMetrics

Evaluation metrics for text extraction models.

OperationMetadata

Metadata used across all long running operations returned by AutoML API.

Fields
progress_percent

int32

Output only. Progress of operation. Range: [0, 100]. Not used currently.

partial_failures[]

Status

Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard GCP error details.

create_time

Timestamp

Output only. Time when the operation was created.

update_time

Timestamp

Output only. Time when the operation was updated for the last time.

Union field details. Output only. Details of the specific operation. Even if this field is empty, its presence allows different types of operations to be distinguished. details can be only one of the following:
delete_details

DeleteOperationMetadata

Details of a Delete operation.

deploy_model_details

DeployModelOperationMetadata

Details of a DeployModel operation.

undeploy_model_details

UndeployModelOperationMetadata

Details of an UndeployModel operation.

create_model_details

CreateModelOperationMetadata

Details of CreateModel operation.

import_data_details

ImportDataOperationMetadata

Details of ImportData operation.

batch_predict_details

BatchPredictOperationMetadata

Details of BatchPredict operation.

export_data_details

ExportDataOperationMetadata

Details of ExportData operation.

export_model_details

ExportModelOperationMetadata

Details of ExportModel operation.

export_evaluated_examples_details

ExportEvaluatedExamplesOperationMetadata

Details of ExportEvaluatedExamples operation.

OutputConfig

Output configuration for ExportData.

As destination, the gcs_destination must be set unless specified otherwise for a domain. Only ground truth annotations are exported (annotations that are not approved are not exported).

The outputs correspond to how the data was imported, and may be used as input to import data. The output formats are represented as EBNF with literal commas and the same non-terminal symbol definitions as in InputConfig. The output is one or more CSV files, with each line in the following format:

AutoML Natural Language Classification

ML_USE,GCS_FILE_PATH,LABEL(S)
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
  • GCS_FILE_PATH - Identifies a text file that contains the content of the example.

  • LABEL(S) - The classification label(s) for the sample.

AutoML Natural Language Entity Extraction

ML_USE,GCS_FILE_PATH
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
  • GCS_FILE_PATH - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text or documents for model training.

AutoML Natural Language Sentiment Analysis

ML_USE,GCS_FILE_PATH,SENTIMENT
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
  • GCS_FILE_PATH - Identifies a text file that contains the content of the example.

  • SENTIMENT - The sentiment score for the sample.

Fields
gcs_destination

GcsDestination

The Google Cloud Storage location where the output is to be written to. In the given directory a new directory will be created with name: export_data-<dataset-display-name>-<timestamp-of-export-call> where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export output will be written into that directory.

PredictRequest

Request message for PredictionService.Predict.

Fields
name

string

Name of the model requested to serve the prediction.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.predict

payload

ExamplePayload

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params

map<string, string>

Additional domain-specific parameters, any string must be up to 25000 characters long.

PredictResponse

Response message for PredictionService.Predict.

Fields
payload[]

AnnotationPayload

Prediction result.

metadata

map<string, string>

Additional domain-specific prediction response metadata.

Row

A representation of a row in a relational table.

Fields
column_spec_ids[]

string

The resource IDs of the column specs describing the columns of the row. If set, it must contain, though possibly in a different order, all input feature column_spec_ids of the Model this row is being passed to. Note: the values field below must match the order of this field, if this field is set.

values[]

Value

Required. The values of the row cells, given in the same order as the column_spec_ids, or, if not set, then in the same order as the input feature column_specs of the Model this row is being passed to.

TextClassificationDatasetMetadata

Dataset metadata for classification.

Fields
classification_type

ClassificationType

Required. Type of the classification problem.

TextClassificationModelMetadata

Model metadata that is specific to text classification.

TextExtractionAnnotation

Annotation for identifying spans of text.

Fields
score

float

Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence in correctness of the annotation.

text_segment

TextSegment

Required. The part of the original text to which this annotation pertains.

TextExtractionDatasetMetadata

Dataset metadata that is specific to text extraction.

TextExtractionEvaluationMetrics

Model evaluation metrics for text extraction problems.

Fields
au_prc

float

Output only. The Area under precision recall curve metric.

confidence_metrics_entries[]

ConfidenceMetricsEntry

Output only. Metrics that have confidence thresholds. Precision-recall curve can be derived from it.

ConfidenceMetricsEntry

Metrics for a single confidence threshold.

Fields
confidence_threshold

float

Output only. The confidence threshold value used to compute the metrics. Only annotations with score of at least this threshold are considered to be ones the model would return.

recall

float

Output only. Recall under the given confidence threshold.

precision

float

Output only. Precision under the given confidence threshold.

f1_score

float

Output only. The harmonic mean of recall and precision.

TextExtractionModelMetadata

Model metadata that is specific to text extraction.

TextSegment

A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.

Fields
content

string

Output only. The content of the TextSegment.

start_offset

int64

Required. Zero-based character index of the first character of the text segment (counting characters from the beginning of the text).

end_offset

int64

Required. Zero-based character index of the first character past the end of the text segment (counting characters from the beginning of the text). The character at end_offset is NOT included in the text segment.
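
The offset convention can be checked with ordinary string slicing; a short Python sketch using the "dog car cat" example from earlier in this page:

content = "dog car cat"
token = "car"

start_offset = content.index(token)      # 4: index of the first character
end_offset = start_offset + len(token)   # 7: one past the last character

# end_offset is exclusive, so slicing recovers the segment exactly.
assert content[start_offset:end_offset] == token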

TextSentimentAnnotation

Contains annotation details specific to text sentiment.

Fields
sentiment

int32

Output only. The sentiment with the semantic, as given to AutoMl.ImportData when populating the dataset from which the model used for the prediction had been trained. The sentiment values are between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive), with a higher value meaning a more positive sentiment. The values are completely relative, i.e. 0 means the least positive sentiment and sentiment_max means the most positive one among the sentiments present in the train data. Therefore, e.g. if the train data had only negative sentiment, then sentiment_max would still be negative (although the least negative). The sentiment shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API.

TextSentimentDatasetMetadata

Dataset metadata for text sentiment.

Fields
sentiment_max

int32

Required. A sentiment is expressed as an integer ordinal, where higher value means a more positive sentiment. The range of sentiments that will be used is between 0 and sentiment_max (inclusive on both ends), and all the values in the range must be represented in the dataset before a model can be created. sentiment_max value must be between 1 and 10 (inclusive).

TextSentimentEvaluationMetrics

Model evaluation metrics for text sentiment problems.

Fields
precision

float

Output only. Precision.

recall

float

Output only. Recall.

f1_score

float

Output only. The harmonic mean of recall and precision.

mean_absolute_error

float

Output only. Mean absolute error. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

mean_squared_error

float

Output only. Mean squared error. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

linear_kappa

float

Output only. Linear weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

quadratic_kappa

float

Output only. Quadratic weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

confusion_matrix

ConfusionMatrix

Output only. Confusion matrix of the evaluation. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

annotation_spec_id[]
(deprecated)

string

Output only. The annotation spec ids used for this evaluation. Deprecated .

TextSentimentModelMetadata

Model metadata that is specific to text sentiment.

TextSnippet

A representation of a text snippet.

Fields
content

string

Required. The content of the text snippet as a string. Up to 250000 characters long.

mime_type

string

The format of the source text. Currently the only two allowed values are "text/html" and "text/plain". If left blank the format is automatically determined from the type of the uploaded content.

content_uri

string

Output only. HTTP URI where you can download the content.

UndeployModelOperationMetadata

Details of UndeployModel operation.

UndeployModelRequest

Request message for AutoMl.UndeployModel.

Fields
name

string

Resource name of the model to undeploy.

Authorization requires the following Google IAM permission on the specified resource name:

  • automl.models.undeploy

UpdateDatasetRequest

Request message for AutoMl.UpdateDataset.

Fields
dataset

Dataset

The dataset which replaces the resource on the server.

Authorization requires the following Google IAM permission on the specified resource dataset:

  • automl.datasets.update

update_mask

FieldMask

The update mask applies to the resource; only the fields named in the mask are updated. See the google.protobuf.FieldMask definition for details.
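
A hedged sketch of a masked update with the google-cloud-automl Python client; only the fields named in the mask are changed, and the resource name is a placeholder:

from google.cloud import automl_v1beta1 as automl
from google.protobuf import field_mask_pb2

client = automl.AutoMlClient()
dataset = {
    "name": "projects/my-project/locations/us-central1/datasets/TEN123",
    "description": "Updated description.",
}
mask = field_mask_pb2.FieldMask(paths=["description"])

updated = client.update_dataset(dataset, update_mask=mask)
print(updated.description)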
