Module types (0.8.0)

API documentation for automl_v1.types module.

Classes

AnnotationPayload

Contains annotation information that is relevant to AutoML.

Annotation details for translation.

Annotation details for image object detection.

Annotation details for text sentiment.

Output only. The value of [display_name][google.cloud.automl.v1.AnnotationSpec.display_name] when the model was trained. Because this field returns a value at model training time, the returned value can differ across models trained from the same dataset, as the model owner may update the display_name between any two trainings.

AnnotationSpec

A definition of an annotation spec.

Required. The name of the annotation spec to show in the interface. The name can be up to 32 characters long and must match the regexp `[a-zA-Z0-9_]+`.

Any

API documentation for automl_v1.types.Any class.

BatchPredictInputConfig

Input configuration for BatchPredict Action.

The format of input depends on the ML problem of the model used for prediction. As input source the [gcs_source][google.cloud.automl.v1.InputConfig.gcs_source] is expected, unless specified otherwise.

The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

For Text Classification: one or more CSV files where each line is a single column:

::

GCS_FILE_PATH

GCS_FILE_PATH is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF. Text files can be no larger than 10MB in size.

Sample rows:

::

gs://folder/text1.txt
gs://folder/text2.pdf

For Text Sentiment: one or more CSV files where each line is a single column:

::

GCS_FILE_PATH

GCS_FILE_PATH is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF. Text files can be no larger than 128kB in size.

Sample rows:

::

gs://folder/text1.txt
gs://folder/text2.pdf

For Text Extraction: one or more JSONL (JSON Lines) files that provide either inline text or documents. You can use only one format, either inline text or documents, for a single call to [AutoMl.BatchPredict].

Each JSONL file contains, per line, a proto that wraps a temporary user-assigned TextSnippet ID (string up to 2000 characters long) called "id", a TextSnippet proto (in JSON representation) and zero or more TextFeature protos. Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded (ASCII already is). The IDs provided should be unique.

Each document JSONL file contains, per line, a proto that wraps a Document proto with input_config set. Only PDF documents are currently supported, and each PDF document cannot exceed 2MB in size.

Each JSONL file must not exceed 100MB in size, and no more than 20 JSONL files may be passed.

Sample inline JSONL file (Shown with artificial line breaks. Actual line breaks are denoted by "\n".):

::

{
  "id": "my_first_id",
  "text_snippet": { "content": "dog car cat" },
  "text_features": [
    {
      "text_segment": {"start_offset": 4, "end_offset": 6},
      "structural_type": "PARAGRAPH",
      "bounding_poly": {
        "normalized_vertices": [
          {"x": 0.1, "y": 0.1},
          {"x": 0.1, "y": 0.3},
          {"x": 0.3, "y": 0.3},
          {"x": 0.3, "y": 0.1}
        ]
      }
    }
  ]
}\n
{
  "id": "2",
  "text_snippet": {
    "content": "Extended sample content",
    "mime_type": "text/plain"
  }
}

Sample document JSONL file (Shown with artificial line breaks. Actual line breaks are denoted by "\n".):

::

 {
   "document": {
     "input_config": {
       "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ]
       }
     }
   }
 }\n
 {
   "document": {
     "input_config": {
       "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ]
       }
     }
   }
 }

Input field definitions:

GCS_FILE_PATH The path to a file on Google Cloud Storage. For example, "gs://folder/video.avi".

Errors:

If any of the provided CSV files can't be parsed or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and prediction does not happen. Regardless of overall success or failure, the per-row failures, up to a certain count cap, will be listed in Operation.metadata.partial_failures.

Required. The Google Cloud Storage location for the input content.
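
For illustration, a minimal sketch of building this input configuration with the Python client. The bucket path is hypothetical, and the exact constructor style varies across versions of the google-cloud-automl library:

::

from google.cloud import automl_v1

# BatchPredictInputConfig wraps a GcsSource that lists the CSV/JSONL inputs.
input_config = automl_v1.types.BatchPredictInputConfig(
    gcs_source=automl_v1.types.GcsSource(
        input_uris=["gs://folder/batch_prediction_input.csv"]  # hypothetical path
    )
)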

BatchPredictOperationMetadata

Details of BatchPredict operation.

Output only. Information further describing this batch predict's output.

BatchPredictOutputConfig

Output configuration for BatchPredict Action.

As destination the [gcs_destination][google.cloud.automl.v1.BatchPredictOutputConfig.gcs_destination] must be set unless specified otherwise for a domain. If gcs_destination is set then in the given directory a new directory is created. Its name will be "prediction-<model-display-name>-<timestamp-of-call>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. The contents of it depend on the ML problem the predictions are made for.

  • For Text Classification: In the created directory files text_classification_1.jsonl, text_classification_2.jsonl,..., text_classification_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.

    ::

    Each .JSONL file will contain, per line, a JSON representation of a
    proto that wraps input text (or pdf) file in
    the text snippet (or document) proto and a list of
    zero or more AnnotationPayload protos (called annotations), which
    have classification detail populated. A single text (or pdf) file
    will be listed only once with all its annotations, and its
    annotations will never be split across files.
    
    If prediction for any text (or pdf) file failed (partially or
    completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,...,
    `errors_N.jsonl` files will be created (N depends on total number of
    failed predictions). These files will have a JSON representation of a
    proto that wraps the input text (or pdf) file followed by exactly one
    `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__
    containing only `code` and `message`.

  • For Text Sentiment: In the created directory files text_sentiment_1.jsonl, text_sentiment_2.jsonl,..., text_sentiment_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.

    ::

    Each .JSONL file will contain, per line, a JSON representation of a
    proto that wraps input text (or pdf) file in
    the text snippet (or document) proto and a list of
    zero or more AnnotationPayload protos (called annotations), which
    have text_sentiment detail populated. A single text (or pdf) file
    will be listed only once with all its annotations, and its
    annotations will never be split across files.
    
    If prediction for any text (or pdf) file failed (partially or
    completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,...,
    `errors_N.jsonl` files will be created (N depends on total number of
    failed predictions). These files will have a JSON representation of a
    proto that wraps input text (or pdf) file followed by exactly one
    

`google.rpc.Status <https:%20//github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>__ containing onlycodeandmessage`.

  • For Text Extraction: In the created directory files text_extraction_1.jsonl, text_extraction_2.jsonl,..., text_extraction_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found. The contents of these .JSONL file(s) depend on whether the input used inline text or documents.

    ::

    If input was inline, then each .JSONL file will contain, per line, a
    JSON representation of a proto that wraps the text snippet's "id" (if
    specified) given in the request, followed by the input text snippet,
    and a list of zero or more AnnotationPayload protos (called
    annotations), which have text_extraction detail populated. A single
    text snippet will be listed only once with all its annotations, and
    its annotations will never be split across files.
    
    If input used documents, then each .JSONL file will contain, per
    line, a JSON representation of a proto that wraps the document proto
    given in the request, followed by its OCR-ed representation in the
    form of a text snippet, finally followed by a list of zero or more
    AnnotationPayload protos (called annotations), which have
    text_extraction detail populated and refer, via their indices, to the
    OCR-ed text snippet. A single document (and its text snippet) will be
    listed only once with all its annotations, and its annotations will
    never be split across files.
    
    If prediction for any text snippet failed (partially or completely),
    then additional errors_1.jsonl, errors_2.jsonl,..., errors_N.jsonl
    files will be created (N depends on total number of failed
    predictions). These files will have a JSON representation of a proto
    that wraps either the "id" : "<id_value>" (in case of inline) or the
    document proto (in case of document), followed by exactly one
    `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__
    containing only `code` and `message`.

    Required. The Google Cloud Storage location of the directory where the output is to be written to.
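
As an illustration, a sketch that parses these result files after copying them locally (the file names follow the pattern above; the camelCase field names assume the standard proto JSON representation):

::

import json
import glob

# Each line wraps the input text snippet (or document) plus a list of
# AnnotationPayload objects under "annotations".
for path in glob.glob("prediction-results/text_classification_*.jsonl"):
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            for annotation in record.get("annotations", []):
                print(annotation["displayName"], annotation["classification"]["score"])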

BatchPredictRequest

Request message for PredictionService.BatchPredict.

Required. The input configuration for batch prediction.

Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long (see the sketch after this list).

  • For Text Classification: score_threshold - (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.

  • For Image Classification: score_threshold - (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

  • For Image Object Detection: score_threshold - (float) When the model detects objects in the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5. max_bounding_box_count - (int64) No more than this number of bounding boxes will be produced per image. Default is 100; the requested value may be limited by the server.
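
For illustration, a minimal sketch of a batch prediction call with the Python client (the resource name and bucket paths are hypothetical; method signatures vary across library versions):

::

from google.cloud import automl_v1

prediction_client = automl_v1.PredictionServiceClient()
# Hypothetical model resource name.
name = "projects/my-project/locations/us-central1/models/TCN1234567890"
input_config = {"gcs_source": {"input_uris": ["gs://folder/input.csv"]}}
output_config = {"gcs_destination": {"output_uri_prefix": "gs://folder/output/"}}

# batch_predict returns a long-running operation; result() blocks until done.
# Note that params values are passed as strings.
operation = prediction_client.batch_predict(
    name, input_config, output_config, params={"score_threshold": "0.8"}
)
print(operation.result())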

BatchPredictResult

Result of the Batch Predict. This message is returned in [response][google.longrunning.Operation.response] of the operation returned by PredictionService.BatchPredict.

BoundingBoxMetricsEntry

Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.

Output only. The mean average precision, most often close to au_prc.

BoundingPoly

A bounding polygon of a detected object on a plane. On output both vertices and normalized_vertices are provided. The polygon is formed by connecting vertices in the order they are listed.

CancelOperationRequest

API documentation for automl_v1.types.CancelOperationRequest class.

ClassificationAnnotation

Contains annotation details specific to classification.

ClassificationEvaluationMetrics

Model evaluation metrics for classification problems.

Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.

Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of position_threshold, but from these no aggregated metrics are computed.

Output only. The annotation spec ids used for this evaluation.

CreateDatasetOperationMetadata

Details of CreateDataset operation.

CreateDatasetRequest

Request message for AutoMl.CreateDataset.

The dataset to create.
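
For illustration, a minimal sketch of creating a translation dataset with the Python client (the project, location and display name are hypothetical; signatures vary across library versions):

::

from google.cloud import automl_v1

client = automl_v1.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical

dataset = {
    "display_name": "my_dataset",
    "translation_dataset_metadata": {
        "source_language_code": "en",
        "target_language_code": "es",
    },
}
# In v1, CreateDataset is a long-running operation (see
# CreateDatasetOperationMetadata); result() waits for the created Dataset.
created = client.create_dataset(parent, dataset).result()
print(created.name)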

CreateModelOperationMetadata

Details of CreateModel operation.

CreateModelRequest

Request message for AutoMl.CreateModel.

The model to create.

Dataset

A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.

Metadata for a dataset used for translation.

Metadata for a dataset used for text classification.

Metadata for a dataset used for text extraction.

Output only. The resource name of the dataset. Form: projects/{project_id}/locations/{location_id}/datasets/{dataset_id}

User-provided description of the dataset. The description can be up to 25000 characters long.

Output only. Timestamp when this dataset was created.

Optional. The labels with user-defined metadata to organize your dataset. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter. See https://goo.gl/xmQnxf for more information on and examples of labels.

DeleteDatasetRequest

Request message for AutoMl.DeleteDataset.

DeleteModelRequest

Request message for AutoMl.DeleteModel.

DeleteOperationMetadata

Details of operations that perform deletes of any entities.

DeleteOperationRequest

API documentation for automl_v1.types.DeleteOperationRequest class.

DeployModelOperationMetadata

Details of DeployModel operation.

DeployModelRequest

Request message for AutoMl.DeployModel.

Model deployment metadata specific to Image Object Detection.

Resource name of the model to deploy.

Document

A structured text document e.g. a PDF.

The plain text version of this document.

The dimensions of the page in the document.

DocumentDimensions

Message that describes dimension of a document.

Width value of the document, works together with the unit.

DocumentInputConfig

Input configuration of a Document.

ExamplePayload

Example data used for training or prediction.

Example image.

Example document.

ExportDataOperationMetadata

Details of ExportData operation.

ExportDataRequest

Request message for AutoMl.ExportData.

Required. The desired output location.

ExportModelOperationMetadata

Details of ExportModel operation.

ExportModelRequest

Request message for AutoMl.ExportModel. Models need to be enabled for exporting, otherwise an error code will be returned.

Required. The desired output location and configuration.

FieldMask

API documentation for automl_v1.types.FieldMask class.

GcsDestination

The Google Cloud Storage location where the output is to be written to.

GcsSource

The Google Cloud Storage location for the input content.

GetAnnotationSpecRequest

Request message for AutoMl.GetAnnotationSpec.

GetDatasetRequest

Request message for AutoMl.GetDataset.

GetModelEvaluationRequest

Request message for AutoMl.GetModelEvaluation.

GetModelRequest

Request message for AutoMl.GetModel.

GetOperationRequest

API documentation for automl_v1.types.GetOperationRequest class.

Image

A representation of an image. Only images up to 30MB in size are supported.

Image content represented as a stream of bytes. Note: As with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.

ImageClassificationDatasetMetadata

Dataset metadata that is specific to image classification.

ImageClassificationModelDeploymentMetadata

Model deployment metadata specific to Image Classification.

ImageClassificationModelMetadata

Model metadata for image classification.

The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual train_cost will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the stop_reason will be MODEL_CONVERGED. Note: node_hour = actual_hour * number_of_nodes_involved. For model type cloud (default), the train budget must be between 8,000 and 800,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time. For model types mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1, mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, mobile-core-ml-high-accuracy-1, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.
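
As a worked example of the unit: a budget of 24,000 milli node hours is 24 node hours, i.e. one day in wall time on a single node. A hedged sketch of starting training with an explicit budget follows (identifiers are hypothetical; signatures vary across library versions):

::

from google.cloud import automl_v1

client = automl_v1.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical

model = {
    "display_name": "my_image_model",
    "dataset_id": "ICN1234567890",  # hypothetical dataset ID
    # 8,000 milli node hours = 8 node hours, the minimum for cloud models.
    "image_classification_model_metadata": {"train_budget_milli_node_hours": 8000},
}
# CreateModel is a long-running operation; result() blocks until training ends.
created_model = client.create_model(parent, model).result()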

Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.

Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.

ImageObjectDetectionAnnotation

Annotation details for image object detection.

Output only. The confidence that this annotation is positive for the parent example, value in [0, 1], higher means higher positivity confidence.

ImageObjectDetectionDatasetMetadata

Dataset metadata specific to image object detection.

ImageObjectDetectionEvaluationMetrics

Model evaluation metrics for image object detection problems. Evaluates prediction quality of labeled bounding boxes.

Output only. The bounding boxes match metrics for each Intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair.

ImageObjectDetectionModelDeploymentMetadata

Model deployment metadata specific to Image Object Detection.

ImageObjectDetectionModelMetadata

Model metadata specific to image object detection.

Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the qps_per_node field.

Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.

Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. 1,000 value in this field means 1 node hour. Guaranteed to not exceed the train budget.

ImportDataOperationMetadata

Details of ImportData operation.

ImportDataRequest

Request message for AutoMl.ImportData.

Required. The desired input location and its domain specific semantics, if any.

InputConfig

Input configuration for AutoMl.ImportData action.

The format of input depends on the dataset_metadata of the Dataset into which the import is happening. As input source the [gcs_source][google.cloud.automl.v1.InputConfig.gcs_source] is expected, unless specified otherwise. Additionally any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an "example" file (that is, image, video etc.) with identical content (even if it had a different GCS_FILE_PATH) is mentioned multiple times, then its label, bounding boxes etc. are appended. The same file should always be provided with the same ML_USE and GCS_FILE_PATH; if it is not, then these values are nondeterministically selected from the given ones.

The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

See `Preparing your training data <https://cloud.google.com/vision/automl/docs/prepare>`__ for more information.

CSV file(s) with each line in format:

::

ML_USE,GCS_FILE_PATH,LABEL,LABEL,...
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
  • GCS_FILE_PATH - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG, .WEBP, .BMP, .TIFF, .ICO.

  • LABEL - A label that identifies the object in the image.

For the MULTICLASS classification type, at most one LABEL is allowed per image. If an image has not yet been labeled, then it should be mentioned just once with no LABEL.

Some sample rows:

::

TRAIN,gs://folder/image1.jpg,daisy
TEST,gs://folder/image2.jpg,dandelion,tulip,rose
UNASSIGNED,gs://folder/image3.jpg,daisy
UNASSIGNED,gs://folder/image4.jpg

See `Preparing your training data <https://cloud.google.com/vision/automl/object-detection/docs/prepare>`__ for more information.

CSV file(s) with each line in format:

::

ML_USE,GCS_FILE_PATH,[LABEL],(BOUNDING_BOX | ,,,,,,,)
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
  • GCS_FILE_PATH - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. Each image is assumed to be exhaustively labeled.

  • LABEL - A label that identifies the object in the image specified by the BOUNDING_BOX.

  • BOUNDING_BOX - The vertices of an object in the example image. The minimum allowed BOUNDING_BOX edge length is 0.01, and no more than 500 BOUNDING_BOX instances per image are allowed (one BOUNDING_BOX per line). If an image has none of the looked-for objects then it should be mentioned just once with no LABEL and ",,,,,,," in place of the BOUNDING_BOX.

Four sample rows:

::

TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,,
TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,,
UNASSIGNED,gs://folder/im2.png,car,0.1,0.1,0.2,0.1,0.2,0.3,0.1,0.3
TEST,gs://folder/im3.png,,,,,,,,,

See `Preparing your training data </natural-language/automl/entity-analysis/docs/prepare>`__ for more information.

One or more CSV file(s) with each line in the following format:

::

ML_USE,GCS_FILE_PATH
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
  • GCS_FILE_PATH - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text or documents for model training.

After the training data set has been determined from the TRAIN and UNASSIGNED CSV files, the training data is divided into train and validation data sets, 70% for training and 30% for validation.

For example:

::

TRAIN,gs://folder/file1.jsonl
VALIDATE,gs://folder/file2.jsonl
TEST,gs://folder/file3.jsonl

In-line JSONL files

In-line .JSONL files contain, per line, a JSON document that wraps a [text_snippet][google.cloud.automl.v1.TextSnippet] field followed by one or more [annotations][google.cloud.automl.v1.AnnotationPayload] fields, which have display_name and text_extraction fields to describe the entity from the text snippet. Multiple JSON documents can be separated using line breaks (\n).

The supplied text must be annotated exhaustively. For example, if you include the text "horse", but do not label it as "animal", then "horse" is assumed to not be an "animal".

Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded. ASCII is accepted as it is UTF-8 NFC encoded.

For example:

::

{
  "text_snippet": {
    "content": "dog car cat"
  },
  "annotations": [
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 0, "end_offset": 2}
      }
    },
    {
      "display_name": "vehicle",
      "text_extraction": {
        "text_segment": {"start_offset": 4, "end_offset": 6}
      }
    },
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 8, "end_offset": 10}
      }
    }
  ]
}\n
{
  "text_snippet": {
    "content": "This dog is good."
  },
  "annotations": [
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 5, "end_offset": 7}
      }
    }
  ]
}
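
For illustration, a short sketch that produces a training file in this format (labels and offsets mirror the sample above; json.dumps emits no newlines, so writing one JSON document per line yields valid JSONL):

::

import json

examples = [
    {
        "text_snippet": {"content": "dog car cat"},
        "annotations": [
            {"display_name": "animal",
             "text_extraction": {"text_segment": {"start_offset": 0, "end_offset": 2}}},
            {"display_name": "vehicle",
             "text_extraction": {"text_segment": {"start_offset": 4, "end_offset": 6}}},
        ],
    },
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")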

JSONL files that reference documents

.JSONL files contain, per line, a JSON document that wraps an input_config that contains the path to a source PDF document. Multiple JSON documents can be separated using line breaks (\n).

For example:

::

{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ]
      }
    }
  }
}\n
{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ]
      }
    }
  }
}

In-line JSONL files with PDF layout information

Note: You can only annotate PDF files using the UI. The format described below applies to annotated PDF files exported using the UI or exportData.

In-line .JSONL files for PDF documents contain, per line, a JSON document that wraps a document field that provides the textual content of the PDF document and the layout information.

For example:

::

{
  "document": {
    "document_text": {
      "content": "dog car cat"
    },
    "layout": [
      {
        "text_segment": {
          "start_offset": 0,
          "end_offset": 11
        },
        "page_number": 1,
        "bounding_poly": {
          "normalized_vertices": [
            {"x": 0.1, "y": 0.1},
            {"x": 0.1, "y": 0.3},
            {"x": 0.3, "y": 0.3},
            {"x": 0.3, "y": 0.1}
          ]
        },
        "text_segment_type": "TOKEN"
      }
    ],
    "document_dimensions": {
      "width": 8.27,
      "height": 11.69,
      "unit": "INCH"
    },
    "page_count": 3
  },
  "annotations": [
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 0, "end_offset": 3}
      }
    },
    {
      "display_name": "vehicle",
      "text_extraction": {
        "text_segment": {"start_offset": 4, "end_offset": 7}
      }
    },
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 8, "end_offset": 11}
      }
    }
  ]
}
See `Preparing your training data <https://cloud.google.com/natural-language/automl/docs/prepare>`__ for more information.

One or more CSV file(s) with each line in the following format:

::

ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,...
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
  • TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with supported extension and UTF-8 encoding, for example, "gs://folder/content.txt". AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the size of the content must be 10MB or less. For zip files, the size of each file inside the zip must be 10MB or less.

    For the MULTICLASS classification type, at most one LABEL is allowed. The ML_USE and LABEL columns are optional. Supported file extensions: .TXT, .PDF, .ZIP

A maximum of 100 unique labels are allowed per CSV row.

Sample rows:

::

TRAIN,"They have bad food and very rude",RudeService,BadFood
gs://folder/content.txt,SlowService
TEST,gs://folder/document.pdf
VALIDATE,gs://folder/text_files.zip,BadFood

See `Preparing your training data <https://cloud.google.com/natural-language/automl/docs/prepare>`__ for more information.

CSV file(s) with each line in format:

::

ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT
  • ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

    • TRAIN - Rows in this file are used to train the model.
    • TEST - Rows in this file are used to test the model during training.
    • UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
  • TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with supported extension and UTF-8 encoding, for example, "gs://folder/content.txt". AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the size of the content must be 128kB or less. For zip files, the size of each file inside the zip must be 128kB or less.

    The ML_USE and SENTIMENT columns are optional. Supported file extensions: .TXT, .PDF, .ZIP

  • SENTIMENT - An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment - a higher value means a more positive sentiment. All the values are completely relative, i.e. neither 0 needs to mean a negative or neutral sentiment nor sentiment_max needs to mean a positive one - it is only required that 0 is the least positive sentiment in the data, and sentiment_max is the most positive one. The SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range will be used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be large.

Sample rows:

::

TRAIN,"@freewrytin this is way too good for your product",2
gs://folder/content.txt,3
TEST,gs://folder/document.pdf
VALIDATE,gs://folder/text_files.zip,2

Input field definitions:

ML_USE ("TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED") Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when user has no preference. GCS_FILE_PATH The path to a file on Google Cloud Storage. For example, "gs://folder/image1.png". LABEL A display name of an object on an image, video etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores(_), and ASCII digits 0-9. For each label an AnnotationSpec is created which display_name becomes the label; AnnotationSpecs are given back in predictions. BOUNDING_BOX (VERTEX,VERTEX,VERTEX,VERTEX | VERTEX,,,VERTEX,,) A rectangle parallel to the frame of the example (image, video). If 4 vertices are given they are connected by edges in the order provided, if 2 are given they are recognized as diagonally opposite vertices of the rectangle. VERTEX (COORDINATE,COORDINATE) First coordinate is horizontal (x), the second is vertical (y). COORDINATE A float in 0 to 1 range, relative to total length of image or video in given dimension. For fractions the leading non-decimal 0 can be omitted (i.e. 0.3 = .3). Point 0,0 is in top left. TEXT_SNIPPET The content of a text snippet, UTF-8 encoded, enclosed within double quotes (""). DOCUMENT A field that provides the textual content with document and the layout information.

Errors:

If any of the provided CSV files can't be parsed or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.

The Google Cloud Storage location for the input content. For AutoMl.ImportData, gcs_source points to a CSV file with a structure described in InputConfig.
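
For illustration, a minimal sketch of importing one of the CSV formats above with the Python client (the resource name and bucket path are hypothetical; signatures vary across library versions):

::

from google.cloud import automl_v1

client = automl_v1.AutoMlClient()
# Hypothetical dataset resource name; train.csv follows one of the formats above.
name = "projects/my-project/locations/us-central1/datasets/TCN1234567890"
input_config = {"gcs_source": {"input_uris": ["gs://folder/train.csv"]}}

# ImportData is a long-running operation; per-row failures are reported in
# Operation.metadata.partial_failures as described above.
client.import_data(name, input_config).result()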

ListDatasetsRequest

Request message for AutoMl.ListDatasets.

An expression for filtering the results of the request.

  • dataset_metadata - for existence of the case (e.g. image_classification_dataset_metadata:*).

Some examples of using the filter are:

  • translation_dataset_metadata:* --> The dataset has translation_dataset_metadata.

A token identifying a page of results for the server to return. Typically obtained via [ListDatasetsResponse.next_page_token][google.cloud.automl.v1.ListDatasetsResponse.next_page_token] of the previous [AutoMl.ListDatasets][google.cloud.automl.v1.AutoMl.ListDatasets] call.
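
For illustration, a sketch of listing datasets with the filter shown above (the project and location are hypothetical; older GAPIC clients spell the argument filter_):

::

from google.cloud import automl_v1

client = automl_v1.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical

# Pages are fetched lazily as the iterator is consumed.
for dataset in client.list_datasets(parent, filter_="translation_dataset_metadata:*"):
    print(dataset.name, dataset.display_name)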

ListDatasetsResponse

Response message for AutoMl.ListDatasets.

A token to retrieve the next page of results. Pass to [ListDatasetsRequest.page_token][google.cloud.automl.v1.ListDatasetsRequest.page_token] to obtain that page.

ListModelEvaluationsRequest

Request message for AutoMl.ListModelEvaluations.

An expression for filtering the results of the request.

  • annotation_spec_id - for =, != or existence. See example below for the last.

Some examples of using the filter are:

  • annotation_spec_id!=4 --> The model evaluation was done for annotation spec with ID different than 4.
  • NOT annotation_spec_id:* --> The model evaluation was done for aggregate of all annotation specs.

A token identifying a page of results for the server to return. Typically obtained via [ListModelEvaluationsResponse.next_page_token][google.cloud.automl.v1.ListModelEvaluationsResponse.next_page_token] of the previous [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations] call.

ListModelEvaluationsResponse

Response message for AutoMl.ListModelEvaluations.

A token to retrieve the next page of results. Pass to the [ListModelEvaluationsRequest.page_token][google.cloud.automl.v1.ListModelEvaluationsRequest.page_token] field of a new [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations] request to obtain that page.

ListModelsRequest

Request message for AutoMl.ListModels.

An expression for filtering the results of the request.

  • model_metadata - for existence of the case (e.g. image_classification_model_metadata:*).
  • dataset_id - for = or !=.

Some examples of using the filter are:

  • image_classification_model_metadata:* --> The model has image_classification_model_metadata.
  • dataset_id=5 --> The model was created from a dataset with ID 5.

A token identifying a page of results for the server to return. Typically obtained via [ListModelsResponse.next_page_token][google.cloud.automl.v1.ListModelsResponse.next_page_token] of the previous AutoMl.ListModels call.

ListModelsResponse

Response message for AutoMl.ListModels.

A token to retrieve the next page of results. Pass to [ListModelsRequest.page_token][google.cloud.automl.v1.ListModelsRequest.page_token] to obtain that page.

ListOperationsRequest

API documentation for automl_v1.types.ListOperationsRequest class.

ListOperationsResponse

API documentation for automl_v1.types.ListOperationsResponse class.

Model

API proto representing a trained machine learning model.

Metadata for translation models.

Metadata for text classification models.

Metadata for text extraction models.

Output only. Resource name of the model. Format: projects/{project_id}/locations/{location_id}/models/{model_id}

Required. The resource ID of the dataset used to create the model. The dataset must come from the same ancestor project and location.

Output only. Timestamp when this model was last updated.

Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

ModelEvaluation

Evaluation results of a model.

Model evaluation metrics for image and text classification.

Model evaluation metrics for image object detection.

Evaluation metrics for text extraction models.

Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.

Output only. Timestamp when this model evaluation was created.

ModelExportOutputConfig

Output configuration for ModelExport Action.

Required. The Google Cloud Storage location where the model is to be written to. This location may only be set for the following model formats: "tflite", "edgetpu_tflite", "tf_saved_model", "tf_js", "core_ml". Under the directory given as the destination a new one with name "model-export-<model-display-name>-<timestamp-of-export-call>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format, will be created. Inside it the model and any of its supporting files will be written.

Additional model-type and format specific parameters describing the requirements for the model files to be exported; any string must be up to 25000 characters long.
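
For illustration, a minimal sketch of exporting a model with the Python client (the resource name, format and bucket path are hypothetical; signatures vary across library versions):

::

from google.cloud import automl_v1

client = automl_v1.AutoMlClient()
name = "projects/my-project/locations/us-central1/models/IOD1234567890"  # hypothetical
output_config = {
    "model_format": "tflite",
    "gcs_destination": {"output_uri_prefix": "gs://folder/export/"},
}
# ExportModel is a long-running operation.
client.export_model(name, output_config).result()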

NormalizedVertex

A vertex represents a 2D point in the image. The normalized vertex coordinates are between 0 and 1 fractions relative to the original plane (image).

Required. Horizontal coordinate.

Operation

API documentation for automl_v1.types.Operation class.

OperationInfo

API documentation for automl_v1.types.OperationInfo class.

OperationMetadata

Metadata used across all long running operations returned by AutoML API.

Details of a Delete operation.

Details of an UndeployModel operation.

Details of CreateDataset operation.

Details of BatchPredict operation.

Details of ExportModel operation.

Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard GCP error details.

Output only. Time when the operation was updated for the last time.

OutputConfig

Output configuration for ExportData.

As destination the [gcs_destination][google.cloud.automl.v1.OutputConfig.gcs_destination] must be set unless specified otherwise for a domain. If gcs_destination is set then in the given directory a new directory is created. Its name will be "export_data-<dataset-display-name>-<timestamp-of-export-call>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. Only ground truth annotations are exported (not-approved annotations are not exported).

The outputs correspond to how the data was imported, and may be used as input to import data. The output formats are represented in EBNF with literal commas, and the non-terminal symbol definitions are the same as in import data's InputConfig:

  • For Image Classification: CSV file(s) image_classification_1.csv, image_classification_2.csv,..., image_classification_N.csv with each line in format: ML_USE,GCS_FILE_PATH,LABEL,LABEL,... where GCS_FILE_PATHs point at the original, source locations of the imported images. For MULTICLASS classification type, there can be at most one LABEL per example.

  • For Image Object Detection: CSV file(s) image_object_detection_1.csv, image_object_detection_2.csv,..., image_object_detection_N.csv with each line in format: ML_USE,GCS_FILE_PATH,[LABEL],(BOUNDING_BOX | ,,,,,,,) where GCS_FILE_PATHs point at the original, source locations of the imported images.

  • For Text Classification: In the created directory CSV file(s) text_classification_1.csv, text_classification_2.csv, ..., text_classification_N.csv will be created where N depends on the total number of examples exported. Each line in the CSV is of the format: ML_USE,GCS_FILE_PATH,LABEL,LABEL,... where GCS_FILE_PATHs point at the exported .txt files containing the text content of the imported example. For MULTICLASS classification type, there will be at most one LABEL per example.

  • For Text Sentiment: In the created directory CSV file(s) text_sentiment_1.csv, text_sentiment_2.csv, ..., text_sentiment_N.csv will be created where N depends on the total number of examples exported. Each line in the CSV is of the format: ML_USE,GCS_FILE_PATH,SENTIMENT where GCS_FILE_PATHs point at the exported .txt files containing the text content of the imported example.

  • For Text Extraction: CSV file text_extraction.csv, with each line in format: ML_USE,GCS_FILE_PATH GCS_FILE_PATH leads to a .JSONL (i.e. JSON Lines) file which contains, per line, a proto that wraps a TextSnippet proto (in json representation) followed by AnnotationPayload protos (called annotations). If initially documents had been imported, the JSONL will point at the original, source locations of the imported documents.

  • For Translation: CSV file translation.csv, with each line in format: ML_USE,GCS_FILE_PATH GCS_FILE_PATH leads to a .TSV file which describes examples that have given ML_USE, using the following row format per line: TEXT_SNIPPET (in source language) \tTEXT_SNIPPET (in target language)

    Required. The Google Cloud Storage location where the output is to be written to. For Image Object Detection and Text Extraction, in the given directory a new directory will be created with name: export_data-<dataset-display-name>-<timestamp-of-export-call> where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export output will be written into that directory.
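
For illustration, a minimal sketch of exporting a dataset with the Python client (the resource name and bucket path are hypothetical; signatures vary across library versions):

::

from google.cloud import automl_v1

client = automl_v1.AutoMlClient()
name = "projects/my-project/locations/us-central1/datasets/TEN1234567890"  # hypothetical
output_config = {"gcs_destination": {"output_uri_prefix": "gs://folder/export/"}}

# ExportData is a long-running operation; the CSV file(s) described above
# appear under the newly created export_data-* directory.
client.export_data(name, output_config).result()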

PredictRequest

Request message for PredictionService.Predict.

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

PredictResponse

Response message for PredictionService.Predict.

The preprocessed example that AutoML actually makes prediction on. Empty if AutoML does not preprocess the input example.

  • For Text Extraction: If the input is a .pdf file, the OCR'ed text will be provided in [document_text][google.cloud.automl.v1.Document.document_text].
  • For Text Classification: If the input is a .pdf file, the OCR'ed truncated text will be provided in [document_text][google.cloud.automl.v1.Document.document_text].
  • For Text Sentiment: If the input is a .pdf file, the OCR'ed truncated text will be provided in [document_text][google.cloud.automl.v1.Document.document_text].
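
For illustration, a minimal sketch of an online prediction call against a deployed text classification model (the resource name and snippet are hypothetical; params values are strings, and signatures vary across library versions):

::

from google.cloud import automl_v1

prediction_client = automl_v1.PredictionServiceClient()
name = "projects/my-project/locations/us-central1/models/TCN1234567890"  # hypothetical

payload = {"text_snippet": {"content": "great service", "mime_type": "text/plain"}}
response = prediction_client.predict(name, payload, params={"score_threshold": "0.6"})

# PredictResponse.payload is a list of AnnotationPayload messages.
for annotation in response.payload:
    print(annotation.display_name, annotation.classification.score)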

Status

API documentation for automl_v1.types.Status class.

TextClassificationDatasetMetadata

Dataset metadata for classification.

TextClassificationModelMetadata

Model metadata that is specific to text classification.

TextExtractionAnnotation

Annotation for identifying spans of text.

An entity annotation will set this, which is the part of the original text to which the annotation pertains.

TextExtractionDatasetMetadata

Dataset metadata that is specific to text extraction.

TextExtractionEvaluationMetrics

Model evaluation metrics for text extraction problems.

Output only. Metrics that have confidence thresholds. Precision-recall curve can be derived from it.

TextExtractionModelMetadata

Model metadata that is specific to text extraction.

TextSegment

A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.

Required. Zero-based character index of the first character of the text segment (counting characters from the beginning of the text).

TextSentimentAnnotation

Contains annotation details specific to text sentiment.

TextSentimentDatasetMetadata

Dataset metadata for text sentiment.

TextSentimentEvaluationMetrics

Model evaluation metrics for text sentiment problems.

Output only. Recall.

Output only. Mean absolute error. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

Output only. Linear weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

Output only. Confusion matrix of the evaluation. Only set for the overall model evaluation, not for evaluation of a single annotation spec.

TextSentimentModelMetadata

Model metadata that is specific to text sentiment.

TextSnippet

A representation of a text snippet.

Optional. The format of content. Currently the only two allowed values are "text/html" and "text/plain". If left blank, the format is automatically determined from the type of the uploaded content.

Timestamp

API documentation for automl_v1.types.Timestamp class.

TranslationAnnotation

Annotation details specific to translation.

TranslationDatasetMetadata

Dataset metadata that is specific to translation.

Required. The BCP-47 language code of the target language.

TranslationEvaluationMetrics

Evaluation metrics for the dataset.

Output only. BLEU score for base model.

TranslationModelMetadata

Model metadata that is specific to translation.

Output only. Inferred from the dataset. The source language (the BCP-47 language code) that is used for training.

UndeployModelOperationMetadata

Details of UndeployModel operation.

UndeployModelRequest

Request message for AutoMl.UndeployModel.

UpdateDatasetRequest

Request message for AutoMl.UpdateDataset.

Required. The update mask applies to the resource.

UpdateModelRequest

Request message for AutoMl.UpdateModel.

Required. The update mask applies to the resource.
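
For illustration, a minimal sketch of an update restricted by a field mask (the model name and new display name are hypothetical; the mask limits which fields of the resource are overwritten):

::

from google.cloud import automl_v1

client = automl_v1.AutoMlClient()
model = {
    "name": "projects/my-project/locations/us-central1/models/TCN1234567890",
    "display_name": "renamed_model",  # hypothetical new value
}
# Only fields listed in the mask are applied; everything else is left as-is.
updated = client.update_model(model, {"paths": ["display_name"]})
print(updated.display_name)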

WaitOperationRequest

API documentation for automl_v1.types.WaitOperationRequest class.