Index
AutoMl (interface)
PredictionService (interface)
AnnotationPayload (message)
AnnotationSpec (message)
BatchPredictInputConfig (message)
BatchPredictOperationMetadata (message)
BatchPredictOperationMetadata.BatchPredictOutputInfo (message)
BatchPredictOutputConfig (message)
BatchPredictRequest (message)
BatchPredictResult (message)
BoundingBoxMetricsEntry (message)
BoundingBoxMetricsEntry.ConfidenceMetricsEntry (message)
BoundingPoly (message)
ClassificationAnnotation (message)
ClassificationEvaluationMetrics (message)
ClassificationEvaluationMetrics.ConfidenceMetricsEntry (message)
ClassificationEvaluationMetrics.ConfusionMatrix (message)
ClassificationEvaluationMetrics.ConfusionMatrix.Row (message)
ClassificationType (enum)
CreateDatasetOperationMetadata (message)
CreateDatasetRequest (message)
CreateModelOperationMetadata (message)
CreateModelRequest (message)
Dataset (message)
DeleteDatasetRequest (message)
DeleteModelRequest (message)
DeleteOperationMetadata (message)
DeployModelOperationMetadata (message)
DeployModelRequest (message)
Document (message)
Document.Layout (message)
Document.Layout.TextSegmentType (enum)
DocumentDimensions (message)
DocumentDimensions.DocumentDimensionUnit (enum)
DocumentInputConfig (message)
ExamplePayload (message)
ExportDataOperationMetadata (message)
ExportDataOperationMetadata.ExportDataOutputInfo (message)
ExportDataRequest (message)
ExportModelOperationMetadata (message)
ExportModelOperationMetadata.ExportModelOutputInfo (message)
ExportModelRequest (message)
GcsDestination (message)
GcsSource (message)
GetAnnotationSpecRequest (message)
GetDatasetRequest (message)
GetModelEvaluationRequest (message)
GetModelRequest (message)
Image (message)
ImageClassificationDatasetMetadata (message)
ImageClassificationModelDeploymentMetadata (message)
ImageClassificationModelMetadata (message)
ImageObjectDetectionAnnotation (message)
ImageObjectDetectionDatasetMetadata (message)
ImageObjectDetectionEvaluationMetrics (message)
ImageObjectDetectionModelDeploymentMetadata (message)
ImageObjectDetectionModelMetadata (message)
ImportDataOperationMetadata (message)
ImportDataRequest (message)
InputConfig (message)
ListDatasetsRequest (message)
ListDatasetsResponse (message)
ListModelEvaluationsRequest (message)
ListModelEvaluationsResponse (message)
ListModelsRequest (message)
ListModelsResponse (message)
Model (message)
Model.DeploymentState (enum)
ModelEvaluation (message)
ModelExportOutputConfig (message)
NormalizedVertex (message)
OperationMetadata (message)
OutputConfig (message)
PredictRequest (message)
PredictResponse (message)
TextClassificationDatasetMetadata (message)
TextClassificationModelMetadata (message)
TextExtractionAnnotation (message)
TextExtractionDatasetMetadata (message)
TextExtractionEvaluationMetrics (message)
TextExtractionEvaluationMetrics.ConfidenceMetricsEntry (message)
TextExtractionModelMetadata (message)
TextSegment (message)
TextSentimentAnnotation (message)
TextSentimentDatasetMetadata (message)
TextSentimentEvaluationMetrics (message)
TextSentimentModelMetadata (message)
TextSnippet (message)
TranslationAnnotation (message)
TranslationDatasetMetadata (message)
TranslationEvaluationMetrics (message)
TranslationModelMetadata (message)
UndeployModelOperationMetadata (message)
UndeployModelRequest (message)
UpdateDatasetRequest (message)
UpdateModelRequest (message)
AutoMl
AutoML Server API.
The resource names are assigned by the server. The server never reuses names that it has created after the resources with those names are deleted.
An ID of a resource is the last element of the item's resource name. For a resource name of the form projects/{project_id}/locations/{location_id}/datasets/{dataset_id}, the ID for the item is {dataset_id}.
Currently the only supported location_id is "us-central1".
On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
CreateDataset
Creates a dataset.

CreateModel
Creates a model. Returns a Model in the response field when it completes.

DeleteDataset
Deletes a dataset and all of its contents. Returns an empty response in the response field when it completes, and delete_details in the metadata field.

DeleteModel
Deletes a model. Returns an empty response in the response field when it completes, and delete_details in the metadata field.

DeployModel
Deploys a model. If a model is already deployed, deploying it with the same parameters has no effect. Deploying with different parameters (for example, changing node_number) will reset the deployment state without pausing the model's availability. Only applicable for Text Classification, Image Object Detection, Tables, and Image Segmentation; all other domains manage deployment automatically. Returns an empty response in the response field when it completes.

ExportData
Exports dataset's data to the provided output location. Returns an empty response in the response field when it completes.

ExportModel
Exports a trained, "export-able", model to a user specified Google Cloud Storage location. A model is considered export-able if and only if it has an export format defined for it in ModelExportOutputConfig. Returns an empty response in the response field when it completes.

GetAnnotationSpec
Gets an annotation spec.

GetDataset
Gets a dataset.

GetModel
Gets a model.

GetModelEvaluation
Gets a model evaluation.

ImportData
Imports data into a dataset. For Tables this method can only be called on an empty Dataset. For Tables: a schema_inference_version parameter must be explicitly set. Returns an empty response in the response field when it completes.

ListDatasets
Lists datasets in a project.

ListModelEvaluations
Lists model evaluations.

ListModels
Lists models.

UndeployModel
Undeploys a model. If the model is not deployed this method has no effect. Only applicable for Text Classification, Image Object Detection and Tables; all other domains manage deployment automatically. Returns an empty response in the response field when it completes.

UpdateDataset
Updates a dataset.

UpdateModel
Updates a model.
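As an illustration only (not part of the reference): a minimal sketch of calling the AutoMl service with the google-cloud-automl Python client (v2+). The project ID, location, and display name are placeholder assumptions.

from google.cloud import automl

# Assumed project; "us-central1" is the only supported location_id.
client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"

dataset = automl.Dataset(
    display_name="my_text_dataset",
    text_classification_dataset_metadata=automl.TextClassificationDatasetMetadata(
        classification_type=automl.ClassificationType.MULTICLASS
    ),
)
# CreateDataset returns a long-running operation; result() blocks until done.
created_dataset = client.create_dataset(parent=parent, dataset=dataset).result()
print(f"Dataset name: {created_dataset.name}")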
PredictionService
AutoML Prediction API.
On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
BatchPredict
Perform a batch prediction. Unlike the online Predict, the batch prediction result is not immediately available in the response; instead, a long running operation object is returned. Once the operation is done, a BatchPredictResult is returned in its response field.

Predict
Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads.
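A minimal online prediction sketch, assuming the google-cloud-automl Python client (v2+) and a trained text classification model; the project and model IDs are placeholders:

from google.cloud import automl

prediction_client = automl.PredictionServiceClient()
model_full_id = automl.AutoMlClient.model_path(
    "my-project", "us-central1", "TCN0000000000000000000"
)

# The payload type must match the model's domain; here, a text snippet.
payload = automl.ExamplePayload(
    text_snippet=automl.TextSnippet(content="I love this product", mime_type="text/plain")
)
response = prediction_client.predict(name=model_full_id, payload=payload)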
AnnotationPayload
Contains annotation information that is relevant to AutoML.
Fields
annotation_spec_id
Output only. The resource ID of the annotation spec that this annotation pertains to. The annotation spec comes from either an ancestor dataset, or the dataset that was used to train the model in use.
display_name
Output only. The value of the annotation spec's display_name at the time the model was trained.
Union field detail. Output only. Additional information about the annotation specific to the AutoML domain. detail can be only one of the following:
translation
Annotation details for translation.
classification
Annotation details for content or image classification.
image_object_detection
Annotation details for image object detection.
text_extraction
Annotation details for text extraction.
text_sentiment
Annotation details for text sentiment.
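Continuing the online prediction sketch above (an assumption, not reference text): each element of PredictResponse.payload is an AnnotationPayload, and which detail field is set depends on the model's domain.

# For a classification model, the classification detail carries the score.
for annotation_payload in response.payload:
    print(annotation_payload.display_name, annotation_payload.classification.score)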
AnnotationSpec
A definition of an annotation spec.
Fields
name
Output only. Resource name of the annotation spec. Form: 'projects/{project_id}/locations/{location_id}/datasets/{dataset_id}/annotationSpecs/{annotation_spec_id}'
display_name
Required. The name of the annotation spec to show in the interface. The name can be up to 32 characters long and must match the regexp [a-zA-Z0-9_]+.
BatchPredictInputConfig
Input configuration for the BatchPredict action.
The format of input depends on the ML problem of the model used for prediction. As input source, the gcs_source is expected, unless specified otherwise.
The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:
AutoML Vision
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH
The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output.
Sample rows:
gs://folder/image1.jpeg
gs://folder/image2.gif
gs://folder/image3.png
Object Detection
One or more CSV files where each line is a single column:
GCS_FILE_PATH
The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output.
Sample rows:
gs://folder/image1.jpeg
gs://folder/image2.gif
gs://folder/image3.png
AutoML Video Intelligence
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END
GCS_FILE_PATH
is the Google Cloud Storage location of a video up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START
and TIME_SEGMENT_END
must be within the length of the video, and the end time must be after the start time.
Sample rows:
gs://folder/video1.mp4,10,40
gs://folder/video1.mp4,20,60
gs://folder/vid2.mov,0,inf
Object Tracking
One or more CSV files where each line is a single column:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END
GCS_FILE_PATH
is the Google Cloud Storage location of a video up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START
and TIME_SEGMENT_END
must be within the length of the video, and the end time must be after the start time.
Sample rows:
gs://folder/video1.mp4,10,40
gs://folder/video1.mp4,20,60
gs://folder/vid2.mov,0,inf
AutoML Natural Language
Classification
One or more CSV files where each line is a single column:
GCS_FILE_PATH
GCS_FILE_PATH
is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF, .TIF, .TIFF
Text files can be no larger than 10MB in size.
Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf
gs://folder/text3.tif
Sentiment Analysis
One or more CSV files where each line is a single column:
GCS_FILE_PATH
GCS_FILE_PATH
is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF, .TIF, .TIFF
Text files can be no larger than 128kB in size.
Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf
gs://folder/text3.tif
Entity Extraction
One or more JSONL (JSON Lines) files that either provide inline text or documents. You can only use one format, either inline text or documents, for a single call to AutoMl.BatchPredict.
Each inline JSONL file contains, per line, a proto that wraps a temporary user-assigned TextSnippet ID (string up to 2000 characters long) called "id", a TextSnippet proto (in JSON representation) and zero or more TextFeature protos. Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded (ASCII already is). The IDs provided should be unique.
Each document JSONL file contains, per line, a proto that wraps a Document proto with input_config
set. Each document cannot exceed 2MB in size.
Supported document extensions: .PDF, .TIF, .TIFF
Each JSONL file must not exceed 100MB in size, and no more than 20 JSONL files may be passed.
Sample inline JSONL file (Shown with artificial line breaks. Actual line breaks are denoted by "\n".):
{
  "id": "my_first_id",
  "text_snippet": { "content": "dog car cat"},
  "text_features": [
    {
      "text_segment": {"start_offset": 4, "end_offset": 6},
      "structural_type": "PARAGRAPH",
      "bounding_poly": {
        "normalized_vertices": [
          {"x": 0.1, "y": 0.1},
          {"x": 0.1, "y": 0.3},
          {"x": 0.3, "y": 0.3},
          {"x": 0.3, "y": 0.1}
        ]
      }
    }
  ]
}\n
{
"id": "2",
"text_snippet": {
"content": "Extended sample content",
"mime_type": "text/plain"
}
}
Sample document JSONL file (Shown with artificial line breaks. Actual line breaks are denoted by "\n".):
{
"document": {
"input_config": {
"gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ]
}
}
}
}\n
{
"document": {
"input_config": {
"gcs_source": { "input_uris": [ "gs://folder/document2.tif" ]
}
}
}
}
AutoML Tables
See Preparing your training data for more information.
You can use either gcs_source
or [bigquery_source][BatchPredictInputConfig.bigquery_source].
For gcs_source:
CSV file(s), each by itself 10GB or smaller and total size must be 100GB or smaller, where first file must have a header containing column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns.
The column names must contain the model's
[input_feature_column_specs'][google.cloud.automl.v1.TablesModelMetadata.input_feature_column_specs] [display_name-s][google.cloud.automl.v1.ColumnSpec.display_name] (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows, i.e. the CSV lines, will be attempted.
Sample rows from a CSV file:
"First Name","Last Name","Dob","Addresses" "John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]" "Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]}
For bigquery_source:
The URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller.
The column names must contain the model's
[input_feature_column_specs'][google.cloud.automl.v1.TablesModelMetadata.input_feature_column_specs] [display_name-s][google.cloud.automl.v1.ColumnSpec.display_name] (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows of the table will be attempted.
Input field definitions:
GCS_FILE_PATH - The path to a file on Google Cloud Storage. For example, "gs://folder/video.avi".
TIME_SEGMENT_START - (TIME_OFFSET) Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_SEGMENT_END - (TIME_OFFSET) Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_OFFSET - A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to a microsecond precision. "inf" is allowed, and it means the end of the example.
Errors:
If any of the provided CSV files can't be parsed, or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and prediction does not happen. Regardless of overall success or failure, the per-row failures, up to a certain count cap, will be listed in Operation.metadata.partial_failures.
Fields
gcs_source
Required. The Google Cloud Storage location for the input content.
BatchPredictOperationMetadata
Details of BatchPredict operation.
Fields
input_config
Output only. The input config that was given upon starting this batch predict operation.
output_info
Output only. Information further describing this batch predict's output.
BatchPredictOutputInfo
Further describes this batch predict's output. Supplements BatchPredictOperationMetadata.
Fields
gcs_output_directory
The full path of the Google Cloud Storage directory created, into which the prediction output is written.
BatchPredictOutputConfig
Output configuration for BatchPredict Action.
As destination the gcs_destination must be set unless specified otherwise for a domain. If gcs_destination is set then in the given directory a new directory is created. Its name will be "prediction-<model-display-name>-<timestamp-of-prediction-call>", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. The contents of it depend on the ML problem the predictions are made for.
- For Image Classification: In the created directory files image_classification_1.jsonl, image_classification_2.jsonl, ..., image_classification_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. A single image will be listed only once with all its annotations, and its annotations will never be split across files. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>" but here followed by exactly one google.rpc.Status (https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message fields.
- For Image Object Detection: In the created directory files image_object_detection_1.jsonl, image_object_detection_2.jsonl, ..., image_object_detection_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have image_object_detection detail populated. A single image will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>" but here followed by exactly one google.rpc.Status (https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message fields.
- For Video Classification: In the created directory a video_classification.csv file, and a .JSON file per each video classification requested in the input (i.e. each line in given CSV(s)), will be created.
The format of video_classification.csv is:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
where:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_classification.csv has precisely the same number of lines as the prediction input had).
JSON_FILE_NAME = Name of the .JSON file in the output directory, which contains prediction responses for the video time segment.
STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty.
Each .JSON file, assuming STATUS is "OK", will contain a list of
AnnotationPayload protos in JSON format, which are the predictions
for the video time segment the file is assigned to in the
video_classification.csv. All AnnotationPayload protos will have
video_classification field set, and will be sorted by
video_classification.type field (note that the returned types are
governed by the `classification_types` parameter in
PredictionService.BatchPredictRequest.params).
- For Video Object Tracking: In the created directory a video_object_tracking.csv file will be created, and multiple files video_object_tracking_1.json, video_object_tracking_2.json, ..., video_object_tracking_N.json, where N is the number of requests in the input (i.e. the number of lines in given CSV(s)).
The format of video_object_tracking.csv is:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
where:
GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_object_tracking.csv has precisely the same number of lines as the prediction input had).
JSON_FILE_NAME = Name of the .JSON file in the output directory, which contains prediction responses for the video time segment.
STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty.
Each .JSON file, assuming STATUS is "OK", will contain a list of
AnnotationPayload protos in JSON format, which are the predictions
for each frame of the video time segment the file is assigned to in
video_object_tracking.csv. All AnnotationPayload protos will have
video_object_tracking field set.
- For Text Classification: In the created directory files text_classification_1.jsonl, text_classification_2.jsonl, ..., text_classification_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.
Each .JSONL file will contain, per line, a JSON representation of a
proto that wraps input text file (or document) in
the text snippet (or document) proto and a list of
zero or more AnnotationPayload protos (called annotations), which
have classification detail populated. A single text file (or
document) will be listed only once with all its annotations, and its
annotations will never be split across files.
If prediction for any input file (or document) failed (partially or
completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,...,
`errors_N.jsonl` files will be created (N depends on total number of
failed predictions). These files will have a JSON representation of a
proto that wraps the input file followed by exactly one google.rpc.Status
(https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto)
containing only code and message.
- For Text Sentiment: In the created directory files text_sentiment_1.jsonl, text_sentiment_2.jsonl, ..., text_sentiment_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.
Each .JSONL file will contain, per line, a JSON representation of a
proto that wraps input text file (or document) in
the text snippet (or document) proto and a list of
zero or more AnnotationPayload protos (called annotations), which
have text_sentiment detail populated. A single text file (or
document) will be listed only once with all its annotations, and its
annotations will never be split across files.
If prediction for any input file (or document) failed (partially or
completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,...,
`errors_N.jsonl` files will be created (N depends on total number of
failed predictions). These files will have a JSON representation of a
proto that wraps the input file followed by exactly one google.rpc.Status
(https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto)
containing only code and message.
- For Text Extraction: In the created directory files text_extraction_1.jsonl, text_extraction_2.jsonl, ..., text_extraction_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found. The contents of these .JSONL file(s) depend on whether the input used inline text or documents. If input was inline, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the request text snippet's "id" (if specified), followed by the input text snippet, and a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated. A single text snippet will be listed only once with all its annotations, and its annotations will never be split across files. If input used documents, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the request document proto, followed by its OCR-ed representation in the form of a text snippet, finally followed by a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated and refer, via their indices, to the OCR-ed text snippet. A single document (and its text snippet) will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps either the "id" : "<id_value>" (in case of inline) or the document proto (in case of document) but here followed by exactly one google.rpc.Status (https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message.
- For Tables: Output depends on whether gcs_destination or bigquery_destination is set (either is allowed). Google Cloud Storage case: In the created directory files tables_1.csv, tables_2.csv, ..., tables_N.csv will be created, where N may be 1, and depends on the total number of the successfully predicted rows. For all CLASSIFICATION prediction_type-s: Each .csv file will contain a header, listing all columns' display_name-s given on input followed by M target column names in the format of "<target_column_specs display_name>_<target value>_score", where M is the number of distinct target values. Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, columns having the corresponding prediction scores. For REGRESSION and FORECASTING prediction_type-s: Each .csv file will contain a header, listing all columns' display_name-s given on input followed by the predicted target column with name in the format of "predicted_<target_column_specs display_name>". Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, column having the predicted target value. If prediction for any rows failed, then additional errors_1.csv, errors_2.csv, ..., errors_N.csv files will be created (N depends on total number of failed rows). These files will have an analogous format as tables_*.csv, but always with a single target column having google.rpc.Status (https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) represented as a JSON string, and containing only code and message. BigQuery case: bigquery_destination pointing to a BigQuery project must be set. In the given project a new dataset will be created with name prediction_<model-display-name>_<timestamp-of-prediction-call>, in which two tables will be created, predictions and errors. The predictions table's column names will be the input columns' display_name-s followed by the target column with name in the format of "predicted_<target_column_specs display_name>". The input feature columns will contain the respective values of successfully predicted rows, with the target column having an ARRAY of AnnotationPayloads, represented as STRUCT-s, containing TablesAnnotation. The errors table contains rows for which the prediction has failed; it has analogous input columns, while the target column name is in the format of "errors_<target_column_specs display_name>", and as a value has google.rpc.Status (https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) represented as a STRUCT, and containing only code and message.
Fields
gcs_destination
Required. The Google Cloud Storage location of the directory where the output is to be written to.
BatchPredictRequest
Request message for PredictionService.BatchPredict
.
Fields
name
Name of the model requested to serve the batch prediction. Authorization requires the following Google IAM permission on the specified resource name: automl.models.predict
input_config
Required. The input configuration for batch prediction.
output_config
Required. The configuration specifying where output predictions should be written.
params
Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long. Applies to AutoML Natural Language Classification, AutoML Vision Classification, AutoML Vision Object Detection, and AutoML Video Intelligence Object Tracking. WARNING: For some classification types, model evaluation is not done; the quality of such predictions depends on the training data, but there are no metrics provided to describe that quality.
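A hedged sketch of a full batch prediction call with the google-cloud-automl Python client (v2+); the URIs, the model ID, and the score_threshold param value are placeholder assumptions:

from google.cloud import automl

prediction_client = automl.PredictionServiceClient()
model_full_id = automl.AutoMlClient.model_path(
    "my-project", "us-central1", "IOD0000000000000000000"
)

input_config = automl.BatchPredictInputConfig(
    gcs_source=automl.GcsSource(input_uris=["gs://my-bucket/batch_inputs.csv"])
)
output_config = automl.BatchPredictOutputConfig(
    gcs_destination=automl.GcsDestination(output_uri_prefix="gs://my-bucket/outputs/")
)

# batch_predict returns a long-running operation; result() blocks until done.
response = prediction_client.batch_predict(
    name=model_full_id,
    input_config=input_config,
    output_config=output_config,
    params={"score_threshold": "0.8"},
)
response.result()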
BatchPredictResult
Result of the Batch Predict. This message is returned in the response field of the operation returned by PredictionService.BatchPredict.
Fields
metadata
Additional domain-specific prediction response metadata. Populated for AutoML Vision Object Detection and AutoML Video Intelligence Object Tracking.
BoundingBoxMetricsEntry
Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.
Fields
iou_threshold
Output only. The intersection-over-union threshold value used to compute this metrics entry.
mean_average_precision
Output only. The mean average precision, most often close to au_prc.
confidence_metrics_entries[]
Output only. Metrics for each label-match confidence_threshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. The precision-recall curve is derived from them.
ConfidenceMetricsEntry
Metrics for a single confidence threshold.
Fields
confidence_threshold
Output only. The confidence threshold value used to compute the metrics.
recall
Output only. Recall under the given confidence threshold.
precision
Output only. Precision under the given confidence threshold.
f1_score
Output only. The harmonic mean of recall and precision.
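The f1_score field above is the harmonic mean of precision and recall; a small illustrative helper (not reference code) showing the arithmetic:

def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall; defined as 0.0 when both are 0.
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

assert abs(f1_score(0.5, 1.0) - 2.0 / 3.0) < 1e-9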
BoundingPoly
A bounding polygon of a detected object on a plane. On output both vertices and normalized_vertices are provided. The polygon is formed by connecting vertices in the order they are listed.
Fields
normalized_vertices[]
Output only. The bounding polygon normalized vertices.
ClassificationAnnotation
Contains annotation details specific to classification.
Fields
score
Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence that the annotation is positive. If a user approves an annotation as negative or positive, the score value remains unchanged. If a user creates an annotation, the score is 0 for negative or 1 for positive.
ClassificationEvaluationMetrics
Model evaluation metrics for classification problems. Note: For Video Classification these metrics only describe the quality of the Video Classification predictions of "segment_classification" type.
Fields
au_prc
Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.
au_roc
Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.
log_loss
Output only. The Log Loss metric.
confidence_metrics_entry[]
Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of position_threshold, but from these no aggregated metrics are computed.
confusion_matrix
Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model level evaluation, not for evaluation per label.
annotation_spec_id[]
Output only. The annotation spec IDs used for this evaluation.
ConfidenceMetricsEntry
Metrics for a single confidence threshold.
Fields
confidence_threshold
Output only. Metrics are computed with an assumption that the model never returns predictions with score lower than this value.
position_threshold
Output only. Metrics are computed with an assumption that the model always returns at most this many predictions (ordered by their score, descendingly), but they all still need to meet the confidence_threshold.
recall
Output only. Recall (True Positive Rate) for the given confidence threshold.
precision
Output only. Precision for the given confidence threshold.
false_positive_rate
Output only. False Positive Rate for the given confidence threshold.
f1_score
Output only. The harmonic mean of recall and precision.
recall_at1
Output only. The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
precision_at1
Output only. The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
false_positive_rate_at1
Output only. The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
f1_score_at1
Output only. The harmonic mean of recall_at1 and precision_at1.
true_positive_count
Output only. The number of model-created labels that match a ground truth label.
false_positive_count
Output only. The number of model-created labels that do not match a ground truth label.
false_negative_count
Output only. The number of ground truth labels that are not matched by a model-created label.
true_negative_count
Output only. The number of labels that were not created by the model, but if they would, they would not match a ground truth label.
ConfusionMatrix
Confusion matrix of the model running the classification.
Fields
annotation_spec_id[]
Output only. IDs of the annotation specs used in the confusion matrix. For Tables CLASSIFICATION classification_type, only the list of display_name-s is populated.
display_name[]
Output only. Display name of the annotation specs used in the confusion matrix, as they were at the moment of the evaluation. For Tables CLASSIFICATION classification_type, distinct values of the target column at the moment of the model evaluation are populated here.
row[]
Output only. Rows in the confusion matrix. The number of rows is equal to the size of annotation_spec_id.
Row
Output only. A row in the confusion matrix.
Fields
example_count[]
Output only. Value of the specific cell in the confusion matrix. The number of values each row has (i.e. the length of the row) is equal to the length of the annotation_spec_id field.
ClassificationType
Type of the classification problem.
Enums
CLASSIFICATION_TYPE_UNSPECIFIED
An un-set value of this enum.
MULTICLASS
At most one label is allowed per example.
MULTILABEL
Multiple labels are allowed for one example.
CreateDatasetOperationMetadata
Details of CreateDataset operation.
CreateDatasetRequest
Request message for AutoMl.CreateDataset
.
Fields
parent
The resource name of the project to create the dataset for. Authorization requires the following Google IAM permission on the specified resource parent: automl.datasets.create
dataset
The dataset to create.
CreateModelOperationMetadata
Details of CreateModel operation.
CreateModelRequest
Request message for AutoMl.CreateModel
.
Fields
parent
Resource name of the parent project where the model is being created. Authorization requires the following Google IAM permission on the specified resource parent: automl.models.create
model
The model to create.
Dataset
A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.
Fields
name
Output only. The resource name of the dataset. Form: projects/{project_id}/locations/{location_id}/datasets/{dataset_id}
display_name
Required. The name of the dataset to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9.
description
User-provided description of the dataset. The description can be up to 25000 characters long.
example_count
Output only. The number of examples in the dataset.
create_time
Output only. Timestamp when this dataset was created.
etag
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
labels
Optional. The labels with user-defined metadata to organize your dataset. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter. See https://goo.gl/xmQnxf for more information on and examples of labels.
Union field dataset_metadata. Required. The dataset metadata that is specific to the problem type. dataset_metadata can be only one of the following:
translation_dataset_metadata
Metadata for a dataset used for translation.
image_classification_dataset_metadata
Metadata for a dataset used for image classification.
text_classification_dataset_metadata
Metadata for a dataset used for text classification.
image_object_detection_dataset_metadata
Metadata for a dataset used for image object detection.
text_extraction_dataset_metadata
Metadata for a dataset used for text extraction.
text_sentiment_dataset_metadata
Metadata for a dataset used for text sentiment.
DeleteDatasetRequest
Request message for AutoMl.DeleteDataset
.
Fields
name
The resource name of the dataset to delete. Authorization requires the following Google IAM permission on the specified resource name: automl.datasets.delete
DeleteModelRequest
Request message for AutoMl.DeleteModel
.
Fields
name
Resource name of the model being deleted. Authorization requires the following Google IAM permission on the specified resource name: automl.models.delete
DeleteOperationMetadata
Details of operations that perform deletes of any entities.
DeployModelOperationMetadata
Details of DeployModel operation.
DeployModelRequest
Request message for AutoMl.DeployModel
.
Fields
name
Resource name of the model to deploy. Authorization requires the following Google IAM permission on the specified resource name: automl.models.deploy
Union field model_deployment_metadata. The per-domain specific deployment parameters. model_deployment_metadata can be only one of the following:
image_object_detection_model_deployment_metadata
Model deployment metadata specific to Image Object Detection.
image_classification_model_deployment_metadata
Model deployment metadata specific to Image Classification.
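A hedged sketch of deploying an Image Object Detection model with an explicit node count, assuming the google-cloud-automl Python client (v2+); the IDs are placeholders:

from google.cloud import automl

client = automl.AutoMlClient()
model_full_id = client.model_path("my-project", "us-central1", "IOD0000000000000000000")

request = automl.DeployModelRequest(
    name=model_full_id,
    image_object_detection_model_deployment_metadata=automl.ImageObjectDetectionModelDeploymentMetadata(
        node_count=2
    ),
)
# deploy_model returns a long-running operation; result() blocks until done.
client.deploy_model(request=request).result()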
Document
A structured text document e.g. a PDF.
Fields
input_config
An input config specifying the content of the document.
document_text
The plain text version of this document.
layout[]
Describes the layout of the document. Sorted by page_number.
document_dimensions
The dimensions of the page in the document.
page_count
Number of pages in the document.
Layout
Describes the layout information of a text_segment
in the document.
Fields
text_segment
Text Segment that represents a segment in document_text.
page_number
Page number of the text_segment in the original document, starting from 1.
bounding_poly
The position of the text_segment in the page. Contains exactly 4 normalized_vertices.
text_segment_type
The type of the text_segment in document.
TextSegmentType
The type of TextSegment in the context of the original document.
Enums
TEXT_SEGMENT_TYPE_UNSPECIFIED
Should not be used.
TOKEN
The text segment is a token, e.g. a word.
PARAGRAPH
The text segment is a paragraph.
FORM_FIELD
The text segment is a form field.
FORM_FIELD_NAME
The text segment is the name part of a form field. It will be treated as a child of another FORM_FIELD TextSegment if its span is a subspan of another TextSegment with type FORM_FIELD.
FORM_FIELD_CONTENTS
The text segment is the text content part of a form field. It will be treated as a child of another FORM_FIELD TextSegment if its span is a subspan of another TextSegment with type FORM_FIELD.
TABLE
The text segment is a whole table, including headers and all rows.
TABLE_HEADER
The text segment is a table's headers. It will be treated as a child of another TABLE TextSegment if its span is a subspan of another TextSegment with type TABLE.
TABLE_ROW
The text segment is a row in a table. It will be treated as a child of another TABLE TextSegment if its span is a subspan of another TextSegment with type TABLE.
TABLE_CELL
The text segment is a cell in a table. It will be treated as a child of another TABLE_ROW TextSegment if its span is a subspan of another TextSegment with type TABLE_ROW.
DocumentDimensions
Message that describes dimension of a document.
Fields
unit
Unit of the dimension.
width
Width value of the document, works together with the unit.
height
Height value of the document, works together with the unit.
DocumentDimensionUnit
Unit of the document dimension.
Enums
DOCUMENT_DIMENSION_UNIT_UNSPECIFIED
Should not be used.
INCH
Document dimension is measured in inches.
CENTIMETER
Document dimension is measured in centimeters.
POINT
Document dimension is measured in points. 72 points = 1 inch.
DocumentInputConfig
Input configuration of a Document
.
Fields
gcs_source
The Google Cloud Storage location of the document file. Only a single path should be given. Max supported size: 512MB. Supported extensions: .PDF.
ExamplePayload
Example data used for training or prediction.
Fields
Union field payload. Required. The example data. payload can be only one of the following:
image
Example image.
text_snippet
Example text.
document
Example document.
ExportDataOperationMetadata
Details of ExportData operation.
Fields
output_info
Output only. Information further describing this export data's output.
ExportDataOutputInfo
Further describes this export data's output. Supplements OutputConfig.
Fields
gcs_output_directory
The full path of the Google Cloud Storage directory created, into which the exported data is written.
ExportDataRequest
Request message for AutoMl.ExportData
.
Fields
name
Required. The resource name of the dataset. Authorization requires the following Google IAM permission on the specified resource name: automl.datasets.export
output_config
Required. The desired output location.
ExportModelOperationMetadata
Details of ExportModel operation.
Fields
output_info
Output only. Information further describing the output of this model export.
ExportModelOutputInfo
Further describes the output of model export. Supplements ModelExportOutputConfig.
Fields
gcs_output_directory
The full path of the Google Cloud Storage directory created, into which the model will be exported.
ExportModelRequest
Request message for AutoMl.ExportModel
. Models need to be enabled for exporting, otherwise an error code will be returned.
Fields
name
Required. The resource name of the model to export. Authorization requires the following Google IAM permission on the specified resource name: automl.models.export
output_config
Required. The desired output location and configuration.
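A hedged sketch of exporting a model to Cloud Storage, assuming the google-cloud-automl Python client (v2+); the IDs, URI, and the "tflite" format are placeholder assumptions (the model must support the chosen export format):

from google.cloud import automl

client = automl.AutoMlClient()
model_full_id = client.model_path("my-project", "us-central1", "ICN0000000000000000000")

request = automl.ExportModelRequest(
    name=model_full_id,
    output_config=automl.ModelExportOutputConfig(
        model_format="tflite",  # assumed format; must be defined for this model
        gcs_destination=automl.GcsDestination(output_uri_prefix="gs://my-bucket/export/"),
    ),
)
# export_model returns a long-running operation; result() blocks until done.
client.export_model(request=request).result()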
GcsDestination
The Google Cloud Storage location where the output is to be written to.
Fields
output_uri_prefix
Required. Google Cloud Storage URI to output directory, up to 2000 characters long. Accepted forms: * Prefix path: gs://bucket/directory The requesting user must have write permission to the bucket. The directory is created if it doesn't exist.
GcsSource
The Google Cloud Storage location for the input content.
Fields
input_uris[]
Required. Google Cloud Storage URIs to input files, up to 2000 characters long. Accepted forms: * Full object path, e.g. gs://bucket/directory/object.csv
GetAnnotationSpecRequest
Request message for AutoMl.GetAnnotationSpec
.
Fields
name
The resource name of the annotation spec to retrieve. Authorization requires the following Google IAM permission on the specified resource name: automl.annotationSpecs.get
GetDatasetRequest
Request message for AutoMl.GetDataset.
Fields
name
The resource name of the dataset to retrieve. Authorization requires the following Google IAM permission on the specified resource name: automl.datasets.get
GetModelEvaluationRequest
Request message for AutoMl.GetModelEvaluation.
Fields
name
Resource name for the model evaluation. Authorization requires the following Google IAM permission on the specified resource name: automl.modelEvaluations.get
GetModelRequest
Request message for AutoMl.GetModel.
Fields
name
Resource name of the model. Authorization requires the following Google IAM permission on the specified resource name: automl.models.get
Image
A representation of an image. Only images up to 30MB in size are supported.
Fields
thumbnail_uri
Output only. HTTP URI to the thumbnail image.
image_bytes
Image content represented as a stream of bytes. Note: As with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
ImageClassificationDatasetMetadata
Dataset metadata that is specific to image classification.
Fields
classification_type
Required. Type of the classification problem.
ImageClassificationModelDeploymentMetadata
Model deployment metadata specific to Image Classification.
Fields
node_count
Input only. The number of nodes to deploy the model on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the model's node_qps. Must be between 1 and 100, inclusive on both ends.
ImageClassificationModelMetadata
Model metadata for image classification.
Fields
base_model_id
Optional. The ID of the base model. If it is specified, the new model will be trained based on the base model. Otherwise, the new model will be created from scratch. The base model must be in the same project and location as the new model to create.
train_budget_milli_node_hours
The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual train_cost_milli_node_hours will be equal or less than this value.
train_cost_milli_node_hours
Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget.
stop_reason
Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.
model_type
Optional. Type of the model. The available values are: cloud (default), mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1, mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, and mobile-core-ml-high-accuracy-1.
node_qps
Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.
node_count
Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the node_qps field.
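A hedged sketch of training an image classification model with an explicit budget of 8 node hours (8,000 milli node hours), assuming the google-cloud-automl Python client (v2+); the IDs and names are placeholders:

from google.cloud import automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"

model = automl.Model(
    display_name="my_flowers_model",
    dataset_id="ICN0000000000000000000",
    image_classification_model_metadata=automl.ImageClassificationModelMetadata(
        train_budget_milli_node_hours=8000
    ),
)
# create_model returns a long-running operation that tracks training.
response = client.create_model(parent=parent, model=model)
print(f"Training operation name: {response.operation.name}")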
ImageObjectDetectionAnnotation
Annotation details for image object detection.
Fields
bounding_box
Output only. The rectangle representing the object location.
score
Output only. The confidence that this annotation is positive for the parent example; value in [0, 1], higher means higher positivity confidence.
ImageObjectDetectionDatasetMetadata
Dataset metadata specific to image object detection.
ImageObjectDetectionEvaluationMetrics
Model evaluation metrics for image object detection problems. Evaluates prediction quality of labeled bounding boxes.
Fields
evaluated_bounding_box_count
Output only. The total number of bounding boxes (i.e. summed over all images) the ground truth used to create this evaluation had.
bounding_box_metrics_entries[]
Output only. The bounding box match metrics for each intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair.
bounding_box_mean_average_precision
Output only. The single metric for bounding boxes evaluation: the mean_average_precision averaged over all bounding_box_metrics_entries.
ImageObjectDetectionModelDeploymentMetadata
Model deployment metadata specific to Image Object Detection.
Fields
node_count
Input only. The number of nodes to deploy the model on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the model's qps_per_node. Must be between 1 and 100, inclusive on both ends.
ImageObjectDetectionModelMetadata
Model metadata specific to image object detection.
Fields
model_type
Optional. Type of the model. The available values are: cloud-high-accuracy-1 (default), cloud-low-latency-1, mobile-low-latency-1, mobile-versatile-1, and mobile-high-accuracy-1.
node_count
Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the qps_per_node field.
node_qps
Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.
stop_reason
Output only. The reason that this create model operation stopped, e.g. BUDGET_REACHED, MODEL_CONVERGED.
train_budget_milli_node_hours
The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual train_cost_milli_node_hours will be equal or less than this value.
train_cost_milli_node_hours
Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. Guaranteed to not exceed the train budget.
ImportDataOperationMetadata
Details of ImportData operation.
ImportDataRequest
Request message for AutoMl.ImportData
.
Fields
name
Required. Dataset name. Dataset must already exist. All imported annotations and examples will be added. Authorization requires the following Google IAM permission on the specified resource name: automl.datasets.import
input_config
Required. The desired input location and its domain specific semantics, if any.
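A hedged sketch of importing CSV-listed data into an existing dataset, assuming the google-cloud-automl Python client (v2+); the IDs and URIs are placeholders:

from google.cloud import automl

client = automl.AutoMlClient()
dataset_full_id = client.dataset_path("my-project", "us-central1", "ICN0000000000000000000")

input_config = automl.InputConfig(
    gcs_source=automl.GcsSource(input_uris=["gs://my-bucket/import/train.csv"])
)
# import_data returns a long-running operation; result() blocks until done.
client.import_data(name=dataset_full_id, input_config=input_config).result()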
InputConfig
Input configuration for AutoMl.ImportData
action.
The format of the input depends on the dataset_metadata of the Dataset into which the import is happening. As input source, the gcs_source is expected, unless specified otherwise. Additionally, any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an "example" file (that is, image, video etc.) with identical content (even if it had a different GCS_FILE_PATH) is mentioned multiple times, then its label, bounding boxes etc. are appended. The same file should always be provided with the same ML_USE and GCS_FILE_PATH; if it is not, then these values are nondeterministically selected from the given ones.
The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:
AutoML Vision
Classification
See Preparing your training data for more information.
CSV file(s) with each line in format:
ML_USE,GCS_FILE_PATH,LABEL,LABEL,...
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
GCS_FILE_PATH - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG, .WEBP, .BMP, .TIFF, .ICO.
LABEL - A label that identifies the object in the image.
For the MULTICLASS classification type, at most one LABEL is allowed per image. If an image has not yet been labeled, then it should be mentioned just once with no LABEL.
Some sample rows:
TRAIN,gs://folder/image1.jpg,daisy
TEST,gs://folder/image2.jpg,dandelion,tulip,rose
UNASSIGNED,gs://folder/image3.jpg,daisy
UNASSIGNED,gs://folder/image4.jpg
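As an illustration only (not from the reference): a standard-library Python sketch that writes rows in the format shown above; the paths and labels are placeholders.

import csv

rows = [
    ("TRAIN", "gs://folder/image1.jpg", "daisy"),
    ("TEST", "gs://folder/image2.jpg", "dandelion"),
    ("UNASSIGNED", "gs://folder/image4.jpg"),  # not yet labeled: no LABEL column
]

# Write one ML_USE,GCS_FILE_PATH[,LABEL] row per line.
with open("import.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)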
Object Detection
See Preparing your training data for more information.
A CSV file(s) with each line in format:
ML_USE,GCS_FILE_PATH,[LABEL],(BOUNDING_BOX | ,,,,,,,)
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
GCS_FILE_PATH - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. Each image is assumed to be exhaustively labeled.
LABEL - A label that identifies the object in the image specified by the BOUNDING_BOX.
BOUNDING_BOX - The vertices of an object in the example image. The minimum allowed BOUNDING_BOX edge length is 0.01, and no more than 500 BOUNDING_BOX instances per image are allowed (one BOUNDING_BOX per line). If an image has none of the objects being looked for, then it should be mentioned just once with no LABEL and with ",,,,,,," in place of the BOUNDING_BOX.
Four sample rows:
TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,,
TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,,
UNASSIGNED,gs://folder/im2.png,car,0.1,0.1,0.2,0.1,0.2,0.3,0.1,0.3
TEST,gs://folder/im3.png,,,,,,,,,
AutoML Video Intelligence
Classification
See Preparing your training data for more information.
CSV file(s) with each line in format:
ML_USE,GCS_FILE_PATH
For ML_USE, do not use VALIDATE.
GCS_FILE_PATH is the path to another .csv file that describes training examples for a given ML_USE, using the following row format:
GCS_FILE_PATH,(LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END | ,,)
Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end time must be after the start time. Any segment of a video which has one or more labels on it is considered a hard negative for all other labels. Any segment with no labels on it is considered to be unknown. If a whole video is unknown, then it should be mentioned just once with ",," in place of LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END.
Sample top level CSV file:
TRAIN,gs://folder/train_videos.csv
TEST,gs://folder/test_videos.csv
UNASSIGNED,gs://folder/other_videos.csv
Sample rows of a CSV file for a particular ML_USE:
gs://folder/video1.avi,car,120,180.000021
gs://folder/video1.avi,bike,150,180.000021
gs://folder/vid2.avi,car,0,60.5
gs://folder/vid3.avi,,,
Object Tracking
See Preparing your training data for more information.
CSV file(s) with each line in format:
ML_USE,GCS_FILE_PATH
For ML_USE, do not use VALIDATE.
GCS_FILE_PATH is the path to another .csv file that describes training examples for a given ML_USE, using the following row format:
GCS_FILE_PATH,LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX
or
GCS_FILE_PATH,,,,,,,,,,
Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. Providing INSTANCE_IDs can help to obtain a better model. When a specific labeled entity leaves the video frame and shows up afterwards, it is not required, albeit preferable, that the same INSTANCE_ID is given to it.
TIMESTAMP must be within the length of the video; the BOUNDING_BOX is assumed to be drawn on the video frame closest to the TIMESTAMP. Any frame mentioned by a TIMESTAMP is expected to be exhaustively labeled, and no more than 500 BOUNDING_BOXes per frame are allowed. If a whole video is unknown, then it should be mentioned just once with ",,,,,,,,,," in place of LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX.
Sample top level CSV file:
TRAIN,gs://folder/train_videos.csv
TEST,gs://folder/test_videos.csv
UNASSIGNED,gs://folder/other_videos.csv
Seven sample rows of a CSV file for a particular ML_USE:
gs://folder/video1.avi,car,1,12.10,0.8,0.8,0.9,0.8,0.9,0.9,0.8,0.9
gs://folder/video1.avi,car,1,12.90,0.4,0.8,0.5,0.8,0.5,0.9,0.4,0.9
gs://folder/video1.avi,car,2,12.10,.4,.2,.5,.2,.5,.3,.4,.3
gs://folder/video1.avi,car,2,12.90,.8,.2,,,.9,.3,,
gs://folder/video1.avi,bike,,12.50,.45,.45,,,.55,.55,,
gs://folder/video2.avi,car,1,0,.1,.9,,,.9,.1,,
gs://folder/video2.avi,,,,,,,,,,,
AutoML Natural Language
Entity Extraction
See Preparing your training data for more information.
One or more CSV file(s) with each line in the following format:
ML_USE,GCS_FILE_PATH
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
GCS_FILE_PATH - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text or documents for model training.
After the training data set has been determined from the TRAIN
and UNASSIGNED
CSV files, the training data is divided into train and validation data sets. 70% for training and 30% for validation.
For example:
TRAIN,gs://folder/file1.jsonl
VALIDATE,gs://folder/file2.jsonl
TEST,gs://folder/file3.jsonl
In-line JSONL files
In-line .JSONL files contain, per line, a JSON document that wraps a
text_snippet field and one or more annotations fields, which have
display_name and text_extraction fields to describe the entity from
the text snippet. Multiple JSON documents can be separated using line breaks (\n).
The supplied text must be annotated exhaustively. For example, if you include the text "horse", but do not label it as "animal", then "horse" is assumed to not be an "animal".
Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded. ASCII is accepted as it is UTF-8 NFC encoded.
For example:
{
"text_snippet": {
"content": "dog car cat"
},
"annotations": [
{
"display_name": "animal",
"text_extraction": {
"text_segment": {"start_offset": 0, "end_offset": 2}
}
},
{
"display_name": "vehicle",
"text_extraction": {
"text_segment": {"start_offset": 4, "end_offset": 6}
}
},
{
"display_name": "animal",
"text_extraction": {
"text_segment": {"start_offset": 8, "end_offset": 10}
}
}
]
}\n
{
"text_snippet": {
"content": "This dog is good."
},
"annotations": [
{
"display_name": "animal",
"text_extraction": {
"text_segment": {"start_offset": 5, "end_offset": 7}
}
}
]
}
JSONL files that reference documents
.JSONL files contain, per line, a JSON document that wraps an input_config field that contains the path to a source document. Multiple JSON documents can be separated using line breaks (\n).
Supported document extensions: .PDF, .TIF, .TIFF
For example:
{
"document": {
"input_config": {
"gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ]
}
}
}
}\n
{
"document": {
"input_config": {
"gcs_source": { "input_uris": [ "gs://folder/document2.tif" ]
}
}
}
}
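A minimal Python sketch for producing such a file (the output file name is a placeholder; json is from the standard library):
import json

# Placeholders: any list of supported documents (.PDF, .TIF, .TIFF) in GCS.
paths = ["gs://folder/document1.pdf", "gs://folder/document2.tif"]
with open("documents.jsonl", "w", encoding="utf-8") as f:
    for uri in paths:
        record = {"document": {"input_config": {"gcs_source": {"input_uris": [uri]}}}}
        f.write(json.dumps(record) + "\n")  # one JSON document per line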
In-line JSONL files with document layout information
Note: You can only annotate documents using the UI. The format described below applies to annotated documents exported using the UI or exportData.
In-line .JSONL files for documents contain, per line, a JSON document that wraps a document field that provides the textual content of the document and the layout information.
For example:
{
  "document": {
    "document_text": {
      "content": "dog car cat"
    },
    "layout": [
      {
        "text_segment": {
          "start_offset": 0,
          "end_offset": 11
        },
        "page_number": 1,
        "bounding_poly": {
          "normalized_vertices": [
            {"x": 0.1, "y": 0.1},
            {"x": 0.1, "y": 0.3},
            {"x": 0.3, "y": 0.3},
            {"x": 0.3, "y": 0.1}
          ]
        },
        "text_segment_type": "TOKEN"
      }
    ],
    "document_dimensions": {
      "width": 8.27,
      "height": 11.69,
      "unit": "INCH"
    },
    "page_count": 3
  },
  "annotations": [
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 0, "end_offset": 3}
      }
    },
    {
      "display_name": "vehicle",
      "text_extraction": {
        "text_segment": {"start_offset": 4, "end_offset": 7}
      }
    },
    {
      "display_name": "animal",
      "text_extraction": {
        "text_segment": {"start_offset": 8, "end_offset": 11}
      }
    }
  ]
}
Classification
See Preparing your training data for more information.
One or more CSV file(s) with each line in the following format:
ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,...
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with a supported extension and UTF-8 encoding, for example, "gs://folder/content.txt"; AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the content must be 10MB or less in size. For zip files, each file inside the zip must be 10MB or less in size.
For the `MULTICLASS` classification type, at most one `LABEL` is allowed.
The `ML_USE` and `LABEL` columns are optional.
Supported file extensions: .TXT, .PDF, .TIF, .TIFF, .ZIP
A maximum of 100 unique labels are allowed per CSV row.
Sample rows:
TRAIN,"They have bad food and very rude",RudeService,BadFood
gs://folder/content.txt,SlowService
TEST,gs://folder/document.pdf
VALIDATE,gs://folder/text_files.zip,BadFood
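A minimal Python sketch for emitting rows in this format (file name and rows are placeholders):
# Placeholders throughout; text snippets are double-quoted so they are read
# as TEXT_SNIPPETs, while bare "gs://" paths are read as GCS_FILE_PATHs.
rows = [
    ("TRAIN", '"They have bad food and very rude"', ["RudeService", "BadFood"]),
    ("UNASSIGNED", "gs://folder/content.txt", ["SlowService"]),
]
with open("import.csv", "w", encoding="utf-8") as f:
    for ml_use, item, labels in rows:
        f.write(",".join([ml_use, item, *labels]) + "\n")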
Sentiment Analysis
See Preparing your training data for more information.
CSV file(s) with each line in format:
ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT
ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:
TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with a supported extension and UTF-8 encoding, for example, "gs://folder/content.txt"; AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the content must be 128kB or less in size. For zip files, each file inside the zip must be 128kB or less in size.
The `ML_USE` and `SENTIMENT` columns are optional.
Supported file extensions: .TXT, .PDF, .TIF, .TIFF, .ZIP
SENTIMENT - An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment - a higher value means a more positive sentiment. All the values are completely relative, i.e. neither 0 needs to mean a negative or neutral sentiment nor sentiment_max a positive one - it is only required that 0 is the least positive sentiment in the data, and sentiment_max is the most positive one. The SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range will be used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be large.
Sample rows:
TRAIN,"@freewrytin this is way too good for your product",2
gs://folder/content.txt,3
TEST,gs://folder/document.pdf
VALIDATE,gs://folder/text_files.zip,2
AutoML Tables
See Preparing your training data for more information.
You can use either gcs_source or bigquery_source. All input is concatenated into a single primary table (primary_table_spec_id).
For gcs_source:
CSV file(s), where the first row of the first file is the header, containing unique column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns.
Each .CSV file by itself must be 10GB or smaller, and their total size must be 100GB or smaller.
First three sample rows of a CSV file:
"Id","First Name","Last Name","Dob","Addresses" "1","John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]" "2","Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]}
For bigquery_source:
A URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller.
An imported table must have between 2 and 1,000 columns, inclusive, and between 1,000 and 100,000,000 rows, inclusive. At most 5 import data operations can run in parallel.
Input field definitions:
ML_USE - ("TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED") Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when the user has no preference.
GCS_FILE_PATH - The path to a file on Google Cloud Storage. For example, "gs://folder/image1.png".
LABEL - A display name of an object on an image, video etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. For each label an AnnotationSpec is created whose display_name becomes the label; AnnotationSpecs are given back in predictions.
INSTANCE_ID - A positive integer that identifies a specific instance of a labeled entity in an example. Used e.g. to track two cars in a video while being able to tell them apart.
BOUNDING_BOX - (VERTEX,VERTEX,VERTEX,VERTEX | VERTEX,,,VERTEX,,) A rectangle parallel to the frame of the example (image, video). If 4 vertices are given, they are connected by edges in the order provided; if 2 are given, they are recognized as diagonally opposite vertices of the rectangle (see the sketch after this list).
VERTEX - (COORDINATE,COORDINATE) The first coordinate is horizontal (x), the second is vertical (y).
COORDINATE - A float in the 0 to 1 range, relative to the total length of the image or video in the given dimension. For fractions the leading non-decimal 0 can be omitted (i.e. 0.3 = .3). Point 0,0 is in the top left.
TIME_SEGMENT_START - (TIME_OFFSET) Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_SEGMENT_END - (TIME_OFFSET) Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_OFFSET - A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to microsecond precision. "inf" is allowed, and it means the end of the example.
TEXT_SNIPPET - The content of a text snippet, UTF-8 encoded, enclosed within double quotes ("").
DOCUMENT - A field that provides the textual content of the document and the layout information.
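As referenced in the BOUNDING_BOX definition, a small illustrative sketch (hypothetical helper, plain Python) that expands the two-vertex form into the four-vertex rectangle:
def expand_diagonal(v1, v2):
    # Hypothetical helper: two diagonally opposite vertices -> full rectangle.
    (x1, y1), (x2, y2) = v1, v2
    left, right = min(x1, x2), max(x1, x2)
    top, bottom = min(y1, y2), max(y1, y2)
    # Vertices ordered so consecutive pairs form edges parallel to the frame.
    return [(left, top), (right, top), (right, bottom), (left, bottom)]

print(expand_diagonal((0.1, 0.9), (0.9, 0.1)))
# [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]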
Errors:
If any of the provided CSV files can't be parsed, or if more than a certain percent of CSV rows cannot be processed, then the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.
Fields | |
---|---|
params |
Additional domain-specific parameters describing the semantics of the imported data; any string must be up to 25000 characters long. AutoML Tables
|
gcs_source |
The Google Cloud Storage location for the input content. For |
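A sketch of starting an import with the google-cloud-automl Python client (an assumed dependency; project, location and dataset ID are placeholders), mirroring the gcs_source field above:
from google.cloud import automl

client = automl.AutoMlClient()
dataset_name = "projects/my-project/locations/us-central1/datasets/TEN1234567890"
input_config = automl.InputConfig(
    gcs_source=automl.GcsSource(input_uris=["gs://folder/train.csv"])
)
# ImportData is a long-running operation; result() blocks until it completes.
operation = client.import_data(name=dataset_name, input_config=input_config)
operation.result()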
ListDatasetsRequest
Request message for AutoMl.ListDatasets
.
Fields | |
---|---|
parent |
The resource name of the project from which to list datasets. Authorization requires the following Google IAM permission on the specified resource
|
filter |
An expression for filtering the results of the request.
|
page_size |
Requested page size. Server may return fewer results than requested. If unspecified, server will pick a default size. |
page_token |
A token identifying a page of results for the server to return. Typically obtained via |
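A sketch of listing datasets with the assumed google-cloud-automl Python client; the filter shown is an assumed metadata-existence expression, and the returned pager handles page_size/page_token transparently:
from google.cloud import automl

client = automl.AutoMlClient()
request = automl.ListDatasetsRequest(
    parent="projects/my-project/locations/us-central1",  # placeholder
    filter="translation_dataset_metadata:*",  # assumed example filter
)
for dataset in client.list_datasets(request=request):
    print(dataset.name, dataset.display_name)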
ListDatasetsResponse
Response message for AutoMl.ListDatasets
.
Fields | |
---|---|
datasets[] |
The datasets read. |
next_page_token |
A token to retrieve next page of results. Pass to |
ListModelEvaluationsRequest
Request message for AutoMl.ListModelEvaluations
.
Fields | |
---|---|
parent |
Resource name of the model to list the model evaluations for. If modelId is set as "-", this will list model evaluations from across all models of the parent location. Authorization requires the following Google IAM permission on the specified resource
|
filter |
An expression for filtering the results of the request.
Some examples of using the filter are:
|
page_size |
Requested page size. |
page_token |
A token identifying a page of results for the server to return. Typically obtained via |
ListModelEvaluationsResponse
Response message for AutoMl.ListModelEvaluations
.
Fields | |
---|---|
model_evaluation[] |
List of model evaluations in the requested page. |
next_page_token |
A token to retrieve next page of results. Pass to the |
ListModelsRequest
Request message for AutoMl.ListModels
.
Fields | |
---|---|
parent |
Resource name of the project, from which to list the models. Authorization requires the following Google IAM permission on the specified resource
|
filter |
An expression for filtering the results of the request.
|
page_size |
Requested page size. |
page_token |
A token identifying a page of results for the server to return. Typically obtained via |
ListModelsResponse
Response message for AutoMl.ListModels
.
Fields | |
---|---|
model[] |
List of models in the requested page. |
next_page_token |
A token to retrieve next page of results. Pass to |
Model
API proto representing a trained machine learning model.
Fields | ||
---|---|---|
name |
Output only. Resource name of the model. Format: |
|
display_name |
Required. The name of the model to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. It must start with a letter. |
|
dataset_id |
Required. The resource ID of the dataset used to create the model. The dataset must come from the same ancestor project and location. |
|
create_time |
Output only. Timestamp when the model training finished and can be used for prediction. |
|
update_time |
Output only. Timestamp when this model was last updated. |
|
deployment_state |
Output only. Deployment state of the model. A model can only serve prediction requests after it gets deployed. |
|
etag |
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. |
|
labels |
Optional. The labels with user-defined metadata to organize your model. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter. See https://goo.gl/xmQnxf for more information on and examples of labels. |
|
Union field model_metadata . Required. The model metadata that is specific to the problem type. Must match the metadata type of the dataset used to train the model. model_metadata can be only one of the following: |
||
translation_model_metadata |
Metadata for translation models. |
|
image_classification_model_metadata |
Metadata for image classification models. |
|
text_classification_model_metadata |
Metadata for text classification models. |
|
image_object_detection_model_metadata |
Metadata for image object detection models. |
|
text_extraction_model_metadata |
Metadata for text extraction models. |
|
text_sentiment_model_metadata |
Metadata for text sentiment models. |
DeploymentState
Deployment state of the model.
Enums | |
---|---|
DEPLOYMENT_STATE_UNSPECIFIED |
Should not be used, an un-set enum has this value by default. |
DEPLOYED |
Model is deployed. |
UNDEPLOYED |
Model is not deployed. |
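Since a model serves predictions only while DEPLOYED, a sketch of toggling the state with the assumed Python client (the model name is a placeholder); both calls return long-running operations:
from google.cloud import automl

client = automl.AutoMlClient()
model_name = "projects/my-project/locations/us-central1/models/TEN1234567890"
client.deploy_model(name=model_name).result()    # wait until DEPLOYED
# ... serve predictions ...
client.undeploy_model(name=model_name).result()  # back to UNDEPLOYED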
ModelEvaluation
Evaluation results of a model.
Fields | ||
---|---|---|
name |
Output only. Resource name of the model evaluation. Format:
|
|
annotation_spec_id |
Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. For Tables, annotation specs in the dataset do not exist and this ID is always not set, but for CLASSIFICATION prediction_type-s the |
|
display_name |
Output only. For CLASSIFICATION prediction_type-s, the distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation. |
|
create_time |
Output only. Timestamp when this model evaluation was created. |
|
evaluated_example_count |
Output only. The number of examples used for model evaluation, i.e. for which ground truth from time of model creation is compared against the predicted annotations created by the model. For overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that according to the ground truth were annotated by the |
|
Union field metrics . Output only. Problem type specific evaluation metrics. metrics can be only one of the following: |
||
classification_evaluation_metrics |
Model evaluation metrics for image, text, video and tables classification. Tables problem is considered a classification when the target column is CATEGORY DataType. |
|
translation_evaluation_metrics |
Model evaluation metrics for translation. |
|
image_object_detection_evaluation_metrics |
Model evaluation metrics for image object detection. |
|
text_sentiment_evaluation_metrics |
Evaluation metrics for text sentiment models. |
|
text_extraction_evaluation_metrics |
Evaluation metrics for text extraction models. |
ModelExportOutputConfig
Output configuration for ModelExport Action.
Fields | |
---|---|
model_format |
The format in which the model must be exported. The available, and default, formats depend on the problem and model type (if a given problem and type combination doesn't have a format listed, its models are not exportable). See the quickstart at https://cloud.google.com/vision/automl/docs/containers-gcs-quickstart.
* core_ml - Used for iOS mobile devices. |
params |
Additional model-type and format specific parameters describing the requirements for the model files to be exported; any string must be up to 25000 characters long.
|
gcs_destination |
Required. The Google Cloud Storage location where the model is to be written to. This location may only be set for the following model formats: "tflite", "edgetpu_tflite", "tf_saved_model", "tf_js", "core_ml". Under the directory given as the destination a new one with name "model-export- |
NormalizedVertex
A vertex represents a 2D point in the image. The normalized vertex coordinates are between 0 and 1, as fractions relative to the original plane (image, video). E.g. if the plane (e.g. the whole image) has size 10 x 20, then a point with normalized coordinates (0.1, 0.3) is at position (1, 6) on that plane.
Fields | |
---|---|
x |
Required. Horizontal coordinate. |
y |
Required. Vertical coordinate. |
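A one-line Python sketch reproducing the example above:
def to_absolute(x, y, width, height):
    # Scale normalized [0, 1] coordinates by the plane's dimensions.
    return round(x * width), round(y * height)

print(to_absolute(0.1, 0.3, 10, 20))  # -> (1, 6), matching the example above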
OperationMetadata
Metadata used across all long running operations returned by AutoML API.
Fields | ||
---|---|---|
progress_percent |
Output only. Progress of operation. Range: [0, 100]. Not used currently. |
|
partial_failures[] |
Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard GCP error details. |
|
create_time |
Output only. Time when the operation was created. |
|
update_time |
Output only. Time when the operation was updated for the last time. |
|
Union field details . Output only. Details of the specific operation. Even if this field is empty, its presence allows distinguishing different types of operations. details can be only one of the following: |
||
delete_details |
Details of a Delete operation. |
|
deploy_model_details |
Details of a DeployModel operation. |
|
undeploy_model_details |
Details of an UndeployModel operation. |
|
create_model_details |
Details of CreateModel operation. |
|
create_dataset_details |
Details of CreateDataset operation. |
|
import_data_details |
Details of ImportData operation. |
|
batch_predict_details |
Details of BatchPredict operation. |
|
export_data_details |
Details of ExportData operation. |
|
export_model_details |
Details of ExportModel operation. |
OutputConfig
For Translation: CSV file translation.csv, with each line in format: ML_USE,GCS_FILE_PATH. GCS_FILE_PATH leads to a .TSV file which describes examples that have given ML_USE, using the following row format per line: TEXT_SNIPPET (in source language) \t TEXT_SNIPPET (in target language).
For Tables: Output depends on whether the dataset was imported from Google Cloud Storage or BigQuery. Google Cloud Storage case: gcs_destination must be set. Exported are CSV file(s) tables_1.csv, tables_2.csv, ..., tables_N.csv, each having as header line the table's column names, with all other lines containing values for the header columns. BigQuery case: bigquery_destination pointing to a BigQuery project must be set. In the given project a new dataset will be created with name export_data_<automl-dataset-display-name>_<timestamp-of-export-call>, in which a table primary_table will be created and filled with precisely the same data as obtained on import.
Fields | |
---|---|
gcs_destination |
Required. The Google Cloud Storage location where the output is to be written to. For Image Object Detection, Text Extraction, Video Classification and Tables, in the given directory a new directory will be created with name: export_data- |
PredictRequest
Request message for PredictionService.Predict
.
Fields | |
---|---|
name |
Name of the model requested to serve the prediction. Authorization requires the following Google IAM permission on the specified resource
|
payload |
Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
params |
Additional domain-specific parameters; any string must be up to 25000 characters long. AutoML Vision Classification
AutoML Vision Object Detection
AutoML Tables
feature_importance is populated in the returned list of TablesAnnotation objects. The default is false. |
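A sketch of a prediction call with the assumed google-cloud-automl Python client; the model name is a placeholder and the payload must match the model's problem type:
from google.cloud import automl

prediction_client = automl.PredictionServiceClient()
model_name = "projects/my-project/locations/us-central1/models/TEN1234567890"
payload = automl.ExamplePayload(
    text_snippet=automl.TextSnippet(content="This dog is good.", mime_type="text/plain")
)
response = prediction_client.predict(name=model_name, payload=payload)
for annotation in response.payload:
    print(annotation.display_name)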
PredictResponse
Response message for PredictionService.Predict
.
Fields | |
---|---|
payload[] |
Prediction result. AutoML Translation and AutoML Natural Language Sentiment Analysis return precisely one payload. |
preprocessed_input |
The preprocessed example that AutoML actually makes prediction on. Empty if AutoML does not preprocess the input example. For AutoML Natural Language (Classification, Entity Extraction, and Sentiment Analysis), if the input is a document, the recognized text is returned in the |
metadata |
Additional domain-specific prediction response metadata. AutoML Vision Object Detection
AutoML Natural Language Sentiment Analysis
|
TextClassificationDatasetMetadata
Dataset metadata for classification.
Fields | |
---|---|
classification_type |
Required. Type of the classification problem. |
TextClassificationModelMetadata
Model metadata that is specific to text classification.
Fields | |
---|---|
classification_type |
Output only. Classification type of the dataset used to train this model. |
TextExtractionAnnotation
Annotation for identifying spans of text.
Fields | |
---|---|
score |
Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence in correctness of the annotation. |
text_segment |
An entity annotation will set this, which is the part of the original text to which the annotation pertains. |
TextExtractionDatasetMetadata
Dataset metadata that is specific to text extraction.
TextExtractionEvaluationMetrics
Model evaluation metrics for text extraction problems.
Fields | |
---|---|
au_prc |
Output only. The Area under precision recall curve metric. |
confidence_metrics_entries[] |
Output only. Metrics that have confidence thresholds. Precision-recall curve can be derived from it. |
ConfidenceMetricsEntry
Metrics for a single confidence threshold.
Fields | |
---|---|
confidence_threshold |
Output only. The confidence threshold value used to compute the metrics. Only annotations with score of at least this threshold are considered to be ones the model would return. |
recall |
Output only. Recall under the given confidence threshold. |
precision |
Output only. Precision under the given confidence threshold. |
f1_score |
Output only. The harmonic mean of recall and precision. |
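For reference, the harmonic mean of precision P and recall R is 2PR/(P+R); a tiny Python check:
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(f1(0.8, 0.6))  # 0.6857...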
TextExtractionModelMetadata
Model metadata that is specific to text extraction.
TextSegment
A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.
Fields | |
---|---|
content |
Output only. The content of the TextSegment. |
start_offset |
Required. Zero-based character index of the first character of the text segment (counting characters from the beginning of the text). |
end_offset |
Required. Zero-based character index of the first character past the end of the text segment (counting characters from the beginning of the text). The character at the end_offset is NOT included in the text segment. |
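A tiny Python sketch of these offset semantics, using the "vehicle" annotation from the document layout example earlier (content "dog car cat", offsets 4-7):
content = "dog car cat"
start_offset, end_offset = 4, 7          # the "vehicle" annotation above
print(content[start_offset:end_offset])  # "car"; end_offset itself is excluded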
TextSentimentAnnotation
Contains annotation details specific to text sentiment.
Fields | |
---|---|
sentiment |
Output only. The sentiment with the semantic, as given to the |
TextSentimentDatasetMetadata
Dataset metadata for text sentiment.
Fields | |
---|---|
sentiment_max |
Required. A sentiment is expressed as an integer ordinal, where higher value means a more positive sentiment. The range of sentiments that will be used is between 0 and sentiment_max (inclusive on both ends), and all the values in the range must be represented in the dataset before a model can be created. sentiment_max value must be between 1 and 10 (inclusive). |
TextSentimentEvaluationMetrics
Model evaluation metrics for text sentiment problems.
Fields | |
---|---|
precision |
Output only. Precision. |
recall |
Output only. Recall. |
f1_score |
Output only. The harmonic mean of recall and precision. |
mean_absolute_error |
Output only. Mean absolute error. Only set for the overall model evaluation, not for evaluation of a single annotation spec. |
mean_squared_error |
Output only. Mean squared error. Only set for the overall model evaluation, not for evaluation of a single annotation spec. |
linear_kappa |
Output only. Linear weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec. |
quadratic_kappa |
Output only. Quadratic weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec. |
confusion_matrix |
Output only. Confusion matrix of the evaluation. Only set for the overall model evaluation, not for evaluation of a single annotation spec. |
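Linear and quadratic weighted kappa, as reported above, can be reproduced with scikit-learn's cohen_kappa_score (an assumed external dependency, not part of AutoML); the labels here are illustrative sentiment ordinals:
from sklearn.metrics import cohen_kappa_score

ground_truth = [0, 1, 2, 2, 3]  # illustrative sentiment ordinals
predictions = [0, 1, 1, 2, 3]
print(cohen_kappa_score(ground_truth, predictions, weights="linear"))
print(cohen_kappa_score(ground_truth, predictions, weights="quadratic"))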
TextSentimentModelMetadata
Model metadata that is specific to text sentiment.
TextSnippet
A representation of a text snippet.
Fields | |
---|---|
content |
Required. The content of the text snippet as a string. Up to 250000 characters long. |
mime_type |
Optional. The format of |
content_uri |
Output only. HTTP URI where you can download the content. |
TranslationAnnotation
Annotation details specific to translation.
Fields | |
---|---|
translated_content |
Output only. The translated content. |
TranslationDatasetMetadata
Dataset metadata that is specific to translation.
Fields | |
---|---|
source_language_code |
Required. The BCP-47 language code of the source language. |
target_language_code |
Required. The BCP-47 language code of the target language. |
TranslationEvaluationMetrics
Evaluation metrics for the dataset.
Fields | |
---|---|
bleu_score |
Output only. BLEU score. |
base_bleu_score |
Output only. BLEU score for base model. |
TranslationModelMetadata
Model metadata that is specific to translation.
Fields | |
---|---|
base_model |
The resource name of the model to use as a baseline to train the custom model. If unset, we use the default base model provided by Google Translate. Format: |
source_language_code |
Output only. Inferred from the dataset. The source language (The BCP-47 language code) that is used for training. |
target_language_code |
Output only. The target language (The BCP-47 language code) that is used for training. |
UndeployModelOperationMetadata
Details of UndeployModel operation.
UndeployModelRequest
Request message for AutoMl.UndeployModel
.
Fields | |
---|---|
name |
Resource name of the model to undeploy. Authorization requires the following Google IAM permission on the specified resource
|
UpdateDatasetRequest
Request message for AutoMl.UpdateDataset
Fields | |
---|---|
dataset |
The dataset which replaces the resource on the server. Authorization requires the following Google IAM permission on the specified resource
|
update_mask |
Required. The update mask applies to the resource. |
UpdateModelRequest
Request message for AutoMl.UpdateModel
Fields | |
---|---|
model |
The model which replaces the resource on the server. Authorization requires the following Google IAM permission on the specified resource
|
update_mask |
Required. The update mask applies to the resource. |
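A sketch of a read-modify-write update guarded by update_mask, using the assumed google-cloud-automl Python client (identifiers are placeholders):
from google.cloud import automl
from google.protobuf import field_mask_pb2

client = automl.AutoMlClient()
model = client.get_model(
    name="projects/my-project/locations/us-central1/models/TEN1234567890"
)
model.labels["team"] = "research"  # modify only what update_mask names
request = automl.UpdateModelRequest(
    model=model,
    update_mask=field_mask_pb2.FieldMask(paths=["labels"]),
)
updated = client.update_model(request=request)
print(updated.etag)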