API documentation for datalabeling_v1beta1.types module.
Classes
AnnotatedDataset
AnnotatedDataset is a set holding annotations for data in a Dataset. Each labeling task will generate an AnnotatedDataset under the Dataset that the task is requested for.
Output only. The display name of the AnnotatedDataset. It is specified in HumanAnnotationConfig when a user starts a labeling task. Maximum of 64 characters.
Output only. Source of the annotation.
Output only. Number of examples in the annotated dataset.
Output only. Per label statistics.
Output only. Additional information about AnnotatedDataset.
AnnotatedDatasetMetadata
Metadata on AnnotatedDataset.
Configuration for image classification task.
Configuration for image polyline task.
Configuration for video classification task.
Configuration for video object tracking task.
Configuration for text classification task.
HumanAnnotationConfig used when requesting the human labeling task for this AnnotatedDataset.
Annotation
Annotation for Example. Each example may have one or more annotations. For example, in an image classification problem, each image might have one or more labels. We call the labels bound to this image an Annotation.
Output only. The source of the annotation.
Output only. Annotation metadata, including information like votes for labels.
AnnotationMetadata
Additional information associated with the annotation.
AnnotationSpec
Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as dog or cat must reference an AnnotationSpec for dog and an AnnotationSpec for cat.
Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
AnnotationSpecSet
An AnnotationSpecSet is a collection of label definitions. For example, in image classification tasks, you define a set of possible labels for images as an AnnotationSpecSet. An AnnotationSpecSet is immutable upon creation.
Required. The display name for AnnotationSpecSet that you define when you create it. Maximum of 64 characters.
Required. The array of AnnotationSpecs that you define when you create the AnnotationSpecSet. These are the possible labels for the labeling task.
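A minimal sketch of how an AnnotationSpecSet and its AnnotationSpecs might be constructed with this module's message types; the project path is a placeholder and keyword construction is assumed to follow the library's usual proto message style.
```python
from google.cloud import datalabeling_v1beta1

# Define the possible labels for a labeling task as AnnotationSpecs.
spec_set = datalabeling_v1beta1.types.AnnotationSpecSet(
    display_name="pets",  # required, at most 64 characters
    annotation_specs=[
        datalabeling_v1beta1.types.AnnotationSpec(
            display_name="dog", description="Images containing a dog."
        ),
        datalabeling_v1beta1.types.AnnotationSpec(
            display_name="cat", description="Images containing a cat."
        ),
    ],
)

# The set is passed to the service in a CreateAnnotationSpecSetRequest;
# "projects/my-project" is a placeholder parent path.
request = datalabeling_v1beta1.types.CreateAnnotationSpecSetRequest(
    parent="projects/my-project",
    annotation_spec_set=spec_set,
)
```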
AnnotationValue
Annotation value for an example.
Annotation value for image bounding box, oriented bounding box and polygon cases.
Annotation value for image segmentation.
Annotation value for text entity extraction case.
Annotation value for video object detection and tracking case.
Any
API documentation for datalabeling_v1beta1.types.Any class.
Attempt
Records a failed evaluation job run.
BigQuerySource
The BigQuery location for input data. If used in an EvaluationJob, this is where the service saves the prediction input and output sampled from the model version.
BoundingBoxEvaluationOptions
Options regarding evaluation between bounding boxes.
BoundingPoly
A bounding polygon in the image.
BoundingPolyConfig
Config for image bounding poly (and bounding box) human labeling task.
Optional. Instruction message shown on the contributors' UI.
CancelOperationRequest
API documentation for datalabeling_v1beta1.types.CancelOperationRequest class.
ClassificationMetadata
Metadata for classification annotations.
ClassificationMetrics
Metrics calculated for a classification model.
Confusion matrix of predicted labels vs. ground truth labels.
ConfusionMatrix
Confusion matrix of the model running the classification. Only applicable when the metrics entry aggregates multiple labels. Not applicable when the entry is for a single label.
CreateAnnotationSpecSetRequest
Request message for CreateAnnotationSpecSet.
Required. Annotation spec set to create. Annotation specs must be included. Only one annotation spec will be accepted for annotation specs with the same display_name.
CreateDatasetRequest
Request message for CreateDataset.
Required. The dataset to be created.
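A hedged sketch pairing CreateDatasetRequest with the Dataset type described later in this listing; the project path is a placeholder, and the exact client call that would consume this request can vary between library versions.
```python
from google.cloud import datalabeling_v1beta1

# Build the dataset resource to create. display_name is required and
# limited to 64 characters; description is optional.
dataset = datalabeling_v1beta1.types.Dataset(
    display_name="traffic-images",
    description="Street-level images for labeling.",
)

request = datalabeling_v1beta1.types.CreateDatasetRequest(
    parent="projects/my-project",  # placeholder project path
    dataset=dataset,
)
# A DataLabelingServiceClient would typically accept this request in
# its create_dataset call (signature differs across client versions).
```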
CreateEvaluationJobRequest
Request message for CreateEvaluationJob.
Required. The evaluation job to create.
CreateInstructionMetadata
Metadata of a CreateInstruction operation.
Partial failures encountered, e.g. single files that couldn't be read. The status details field will contain standard GCP error details.
CreateInstructionRequest
Request message for CreateInstruction.
Required. Instruction of how to perform the labeling task.
CsvInstruction
Deprecated: this instruction format is no longer supported. Instruction from a CSV file.
DataItem
DataItem is a piece of data, without annotation. For example, an image.
The image payload, a container of the image bytes/uri.
The video payload, a container of the video uri.
Dataset
Dataset is the resource to hold your data. You can request multiple labeling tasks for a dataset, and each one will generate an AnnotatedDataset.
Required. The display name of the dataset. Maximum of 64 characters.
Output only. Time the dataset was created.
Output only. The names of any related resources that are blocking changes to the dataset.
DeleteAnnotatedDatasetRequest
Request message for DeleteAnnotatedDataset.
DeleteAnnotationSpecSetRequest
Request message for DeleteAnnotationSpecSet.
DeleteDatasetRequest
Request message for DeleteDataset.
DeleteEvaluationJobRequest
Request message for DeleteEvaluationJob.
DeleteInstructionRequest
Request message for DeleteInstruction.
DeleteOperationRequest
API documentation for datalabeling_v1beta1.types.DeleteOperationRequest class.
Duration
API documentation for datalabeling_v1beta1.types.Duration class.
Empty
API documentation for datalabeling_v1beta1.types.Empty class.
Evaluation
Describes an evaluation between a machine learning model’s predictions and ground truth labels. Created when an EvaluationJob runs successfully.
Output only. Options used in the evaluation job that created this evaluation.
Output only. Timestamp for when this evaluation was created.
Output only. Type of task that the model version being evaluated performs, as defined in the [evaluationJobConfig.inputConfig.annotationType][google.cloud.datalabeling.v1beta1.EvaluationJobConfig.input_config] field of the evaluation job that created this evaluation.
EvaluationConfig
Configuration details used for calculating evaluation metrics and creating an Evaluation.
Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
EvaluationJob
Defines an evaluation job that runs periodically to generate Evaluations. Creating an evaluation job (/ml-engine/docs/continuous-evaluation/create-job) is the starting point for using continuous evaluation.
Required. Description of the job. The description can be up to 25,000 characters long.
Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format (/scheduler/docs/configuring/cron-job-schedules) or in an English-like format (/appengine/docs/standard/python/config/cronref#schedule_format). Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
Required. Configuration details for the evaluation job.
Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
Output only. Timestamp of when this evaluation job was created.
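A hedged sketch of an EvaluationJob resource illustrating the schedule semantics described above. The field names follow the v1beta1 message as referenced in this listing, the ground-truth flag name (label_missing_ground_truth) is an assumption, and the model version path is a placeholder.
```python
from google.cloud import datalabeling_v1beta1

# The schedule is given in crontab format. Only the interval matters
# (here: one run per day); runs always start at 10:00 AM UTC.
job = datalabeling_v1beta1.types.EvaluationJob(
    description="Daily evaluation of the image classifier.",
    schedule="0 10 * * *",            # interval must be at least 1 day
    model_version="projects/my-project/models/my_model/versions/v1",  # placeholder
    label_missing_ground_truth=True,  # let the service provide ground truth labels
    evaluation_job_config=datalabeling_v1beta1.types.EvaluationJobConfig(),  # see EvaluationJobConfig below
)
```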
EvaluationJobAlertConfig
Provides details for how an evaluation job sends email alerts based on the results of a run.
Required. A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have [meanAveragePrecision][google.cloud.datalabeling.v1beta1.PrCurve.mean_average_precision] below this threshold, then it sends an alert to your specified email.
EvaluationJobConfig
Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob.
Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match [EvaluationJob.annotationSpecSet][google.cloud.datalabeling.v1beta1.EvaluationJob.annotation_spec_set]. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in [input_config][google.cloud.datalabeling.v1beta1.EvaluationJobConfig.input_config].
Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match [EvaluationJob.annotationSpecSet][google.cloud.datalabeling.v1beta1.EvaluationJob.annotation_spec_set]. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in [input_config][google.cloud.datalabeling.v1beta1.EvaluationJobConfig.input_config].
Required. Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field:
- data_json_key: the data key for prediction input. You must provide either this key or reference_json_key.
- reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key.
- label_json_key: the label key for prediction output. Required.
- label_score_json_key: the score key for prediction output. Required.
- bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection.
Learn how to configure prediction keys (/ml-engine/docs/continuous-evaluation/create-job#prediction-keys).
Required. Fraction of predictions to sample and save to BigQuery during each [evaluation interval][google.cloud.datalabeling.v1beta1.EvaluationJob.schedule]. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
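A hedged sketch of an EvaluationJobConfig for an image classification model. The name of the map field that carries the prediction keys listed above is an assumption (bigquery_import_keys), and the resource path is a placeholder.
```python
from google.cloud import datalabeling_v1beta1

config = datalabeling_v1beta1.types.EvaluationJobConfig(
    # For classification, this sub-config must match the evaluation
    # job's annotation spec set and the input's classification metadata.
    image_classification_config=datalabeling_v1beta1.types.ImageClassificationConfig(
        annotation_spec_set="projects/my-project/annotationSpecSets/my-specs",
        allow_multi_label=False,
    ),
    # Empty unless the model performs image object detection.
    evaluation_config=datalabeling_v1beta1.types.EvaluationConfig(),
    # Assumed field name for the prediction keys described above.
    bigquery_import_keys={
        "data_json_key": "image_uri",
        "label_json_key": "label",
        "label_score_json_key": "score",
    },
    example_sample_percentage=0.1,  # sample 10% of served predictions
)
```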
EvaluationMetrics
EventConfig
Config for video event human labeling task.
Example
An Example is a piece of data and its annotation. For example, an image with label “house”.
The image payload, a container of the image bytes/uri.
The video payload, a container of the video uri.
Output only. Annotations for the piece of data in Example. One piece of data can have multiple annotations.
ExportDataOperationMetadata
Metadata of an ExportData operation.
Output only. Partial failures encountered, e.g. single files that couldn't be read. The status details field will contain standard GCP error details.
ExportDataOperationResponse
Response used for the ExportDataset long-running operation.
Output only. Total number of examples requested to export.
Output only. Statistics of labels in the exported dataset.
ExportDataRequest
Request message for ExportData API.
Required. Annotated dataset resource name. DataItems in the Dataset and their annotations in the specified annotated dataset will be exported. It's in the format projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}
Required. Specify the output destination.
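A hedged sketch of an ExportDataRequest that writes annotations as CSV to Cloud Storage, using the OutputConfig and GcsDestination types listed further down; resource names and the bucket path are placeholders.
```python
from google.cloud import datalabeling_v1beta1

request = datalabeling_v1beta1.types.ExportDataRequest(
    name="projects/my-project/datasets/my-dataset",  # placeholder dataset name
    annotated_dataset=(
        "projects/my-project/datasets/my-dataset/"
        "annotatedDatasets/my-annotated-dataset"
    ),
    output_config=datalabeling_v1beta1.types.OutputConfig(
        gcs_destination=datalabeling_v1beta1.types.GcsDestination(
            output_uri="gs://my-bucket/export/annotations.csv",
            mime_type="text/csv",  # only "text/csv" and "application/json" are supported
        )
    ),
)
```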
FieldMask
API documentation for datalabeling_v1beta1.types.FieldMask class.
GcsDestination
Export destination of the data. Only a gcs path is allowed in output_uri.
Required. The format of the gcs destination. Only “text/csv” and “application/json” are supported.
GcsFolderDestination
Export folder destination of the data.
GcsSource
Source of the Cloud Storage file to be imported.
Required. The format of the source file. Only “text/csv” is supported.
GetAnnotatedDatasetRequest
Request message for GetAnnotatedDataset.
GetAnnotationSpecSetRequest
Request message for GetAnnotationSpecSet.
GetDataItemRequest
Request message for GetDataItem.
GetDatasetRequest
Request message for GetDataset.
GetEvaluationJobRequest
Request message for GetEvaluationJob.
GetEvaluationRequest
Request message for GetEvaluation.
GetExampleRequest
Request message for GetExample.
Optional. An expression for filtering Examples. Filtering by annotation_spec.display_name is supported. Format: "annotation_spec.display_name = {display_name}"
GetInstructionRequest
Request message for GetInstruction.
GetOperationRequest
API documentation for datalabeling_v1beta1.types.GetOperationRequest class.
HumanAnnotationConfig
Configuration for how a human labeling task should be done.
Required. A human-readable name for the AnnotatedDataset defined by users. Maximum of 64 characters.
Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\d_-]{0,128}.
Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image-related labeling, valid values are 1, 3, 5.
Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
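A hedged sketch of a HumanAnnotationConfig using the fields documented above; the instruction resource name and contributor email are placeholders.
```python
from google.cloud import datalabeling_v1beta1

basic_config = datalabeling_v1beta1.types.HumanAnnotationConfig(
    instruction="projects/my-project/instructions/my-instruction",  # placeholder
    annotated_dataset_display_name="pets-run-1",  # at most 64 characters
    label_group="pets_batch_a",                   # must match [a-zA-Z\d_-]{0,128}
    replica_count=3,                              # each question goes to up to 3 contributors
    contributor_emails=["labeler@example.com"],   # placeholder; must be registered
)
```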
ImageBoundingPolyAnnotation
Image bounding poly annotation. It represents a polygon (including the bounding box case) in the image.
Label of object in this bounding polygon.
ImageClassificationAnnotation
Image classification annotation definition.
ImageClassificationConfig
Config for image classification human labeling task.
Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
ImagePayload
Container of information about an image.
A byte string of a thumbnail image.
Signed uri of the image file in the service bucket.
ImagePolylineAnnotation
A polyline for the image annotation.
ImageSegmentationAnnotation
Image segmentation annotation.
Image format.
ImportDataOperationMetadata
Metadata of an ImportData operation.
Output only. Partial failures encountered, e.g. single files that couldn't be read. The status details field will contain standard GCP error details.
ImportDataOperationResponse
Response used for the ImportData long-running operation.
Output only. Total number of examples requested to import.
ImportDataRequest
Request message for ImportData API.
Required. Specify the input source of the data.
InputConfig
The configuration of input data, including data type, location, etc.
Required for text import, as the language code must be specified.
Source located in Cloud Storage.
Required. Data type must be specified when a user tries to import data.
Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an [EvaluationJob][google.cloud.datalabeling.v1beta1.EvaluationJob] for a model version that performs classification.
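A hedged sketch of an InputConfig for importing an image dataset from a CSV manifest in Cloud Storage; the bucket path is a placeholder, and the location of the DataType enum can differ between library versions.
```python
from google.cloud import datalabeling_v1beta1

input_config = datalabeling_v1beta1.types.InputConfig(
    data_type=datalabeling_v1beta1.types.DataType.IMAGE,  # enum path may vary by version
    gcs_source=datalabeling_v1beta1.types.GcsSource(
        input_uri="gs://my-bucket/import/images.csv",  # placeholder
        mime_type="text/csv",                          # only "text/csv" is supported
    ),
)
```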
Instruction
Instruction on how to perform the labeling task for human operators. Currently only PDF instruction is supported.
Required. The display name of the instruction. Maximum of 64 characters.
Output only. Creation time of instruction.
Required. The data type of this instruction.
Instruction from a PDF document. The PDF should be in a Cloud Storage bucket.
LabelImageBoundingBoxOperationMetadata
Metadata of a LabelImageBoundingBox operation.
LabelImageBoundingPolyOperationMetadata
Metadata of a LabelImageBoundingPoly operation.
LabelImageClassificationOperationMetadata
Metadata of a LabelImageClassification operation.
LabelImageOrientedBoundingBoxOperationMetadata
Metadata of a LabelImageOrientedBoundingBox operation.
LabelImagePolylineOperationMetadata
Metadata of a LabelImagePolyline operation.
LabelImageRequest
Request message for starting an image labeling task.
Configuration for image classification task. One of image_classification_config, bounding_poly_config, polyline_config and segmentation_config is required.
Configuration for polyline task. One of image_classification_config, bounding_poly_config, polyline_config and segmentation_config is required.
Required. Name of the dataset for which to request a labeling task, format: projects/{project_id}/datasets/{dataset_id}
Required. The type of image labeling task.
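A hedged sketch of a LabelImageRequest for a classification task, combining the ImageClassificationConfig and HumanAnnotationConfig types from this listing; resource names are placeholders and the Feature enum path may differ between library versions.
```python
from google.cloud import datalabeling_v1beta1

request = datalabeling_v1beta1.types.LabelImageRequest(
    parent="projects/my-project/datasets/my-dataset",  # placeholder
    basic_config=datalabeling_v1beta1.types.HumanAnnotationConfig(
        instruction="projects/my-project/instructions/my-instruction",
        annotated_dataset_display_name="image-classification-run",
    ),
    feature=datalabeling_v1beta1.types.LabelImageRequest.Feature.CLASSIFICATION,
    image_classification_config=datalabeling_v1beta1.types.ImageClassificationConfig(
        annotation_spec_set="projects/my-project/annotationSpecSets/my-specs",
        allow_multi_label=True,  # contributors may choose multiple labels per image
    ),
)
```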
LabelImageSegmentationOperationMetadata
Metadata of a LabelImageSegmentation operation.
LabelOperationMetadata
Metadata of a labeling operation, such as LabelImage or LabelVideo.
Details of label image classification operation.
Details of label image bounding poly operation.
Details of label image polyline operation.
Details of label video classification operation.
Details of label video object tracking operation.
Details of label text classification operation.
Output only. Progress of label operation. Range: [0, 100].
Output only. Timestamp when labeling request was created.
LabelStats
Statistics about annotation specs.
LabelTextClassificationOperationMetadata
Metadata of a LabelTextClassification operation.
LabelTextEntityExtractionOperationMetadata
Metadata of a LabelTextEntityExtraction operation.
LabelTextRequest
Request message for LabelText.
Configuration for text classification task. One of text_classification_config and text_entity_extraction_config is required.
Required. Name of the dataset to request labeling task, format: projects/{project_id}/datasets/{dataset_id}
Required. The type of text labeling task.
LabelVideoClassificationOperationMetadata
Metadata of a LabelVideoClassification operation.
LabelVideoEventOperationMetadata
Metadata of a LabelVideoEvent operation.
LabelVideoObjectDetectionOperationMetadata
Metadata of a LabelVideoObjectDetection operation.
LabelVideoObjectTrackingOperationMetadata
Metadata of a LabelVideoObjectTracking operation.
LabelVideoRequest
Request message for LabelVideo.
Configuration for video classification task. One of video_classification_config, object_detection_config, object_tracking_config and event_config is required.
Configuration for video object tracking task. One of video_classification_config, object_detection_config, object_tracking_config and event_config is required.
Required. Name of the dataset to request labeling task, format: projects/{project_id}/datasets/{dataset_id}
Required. The type of video labeling task.
ListAnnotatedDatasetsRequest
Request message for ListAnnotatedDatasets.
Optional. Filter is not supported at this moment.
Optional. A token identifying a page of results for the server to return. Typically obtained by [ListAnnotatedDatasetsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListAnnotatedDatasetsResponse.next_page_token] of the previous [DataLabelingService.ListAnnotatedDatasets] call. Returns the first page if empty.
ListAnnotatedDatasetsResponse
Results of listing annotated datasets for a dataset.
A token to retrieve next page of results.
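A minimal sketch of manual paging with these list messages: the follow-up request's page_token is taken from next_page_token of the previous response. The response here is constructed locally only to illustrate the flow; resource paths are placeholders.
```python
from google.cloud import datalabeling_v1beta1

# Stand-in for a ListAnnotatedDatasetsResponse returned by a previous call.
previous_response = datalabeling_v1beta1.types.ListAnnotatedDatasetsResponse(
    next_page_token="example-token"
)

next_request = datalabeling_v1beta1.types.ListAnnotatedDatasetsRequest(
    parent="projects/my-project/datasets/my-dataset",  # placeholder
    page_size=50,
    page_token=previous_response.next_page_token,  # empty string requests the first page
)
```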
ListAnnotationSpecSetsRequest
Request message for ListAnnotationSpecSets.
Optional. Filter is not supported at this moment.
Optional. A token identifying a page of results for the server to return. Typically obtained by [ListAnnotationSpecSetsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListAnnotationSpecSetsResponse.next_page_token] of the previous [DataLabelingService.ListAnnotationSpecSets] call. Returns the first page if empty.
ListAnnotationSpecSetsResponse
Results of listing annotation spec sets under a project.
A token to retrieve next page of results.
ListDataItemsRequest
Request message for ListDataItems.
Optional. Filter is not supported at this moment.
Optional. A token identifying a page of results for the server to return. Typically obtained by [ListDataItemsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListDataItemsResponse.next_page_token] of the previous [DataLabelingService.ListDataItems] call. Returns the first page if empty.
ListDataItemsResponse
Results of listing data items in a dataset.
A token to retrieve next page of results.
ListDatasetsRequest
Request message for ListDatasets.
Optional. Filter on dataset is not supported at this moment.
Optional. A token identifying a page of results for the server to return. Typically obtained by [ListDatasetsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListDatasetsResponse.next_page_token] of the previous [DataLabelingService.ListDatasets] call. Returns the first page if empty.
ListDatasetsResponse
Results of listing datasets within a project.
A token to retrieve next page of results.
ListEvaluationJobsRequest
Request message for ListEvaluationJobs.
Optional. You can filter the jobs to list by model_id (also known as model_name, as described in [EvaluationJob.modelVersion][google.cloud.datalabeling.v1beta1.EvaluationJob.model_version]) or by evaluation job state (as described in [EvaluationJob.state][google.cloud.datalabeling.v1beta1.EvaluationJob.state]). To filter by both criteria, use the AND operator or the OR operator. For example, you can use the following string for your filter: "evaluationjob.model_id = {model_name} AND evaluationjob.state = {evaluation_job_state}"
Optional. A token identifying a page of results for the server to return. Typically obtained by the [nextPageToken][google.cloud.datalabeling.v1beta1.ListEvaluationJobsResponse.next_page_token] in the response to the previous request. The request returns the first page if this is empty.
ListEvaluationJobsResponse
Results for listing evaluation jobs.
A token to retrieve next page of results.
ListExamplesRequest
Request message for ListExamples.
Optional. An expression for filtering Examples. For annotated datasets that have an annotation spec set, filtering by annotation_spec.display_name is supported. Format: "annotation_spec.display_name = {display_name}"
Optional. A token identifying a page of results for the server to return. Typically obtained by [ListExamplesResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListExamplesResponse.next_page_token] of the previous [DataLabelingService.ListExamples] call. Returns the first page if empty.
ListExamplesResponse
Results of listing Examples in an annotated dataset.
A token to retrieve next page of results.
ListInstructionsRequest
Request message for ListInstructions.
Optional. Filter is not supported at this moment.
Optional. A token identifying a page of results for the server to return. Typically obtained by [ListInstructionsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListInstructionsResponse.next_page_token] of the previous [DataLabelingService.ListInstructions] call. Returns the first page if empty.
ListInstructionsResponse
Results of listing instructions under a project.
A token to retrieve next page of results.
ListOperationsRequest
API documentation for datalabeling_v1beta1.types.ListOperationsRequest class.
ListOperationsResponse
API documentation for datalabeling_v1beta1.types.ListOperationsResponse class.
NormalizedBoundingPoly
Normalized bounding polygon.
NormalizedPolyline
Normalized polyline.
NormalizedVertex
X coordinate.
ObjectDetectionConfig
Config for video object detection human labeling task. Object detection will be conducted on the images extracted from the video, and those objects will be labeled with bounding boxes. Users need to specify the number of images to be extracted per second as the extraction frame rate.
Required. Number of frames per second to be extracted from the video.
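A hedged sketch of an ObjectDetectionConfig; the annotation spec set path is a placeholder and the frame-rate field name (extraction_frame_rate) is assumed from the "number of frames per second" field documented above.
```python
from google.cloud import datalabeling_v1beta1

detection_config = datalabeling_v1beta1.types.ObjectDetectionConfig(
    annotation_spec_set="projects/my-project/annotationSpecSets/vehicles",  # placeholder
    extraction_frame_rate=1.0,  # assumed field name: frames extracted per second of video
)
```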
ObjectDetectionMetrics
Metrics calculated for an image object detection (bounding box) model.
ObjectTrackingConfig
Config for video object tracking human labeling task.
ObjectTrackingFrame
Video frame level annotation for object detection and tracking.
The time offset of this frame relative to the beginning of the video.
Operation
API documentation for datalabeling_v1beta1.types.Operation class.
OperationInfo
API documentation for datalabeling_v1beta1.types.OperationInfo class.
OperatorMetadata
General information useful for labels coming from contributors.
The total number of contributors that answered this question.
Comments from contributors.
OutputConfig
The configuration of output data.
Output to a file in Cloud Storage. Should be used for labeling output other than image segmentation.
PauseEvaluationJobRequest
Request message for PauseEvaluationJob.
PdfInstruction
Instruction from a PDF file.
Polyline
A line with multiple line segments.
PolylineConfig
Config for image polyline human labeling task.
Optional. Instruction message shown on the contributors' UI.
PrCurve
Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
Mean average precision of this curve.
ResumeEvaluationJobRequest
Request message for ResumeEvaluationJob.
SearchEvaluationsRequest
Request message for SearchEvaluations.
Optional. To search evaluations, you can filter by the following:
- evaluation_job.evaluation_job_id (the last part of [EvaluationJob.name][google.cloud.datalabeling.v1beta1.EvaluationJob.name])
- evaluation_job.model_id (the {model_name} portion of [EvaluationJob.modelVersion][google.cloud.datalabeling.v1beta1.EvaluationJob.model_version])
- evaluation_job.evaluation_job_run_time_start (Minimum threshold for the [evaluationJobRunTime][google.cloud.datalabeling.v1beta1.Evaluation.evaluation_job_run_time] that created the evaluation)
- evaluation_job.evaluation_job_run_time_end (Maximum threshold for the [evaluationJobRunTime][google.cloud.datalabeling.v1beta1.Evaluation.evaluation_job_run_time] that created the evaluation)
- evaluation_job.job_state ([EvaluationJob.state][google.cloud.datalabeling.v1beta1.EvaluationJob.state])
- annotation_spec.display_name (the Evaluation contains a metric for the annotation spec with this [displayName][google.cloud.datalabeling.v1beta1.AnnotationSpec.display_name])
To filter by multiple criteria, use the AND operator or the OR operator. The following example shows a string that filters by several criteria:
"evaluation_job.evaluation_job_id = {evaluation_job_id} AND evaluationjob.model_id = {model_name} AND evaluationjob.evaluation_job_run_time_start = {timestamp_1} AND evaluationjob.evaluation_job_run_time_end = {timestamp_2} AND annotationspec.display_name = {display_name}"
Optional. A token identifying a page of results for the server to return. Typically obtained by the [nextPageToken][google.cloud.datalabeling.v1beta1.SearchEvaluationsResponse.next_page_token] of the response to a previous search request. If you don't specify this field, the API call requests the first page of the search.
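A hedged sketch of a SearchEvaluationsRequest using the filter grammar shown above; the parent path is a placeholder and the braced values in the filter string are meant to be substituted with real identifiers.
```python
from google.cloud import datalabeling_v1beta1

request = datalabeling_v1beta1.types.SearchEvaluationsRequest(
    parent="projects/my-project",  # placeholder
    filter=(
        "evaluation_job.evaluation_job_id = {evaluation_job_id} "
        "AND annotation_spec.display_name = {display_name}"
    ),  # substitute the braced placeholders with your own values
    page_size=25,
)
```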
SearchEvaluationsResponse
Results of searching evaluations.
A token to retrieve next page of results.
SearchExampleComparisonsRequest
Request message for SearchExampleComparisons.
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
SearchExampleComparisonsResponse
Results of searching example comparisons.
A token to retrieve next page of results.
SegmentationConfig
Config for image segmentation.
Instruction message shown on the labelers' UI.
SentimentConfig
Config for setting up sentiments.
SequentialSegment
Start and end position in a sequence (e.g. text segment).
End position (exclusive).
Status
API documentation for datalabeling_v1beta1.types.Status class.
TextClassificationAnnotation
Text classification annotation.
TextClassificationConfig
Config for text classification human labeling task.
Required. Annotation spec set resource name.
TextEntityExtractionAnnotation
Text entity extraction annotation.
Position of the entity.
TextEntityExtractionConfig
Config for text entity extraction human labeling task.
TextMetadata
Metadata for the text.
TextPayload
Container of information about a piece of text.
TimeSegment
A time period inside of an example that has a time dimension (e.g. video).
End of the time segment (exclusive), represented as the duration since the example start.
Timestamp
API documentation for datalabeling_v1beta1.types.Timestamp class.
UpdateEvaluationJobRequest
Request message for UpdateEvaluationJob.
Optional. Mask for which fields to update. You can only provide the following fields:
- evaluationJobConfig.humanAnnotationConfig.instruction
- evaluationJobConfig.exampleCount
- evaluationJobConfig.exampleSamplePercentage
You can provide more than one of these fields by separating them with commas.
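A hedged sketch of an UpdateEvaluationJobRequest that changes only the sampling rate. The documented mask paths above are camelCase; the snake_case equivalent is used here on the assumption that protobuf FieldMask paths follow field names, and the job name is a placeholder.
```python
from google.cloud import datalabeling_v1beta1
from google.protobuf import field_mask_pb2  # FieldMask also appears in this listing under types

request = datalabeling_v1beta1.types.UpdateEvaluationJobRequest(
    evaluation_job=datalabeling_v1beta1.types.EvaluationJob(
        name="projects/my-project/evaluationJobs/my-job",  # placeholder
        evaluation_job_config=datalabeling_v1beta1.types.EvaluationJobConfig(
            example_sample_percentage=0.2,
        ),
    ),
    update_mask=field_mask_pb2.FieldMask(
        paths=["evaluation_job_config.example_sample_percentage"]
    ),
)
```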
Vertex
X coordinate.
VideoClassificationAnnotation
Video classification annotation.
Label of the segment specified by time_segment.
VideoClassificationConfig
Config for video classification human labeling task. Currently two types of video classification are supported: 1. Assign labels on the entire video. 2. Split the video into multiple video clips based on camera shot, and assign labels on each video clip.
Optional. Option to apply shot detection on the video.
VideoEventAnnotation
Video event annotation.
The time segment of the video to which the annotation applies.
VideoObjectTrackingAnnotation
Video object tracking annotation.
The time segment of the video to which object tracking applies.
VideoPayload
Container of information of a video.
Video uri from the user bucket.
FPS of the video.
VideoThumbnail
Container of information of a video thumbnail.
Time offset relative to the beginning of the video, corresponding to the video frame from which the thumbnail has been extracted.
WaitOperationRequest
API documentation for datalabeling_v1beta1.types.WaitOperationRequest class.