REST Resource: projects.datasets.annotatedDatasets

Resource: AnnotatedDataset

AnnotatedDataset is a set holding annotations for data in a Dataset. Each labeling task generates an AnnotatedDataset under the Dataset for which the task was requested.

JSON representation
{
  "name": string,
  "displayName": string,
  "description": string,
  "annotationSource": enum (AnnotationSource),
  "annotationType": enum (AnnotationType),
  "exampleCount": string,
  "completedExampleCount": string,
  "labelStats": {
    object (LabelStats)
  },
  "createTime": string,
  "metadata": {
    object (AnnotatedDatasetMetadata)
  },
  "blockingResources": [
    string
  ]
}
Fields
name

string

Output only. AnnotatedDataset resource name in the format: projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}
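As a quick sketch, the resource name above can be assembled from its components (the IDs used here are hypothetical placeholders):

```python
def annotated_dataset_name(project_id: str, dataset_id: str,
                           annotated_dataset_id: str) -> str:
    """Build an AnnotatedDataset resource name in the documented format."""
    return (f"projects/{project_id}/datasets/{dataset_id}"
            f"/annotatedDatasets/{annotated_dataset_id}")

name = annotated_dataset_name("my-project", "ds123", "ad456")
```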

displayName

string

Output only. The display name of the AnnotatedDataset. It is specified in HumanAnnotationConfig when a user starts a labeling task. Maximum of 64 characters.

description

string

Output only. The description of the AnnotatedDataset. It is specified in HumanAnnotationConfig when a user starts a labeling task. Maximum of 10000 characters.

annotationSource

enum (AnnotationSource)

Output only. Source of the annotation.

annotationType

enum (AnnotationType)

Output only. Type of the annotation. It is specified when starting the labeling task.

exampleCount

string (int64 format)

Output only. Number of examples in the annotated dataset.

completedExampleCount

string (int64 format)

Output only. Number of examples that have annotation in the annotated dataset.

labelStats

object (LabelStats)

Output only. Per label statistics.

createTime

string (Timestamp format)

Output only. Time the AnnotatedDataset was created.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".
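Python's datetime only resolves microseconds, so a nanosecond-precision Zulu timestamp like the one above needs its fractional part truncated before parsing. A minimal sketch:

```python
from datetime import datetime, timezone

def parse_create_time(ts: str) -> datetime:
    """Parse an RFC3339 UTC "Zulu" timestamp, truncating nanoseconds
    to microseconds (the finest resolution datetime supports)."""
    assert ts.endswith("Z"), "expected a Zulu (UTC) timestamp"
    body = ts[:-1]
    if "." in body:
        whole, frac = body.split(".")
        body = f"{whole}.{frac[:6]}"  # keep at most 6 fractional digits
        dt = datetime.strptime(body, "%Y-%m-%dT%H:%M:%S.%f")
    else:
        dt = datetime.strptime(body, "%Y-%m-%dT%H:%M:%S")
    return dt.replace(tzinfo=timezone.utc)

t = parse_create_time("2014-10-02T15:01:23.045123456Z")
```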

metadata

object (AnnotatedDatasetMetadata)

Output only. Additional information about AnnotatedDataset.

blockingResources[]

string

Output only. The names of any related resources that are blocking changes to the annotated dataset.

AnnotationType

Enums
ANNOTATION_TYPE_UNSPECIFIED
IMAGE_CLASSIFICATION_ANNOTATION Classification annotations in an image.
IMAGE_BOUNDING_BOX_ANNOTATION Bounding box annotations in an image.
IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION Oriented bounding box. The box does not have to be parallel to the horizontal line.
IMAGE_BOUNDING_POLY_ANNOTATION Bounding poly annotations in an image.
IMAGE_POLYLINE_ANNOTATION Polyline annotations in an image.
IMAGE_SEGMENTATION_ANNOTATION Segmentation annotations in an image.
VIDEO_SHOTS_CLASSIFICATION_ANNOTATION Classification annotations in video shots.
VIDEO_OBJECT_TRACKING_ANNOTATION Video object tracking annotation.
VIDEO_OBJECT_DETECTION_ANNOTATION Video object detection annotation.
VIDEO_EVENT_ANNOTATION Video event annotation.
TEXT_CLASSIFICATION_ANNOTATION Classification for text.
TEXT_ENTITY_EXTRACTION_ANNOTATION Entity extraction for text.

LabelStats

Statistics about annotation specs.

JSON representation
{
  "exampleCount": {
    string: string,
    ...
  }
}
Fields
exampleCount

map (key: string, value: string (int64 format))

Map of each annotation spec's example count. Key is the annotation spec name and value is the number of examples for that annotation spec. If the annotated dataset does not have an annotation spec, the map contains a single pair whose key is the empty string and whose value is the total number of annotations.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
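Because int64 values are serialized as JSON strings, the per-spec counts must be converted before arithmetic. A small sketch (the spec names are hypothetical):

```python
def total_examples(label_stats: dict) -> int:
    """Sum per-spec example counts; int64 values arrive as JSON strings."""
    return sum(int(v) for v in label_stats.get("exampleCount", {}).values())

stats = {"exampleCount": {"dog": "120", "cat": "80"}}
total = total_examples(stats)
```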

AnnotatedDatasetMetadata

Metadata on AnnotatedDataset.

JSON representation
{
  "humanAnnotationConfig": {
    object (HumanAnnotationConfig)
  },

  // Union field annotation_request_config can be only one of the following:
  "imageClassificationConfig": {
    object (ImageClassificationConfig)
  },
  "boundingPolyConfig": {
    object (BoundingPolyConfig)
  },
  "polylineConfig": {
    object (PolylineConfig)
  },
  "segmentationConfig": {
    object (SegmentationConfig)
  },
  "videoClassificationConfig": {
    object (VideoClassificationConfig)
  },
  "objectDetectionConfig": {
    object (ObjectDetectionConfig)
  },
  "objectTrackingConfig": {
    object (ObjectTrackingConfig)
  },
  "eventConfig": {
    object (EventConfig)
  },
  "textClassificationConfig": {
    object (TextClassificationConfig)
  },
  "textEntityExtractionConfig": {
    object (TextEntityExtractionConfig)
  }
  // End of list of possible types for union field annotation_request_config.
}
Fields
humanAnnotationConfig

object (HumanAnnotationConfig)

HumanAnnotationConfig used when requesting the human labeling task for this AnnotatedDataset.

Union field annotation_request_config. Specific request configuration used when requesting the labeling task. annotation_request_config can be only one of the following:
imageClassificationConfig

object (ImageClassificationConfig)

Configuration for image classification task.

boundingPolyConfig

object (BoundingPolyConfig)

Configuration for image bounding box and bounding poly task.

polylineConfig

object (PolylineConfig)

Configuration for image polyline task.

segmentationConfig

object (SegmentationConfig)

Configuration for image segmentation task.

videoClassificationConfig

object (VideoClassificationConfig)

Configuration for video classification task.

objectDetectionConfig

object (ObjectDetectionConfig)

Configuration for video object detection task.

objectTrackingConfig

object (ObjectTrackingConfig)

Configuration for video object tracking task.

eventConfig

object (EventConfig)

Configuration for video event labeling task.

textClassificationConfig

object (TextClassificationConfig)

Configuration for text classification task.

textEntityExtractionConfig

object (TextEntityExtractionConfig)

Configuration for text entity extraction task.

ImageClassificationConfig

Config for image classification human labeling task.

JSON representation
{
  "annotationSpecSet": string,
  "allowMultiLabel": boolean,
  "answerAggregationType": enum (StringAggregationType)
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name.

allowMultiLabel

boolean

Optional. If allowMultiLabel is true, contributors are able to choose multiple labels for one image.

answerAggregationType

enum (StringAggregationType)

Optional. How contributors' answers should be aggregated.
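A minimal sketch of an ImageClassificationConfig request body; the annotationSpecSet resource name below is a hypothetical placeholder:

```python
import json

image_classification_config = {
    "annotationSpecSet": "projects/my-project/annotationSpecSets/my-specs",
    "allowMultiLabel": True,
    "answerAggregationType": "MAJORITY_VOTE",
}
body = json.dumps(image_classification_config)
```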

StringAggregationType

Enums
STRING_AGGREGATION_TYPE_UNSPECIFIED
MAJORITY_VOTE Majority vote to aggregate answers.
UNANIMOUS_VOTE Unanimous answers will be adopted.
NO_AGGREGATION Preserve all answers submitted by crowd compute contributors; no aggregation is applied.
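To illustrate the MAJORITY_VOTE option, here is a sketch of majority-vote aggregation over replicated answers (tie-breaking here is arbitrary; the service's actual behavior is not documented in this section):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate replicated answers: the most common label wins."""
    return Counter(answers).most_common(1)[0][0]
```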

BoundingPolyConfig

Config for image bounding poly (and bounding box) human labeling task.

JSON representation
{
  "annotationSpecSet": string,
  "instructionMessage": string
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name.

instructionMessage

string

Optional. Instruction message shown on the contributors' UI.

PolylineConfig

Config for image polyline human labeling task.

JSON representation
{
  "annotationSpecSet": string,
  "instructionMessage": string
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name.

instructionMessage

string

Optional. Instruction message shown on the contributors' UI.

SegmentationConfig

Config for image segmentation human labeling task.

JSON representation
{
  "annotationSpecSet": string,
  "instructionMessage": string
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name. format: projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}

instructionMessage

string

Instruction message shown on the labelers' UI.

VideoClassificationConfig

Config for video classification human labeling task. Currently two types of video classification are supported: 1. Assign labels to the entire video. 2. Split the video into multiple clips based on camera shots, and assign labels to each clip.

JSON representation
{
  "annotationSpecSetConfigs": [
    {
      object (AnnotationSpecSetConfig)
    }
  ],
  "applyShotDetection": boolean
}
Fields
annotationSpecSetConfigs[]

object (AnnotationSpecSetConfig)

Required. The list of annotation spec set configs. Since watching a video clip takes much longer than viewing an image, labeling with multiple AnnotationSpecSets at the same time is supported. Labels in each AnnotationSpecSet are shown to contributors as a group, and contributors can select one or more labels from each group (depending on whether multi-label is allowed).

applyShotDetection

boolean

Optional. Option to apply shot detection on the video.

AnnotationSpecSetConfig

Annotation spec set with the setting of allowing multi labels or not.

JSON representation
{
  "annotationSpecSet": string,
  "allowMultiLabel": boolean
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name.

allowMultiLabel

boolean

Optional. If allowMultiLabel is true, contributors are able to choose multiple labels from one annotation spec set.
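Putting the two messages together, a sketch of a VideoClassificationConfig body with two label groups (the spec set names are hypothetical placeholders):

```python
video_config = {
    "annotationSpecSetConfigs": [
        # Each AnnotationSpecSetConfig is shown to contributors as one group.
        {"annotationSpecSet": "projects/my-project/annotationSpecSets/actions",
         "allowMultiLabel": True},
        {"annotationSpecSet": "projects/my-project/annotationSpecSets/scenes",
         "allowMultiLabel": False},
    ],
    "applyShotDetection": True,
}
```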

ObjectDetectionConfig

Config for video object detection human labeling task. Object detection is conducted on images extracted from the video, and those objects are labeled with bounding boxes. Users need to specify the number of images to extract per second as the extraction frame rate.

JSON representation
{
  "annotationSpecSet": string,
  "extractionFrameRate": number
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name.

extractionFrameRate

number

Required. Number of frames per second to be extracted from the video.
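As an illustration of the arithmetic (not an API guarantee), the number of images extracted from a video is roughly its duration times the extraction frame rate:

```python
import math

def extracted_frame_count(video_seconds: float,
                          extraction_frame_rate: float) -> int:
    """Estimate how many frames a video yields at the given extraction rate."""
    return math.ceil(video_seconds * extraction_frame_rate)
```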

ObjectTrackingConfig

Config for video object tracking human labeling task.

JSON representation
{
  "annotationSpecSet": string
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name.

EventConfig

Config for video event human labeling task.

JSON representation
{
  "annotationSpecSets": [
    string
  ]
}
Fields
annotationSpecSets[]

string

Required. The list of annotation spec set resource names. As with video classification, selecting events from multiple AnnotationSpecSets at the same time is supported.

TextClassificationConfig

Config for text classification human labeling task.

JSON representation
{
  "allowMultiLabel": boolean,
  "annotationSpecSet": string,
  "sentimentConfig": {
    object (SentimentConfig)
  }
}
Fields
allowMultiLabel

boolean

Optional. If allowMultiLabel is true, contributors are able to choose multiple labels for one text segment.

annotationSpecSet

string

Required. Annotation spec set resource name.

sentimentConfig

object (SentimentConfig)

Optional. Configs for sentiment selection.

SentimentConfig

Config for setting up sentiments.

JSON representation
{
  "enableLabelSentimentSelection": boolean
}
Fields
enableLabelSentimentSelection

boolean

If set to true, contributors will have the option to select the sentiment of the label they chose, marking it as a negative or positive label. Default is false.

TextEntityExtractionConfig

Config for text entity extraction human labeling task.

JSON representation
{
  "annotationSpecSet": string
}
Fields
annotationSpecSet

string

Required. Annotation spec set resource name.

HumanAnnotationConfig

Configuration for how human labeling task should be done.

JSON representation
{
  "instruction": string,
  "annotatedDatasetDisplayName": string,
  "annotatedDatasetDescription": string,
  "labelGroup": string,
  "languageCode": string,
  "replicaCount": number,
  "questionDuration": string,
  "contributorEmails": [
    string
  ],
  "userEmailAddress": string
}
Fields
instruction

string

Required except for the audio.label case. Instruction resource name.

annotatedDatasetDisplayName

string

Required. A human-readable name for the AnnotatedDataset, defined by users. Maximum of 64 characters.

annotatedDatasetDescription

string

Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.

labelGroup

string

Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
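The labelGroup constraint above can be checked client-side before submitting. A sketch using Python's re module (the pattern is taken directly from the field description):

```python
import re

# Letters, digits, underscore, hyphen; up to 128 characters.
_LABEL_GROUP_RE = re.compile(r"[a-zA-Z\d_-]{0,128}")

def valid_label_group(s: str) -> bool:
    """True if s matches the documented labelGroup pattern in full."""
    return _LABEL_GROUP_RE.fullmatch(s) is not None
```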

languageCode

string

Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Set this only when the task is language related, for example French text classification or Chinese audio transcription.

replicaCount

number

Optional. Replication of questions. Each question is sent to up to this number of contributors, and aggregated answers are returned. Default is 1. For image-related labeling, valid values are 1, 3, and 5.

questionDuration

string (Duration format)

Optional. Maximum duration for contributors to answer a question. Default is 1800 seconds.

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".

contributorEmails[]

string

Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set those contributors here. They will be given access to the question types in crowd compute. Note that these emails must be registered in the crowd compute worker UI: https://crowd-compute.appspot.com/

userEmailAddress

string

Email of the user who started the labeling task and who should be notified by email. If empty, no notification will be sent.
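Drawing the field descriptions above together, a minimal HumanAnnotationConfig body might be assembled like this. The instruction resource name is a hypothetical placeholder, and questionDuration uses the documented Duration format (seconds terminated by 's'):

```python
def human_annotation_config(instruction: str, display_name: str,
                            question_seconds: int = 1800) -> dict:
    """Assemble a minimal HumanAnnotationConfig (sketch, not exhaustive)."""
    return {
        "instruction": instruction,
        "annotatedDatasetDisplayName": display_name[:64],  # 64-char limit
        "questionDuration": f"{question_seconds}s",        # e.g. "1800s"
    }

cfg = human_annotation_config("projects/my-project/instructions/my-instr",
                              "My labeling run")
```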

Methods

delete

Deletes an annotated dataset by resource name.

get

Gets an annotated dataset by resource name.

list

Lists annotated datasets for a dataset.
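For orientation, a sketch of building the URL for the list method. No request is sent here, and both the host and the "v1beta1" version segment are assumptions; confirm them against the service's discovery document before use:

```python
def list_annotated_datasets_url(project_id: str, dataset_id: str) -> str:
    """Build the (assumed) REST URL for listing annotated datasets."""
    parent = f"projects/{project_id}/datasets/{dataset_id}"
    return f"https://datalabeling.googleapis.com/v1beta1/{parent}/annotatedDatasets"
```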