Google Cloud Datalabeling V1beta1 Client - Class EvaluationJob (0.5.6)

Reference documentation and code samples for the Google Cloud Datalabeling V1beta1 Client class EvaluationJob.

Defines an evaluation job that runs periodically to generate Evaluations. Creating an evaluation job is the starting point for using continuous evaluation.

Generated from protobuf message google.cloud.datalabeling.v1beta1.EvaluationJob

Namespace

Google \ Cloud \ DataLabeling \ V1beta1

Methods

__construct

Constructor.

Parameters
Name Description
data array

Optional. Data for populating the Message object.

↳ name string

Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

↳ description string

Required. Description of the job. The description can be up to 25,000 characters long.

↳ state int

Output only. Describes the current state of the job.

↳ schedule string

Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

↳ model_version string

Required. The AI Platform Prediction model version to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}". There can be only one evaluation job per model version.

↳ evaluation_job_config EvaluationJobConfig

Required. Configuration details for the evaluation job.

↳ annotation_spec_set string

Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

↳ label_missing_ground_truth bool

Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

↳ attempts array<Attempt>

Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

↳ create_time Google\Protobuf\Timestamp

Output only. Timestamp of when this evaluation job was created.
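The required fields described above can be supplied to the constructor as a single data array using the snake_case field names. The following is a minimal sketch, assuming the google/cloud-datalabeling package is installed via Composer; the project, model, and annotation spec set names are placeholder values.

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;
use Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig;

// Build an EvaluationJob message, populating the required fields.
// The resource names below are placeholders, not real resources.
$job = new EvaluationJob([
    'description' => 'Nightly evaluation of the image classifier',
    'schedule' => '0 10 * * *', // crontab format; only the interval is used
    'model_version' => 'projects/my-project/models/my_model/versions/v1',
    'evaluation_job_config' => new EvaluationJobConfig(),
    'annotation_spec_set' => 'projects/my-project/annotationSpecSets/my_spec_set',
    'label_missing_ground_truth' => true,
]);

echo $job->getDescription(), PHP_EOL;
```

Note that output-only fields such as name, state, attempts, and create_time are assigned by the service and are normally not set when constructing the message locally.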

getName

Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

Returns
Type Description
string

setName

Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

Parameter
Name Description
var string
Returns
Type Description
$this

getDescription

Required. Description of the job. The description can be up to 25,000 characters long.

Returns
Type Description
string

setDescription

Required. Description of the job. The description can be up to 25,000 characters long.

Parameter
Name Description
var string
Returns
Type Description
$this

getState

Output only. Describes the current state of the job.

Returns
Type Description
int

setState

Output only. Describes the current state of the job.

Parameter
Name Description
var int
Returns
Type Description
$this

getSchedule

Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days.

You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

Returns
Type Description
string

setSchedule

Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days.

You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.

Parameter
Name Description
var string
Returns
Type Description
$this
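Because setters return $this, schedule and other fields can be set fluently. The sketch below uses an illustrative crontab string; per the description above, the service reads only the interval from it (here, every 2 days) and always runs the job at 10:00 AM UTC regardless of the time of day specified.

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;

$job = new EvaluationJob();

// Chained setters: each returns $this. The crontab value is illustrative;
// only its interval (every 2 days) is honored by the service.
$job->setSchedule('0 10 */2 * *')
    ->setDescription('Evaluation every 2 days');

echo $job->getSchedule(), PHP_EOL;
```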

getModelVersion

Required. The AI Platform Prediction model version to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}". There can be only one evaluation job per model version.

Returns
Type Description
string

setModelVersion

Required. The AI Platform Prediction model version to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}". There can be only one evaluation job per model version.

Parameter
Name Description
var string
Returns
Type Description
$this

getEvaluationJobConfig

Required. Configuration details for the evaluation job.

Returns
Type Description
EvaluationJobConfig|null

hasEvaluationJobConfig

clearEvaluationJobConfig

setEvaluationJobConfig

Required. Configuration details for the evaluation job.

Parameter
Name Description
var EvaluationJobConfig
Returns
Type Description
$this

getAnnotationSpecSet

Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

Returns
Type Description
string

setAnnotationSpecSet

Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"

Parameter
Name Description
var string
Returns
Type Description
$this

getLabelMissingGroundTruth

Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

Returns
Type Description
bool

setLabelMissingGroundTruth

Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

Parameter
Name Description
var bool
Returns
Type Description
$this

getAttempts

Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

Returns
Type Description
Google\Protobuf\Internal\RepeatedField
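The RepeatedField returned by getAttempts is both iterable and countable. A short sketch: on a job fetched from the service this list holds one Attempt per failed run, but for a locally constructed message (as below) it is simply empty.

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;

$job = new EvaluationJob();

// RepeatedField implements IteratorAggregate and Countable, so it can be
// looped over and counted like an array.
foreach ($job->getAttempts() as $attempt) {
    // Each $attempt is an Attempt message with its own accessors.
}

echo count($job->getAttempts()), PHP_EOL; // 0 for a locally built message
```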

setAttempts

Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

Parameter
Name Description
var array<Attempt>
Returns
Type Description
$this

getCreateTime

Output only. Timestamp of when this evaluation job was created.

Returns
Type Description
Google\Protobuf\Timestamp|null

hasCreateTime

clearCreateTime
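Message-typed fields such as create_time follow the nullable has/clear pattern: hasCreateTime reports whether the field is set, and clearCreateTime unsets it so that getCreateTime returns null. A minimal sketch (the timestamp value is illustrative; in practice this field is output only and populated by the service):

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;
use Google\Protobuf\Timestamp;

$job = new EvaluationJob();

// Message-typed fields are null until set; use has*() to check.
var_dump($job->hasCreateTime()); // bool(false)

$job->setCreateTime(new Timestamp(['seconds' => time()]));
var_dump($job->hasCreateTime()); // bool(true)

// clearCreateTime() unsets the field, so getCreateTime() returns null again.
$job->clearCreateTime();
var_dump($job->getCreateTime()); // NULL
```

The same pattern applies to evaluation_job_config via hasEvaluationJobConfig and clearEvaluationJobConfig.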

setCreateTime

Output only. Timestamp of when this evaluation job was created.

Parameter
Name Description
var Google\Protobuf\Timestamp
Returns
Type Description
$this