EvaluationJob(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Defines an evaluation job that runs periodically to generate
Evaluations.
[Creating an evaluation job](/ml-engine/docs/continuous-evaluation/create-job)
is the starting point for using continuous evaluation.
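A minimal sketch of creating one of these jobs with the v1beta1 Python client. The project, model version, and annotation spec set names below are placeholders you replace with your own, and a real `EvaluationJobConfig` also needs its input and evaluation settings filled in:

```python
from google.cloud import datalabeling_v1beta1 as datalabeling

client = datalabeling.DataLabelingServiceClient()

job = datalabeling.EvaluationJob(
    description="Nightly evaluation of my image classifier",
    # Only the interval matters; the job always runs at 10:00 AM UTC.
    schedule="every 24 hours",
    model_version="projects/my-project/models/my_model/versions/v1",
    annotation_spec_set="projects/my-project/annotationSpecSets/my_spec_set",
    # Ground truth labels are already in the evaluation job's BigQuery table.
    label_missing_ground_truth=False,
    # Placeholder config; populate input_config and evaluation_config
    # before the service will accept the job.
    evaluation_job_config=datalabeling.EvaluationJobConfig(),
)

created = client.create_evaluation_job(parent="projects/my-project", job=job)
print(created.name)  # projects/{project_id}/evaluationJobs/{evaluation_job_id}
```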
Attributes

| Name | Type | Description |
|---|---|---|
| name | str | Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: `projects/{project_id}/evaluationJobs/{evaluation_job_id}`. |
| description | str | Required. Description of the job. The description can be up to 25,000 characters long. |
| state | google.cloud.datalabeling_v1beta1.types.EvaluationJob.State | Output only. Describes the current state of the job. |
| schedule | str | Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job runs at 10:00 AM UTC; only the interval from this schedule is used, not the specific time of day. |
| model_version | str | Required. The AI Platform Prediction model version to be evaluated. Prediction input and output are sampled from this model version. When creating an evaluation job, specify the model version in the following format: `projects/{project_id}/models/{model_name}/versions/{version_name}`. There can only be one evaluation job per model version. |
| evaluation_job_config | google.cloud.datalabeling_v1beta1.types.EvaluationJobConfig | Required. Configuration details for the evaluation job. |
| annotation_spec_set | str | Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: `projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}`. |
| label_missing_ground_truth | bool | Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to `true`. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to `false`. |
| attempts | MutableSequence[google.cloud.datalabeling_v1beta1.types.Attempt] | Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array. |
| create_time | google.protobuf.timestamp_pb2.Timestamp | Output only. Timestamp of when this evaluation job was created. |
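Since `name`, `state`, `attempts`, and `create_time` are output only, you read them back from the service rather than set them. A short sketch using the client's `get_evaluation_job` method (the job name below is a placeholder); each `Attempt` carries the partial failures from one failed run:

```python
from google.cloud import datalabeling_v1beta1 as datalabeling

client = datalabeling.DataLabelingServiceClient()

job = client.get_evaluation_job(
    name="projects/my-project/evaluationJobs/12345"  # placeholder job ID
)

print(job.state)        # e.g. State.SCHEDULED
print(job.create_time)  # google.protobuf.timestamp_pb2.Timestamp
for attempt in job.attempts:  # one entry per failed run
    for status in attempt.partial_failures:
        print(attempt.attempt_time, status.message)
```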
Classes
State
State(value)
State of the job.
Values:
STATE_UNSPECIFIED (0):
SCHEDULED (1):
The job is scheduled to run at the interval configured in
`schedule`. You can pause or delete the job.
When the job is in this state, it samples prediction input
and output from your model version into your BigQuery table
as predictions occur.
RUNNING (2):
The job is currently running. When the job runs, Data
Labeling Service does several things:
1. If you have configured your job to use Data Labeling
Service for ground truth labeling, the service creates a
Dataset and a labeling task for all data sampled since the
last time the job ran. Human labelers provide ground truth
labels for your data. Human labeling may take hours, or even
days, depending on how much data has been sampled. The
job remains in the `RUNNING` state during this time,
and it can even be running multiple times in parallel if
it gets triggered again (for example, 24 hours later)
before the earlier run has completed. When human labelers
have finished labeling the data, the next step occurs. If
you have configured your job to provide your own ground
truth labels, Data Labeling Service still creates a
Dataset for newly sampled data, but it expects that you
have already added ground truth labels to the BigQuery
table by this time. The next step occurs immediately.
2. Data Labeling Service creates an Evaluation
by comparing your model version's predictions with the
ground truth labels.
If the job remains in this state for a long time, it
continues to sample prediction data into your BigQuery table
and will run again at the next interval, even if it causes
the job to run multiple times in parallel.
PAUSED (3):
The job is not sampling prediction input and output into
your BigQuery table and it will not run according to its
schedule. You can resume the job.
STOPPED (4):
The job has this state right before it is
deleted.
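As a sketch of how these states drive the job lifecycle (the job name below is a placeholder), a SCHEDULED job can be paused with the client's `pause_evaluation_job` method, and a PAUSED one resumed with `resume_evaluation_job`:

```python
from google.cloud import datalabeling_v1beta1 as datalabeling

client = datalabeling.DataLabelingServiceClient()
job_name = "projects/my-project/evaluationJobs/12345"  # placeholder job ID

job = client.get_evaluation_job(name=job_name)
State = datalabeling.EvaluationJob.State

if job.state == State.SCHEDULED:
    # Stops sampling and scheduled runs; the job moves to PAUSED.
    client.pause_evaluation_job(name=job_name)
elif job.state == State.PAUSED:
    # Restores sampling and the run schedule.
    client.resume_evaluation_job(name=job_name)
```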