Defines an evaluation job that runs periodically to generate Evaluations. `Creating an evaluation job </ml-engine/docs/continuous-evaluation/create-job>`__ is the starting point for using continuous evaluation.
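A minimal sketch of that starting point, assuming the ``google-cloud-datalabeling`` Python client; the project path, model version, and field values below are placeholders, and a complete job also needs ``evaluation_job_config``, omitted here for brevity:

.. code-block:: python

    # Sketch only: assumes the google-cloud-datalabeling client library.
    # Project ID, model version, and field values are placeholders.
    from google.cloud import datalabeling_v1beta1 as datalabeling

    client = datalabeling.DataLabelingServiceClient()

    job = datalabeling.EvaluationJob(
        description="Nightly evaluation of the sentiment model",
        schedule="every 24 hours",  # only the interval is used
        model_version="projects/my-project/models/my_model/versions/v1",
        label_missing_ground_truth=False,  # ground truth supplied in BigQuery
    )

    # Registers the job so it runs on the schedule above.
    created = client.create_evaluation_job(parent="projects/my-project", job=job)
    print(created.name)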
Required. Description of the job. The description can be up to 25,000 characters long.
Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in `crontab format </scheduler/docs/configuring/cron-job-schedules>`__ or in an `English-like format </appengine/docs/standard/python/config/cronref#schedule_format>`__. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
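For example, either of the following values (shown here as Python string literals) expresses a one-day interval; because only the interval is honored, both result in a daily run at 10:00 AM UTC:

.. code-block:: python

    # Example schedule values; only the interval they express is used.
    schedule = "0 10 * * *"      # crontab format: once per day
    # or
    schedule = "every 24 hours"  # English-like (App Engine cron) format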
Required. Configuration details for the evaluation job.
Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to ``true``. If you want to provide your own ground truth labels in the evaluation job’s BigQuery table, set this to ``false``.
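Illustratively, the two modes map to the following settings (a sketch, reusing the hypothetical ``job`` object from the earlier example):

.. code-block:: python

    # Sketch only, continuing the hypothetical job from the example above.
    job.label_missing_ground_truth = True   # Data Labeling Service assigns human labelers
    # or
    job.label_missing_ground_truth = False  # you supply labels in the job's BigQuery table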
Output only. Timestamp of when this evaluation job was created.