REST Resource: projects.locations.lakes.tasks

Resource: Task

A task represents a user-visible job.

JSON representation
{
  "name": string,
  "uid": string,
  "createTime": string,
  "updateTime": string,
  "description": string,
  "displayName": string,
  "state": enum (State),
  "labels": {
    string: string,
    ...
  },
  "triggerSpec": {
    object (TriggerSpec)
  },
  "executionSpec": {
    object (ExecutionSpec)
  },
  "executionStatus": {
    object (ExecutionStatus)
  },

  // Union field config can be only one of the following:
  "spark": {
    object (SparkTaskConfig)
  },
  "notebook": {
    object (NotebookTaskConfig)
  }
  // End of list of possible types for union field config.
}
Fields
name

string

Output only. The relative resource name of the task, of the form: projects/{project_number}/locations/{locationId}/lakes/{lakeId}/tasks/{taskId}.

uid

string

Output only. System generated globally unique ID for the task. This ID will be different if the task is deleted and re-created with the same name.

createTime

string (Timestamp format)

Output only. The time when the task was created.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

updateTime

string (Timestamp format)

Output only. The time when the task was last updated.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

description

string

Optional. Description of the task.

displayName

string

Optional. User-friendly display name.

state

enum (State)

Output only. Current state of the task.

labels

map (key: string, value: string)

Optional. User-defined labels for the task.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

triggerSpec

object (TriggerSpec)

Required. Spec related to how often and when a task should be triggered.

executionSpec

object (ExecutionSpec)

Required. Spec related to how a task is executed.

executionStatus

object (ExecutionStatus)

Output only. Status of the latest task executions.

Union field config. User-specified configuration specific to the task template. config can be only one of the following:
spark

object (SparkTaskConfig)

Config related to running custom Spark tasks.

notebook

object (NotebookTaskConfig)

Config related to running scheduled Notebooks.
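
For illustration, a Task create payload (omitting output-only fields) for an on-demand notebook task might look like the following. The bucket, notebook path, and service account are hypothetical, and the executionSpec field shown is an assumption, since ExecutionSpec is documented separately:

{
  "description": "One-off data quality check",
  "displayName": "Data quality check",
  "triggerSpec": {
    "type": "ON_DEMAND"
  },
  "executionSpec": {
    // serviceAccount is shown as an assumed ExecutionSpec field.
    "serviceAccount": "example-sa@my-project.iam.gserviceaccount.com"
  },
  "notebook": {
    "notebook": "gs://my-bucket/notebooks/quality-check.ipynb"
  }
}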

TriggerSpec

Task scheduling and trigger settings.

JSON representation
{
  "type": enum (Type),
  "startTime": string,
  "disabled": boolean,
  "maxRetries": integer,

  // Union field trigger can be only one of the following:
  "schedule": string
  // End of list of possible types for union field trigger.
}
Fields
type

enum (Type)

Required. Immutable. Trigger type of the user-specified Task.

startTime

string (Timestamp format)

Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND, or based on the schedule if RECURRING.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

disabled

boolean

Optional. Prevents the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.

maxRetries

integer

Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.

Union field trigger. Trigger only applies for RECURRING tasks. trigger can be only one of the following:
schedule

string

Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a time zone for the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". ${IANA_TIME_ZONE} must be a valid string from the IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
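
For example, a recurring trigger that fires at minute 1 of every hour in the America/New_York time zone, retrying failed runs up to three times, could be written as follows (the start time is illustrative):

{
  "type": "RECURRING",
  "startTime": "2014-10-02T15:01:23Z",
  "maxRetries": 3,
  "schedule": "CRON_TZ=America/New_York 1 * * * *"
}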

Type

Determines how often and when the job will run.

Enums
TYPE_UNSPECIFIED Unspecified trigger type.
ON_DEMAND The task runs one time, shortly after task creation.
RECURRING The task is scheduled to run periodically.

ExecutionStatus

Status of task executions (for example, jobs).

JSON representation
{
  "updateTime": string,
  "latestJob": {
    object (Job)
  }
}
Fields
updateTime

string (Timestamp format)

Output only. Last update time of the status.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

latestJob

object (Job)

Output only. The latest job execution.

SparkTaskConfig

User-specified config for running a Spark task.

JSON representation
{
  "fileUris": [
    string
  ],
  "archiveUris": [
    string
  ],
  "infrastructureSpec": {
    object (InfrastructureSpec)
  },

  // Union field driver can be only one of the following:
  "mainJarFileUri": string,
  "mainClass": string,
  "pythonScriptFile": string,
  "sqlScriptFile": string,
  "sqlScript": string
  // End of list of possible types for union field driver.
}
Fields
fileUris[]

string

Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.

archiveUris[]

string

Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

infrastructureSpec

object (InfrastructureSpec)

Optional. Infrastructure specification for the execution.

Union field driver. Required. The specification of the main method to call to drive the job. Specify either the jar file that contains the main class or the main class name. driver can be only one of the following:
mainJarFileUri

string

The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).

mainClass

string

The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).

pythonScriptFile

string

The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).

sqlScriptFile

string

A reference to a query file. This can be the Cloud Storage URI of the query file, or it can be the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).

sqlScript

string

The query text. The execution args are used to declare a set of script variables (set key="value";).
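
As a sketch, a Spark task driven by a main JAR, with a config file staged into each executor's working directory and serverless batch compute, might be configured like this (the bucket and file names are hypothetical):

{
  "mainJarFileUri": "gs://my-bucket/jars/my-spark-job.jar",
  "fileUris": [
    "gs://my-bucket/config/job-settings.conf"
  ],
  "infrastructureSpec": {
    "batch": {
      "executorsCount": 2,
      "maxExecutorsCount": 10
    }
  }
}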

InfrastructureSpec

Configuration for the underlying infrastructure used to run workloads.

JSON representation
{

  // Union field resources can be only one of the following:
  "batch": {
    object (BatchComputeResources)
  }
  // End of list of possible types for union field resources.

  // Union field runtime can be only one of the following:
  "containerImage": {
    object (ContainerImageRuntime)
  }
  // End of list of possible types for union field runtime.

  // Union field network can be only one of the following:
  "vpcNetwork": {
    object (VpcNetwork)
  }
  // End of list of possible types for union field network.
}
Fields
Union field resources. Hardware config. resources can be only one of the following:
batch

object (BatchComputeResources)

Compute resources needed for a Task when using Dataproc Serverless.

Union field runtime. Software config. runtime can be only one of the following:
containerImage

object (ContainerImageRuntime)

Container Image Runtime Configuration.

Union field network. Networking config. network can be only one of the following:
vpcNetwork

object (VpcNetwork)

VPC network.
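
Combining the three unions, an InfrastructureSpec that runs on serverless batch compute with a custom container image inside a specific VPC sub-network might look like the following (the image, tag, and sub-network names are hypothetical):

{
  "batch": {
    "executorsCount": 2,
    "maxExecutorsCount": 10
  },
  "containerImage": {
    "image": "gcr.io/my-project/custom-spark:1.0"
  },
  "vpcNetwork": {
    "networkTags": [
      "allow-internal"
    ],
    "subNetwork": "my-subnetwork"
  }
}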

BatchComputeResources

Batch compute resources associated with the task.

JSON representation
{
  "executorsCount": integer,
  "maxExecutorsCount": integer
}
Fields
executorsCount

integer

Optional. Total number of job executors. The executor count should be between 2 and 100. [Default=2]

maxExecutorsCount

integer

Optional. Max configurable executors. If maxExecutorsCount > executorsCount, then auto-scaling is enabled. The max executor count should be between 2 and 1000. [Default=1000]
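
For example, the following configuration starts with 2 executors and, because maxExecutorsCount is greater than executorsCount, enables auto-scaling up to 10 executors:

{
  "executorsCount": 2,
  "maxExecutorsCount": 10
}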

ContainerImageRuntime

Container Image Runtime Configuration used with Batch execution.

JSON representation
{
  "image": string,
  "javaJars": [
    string
  ],
  "pythonPackages": [
    string
  ],
  "properties": {
    string: string,
    ...
  }
}
Fields
image

string

Optional. Container image to use.

javaJars[]

string

Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs of JAR binaries. For example, gs://bucket-name/my/path/to/file.jar.

pythonPackages[]

string

Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz.

properties

map (key: string, value: string)

Optional. Overrides to the common configuration of open source components installed on the Dataproc cluster; these are the properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
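
A runtime that combines a custom image, an extra JAR, a pip-installable package, and a property override in prefix:property format might look like this (the image, paths, and property value are hypothetical; the property key reuses the example above):

{
  "image": "gcr.io/my-project/custom-spark:1.0",
  "javaJars": [
    "gs://bucket-name/my/path/to/file.jar"
  ],
  "pythonPackages": [
    "gs://bucket-name/my/path/to/lib.tar.gz"
  ],
  "properties": {
    "core:hadoop.tmp.dir": "/tmp/hadoop"
  }
}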

VpcNetwork

Cloud VPC Network used to run the infrastructure.

JSON representation
{
  "networkTags": [
    string
  ],

  // Union field network_name can be only one of the following:
  "network": string,
  "subNetwork": string
  // End of list of possible types for union field network_name.
}
Fields
networkTags[]

string

Optional. List of network tags to apply to the job.

Union field network_name. The Cloud VPC network identifier. network_name can be only one of the following:
network

string

Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named default within the project is used.

subNetwork

string

Optional. The Cloud VPC sub-network in which the job is run.
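
Because network and subNetwork form a union, specify at most one of them. For example, to run the job in a specific sub-network with a network tag (names are illustrative):

{
  "networkTags": [
    "allow-internal"
  ],
  "subNetwork": "my-subnetwork"
}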

NotebookTaskConfig

Config for running scheduled notebooks.

JSON representation
{
  "notebook": string,
  "infrastructureSpec": {
    object (InfrastructureSpec)
  },
  "fileUris": [
    string
  ],
  "archiveUris": [
    string
  ]
}
Fields
notebook

string

Required. Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).

infrastructureSpec

object (InfrastructureSpec)

Optional. Infrastructure specification for the execution.

fileUris[]

string

Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.

archiveUris[]

string

Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
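
For example, a notebook task that stages a settings file into each executor's working directory and runs on serverless batch compute might be configured as follows (the paths are hypothetical):

{
  "notebook": "gs://my-bucket/notebooks/daily-report.ipynb",
  "infrastructureSpec": {
    "batch": {
      "executorsCount": 2,
      "maxExecutorsCount": 10
    }
  },
  "fileUris": [
    "gs://my-bucket/config/report-settings.json"
  ]
}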

Methods

create

Creates a task resource within a lake.

delete

Deletes the task resource.

get

Gets the task resource.

getIamPolicy

Gets the access control policy for a resource.

list

Lists tasks under the given lake.

patch

Updates the task resource.

run

Runs an on-demand execution of a task.

setIamPolicy

Sets the access control policy on the specified resource.

testIamPermissions

Returns permissions that a caller has on the specified resource.
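
As a sketch of how these pieces fit together, creating a task POSTs a Task body (such as the examples above) to the tasks collection of a lake. The endpoint host and the taskId query parameter follow the usual Google Cloud REST conventions and are assumptions here, not part of this reference:

POST https://dataplex.googleapis.com/v1/projects/my-project/locations/us-central1/lakes/my-lake/tasks?taskId=my-task

{
  "triggerSpec": {
    "type": "ON_DEMAND"
  },
  "executionSpec": {
    // serviceAccount is shown as an assumed ExecutionSpec field.
    "serviceAccount": "example-sa@my-project.iam.gserviceaccount.com"
  },
  "spark": {
    "mainJarFileUri": "gs://my-bucket/jars/my-spark-job.jar"
  }
}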