REST Resource: projects.locations.workflowTemplates

Resource: WorkflowTemplate

A Cloud Dataproc workflow template resource.

JSON representation
{
  "id": string,
  "name": string,
  "version": number,
  "createTime": string,
  "updateTime": string,
  "labels": {
    string: string,
    ...
  },
  "placement": {
    object (WorkflowTemplatePlacement)
  },
  "jobs": [
    {
      object (OrderedJob)
    }
  ],
  "parameters": [
    {
      object (TemplateParameter)
    }
  ]
}
Fields
id

string

Required. The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

name

string

Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.

  • For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{projectId}/regions/{region}/workflowTemplates/{template_id}

  • For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{projectId}/locations/{location}/workflowTemplates/{template_id}

version

number

Optional. Used to perform a consistent read-modify-write.

This field should be left blank for a workflowTemplates.create request. It is required for a workflowTemplates.update request, and must match the current server version. A typical update flow is to fetch the current template with a workflowTemplates.get request, which returns the template with the version field filled in with the current server version; update the other fields; and then return the template as part of the workflowTemplates.update request.
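
The read-modify-write flow described above can be sketched as follows, using an in-memory dict as a stand-in for the service (this simulation is illustrative; the real flow goes through workflowTemplates.get and workflowTemplates.update over HTTP):

```python
# Sketch of the optimistic-concurrency flow: the server assigns and bumps
# the version; an update with a stale version is rejected. TemplateStore
# is a hypothetical stand-in, not part of the Dataproc API.

class TemplateStore:
    """Minimal stand-in for the server side of workflowTemplates."""

    def __init__(self):
        self._templates = {}

    def create(self, template):
        # version must be left blank on create; the server assigns 1.
        assert "version" not in template
        stored = dict(template, version=1)
        self._templates[template["id"]] = stored
        return dict(stored)

    def get(self, template_id):
        # get returns the template with the current server version filled in.
        return dict(self._templates[template_id])

    def update(self, template):
        current = self._templates[template["id"]]
        # The supplied version must match the current server version;
        # a stale version means another writer got there first.
        if template["version"] != current["version"]:
            raise RuntimeError("version mismatch: fetch the template again")
        stored = dict(template, version=current["version"] + 1)
        self._templates[template["id"]] = stored
        return dict(stored)


store = TemplateStore()
store.create({"id": "my-template", "labels": {}})

# Typical update flow: get, modify other fields, send back with the
# version that get returned.
t = store.get("my-template")
t["labels"] = {"env": "test"}
updated = store.update(t)
print(updated["version"])  # → 2
```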

createTime

string (Timestamp format)

Output only. The time the template was created.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

updateTime

string (Timestamp format)

Output only. The time the template was last updated.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

labels

map (key: string, value: string)

Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.

Label keys must contain 1 to 63 characters, and must conform to RFC 1035.

Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035.

No more than 32 labels can be associated with a template.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
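
The key and value constraints above can be sketched as a client-side check. The regex below is one interpretation of RFC 1035 label syntax (lowercase letter first, then lowercase letters, digits, and hyphens, not ending with a hyphen, 63 characters max); the service's exact validation may differ:

```python
# Hedged sketch of the template label rules: at most 32 labels, keys and
# non-empty values 1-63 characters and RFC 1035 compliant. The regex is
# an interpretation of RFC 1035, not the service's own validator.
import re

_RFC1035 = re.compile(r"^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$")

def validate_template_labels(labels):
    if len(labels) > 32:
        raise ValueError("no more than 32 labels per template")
    for key, value in labels.items():
        if not _RFC1035.fullmatch(key):
            raise ValueError(f"invalid label key: {key!r}")
        # Values may be empty; if present they follow the same rules.
        if value and not _RFC1035.fullmatch(value):
            raise ValueError(f"invalid label value: {value!r}")

validate_template_labels({"env": "prod", "team": ""})  # passes silently
```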

placement

object (WorkflowTemplatePlacement)

Required. WorkflowTemplate scheduling information.

jobs[]

object (OrderedJob)

Required. The Directed Acyclic Graph of Jobs to submit.

parameters[]

object (TemplateParameter)

Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.

WorkflowTemplatePlacement

Specifies the workflow execution target.

Either managedCluster or clusterSelector is required.

JSON representation
{

  // Union field placement can be only one of the following:
  "managedCluster": {
    object (ManagedCluster)
  },
  "clusterSelector": {
    object (ClusterSelector)
  }
  // End of list of possible types for union field placement.
}
Fields
Union field placement. Required. Specifies where workflow executes; either on a managed cluster or an existing cluster chosen by labels. placement can be only one of the following:
managedCluster

object (ManagedCluster)

A cluster that is managed by the workflow.

clusterSelector

object (ClusterSelector)

A selector that chooses the target cluster for jobs based on metadata.

The selector is evaluated at the time each job is submitted.

ManagedCluster

A cluster that is managed by the workflow.

JSON representation
{
  "clusterName": string,
  "config": {
    object (ClusterConfig)
  },
  "labels": {
    string: string,
    ...
  }
}
Fields
clusterName

string

Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.

The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
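
The naming rules above translate directly into a regular expression. A possible client-side check (illustrative only; the service enforces its own validation):

```python
# Hypothetical check for the clusterName rules: 2-35 characters, only
# lowercase letters, digits, and hyphens, begins with a letter, and does
# not begin or end with a hyphen.
import re

_CLUSTER_NAME = re.compile(r"^[a-z][a-z0-9-]{0,33}[a-z0-9]$")

def is_valid_cluster_name(name: str) -> bool:
    return _CLUSTER_NAME.fullmatch(name) is not None

print(is_valid_cluster_name("my-workflow"))  # True
print(is_valid_cluster_name("My-Workflow"))  # False (uppercase)
print(is_valid_cluster_name("a"))            # False (too short)
print(is_valid_cluster_name("ends-with-"))   # False (trailing hyphen)
```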

config

object (ClusterConfig)

Required. The cluster configuration.

labels

map (key: string, value: string)

Optional. The labels to associate with this cluster.

Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given cluster.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

ClusterSelector

A selector that chooses the target cluster for jobs based on metadata.

JSON representation
{
  "zone": string,
  "clusterLabels": {
    string: string,
    ...
  }
}
Fields
zone

string

Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster.

If unspecified, the zone of the first cluster matching the selector is used.

clusterLabels

map (key: string, value: string)

Required. The cluster labels. A cluster must have all of these labels to match.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
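
The matching rule above is a superset check: a cluster qualifies if it carries every selector label with the same value, and extra cluster labels do not disqualify it. A minimal sketch (names are illustrative):

```python
# Sketch of ClusterSelector matching: the cluster's labels must contain
# every clusterLabels entry with an equal value.
def cluster_matches(cluster_labels, selector_labels):
    return all(cluster_labels.get(k) == v for k, v in selector_labels.items())

cluster = {"env": "prod", "team": "data", "tier": "batch"}
print(cluster_matches(cluster, {"env": "prod", "team": "data"}))  # True
print(cluster_matches(cluster, {"env": "staging"}))               # False
```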

OrderedJob

A job executed by the workflow.

JSON representation
{
  "stepId": string,
  "labels": {
    string: string,
    ...
  },
  "scheduling": {
    object (JobScheduling)
  },
  "prerequisiteStepIds": [
    string
  ],

  // Union field job_type can be only one of the following:
  "hadoopJob": {
    object (HadoopJob)
  },
  "sparkJob": {
    object (SparkJob)
  },
  "pysparkJob": {
    object (PySparkJob)
  },
  "hiveJob": {
    object (HiveJob)
  },
  "pigJob": {
    object (PigJob)
  },
  "sparkSqlJob": {
    object (SparkSqlJob)
  }
  // End of list of possible types for union field job_type.
}
Fields
stepId

string

Required. The step id. The id must be unique among all jobs within the template.

The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.

The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.

labels

map (key: string, value: string)

Optional. The labels to associate with this job.

Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

No more than 32 labels can be associated with a given job.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

scheduling

object (JobScheduling)

Optional. Job scheduling configuration.

prerequisiteStepIds[]

string

Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
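
Because prerequisiteStepIds defines a directed acyclic graph over the jobs list, a valid execution order can be computed with a topological sort. A minimal sketch of those semantics (not the Dataproc scheduler itself, which may run independent steps concurrently):

```python
# Topological sort over OrderedJob entries, using only stepId and
# prerequisiteStepIds. Raises if the prerequisites contain a cycle.
from collections import deque

def execution_order(jobs):
    deps = {j["stepId"]: set(j.get("prerequisiteStepIds", [])) for j in jobs}
    dependents = {step: [] for step in deps}
    for step, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(step)
    # Steps with no prerequisites can start at the beginning of the workflow.
    ready = deque(sorted(s for s, p in deps.items() if not p))
    order = []
    while ready:
        step = ready.popleft()
        order.append(step)
        for d in dependents[step]:
            deps[d].discard(step)
            if not deps[d]:
                ready.append(d)
    if len(order) != len(deps):
        raise ValueError("prerequisiteStepIds contains a cycle")
    return order

jobs = [
    {"stepId": "report", "prerequisiteStepIds": ["clean", "enrich"]},
    {"stepId": "clean", "prerequisiteStepIds": ["ingest"]},
    {"stepId": "enrich", "prerequisiteStepIds": ["ingest"]},
    {"stepId": "ingest"},
]
print(execution_order(jobs))  # → ['ingest', 'clean', 'enrich', 'report']
```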

Union field job_type. Required. The job definition. job_type can be only one of the following:
hadoopJob

object (HadoopJob)

Job is a Hadoop job.

sparkJob

object (SparkJob)

Job is a Spark job.

pysparkJob

object (PySparkJob)

Job is a PySpark job.

hiveJob

object (HiveJob)

Job is a Hive job.

pigJob

object (PigJob)

Job is a Pig job.

sparkSqlJob

object (SparkSqlJob)

Job is a SparkSql job.

TemplateParameter

A configurable parameter that replaces one or more fields in the template. Parameterizable fields:

  • Labels
  • File uris
  • Job properties
  • Job arguments
  • Script variables
  • Main class (in HadoopJob and SparkJob)
  • Zone (in ClusterSelector)

JSON representation
{
  "name": string,
  "fields": [
    string
  ],
  "description": string,
  "validation": {
    object (ParameterValidation)
  }
}
Fields
name

string

Required. Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.

fields[]

string

Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.

A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.

Also, field paths can reference fields using the following syntax:

  • Values in maps can be referenced by key:

    • labels['key']
    • placement.clusterSelector.clusterLabels['key']
    • placement.managedCluster.labels['key']
    • jobs['step-id'].labels['key']
  • Jobs in the jobs list can be referenced by step-id:

    • jobs['step-id'].hadoopJob.mainJarFileUri
    • jobs['step-id'].hiveJob.queryFileUri
    • jobs['step-id'].pySparkJob.mainPythonFileUri
    • jobs['step-id'].hadoopJob.jarFileUris[0]
    • jobs['step-id'].hadoopJob.archiveUris[0]
    • jobs['step-id'].hadoopJob.fileUris[0]
    • jobs['step-id'].pySparkJob.pythonFileUris[0]
  • Items in repeated fields can be referenced by a zero-based index:

    • jobs['step-id'].sparkJob.args[0]
  • Other examples:

    • jobs['step-id'].hadoopJob.properties['key']
    • jobs['step-id'].hadoopJob.args[0]
    • jobs['step-id'].hiveJob.scriptVariables['key']
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

  • placement.clusterSelector.clusterLabels
  • jobs['step-id'].sparkJob.args

description

string

Optional. Brief description of the parameter. Must not exceed 1024 characters.

validation

object (ParameterValidation)

Optional. Validation rules to be applied to this parameter's value.

ParameterValidation

Configuration for parameter validation.

JSON representation
{

  // Union field validation_type can be only one of the following:
  "regex": {
    object (RegexValidation)
  },
  "values": {
    object (ValueValidation)
  }
  // End of list of possible types for union field validation_type.
}
Fields
Union field validation_type. Required. The type of validation to be performed. validation_type can be only one of the following:
regex

object (RegexValidation)

Validation based on regular expressions.

values

object (ValueValidation)

Validation based on a list of allowed values.

RegexValidation

Validation based on regular expressions.

JSON representation
{
  "regexes": [
    string
  ]
}
Fields
regexes[]

string

Required. RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
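
The full-match requirement can be illustrated with Python's re module as a stand-in for RE2 (the two dialects differ in places, so treat this as a sketch of the matching semantics rather than of RE2 itself):

```python
# The value must match some regex in its entirety; re.fullmatch enforces
# that, whereas re.search would wrongly accept substring matches.
import re

def matches_any(regexes, value):
    return any(re.fullmatch(rx, value) for rx in regexes)

regexes = [r"[a-z]+-\d+"]
print(matches_any(regexes, "batch-42"))      # True: the whole value matches
print(matches_any(regexes, "my-batch-42!"))  # False: only a substring matches
```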

ValueValidation

Validation based on a list of allowed values.

JSON representation
{
  "values": [
    string
  ]
}
Fields
values[]

string

Required. List of allowed values for the parameter.
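
Putting the two validation_type branches together, applying a ParameterValidation to a candidate value might look like the following. The dict shapes mirror the JSON representations above; the function itself is illustrative, not part of the API:

```python
# Sketch of dispatching on the validation_type union: exactly one of
# "regex" or "values" is set.
import re

def validate_parameter(validation, value):
    if "regex" in validation:
        # RegexValidation: full match against any listed regex.
        return any(re.fullmatch(rx, value)
                   for rx in validation["regex"]["regexes"])
    if "values" in validation:
        # ValueValidation: membership in the allowed-values list.
        return value in validation["values"]["values"]
    raise ValueError("validation_type must be set to regex or values")

zone_validation = {"values": {"values": ["us-central1-a", "us-central1-b"]}}
print(validate_parameter(zone_validation, "us-central1-a"))   # True
print(validate_parameter(zone_validation, "europe-west1-b"))  # False
```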

Methods

create

Creates a new workflow template.

delete

Deletes a workflow template.

get

Retrieves the latest workflow template.

getIamPolicy

Gets the access control policy for a resource.

instantiate

Instantiates a template and begins execution.

instantiateInline

Instantiates a template provided inline in the request and begins execution.

list

Lists workflows that match the specified filter in the request.

setIamPolicy

Sets the access control policy on the specified resource.

testIamPermissions

Returns permissions that a caller has on the specified resource.

update

Updates (replaces) a workflow template.