Google Cloud Dataproc V1 Client - Class TemplateParameter (2.1.0)

Reference documentation and code samples for the Google Cloud Dataproc V1 Client class TemplateParameter.

A configurable parameter that replaces one or more fields in the template.

Parameterizable fields:

  • Labels
  • File URIs
  • Job properties
  • Job arguments
  • Script variables
  • Main class (in HadoopJob and SparkJob)
  • Zone (in ClusterSelector)

Generated from protobuf message google.cloud.dataproc.v1.TemplateParameter
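
For orientation, the sketch below creates a parameter that replaces the cluster selector's zone field; the parameter name, field path choice, and description are illustrative values, not part of this reference.

```php
<?php

use Google\Cloud\Dataproc\V1\TemplateParameter;

// Illustrative sketch: a parameter named ZONE that replaces the workflow
// template's cluster selector zone when the template is instantiated.
$zoneParameter = new TemplateParameter([
    'name' => 'ZONE',
    'fields' => ['placement.clusterSelector.zone'],
    'description' => 'Compute Engine zone in which to run the workflow.',
]);
```

At instantiation time the caller supplies a value for ZONE (for example through the instantiation request's parameters map), and that value is substituted into each field path the parameter lists.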

Namespace

Google \ Cloud \ Dataproc \ V1

Methods

__construct

Constructor.

Parameters
Name Description
data array

Optional. Data for populating the Message object.

↳ name string

Required. Parameter name. The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.

↳ fields array

Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax:

  • Values in maps can be referenced by key:
    • labels['key']
    • placement.clusterSelector.clusterLabels['key']
    • placement.managedCluster.labels['key']
    • jobs['step-id'].labels['key']
  • Jobs in the jobs list can be referenced by step-id:
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • jobs['step-id'].hiveJob.queryFileUri
    • jobs['step-id'].pySparkJob.mainPythonFileUri
    • jobs['step-id'].hadoopJob.jarFileUris[0]
    • jobs['step-id'].hadoopJob.archiveUris[0]
    • jobs['step-id'].hadoopJob.fileUris[0]
    • jobs['step-id'].pySparkJob.pythonFileUris[0]
  • Items in repeated fields can be referenced by a zero-based index:
    • jobs['step-id'].sparkJob.args[0]
  • Other examples:
    • jobs['step-id'].hadoopJob.properties['key']
    • jobs['step-id'].hadoopJob.args[0]
    • jobs['step-id'].hiveJob.scriptVariables['key']
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

  • placement.clusterSelector.clusterLabels
  • jobs['step-id'].sparkJob.args

↳ description string

Optional. Brief description of the parameter. Must not exceed 1024 characters.

↳ validation Google\Cloud\Dataproc\V1\ParameterValidation

Optional. Validation rules to be applied to this parameter's value.
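
As a sketch of the constructor, all documented keys can be supplied through the single data array; the parameter name, the 'step-id' placeholder, and the description below are illustrative values.

```php
<?php

use Google\Cloud\Dataproc\V1\TemplateParameter;

// 'step-id' stands for the id of a job step defined elsewhere in the template.
$parameter = new TemplateParameter([
    'name' => 'MAIN_JAR',
    'fields' => ["jobs['step-id'].hadoopJob.mainJarFileUri"],
    'description' => 'URI of the main jar for the Hadoop job step.',
]);
```

Constructing with no data and calling the individual setters documented below produces the same message.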

getName

Required. Parameter name.

The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.

Returns
Type Description
string

setName

Required. Parameter name.

The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.

Parameter
Name Description
var string
Returns
Type Description
$this
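
A small sketch of the name accessors; the names shown are made-up examples of the documented constraints, and enforcement of those constraints is assumed to happen server-side when the template is created or updated, not in this setter.

```php
<?php

use Google\Cloud\Dataproc\V1\TemplateParameter;

$parameter = new TemplateParameter();

// Allowed: capital letters (A-Z), digits (0-9), and underscores, not starting
// with a digit, at most 40 characters.
$parameter->setName('OUTPUT_DIR_2');

// Names such as 'output-dir' (lowercase, hyphen) or '2ND_OUTPUT' (leading
// digit) violate the documented constraints.
echo $parameter->getName(); // OUTPUT_DIR_2
```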

getFields

Required. Paths to all fields that the parameter replaces.

A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax:

  • Values in maps can be referenced by key:
    • labels['key']
    • placement.clusterSelector.clusterLabels['key']
    • placement.managedCluster.labels['key']
    • jobs['step-id'].labels['key']
  • Jobs in the jobs list can be referenced by step-id:
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • jobs['step-id'].hiveJob.queryFileUri
    • jobs['step-id'].pySparkJob.mainPythonFileUri
    • jobs['step-id'].hadoopJob.jarFileUris[0]
    • jobs['step-id'].hadoopJob.archiveUris[0]
    • jobs['step-id'].hadoopJob.fileUris[0]
    • jobs['step-id'].pySparkJob.pythonFileUris[0]
  • Items in repeated fields can be referenced by a zero-based index:
    • jobs['step-id'].sparkJob.args[0]
  • Other examples:
    • jobs['step-id'].hadoopJob.properties['key']
    • jobs['step-id'].hadoopJob.args[0]
    • jobs['step-id'].hiveJob.scriptVariables['key']
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

  • placement.clusterSelector.clusterLabels
  • jobs['step-id'].sparkJob.args
Returns
Type Description
Google\Protobuf\Internal\RepeatedField

setFields

Required. Paths to all fields that the parameter replaces.

A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax:

  • Values in maps can be referenced by key:
    • labels['key']
    • placement.clusterSelector.clusterLabels['key']
    • placement.managedCluster.labels['key']
    • jobs['step-id'].labels['key']
  • Jobs in the jobs list can be referenced by step-id:
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • jobs['step-id'].hiveJob.queryFileUri
    • jobs['step-id'].pySparkJob.mainPythonFileUri
    • jobs['step-id'].hadoopJob.jarFileUris[0]
    • jobs['step-id'].hadoopJob.archiveUris[0]
    • jobs['step-id'].hadoopJob.fileUris[0]
    • jobs['step-id'].pySparkJob.pythonFileUris[0]
  • Items in repeated fields can be referenced by a zero-based index:
    • jobs['step-id'].sparkJob.args[0]
  • Other examples:
    • jobs['step-id'].hadoopJob.properties['key']
    • jobs['step-id'].hadoopJob.args[0]
    • jobs['step-id'].hiveJob.scriptVariables['key']
    • jobs['step-id'].hadoopJob.mainJarFileUri
    • placement.clusterSelector.zone

It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid:

  • placement.clusterSelector.clusterLabels
  • jobs['step-id'].sparkJob.args
Parameter
Name Description
var string[]
Returns
Type Description
$this
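
The sketch below passes field paths taken from the list above to setFields(); the parameter name and the 'environment' label key are placeholders. Both paths receive the same value when the template is instantiated.

```php
<?php

use Google\Cloud\Dataproc\V1\TemplateParameter;

$parameter = new TemplateParameter();
$parameter->setName('ENVIRONMENT');

// One value will replace both label entries; each field path may appear in
// at most one parameter across the whole template.
$parameter->setFields([
    "labels['environment']",
    "placement.managedCluster.labels['environment']",
]);

// Whole maps or repeated fields cannot be parameterized, so paths like
// "placement.clusterSelector.clusterLabels" or "jobs['step-id'].sparkJob.args"
// would be invalid here.

foreach ($parameter->getFields() as $fieldPath) {
    echo $fieldPath, PHP_EOL; // getFields() returns an iterable RepeatedField
}
```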

getDescription

Optional. Brief description of the parameter.

Must not exceed 1024 characters.

Returns
Type Description
string

setDescription

Optional. Brief description of the parameter.

Must not exceed 1024 characters.

Parameter
Name Description
var string
Returns
Type Description
$this

getValidation

Optional. Validation rules to be applied to this parameter's value.

Returns
Type Description
Google\Cloud\Dataproc\V1\ParameterValidation|null

hasValidation

clearValidation

setValidation

Optional. Validation rules to be applied to this parameter's value.

Parameter
Name Description
var Google\Cloud\Dataproc\V1\ParameterValidation
Returns
Type Description
$this
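
As a closing sketch, a regex rule is attached and then cleared. The ParameterValidation and RegexValidation shapes used here (a regex rule holding a list of regexes) are assumptions based on the Dataproc v1 protos and are not documented on this page; the parameter name and pattern are placeholders.

```php
<?php

use Google\Cloud\Dataproc\V1\ParameterValidation;
use Google\Cloud\Dataproc\V1\RegexValidation;
use Google\Cloud\Dataproc\V1\TemplateParameter;

$parameter = new TemplateParameter([
    'name' => 'ZONE',
    'fields' => ['placement.clusterSelector.zone'],
]);

// Assumed message shape: the regexes constrain the value supplied for ZONE
// at instantiation time.
$parameter->setValidation(new ParameterValidation([
    'regex' => new RegexValidation([
        'regexes' => ['us-central1-[a-f]'],
    ]),
]));

var_dump($parameter->hasValidation()); // bool(true)

// Remove the rules again; getValidation() then returns null.
$parameter->clearValidation();
var_dump($parameter->getValidation()); // NULL
```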