Cloud Dataproc V1 API - Class Google::Cloud::Dataproc::V1::OrderedJob (v1.2.0)

Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::OrderedJob.

A job executed by the workflow.
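
The sketch below shows one way to construct an OrderedJob as a workflow step; it is illustrative only, and the step id, driver class, and jar URI are hypothetical.

  require "google/cloud/dataproc/v1"

  # A hypothetical Spark step for a workflow template. An OrderedJob needs a
  # step_id plus exactly one of the *_job fields described below.
  job = Google::Cloud::Dataproc::V1::OrderedJob.new(
    step_id: "spark-word-count",
    spark_job: Google::Cloud::Dataproc::V1::SparkJob.new(
      main_class:    "org.example.WordCount",   # hypothetical driver class
      jar_file_uris: ["gs://my-bucket/wc.jar"]  # hypothetical jar location
    )
  )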

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#flink_job

def flink_job() -> ::Google::Cloud::Dataproc::V1::FlinkJob
Returns
  • (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.

    Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#flink_job=

def flink_job=(value) -> ::Google::Cloud::Dataproc::V1::FlinkJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.

    Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.

    Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
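
Because the *_job fields back a protobuf oneof, assigning one member clears whichever other member was set. A minimal sketch of that behavior, assuming a hypothetical PySpark file URI:

  job = Google::Cloud::Dataproc::V1::OrderedJob.new(step_id: "step-1")
  job.flink_job = Google::Cloud::Dataproc::V1::FlinkJob.new
  job.pyspark_job = Google::Cloud::Dataproc::V1::PySparkJob.new(
    main_python_file_uri: "gs://my-bucket/job.py"  # hypothetical URI
  )
  job.flink_job   #=> nil, cleared when pyspark_job was populated
  job.pyspark_job #=> the PySparkJob set above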

#hadoop_job

def hadoop_job() -> ::Google::Cloud::Dataproc::V1::HadoopJob
Returns
  • (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.

    Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#hadoop_job=

def hadoop_job=(value) -> ::Google::Cloud::Dataproc::V1::HadoopJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.

    Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.

    Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#hive_job

def hive_job() -> ::Google::Cloud::Dataproc::V1::HiveJob
Returns
  • (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.

    Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#hive_job=

def hive_job=(value) -> ::Google::Cloud::Dataproc::V1::HiveJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.

    Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.

    Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#labels

def labels() -> ::Google::Protobuf::Map{::String => ::String}
Returns
  • (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job.

    Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

    Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

    No more than 32 labels can be associated with a given job.

#labels=

def labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
  • value (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job.

    Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

    Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

    No more than 32 labels can be associated with a given job.

Returns
  • (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job.

    Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}

    Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}

    No more than 32 labels can be associated with a given job.
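
A short sketch of labels that satisfy these constraints; the key and value strings are illustrative:

  job = Google::Cloud::Dataproc::V1::OrderedJob.new(
    step_id: "transform",
    labels: {
      "env"  => "production",  # key begins with a lowercase letter
      "team" => "data-eng"     # value uses only permitted characters
    }
  )
  # Entries can also be added through the returned Google::Protobuf::Map:
  job.labels["owner"] = "alice"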

#pig_job

def pig_job() -> ::Google::Cloud::Dataproc::V1::PigJob
Returns
  • (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.

    Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#pig_job=

def pig_job=(value) -> ::Google::Cloud::Dataproc::V1::PigJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.

    Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.

    Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#prerequisite_step_ids

def prerequisite_step_ids() -> ::Array<::String>
Returns
  • (::Array<::String>) — Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.

#prerequisite_step_ids=

def prerequisite_step_ids=(value) -> ::Array<::String>
Parameter
  • value (::Array<::String>) — Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
Returns
  • (::Array<::String>) — Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
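
A sketch of chaining two hypothetical steps so that "transform" starts only after "extract" completes; the query file URIs are placeholders:

  extract = Google::Cloud::Dataproc::V1::OrderedJob.new(
    step_id: "extract",
    hive_job: Google::Cloud::Dataproc::V1::HiveJob.new(
      query_file_uri: "gs://my-bucket/extract.hql"  # hypothetical script
    )
  )
  transform = Google::Cloud::Dataproc::V1::OrderedJob.new(
    step_id: "transform",
    prerequisite_step_ids: ["extract"],  # wait for the "extract" step
    pig_job: Google::Cloud::Dataproc::V1::PigJob.new(
      query_file_uri: "gs://my-bucket/transform.pig"  # hypothetical script
    )
  )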

#presto_job

def presto_job() -> ::Google::Cloud::Dataproc::V1::PrestoJob
Returns
  • (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.

    Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#presto_job=

def presto_job=(value) -> ::Google::Cloud::Dataproc::V1::PrestoJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.

    Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.

    Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#pyspark_job

def pyspark_job() -> ::Google::Cloud::Dataproc::V1::PySparkJob
Returns
  • (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.

    Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#pyspark_job=

def pyspark_job=(value) -> ::Google::Cloud::Dataproc::V1::PySparkJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.

    Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.

    Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#scheduling

def scheduling() -> ::Google::Cloud::Dataproc::V1::JobScheduling
Returns
  • (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.

#scheduling=

def scheduling=(value) -> ::Google::Cloud::Dataproc::V1::JobScheduling
Parameter
  • value (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.

Returns
  • (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.
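
A sketch of attaching a retry policy to the step; the failure threshold is an illustrative value:

  job.scheduling = Google::Cloud::Dataproc::V1::JobScheduling.new(
    max_failures_per_hour: 3  # restart the driver up to 3 times per hour
  )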

#spark_job

def spark_job() -> ::Google::Cloud::Dataproc::V1::SparkJob
Returns
  • (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.

    Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_job=

def spark_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.

    Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.

    Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_r_job

def spark_r_job() -> ::Google::Cloud::Dataproc::V1::SparkRJob
Returns
  • (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.

    Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_r_job=

def spark_r_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkRJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.

    Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.

    Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_sql_job

def spark_sql_job() -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
Returns
  • (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.

    Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#spark_sql_job=

def spark_sql_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.

    Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.

    Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#step_id

def step_id() -> ::String
Returns
  • (::String) — Required. The step id. The id must be unique among all jobs within the template.

    The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.

    The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.

#step_id=

def step_id=(value) -> ::String
Parameter
  • value (::String) — Required. The step id. The id must be unique among all jobs within the template.

    The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.

    The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.

Returns
  • (::String) — Required. The step id. The id must be unique among all jobs within the template.

    The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.

    The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
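
As a rough client-side check of these rules, a pattern along the following lines could be applied before submitting a template; the regex is an approximation written for this example, not part of the library:

  # Approximates the documented rule: 3-50 characters, letters, digits,
  # underscores, and hyphens only, with an alphanumeric first and last character.
  STEP_ID_PATTERN = /\A[A-Za-z0-9][A-Za-z0-9_-]{1,48}[A-Za-z0-9]\z/

  step_id = "spark-word-count"
  raise ArgumentError, "invalid step_id: #{step_id}" unless STEP_ID_PATTERN.match?(step_id)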

#trino_job

def trino_job() -> ::Google::Cloud::Dataproc::V1::TrinoJob
Returns
  • (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.

    Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

#trino_job=

def trino_job=(value) -> ::Google::Cloud::Dataproc::V1::TrinoJob
Parameter
  • value (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.

    Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.

Returns
  • (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.

    Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.