Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::Job.
A Dataproc job resource.
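For orientation, the following is a minimal sketch of constructing a Job and submitting it with the JobController client from the google-cloud-dataproc gem; the project, region, cluster name, and jar URI are placeholder values, not part of this class's API.

require "google/cloud/dataproc"

# Build a Job message. Nested hashes are coerced into the
# corresponding protobuf messages.
job = Google::Cloud::Dataproc::V1::Job.new(
  placement: { cluster_name: "my-cluster" },  # placeholder cluster
  spark_job: {
    main_class: "org.apache.spark.examples.SparkPi",
    jar_file_uris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
    args: ["1000"]
  }
)

client = Google::Cloud::Dataproc.job_controller
response = client.submit_job project_id: "my-project", region: "us-central1", job: job
puts response.reference.job_id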
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#done
def done() -> ::Boolean
- (::Boolean) — Output only. Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field will indicate if it was successful, failed, or cancelled.
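A sketch of polling with done and reading status.state, reusing the client from the sketch above; the identifiers are placeholders.

job = client.get_job project_id: "my-project", region: "us-central1", job_id: "my-job-id"
until job.done
  sleep 10  # poll until the job reaches a terminal state
  job = client.get_job project_id: "my-project", region: "us-central1", job_id: "my-job-id"
end
puts job.status.state  # e.g. :DONE, :ERROR, or :CANCELLED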
#driver_control_files_uri
def driver_control_files_uri() -> ::String
- (::String) — Output only. If present, the location of miscellaneous control files which can be used as part of job setup and handling. If not present, control files might be placed in the same location as driver_output_uri.
#driver_output_resource_uri
def driver_output_resource_uri() -> ::String
- (::String) — Output only. A URI pointing to the location of the stdout of the job's driver program.
#driver_scheduling_config
def driver_scheduling_config() -> ::Google::Cloud::Dataproc::V1::DriverSchedulingConfig
- (::Google::Cloud::Dataproc::V1::DriverSchedulingConfig) — Optional. Driver scheduling configuration.
#driver_scheduling_config=
def driver_scheduling_config=(value) -> ::Google::Cloud::Dataproc::V1::DriverSchedulingConfig
- value (::Google::Cloud::Dataproc::V1::DriverSchedulingConfig) — Optional. Driver scheduling configuration.
- (::Google::Cloud::Dataproc::V1::DriverSchedulingConfig) — Optional. Driver scheduling configuration.
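A sketch of attaching a driver scheduling configuration; memory_mb and vcores are assumed here to be the DriverSchedulingConfig fields, and the values are illustrative.

job.driver_scheduling_config = Google::Cloud::Dataproc::V1::DriverSchedulingConfig.new(
  memory_mb: 2048,  # driver memory in MB (illustrative value)
  vcores: 2         # driver vcore count (illustrative value)
)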
#flink_job
def flink_job() -> ::Google::Cloud::Dataproc::V1::FlinkJob
- (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.
Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#flink_job=
def flink_job=(value) -> ::Google::Cloud::Dataproc::V1::FlinkJob
- value (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.
Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.
Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
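Because these fields form one mutually exclusive set, assigning one clears any other member that was set; a sketch (the main_class values are placeholders):

job = Google::Cloud::Dataproc::V1::Job.new
job.flink_job = Google::Cloud::Dataproc::V1::FlinkJob.new(main_class: "com.example.WordCount")
job.hadoop_job = Google::Cloud::Dataproc::V1::HadoopJob.new(main_class: "com.example.WordCount")
job.flink_job  # => nil, because assigning hadoop_job cleared it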
#hadoop_job
def hadoop_job() -> ::Google::Cloud::Dataproc::V1::HadoopJob
- (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.
Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#hadoop_job=
def hadoop_job=(value) -> ::Google::Cloud::Dataproc::V1::HadoopJob
- value (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.
Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.
Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
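A minimal sketch of populating hadoop_job; the jar URI and arguments are placeholders.

job.hadoop_job = Google::Cloud::Dataproc::V1::HadoopJob.new(
  main_jar_file_uri: "gs://my-bucket/jars/wordcount.jar",  # placeholder jar
  args: ["gs://my-bucket/input/", "gs://my-bucket/output/"]
)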
#hive_job
def hive_job() -> ::Google::Cloud::Dataproc::V1::HiveJob
- (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.
Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#hive_job=
def hive_job=(value) -> ::Google::Cloud::Dataproc::V1::HiveJob
- value (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.
Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.
Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
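A sketch of an inline Hive job built from a query list; the query text is a placeholder.

job.hive_job = Google::Cloud::Dataproc::V1::HiveJob.new(
  query_list: { queries: ["SHOW TABLES;", "SELECT COUNT(*) FROM web_logs;"] },
  continue_on_failure: false
)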
#job_uuid
def job_uuid() -> ::String
- (::String) — Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that might be reused over time.
#labels
def labels() -> ::Google::Protobuf::Map{::String => ::String}
- (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.
#labels=
def labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
- value (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.
- (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values can be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.
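Labels behave like a string-to-string map; a sketch with placeholder keys and values:

job.labels["env"] = "dev"
job.labels["team"] = "analytics"
# Or set labels when constructing the message:
job = Google::Cloud::Dataproc::V1::Job.new(labels: { "env" => "dev" })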
#pig_job
def pig_job() -> ::Google::Cloud::Dataproc::V1::PigJob
- (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.
Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#pig_job=
def pig_job=(value) -> ::Google::Cloud::Dataproc::V1::PigJob
- value (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.
Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.
Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#placement
def placement() -> ::Google::Cloud::Dataproc::V1::JobPlacement
- (::Google::Cloud::Dataproc::V1::JobPlacement) — Required. Job information, including how, when, and where to run the job.
#placement=
def placement=(value) -> ::Google::Cloud::Dataproc::V1::JobPlacement
- value (::Google::Cloud::Dataproc::V1::JobPlacement) — Required. Job information, including how, when, and where to run the job.
- (::Google::Cloud::Dataproc::V1::JobPlacement) — Required. Job information, including how, when, and where to run the job.
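A sketch of naming the target cluster; the cluster name is a placeholder.

job.placement = Google::Cloud::Dataproc::V1::JobPlacement.new(cluster_name: "my-cluster")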
#presto_job
def presto_job() -> ::Google::Cloud::Dataproc::V1::PrestoJob
- (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.
Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#presto_job=
def presto_job=(value) -> ::Google::Cloud::Dataproc::V1::PrestoJob
- value (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.
Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.
Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#pyspark_job
def pyspark_job() -> ::Google::Cloud::Dataproc::V1::PySparkJob
- (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.
Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#pyspark_job=
def pyspark_job=(value) -> ::Google::Cloud::Dataproc::V1::PySparkJob
- value (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.
Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.
Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
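A minimal sketch of populating pyspark_job; the script URI and arguments are placeholders.

job.pyspark_job = Google::Cloud::Dataproc::V1::PySparkJob.new(
  main_python_file_uri: "gs://my-bucket/scripts/process.py",  # placeholder script
  args: ["--date", "2024-01-01"]
)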
#reference
def reference() -> ::Google::Cloud::Dataproc::V1::JobReference
- (::Google::Cloud::Dataproc::V1::JobReference) — Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
#reference=
def reference=(value) -> ::Google::Cloud::Dataproc::V1::JobReference
- value (::Google::Cloud::Dataproc::V1::JobReference) — Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- (::Google::Cloud::Dataproc::V1::JobReference) — Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
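A sketch of supplying a client-chosen job_id through the reference; if omitted, the server generates one. The identifiers are placeholders.

job.reference = Google::Cloud::Dataproc::V1::JobReference.new(
  project_id: "my-project",
  job_id: "nightly-etl-2024-01-01"
)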
#scheduling
def scheduling() -> ::Google::Cloud::Dataproc::V1::JobScheduling
- (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.
#scheduling=
def scheduling=(value) -> ::Google::Cloud::Dataproc::V1::JobScheduling
- value (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.
- (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.
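A sketch of restart-on-failure scheduling; max_failures_per_hour and max_failures_total are JobScheduling fields, and the values are illustrative.

job.scheduling = Google::Cloud::Dataproc::V1::JobScheduling.new(
  max_failures_per_hour: 5,
  max_failures_total: 20
)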
#spark_job
def spark_job() -> ::Google::Cloud::Dataproc::V1::SparkJob
- (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.
Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_job=
def spark_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkJob
- value (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.
Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.
Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
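A minimal sketch of populating spark_job; the class and jar paths are placeholders.

job.spark_job = Google::Cloud::Dataproc::V1::SparkJob.new(
  main_class: "org.apache.spark.examples.SparkPi",
  jar_file_uris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
  args: ["1000"]
)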
#spark_r_job
def spark_r_job() -> ::Google::Cloud::Dataproc::V1::SparkRJob
- (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.
Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_r_job=
def spark_r_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkRJob
- value (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.
Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.
Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_sql_job
def spark_sql_job() -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
- (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.
Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_sql_job=
def spark_sql_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
- value (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.
Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.
Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#status
def status() -> ::Google::Cloud::Dataproc::V1::JobStatus
- (::Google::Cloud::Dataproc::V1::JobStatus) — Output only. The job status. Additional application-specific status information might be contained in the type_job and yarn_applications fields.
#status_history
def status_history() -> ::Array<::Google::Cloud::Dataproc::V1::JobStatus>
- (::Array<::Google::Cloud::Dataproc::V1::JobStatus>) — Output only. The previous job status.
#trino_job
def trino_job() -> ::Google::Cloud::Dataproc::V1::TrinoJob
- (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.
Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#trino_job=
def trino_job=(value) -> ::Google::Cloud::Dataproc::V1::TrinoJob
- value (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.
Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.
Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#yarn_applications
def yarn_applications() -> ::Array<::Google::Cloud::Dataproc::V1::YarnApplication>
- (::Array<::Google::Cloud::Dataproc::V1::YarnApplication>) — Output only. The collection of YARN applications spun up by this job.
Beta Feature: This report is available for testing purposes only. It might be changed before final release.
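A sketch of inspecting a finished job's status history and YARN report; field names follow the JobStatus and YarnApplication messages, and, as noted above, the YARN report is a beta feature.

job.status_history.each do |s|
  puts "#{s.state} at #{s.state_start_time}"
end
job.yarn_applications.each do |app|
  puts "#{app.name}: #{app.state} (#{(app.progress * 100).round}%)"
end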