Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::HadoopJob.
A Dataproc job for running Apache Hadoop MapReduce jobs on Apache Hadoop YARN.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
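As with other protobuf messages in this library, a HadoopJob can be constructed from a hash of its fields. A minimal sketch, assuming the gem is installed; the bucket, jar path, and arguments below are hypothetical placeholders:

```ruby
# Hypothetical field values for illustration only; the gs:// paths
# here are placeholders, not real resources.
hadoop_job_fields = {
  main_jar_file_uri: "gs://example-bucket/jobs/wordcount.jar",
  args: ["gs://example-bucket/input/", "gs://example-bucket/output/"],
  jar_file_uris: ["gs://example-bucket/libs/extra.jar"],
  properties: { "mapreduce.job.maps" => "4" }
}

# With the google-cloud-dataproc gem available, the hash can be passed
# directly to the message constructor:
#   job = Google::Cloud::Dataproc::V1::HadoopJob.new(hadoop_job_fields)
puts hadoop_job_fields[:main_jar_file_uri]
```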
Methods
#archive_uris
def archive_uris() -> ::Array<::String>
- (::Array<::String>) — Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
#archive_uris=
def archive_uris=(value) -> ::Array<::String>
- value (::Array<::String>) — Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- (::Array<::String>) — Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
#args
def args() -> ::Array<::String>
- (::Array<::String>) — Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
#args=
def args=(value) -> ::Array<::String>
- value (::Array<::String>) — Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
- (::Array<::String>) — Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
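The collision warning above can be checked before submission. A pure-Ruby sketch (the helper name is my own) that moves -D style flags out of the argument list and into a properties map:

```ruby
# Split driver arguments into plain args and Hadoop properties.
# Flags like -Dfoo=bar belong in the `properties` map, not in `args`,
# to avoid the collision described above. Helper name is hypothetical.
def split_args_and_properties(raw_args)
  properties = {}
  args = []
  raw_args.each do |arg|
    if arg.start_with?("-D") && arg.include?("=")
      key, value = arg.delete_prefix("-D").split("=", 2)
      properties[key] = value
    else
      args << arg
    end
  end
  [args, properties]
end

args, props = split_args_and_properties(["-Dfoo=bar", "input/", "output/"])
# args  => ["input/", "output/"]
# props => {"foo" => "bar"}
```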
#file_uris
def file_uris() -> ::Array<::String>
- (::Array<::String>) — Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
#file_uris=
def file_uris=(value) -> ::Array<::String>
- value (::Array<::String>) — Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- (::Array<::String>) — Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
#jar_file_uris
def jar_file_uris() -> ::Array<::String>
- (::Array<::String>) — Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
#jar_file_uris=
def jar_file_uris=(value) -> ::Array<::String>
- value (::Array<::String>) — Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- (::Array<::String>) — Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
#logging_config
def logging_config() -> ::Google::Cloud::Dataproc::V1::LoggingConfig
- (::Google::Cloud::Dataproc::V1::LoggingConfig) — Optional. The runtime log config for job execution.
#logging_config=
def logging_config=(value) -> ::Google::Cloud::Dataproc::V1::LoggingConfig
- value (::Google::Cloud::Dataproc::V1::LoggingConfig) — Optional. The runtime log config for job execution.
- (::Google::Cloud::Dataproc::V1::LoggingConfig) — Optional. The runtime log config for job execution.
#main_class
def main_class() -> ::String
- (::String) — The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris. Note: The following fields are mutually exclusive: main_class, main_jar_file_uri. If a field in that set is populated, all other fields in the set will automatically be cleared.
#main_class=
def main_class=(value) -> ::String
- value (::String) — The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris. Note: The following fields are mutually exclusive: main_class, main_jar_file_uri. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::String) — The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris. Note: The following fields are mutually exclusive: main_class, main_jar_file_uri. If a field in that set is populated, all other fields in the set will automatically be cleared.
#main_jar_file_uri
def main_jar_file_uri() -> ::String
- (::String) — The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar', 'hdfs:/tmp/test-samples/custom-wordcount.jar', 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'. Note: The following fields are mutually exclusive: main_jar_file_uri, main_class. If a field in that set is populated, all other fields in the set will automatically be cleared.
#main_jar_file_uri=
def main_jar_file_uri=(value) -> ::String
- value (::String) — The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar', 'hdfs:/tmp/test-samples/custom-wordcount.jar', 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'. Note: The following fields are mutually exclusive: main_jar_file_uri, main_class. If a field in that set is populated, all other fields in the set will automatically be cleared.
- (::String) — The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar', 'hdfs:/tmp/test-samples/custom-wordcount.jar', 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'. Note: The following fields are mutually exclusive: main_jar_file_uri, main_class. If a field in that set is populated, all other fields in the set will automatically be cleared.
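Because main_class and main_jar_file_uri are mutually exclusive, a job should supply exactly one of them. A small validation sketch over a plain field hash (the helper name is my own, not part of the library):

```ruby
# Exactly one of :main_class / :main_jar_file_uri must be provided,
# mirroring the mutually exclusive fields described above.
# Helper name is hypothetical.
def exactly_one_entry_point?(fields)
  [fields[:main_class], fields[:main_jar_file_uri]].compact.one?
end

exactly_one_entry_point?(main_class: "org.example.WordCount")
# => true
exactly_one_entry_point?(main_class: "A", main_jar_file_uri: "gs://b/c.jar")
# => false
```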
#properties
def properties() -> ::Google::Protobuf::Map{::String => ::String}
- (::Google::Protobuf::Map{::String => ::String}) — Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
#properties=
def properties=(value) -> ::Google::Protobuf::Map{::String => ::String}
- value (::Google::Protobuf::Map{::String => ::String}) — Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- (::Google::Protobuf::Map{::String => ::String}) — Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
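Since properties is a map of String to String, numeric or symbolic values must be stringified before assignment. A brief sketch (the property names are real Hadoop settings used purely for illustration):

```ruby
# Protobuf map fields require string keys and string values, so
# convert both sides before assigning to #properties=.
raw = { "mapreduce.job.maps" => 4, "mapreduce.map.memory.mb" => 2048 }
properties = raw.to_h { |k, v| [k.to_s, v.to_s] }
# properties => {"mapreduce.job.maps" => "4", "mapreduce.map.memory.mb" => "2048"}
```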