Cloud Dataproc V1 API - Class Google::Cloud::Dataproc::V1::ExecutionConfig (v0.19.0)

Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::ExecutionConfig.

Execution configuration for a workload.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#kms_key

def kms_key() -> ::String
Returns
  • (::String) — Optional. The Cloud KMS key to use for encryption.

#kms_key=

def kms_key=(value) -> ::String
Parameter
  • value (::String) — Optional. The Cloud KMS key to use for encryption.
Returns
  • (::String) — Optional. The Cloud KMS key to use for encryption.

#network_tags

def network_tags() -> ::Array<::String>
Returns
  • (::Array<::String>) — Optional. Tags used for network traffic control.

#network_tags=

def network_tags=(value) -> ::Array<::String>
Parameter
  • value (::Array<::String>) — Optional. Tags used for network traffic control.
Returns
  • (::Array<::String>) — Optional. Tags used for network traffic control.

#network_uri

def network_uri() -> ::String
Returns
  • (::String) — Optional. Network URI to connect the workload to.

#network_uri=

def network_uri=(value) -> ::String
Parameter
  • value (::String) — Optional. Network URI to connect workload to.
Returns
  • (::String) — Optional. Network URI to connect workload to.

#service_account

def service_account() -> ::String
Returns
  • (::String) — Optional. Service account used to execute the workload.

#service_account=

def service_account=(value) -> ::String
Parameter
  • value (::String) — Optional. Service account used to execute the workload.
Returns
  • (::String) — Optional. Service account used to execute the workload.

#staging_bucket

def staging_bucket() -> ::String
Returns
  • (::String) — Optional. A Cloud Storage bucket used to stage workload dependencies and config files, and to store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc determines a Cloud Storage location according to the region where your workload is running, and then creates and manages project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

#staging_bucket=

def staging_bucket=(value) -> ::String
Parameter
  • value (::String) — Optional. A Cloud Storage bucket used to stage workload dependencies and config files, and to store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc determines a Cloud Storage location according to the region where your workload is running, and then creates and manages project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
Returns
  • (::String) — Optional. A Cloud Storage bucket used to stage workload dependencies and config files, and to store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc determines a Cloud Storage location according to the region where your workload is running, and then creates and manages project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

#subnetwork_uri

def subnetwork_uri() -> ::String
Returns
  • (::String) — Optional. Subnetwork URI to connect the workload to.

#subnetwork_uri=

def subnetwork_uri=(value) -> ::String
Parameter
  • value (::String) — Optional. Subnetwork URI to connect the workload to.
Returns
  • (::String) — Optional. Subnetwork URI to connect the workload to.

#ttl

def ttl() -> ::Google::Protobuf::Duration
Returns
  • (::Google::Protobuf::Duration) — Optional. The duration after which the workload will be terminated. When the workload exceeds this ttl, it is unconditionally terminated without waiting for ongoing work to finish. Minimum value is 10 minutes; maximum value is 14 days (see the JSON representation of Duration). If both ttl and idle_ttl are specified, the conditions are treated as an OR: the workload will be terminated when it has been idle for idle_ttl or when the ttl has passed, whichever comes first. If ttl is not specified for a session, it defaults to 24h.

#ttl=

def ttl=(value) -> ::Google::Protobuf::Duration
Parameter
  • value (::Google::Protobuf::Duration) — Optional. The duration after which the workload will be terminated. When the workload exceeds this ttl, it is unconditionally terminated without waiting for ongoing work to finish. Minimum value is 10 minutes; maximum value is 14 days (see the JSON representation of Duration). If both ttl and idle_ttl are specified, the conditions are treated as an OR: the workload will be terminated when it has been idle for idle_ttl or when the ttl has passed, whichever comes first. If ttl is not specified for a session, it defaults to 24h.
Returns
  • (::Google::Protobuf::Duration) — Optional. The duration after which the workload will be terminated. When the workload exceeds this ttl, it is unconditionally terminated without waiting for ongoing work to finish. Minimum value is 10 minutes; maximum value is 14 days (see the JSON representation of Duration). If both ttl and idle_ttl are specified, the conditions are treated as an OR: the workload will be terminated when it has been idle for idle_ttl or when the ttl has passed, whichever comes first. If ttl is not specified for a session, it defaults to 24h.