Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::VirtualClusterConfig.
Dataproc cluster config for a cluster that does not directly control the underlying compute resources, such as a Dataproc-on-GKE cluster.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#auxiliary_services_config
def auxiliary_services_config() -> ::Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig
Returns
- (::Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig) — Optional. Configuration of auxiliary services used by this cluster.
#auxiliary_services_config=
def auxiliary_services_config=(value) -> ::Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig
Parameter
- value (::Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig) — Optional. Configuration of auxiliary services used by this cluster.
Returns
- (::Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig) — Optional. Configuration of auxiliary services used by this cluster.
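A minimal sketch of populating this field; the Dataproc Metastore service name and Spark history server cluster name below are placeholder values, not real resources.

```ruby
require "google/cloud/dataproc/v1"

# Placeholder resource names; substitute your own project and resources.
aux = Google::Cloud::Dataproc::V1::AuxiliaryServicesConfig.new(
  metastore_config: {
    dataproc_metastore_service: "projects/my-project/locations/us-central1/services/my-metastore"
  },
  spark_history_server_config: {
    dataproc_cluster: "projects/my-project/regions/us-central1/clusters/my-history-server"
  }
)

config = Google::Cloud::Dataproc::V1::VirtualClusterConfig.new
config.auxiliary_services_config = aux
```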
#kubernetes_cluster_config
def kubernetes_cluster_config() -> ::Google::Cloud::Dataproc::V1::KubernetesClusterConfig
Returns
- (::Google::Cloud::Dataproc::V1::KubernetesClusterConfig) — Required. The configuration for running the Dataproc cluster on Kubernetes.
#kubernetes_cluster_config=
def kubernetes_cluster_config=(value) -> ::Google::Cloud::Dataproc::V1::KubernetesClusterConfig
Parameter
- value (::Google::Cloud::Dataproc::V1::KubernetesClusterConfig) — Required. The configuration for running the Dataproc cluster on Kubernetes.
Returns
- (::Google::Cloud::Dataproc::V1::KubernetesClusterConfig) — Required. The configuration for running the Dataproc cluster on Kubernetes.
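A minimal sketch of building the required Kubernetes configuration; the GKE cluster name, namespace, and Spark component version are assumed placeholder values.

```ruby
require "google/cloud/dataproc/v1"

# Placeholder values; gke_cluster_target must name an existing GKE cluster.
k8s = Google::Cloud::Dataproc::V1::KubernetesClusterConfig.new(
  kubernetes_namespace: "dataproc",
  gke_cluster_config: {
    gke_cluster_target: "projects/my-project/locations/us-central1/clusters/my-gke-cluster"
  },
  kubernetes_software_config: {
    component_version: { "SPARK" => "3.1-dataproc-7" } # example version string
  }
)

config = Google::Cloud::Dataproc::V1::VirtualClusterConfig.new
config.kubernetes_cluster_config = k8s
```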
#staging_bucket
def staging_bucket() -> ::String
Returns
- (::String) — Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
#staging_bucket=
def staging_bucket=(value) -> ::String
Parameter
- value (::String) — Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
Returns
- (::String) — Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
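For illustration, a short sketch of the expected value format (a bucket name only, no scheme); the bucket name is a placeholder.

```ruby
config = Google::Cloud::Dataproc::V1::VirtualClusterConfig.new
config.staging_bucket = "my-staging-bucket"        # bucket name only
# config.staging_bucket = "gs://my-staging-bucket" # incorrect: gs://... URIs are not accepted
```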
#temp_bucket
def temp_bucket() -> ::String
Returns
- (::String) — Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
#temp_bucket=
def temp_bucket=(value) -> ::String
Parameter
- value (::String) — Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
Returns
- (::String) — Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
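As an end-to-end sketch, the example below assembles a VirtualClusterConfig and passes it to the cluster controller's create_cluster call. The project, region, bucket names, and GKE cluster name are placeholders, and the regional endpoint shown assumes the cluster is created in us-central1.

```ruby
require "google/cloud/dataproc/v1"

# Placeholder config; kubernetes_cluster_config is required, the buckets are optional.
virtual_config = Google::Cloud::Dataproc::V1::VirtualClusterConfig.new(
  staging_bucket: "my-staging-bucket",
  temp_bucket:    "my-temp-bucket",
  kubernetes_cluster_config: {
    kubernetes_namespace: "dataproc",
    gke_cluster_config: {
      gke_cluster_target: "projects/my-project/locations/us-central1/clusters/my-gke-cluster"
    }
  }
)

# Use the regional endpoint that matches the target region.
client = Google::Cloud::Dataproc::V1::ClusterController::Client.new do |config|
  config.endpoint = "us-central1-dataproc.googleapis.com"
end

operation = client.create_cluster(
  project_id: "my-project",
  region:     "us-central1",
  cluster: {
    project_id:             "my-project",
    cluster_name:           "my-gke-backed-cluster",
    virtual_cluster_config: virtual_config
  }
)
operation.wait_until_done!
```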