Class VirtualClusterConfig (4.0.3)

VirtualClusterConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Dataproc cluster config for a cluster that does not directly control the underlying compute resources, such as a `Dataproc-on-GKE cluster <https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke#create-a-dataproc-on-gke-cluster>`__.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

Attributes
staging_bucket str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets).
temp_bucket str
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets).
kubernetes_cluster_config google.cloud.dataproc_v1.types.KubernetesClusterConfig
Required. The configuration for running the Dataproc cluster on Kubernetes. This field is a member of oneof_ ``infrastructure_config``.
auxiliary_services_config google.cloud.dataproc_v1.types.AuxiliaryServicesConfig
Optional. Configuration of auxiliary services used by this cluster.