Class ClusterConfig (0.8.1)

The cluster config.

.. attribute:: config_bucket

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see `Dataproc staging bucket <https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket>`__).
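For example, a minimal sketch of pinning the staging bucket explicitly when building a cluster config (the bucket name and the module path are illustrative assumptions, not part of this reference):

    from google.cloud.dataproc_v1 import types  # module path is an assumption for this release

    # Hypothetical sketch: set config_bucket explicitly instead of letting
    # Dataproc create and manage a project-level, per-location bucket.
    cluster_config = types.ClusterConfig(
        config_bucket="my-staging-bucket",  # assumed, pre-existing Cloud Storage bucket
    )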

.. attribute:: master_config

Optional. The Compute Engine config settings for the master instance in a cluster.

.. attribute:: secondary_worker_config

Optional. The Compute Engine config settings for additional worker instances in a cluster.
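Both of these attributes take an InstanceGroupConfig message. A hedged sketch of setting them together (the machine type, instance counts, and preemptible secondary workers are illustrative assumptions):

    from google.cloud.dataproc_v1 import types  # module path is an assumption for this release

    # Hypothetical sketch: one master plus two preemptible secondary workers.
    cluster_config = types.ClusterConfig(
        master_config=types.InstanceGroupConfig(
            num_instances=1,
            machine_type_uri="n1-standard-4",   # assumed machine type
        ),
        secondary_worker_config=types.InstanceGroupConfig(
            num_instances=2,
            is_preemptible=True,                # secondary workers are typically preemptible
        ),
    )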

.. attribute:: initialization_actions

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node’s role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

    ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
    if [[ "${ROLE}" == 'Master' ]]; then
      ... master specific actions ...
    else
      ... worker specific actions ...
    fi
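In the Python client, each action would be expressed as a NodeInitializationAction message. A hedged sketch (the script location and timeout value are assumptions):

    from google.protobuf import duration_pb2
    from google.cloud.dataproc_v1 import types  # module path is an assumption for this release

    # Hypothetical sketch: run a startup script from Cloud Storage on every node,
    # allowing it up to ten minutes to finish.
    cluster_config = types.ClusterConfig(
        initialization_actions=[
            types.NodeInitializationAction(
                executable_file="gs://my-bucket/scripts/startup.sh",   # assumed script location
                execution_timeout=duration_pb2.Duration(seconds=600),
            ),
        ],
    )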

.. attribute:: autoscaling_config

Optional. Autoscaling config for the policy associated with the cluster. The cluster does not autoscale if this field is unset.

.. attribute:: lifecycle_config

Optional. Lifecycle setting for the cluster.
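A combined sketch of these last two attributes, assuming an existing autoscaling policy and an idle-delete TTL (the policy resource name, TTL value, and module path are illustrative assumptions):

    from google.protobuf import duration_pb2
    from google.cloud.dataproc_v1 import types  # module path is an assumption for this release

    # Hypothetical sketch: attach an existing autoscaling policy and delete the
    # cluster after two hours of idleness.
    cluster_config = types.ClusterConfig(
        autoscaling_config=types.AutoscalingConfig(
            policy_uri=(
                "projects/my-project/regions/us-central1/"
                "autoscalingPolicies/my-policy"              # assumed policy resource name
            ),
        ),
        lifecycle_config=types.LifecycleConfig(
            idle_delete_ttl=duration_pb2.Duration(seconds=7200),
        ),
    )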

Inheritance

builtins.object > google.protobuf.pyext._message.CMessage > builtins.object > google.protobuf.message.Message > ClusterConfig