Class ClusterConfig (5.0.0)

public sealed class ClusterConfig : IMessage<ClusterConfig>, IEquatable<ClusterConfig>, IDeepCloneable<ClusterConfig>, IBufferMessage, IMessage

The cluster config.

Inheritance

Object > ClusterConfig

Namespace

Google.Cloud.Dataproc.V1

Assembly

Google.Cloud.Dataproc.V1.dll

Constructors

ClusterConfig()

public ClusterConfig()

ClusterConfig(ClusterConfig)

public ClusterConfig(ClusterConfig other)
Parameter
other: ClusterConfig

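The copy constructor produces a deep copy of an existing message. A minimal sketch (the bucket name is a placeholder):

```csharp
using Google.Cloud.Dataproc.V1;

// Hypothetical source config; "my-bucket" is a placeholder value.
ClusterConfig original = new ClusterConfig { ConfigBucket = "my-bucket" };

// Deep copy: later changes to `copy` do not affect `original`.
ClusterConfig copy = new ClusterConfig(original);
copy.ConfigBucket = "another-bucket";
```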
Properties

AutoscalingConfig

public AutoscalingConfig AutoscalingConfig { get; set; }

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

Property Value
Type: AutoscalingConfig

ConfigBucket

public string ConfigBucket { get; set; }

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

Property Value
Type: String

EncryptionConfig

public EncryptionConfig EncryptionConfig { get; set; }

Optional. Encryption settings for the cluster.

Property Value
Type: EncryptionConfig

EndpointConfig

public EndpointConfig EndpointConfig { get; set; }

Optional. Port/endpoint configuration for this cluster.

Property Value
Type: EndpointConfig

GceClusterConfig

public GceClusterConfig GceClusterConfig { get; set; }

Optional. The shared Compute Engine config settings for all instances in a cluster.

Property Value
Type: GceClusterConfig

InitializationActions

public RepeatedField<NodeInitializationAction> InitializationActions { get; }

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

Property Value
Type: RepeatedField&lt;NodeInitializationAction&gt;
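Because this property is a get-only RepeatedField, initialization actions are added to the existing collection rather than assigned. A minimal sketch (the script URI is a placeholder):

```csharp
using System;
using Google.Cloud.Dataproc.V1;
using Google.Protobuf.WellKnownTypes;

ClusterConfig config = new ClusterConfig();

// Add an action; the executable must be a Cloud Storage URI.
// "gs://my-bucket/scripts/setup.sh" is a hypothetical location.
config.InitializationActions.Add(new NodeInitializationAction
{
    ExecutableFile = "gs://my-bucket/scripts/setup.sh",
    ExecutionTimeout = Duration.FromTimeSpan(TimeSpan.FromMinutes(10))
});
```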

LifecycleConfig

public LifecycleConfig LifecycleConfig { get; set; }

Optional. Lifecycle setting for the cluster.

Property Value
Type: LifecycleConfig

MasterConfig

public InstanceGroupConfig MasterConfig { get; set; }

Optional. The Compute Engine config settings for the cluster's master instance.

Property Value
Type: InstanceGroupConfig

MetastoreConfig

public MetastoreConfig MetastoreConfig { get; set; }

Optional. Metastore configuration.

Property Value
Type: MetastoreConfig

SecondaryWorkerConfig

public InstanceGroupConfig SecondaryWorkerConfig { get; set; }

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

Property Value
Type: InstanceGroupConfig

SecurityConfig

public SecurityConfig SecurityConfig { get; set; }

Optional. Security settings for the cluster.

Property Value
Type: SecurityConfig

SoftwareConfig

public SoftwareConfig SoftwareConfig { get; set; }

Optional. The config settings for cluster software.

Property Value
Type: SoftwareConfig

TempBucket

public string TempBucket { get; set; }

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

Property Value
Type: String

WorkerConfig

public InstanceGroupConfig WorkerConfig { get; set; }

Optional. The Compute Engine config settings for the cluster's worker instances.

Property Value
Type: InstanceGroupConfig
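
A typical ClusterConfig combines several of the properties above via an object initializer. A minimal sketch; the bucket name, zone, and machine types are placeholder values:

```csharp
using Google.Cloud.Dataproc.V1;

ClusterConfig config = new ClusterConfig
{
    // A bucket name, not a gs://... URI (placeholder value).
    ConfigBucket = "my-staging-bucket",
    GceClusterConfig = new GceClusterConfig { ZoneUri = "us-central1-a" },
    MasterConfig = new InstanceGroupConfig
    {
        NumInstances = 1,
        MachineTypeUri = "n1-standard-4"
    },
    WorkerConfig = new InstanceGroupConfig
    {
        NumInstances = 2,
        MachineTypeUri = "n1-standard-4"
    }
};
```

Unset message-typed properties (for example AutoscalingConfig) remain null, and the service applies its defaults; only assign them when overriding default behavior.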