public interface VirtualClusterConfigOrBuilder extends MessageOrBuilder
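A minimal usage sketch (the bucket name below is a placeholder): both VirtualClusterConfig and VirtualClusterConfig.Builder implement this interface, so read-only helper code can accept either one.

```java
import com.google.cloud.dataproc.v1.KubernetesClusterConfig;
import com.google.cloud.dataproc.v1.VirtualClusterConfig;
import com.google.cloud.dataproc.v1.VirtualClusterConfigOrBuilder;

public class VirtualClusterConfigOrBuilderSketch {
  public static void main(String[] args) {
    VirtualClusterConfig.Builder builder =
        VirtualClusterConfig.newBuilder()
            .setStagingBucket("example-staging-bucket") // placeholder bucket name
            .setKubernetesClusterConfig(KubernetesClusterConfig.getDefaultInstance());

    describe(builder);         // the builder implements VirtualClusterConfigOrBuilder
    describe(builder.build()); // and so does the immutable message
  }

  // Read-only access through the interface works for both message and builder.
  static void describe(VirtualClusterConfigOrBuilder config) {
    System.out.println("stagingBucket = " + config.getStagingBucket());
    System.out.println("hasKubernetesClusterConfig = " + config.hasKubernetesClusterConfig());
  }
}
```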
Implements

MessageOrBuilder

Methods
getAuxiliaryServicesConfig()
public abstract AuxiliaryServicesConfig getAuxiliaryServicesConfig()
Optional. Configuration of auxiliary services used by this cluster.
.google.cloud.dataproc.v1.AuxiliaryServicesConfig auxiliary_services_config = 7 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| AuxiliaryServicesConfig | The auxiliaryServicesConfig. |
getAuxiliaryServicesConfigOrBuilder()
public abstract AuxiliaryServicesConfigOrBuilder getAuxiliaryServicesConfigOrBuilder()
Optional. Configuration of auxiliary services used by this cluster.
.google.cloud.dataproc.v1.AuxiliaryServicesConfig auxiliary_services_config = 7 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| AuxiliaryServicesConfigOrBuilder | |
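A rough sketch of when the ...OrBuilder getter is handy: on a builder it gives a read-only view of the nested field without building the whole message, while on a built message it simply returns the message itself. The default-instance value is a placeholder.

```java
import com.google.cloud.dataproc.v1.AuxiliaryServicesConfig;
import com.google.cloud.dataproc.v1.AuxiliaryServicesConfigOrBuilder;
import com.google.cloud.dataproc.v1.VirtualClusterConfig;

public class AuxiliaryServicesReadSketch {
  public static void main(String[] args) {
    VirtualClusterConfig.Builder builder =
        VirtualClusterConfig.newBuilder()
            .setAuxiliaryServicesConfig(AuxiliaryServicesConfig.getDefaultInstance());

    // Read-only view of the nested field; for a builder this avoids building the
    // enclosing VirtualClusterConfig just to inspect one field.
    AuxiliaryServicesConfigOrBuilder aux = builder.getAuxiliaryServicesConfigOrBuilder();
    System.out.println(aux);
  }
}
```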
getInfrastructureConfigCase()
public abstract VirtualClusterConfig.InfrastructureConfigCase getInfrastructureConfigCase()
Returns

| Type | Description |
| --- | --- |
| VirtualClusterConfig.InfrastructureConfigCase | |
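A sketch of branching on the infrastructure_config oneof. The enum constant names below follow the usual protobuf Java naming pattern and should be checked against the generated VirtualClusterConfig.InfrastructureConfigCase enum.

```java
import com.google.cloud.dataproc.v1.VirtualClusterConfig;
import com.google.cloud.dataproc.v1.VirtualClusterConfigOrBuilder;

public class InfrastructureConfigCaseSketch {
  static void handle(VirtualClusterConfigOrBuilder config) {
    switch (config.getInfrastructureConfigCase()) {
      case KUBERNETES_CLUSTER_CONFIG:
        // The oneof currently carries a Kubernetes cluster configuration.
        System.out.println(config.getKubernetesClusterConfig());
        break;
      case INFRASTRUCTURECONFIG_NOT_SET:
      default:
        System.out.println("No infrastructure config set");
    }
  }

  public static void main(String[] args) {
    handle(VirtualClusterConfig.getDefaultInstance()); // prints the not-set branch
  }
}
```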
getKubernetesClusterConfig()
public abstract KubernetesClusterConfig getKubernetesClusterConfig()
Required. The configuration for running the Dataproc cluster on Kubernetes.
.google.cloud.dataproc.v1.KubernetesClusterConfig kubernetes_cluster_config = 6 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| KubernetesClusterConfig | The kubernetesClusterConfig. |
getKubernetesClusterConfigOrBuilder()
public abstract KubernetesClusterConfigOrBuilder getKubernetesClusterConfigOrBuilder()
Required. The configuration for running the Dataproc cluster on Kubernetes.
.google.cloud.dataproc.v1.KubernetesClusterConfig kubernetes_cluster_config = 6 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| KubernetesClusterConfigOrBuilder | |
getStagingBucket()
public abstract String getStagingBucket()
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

string staging_bucket = 1 [(.google.api.field_behavior) = OPTIONAL];

Returns

| Type | Description |
| --- | --- |
| String | The stagingBucket. |
getStagingBucketBytes()
public abstract ByteString getStagingBucketBytes()
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

string staging_bucket = 1 [(.google.api.field_behavior) = OPTIONAL];

Returns

| Type | Description |
| --- | --- |
| ByteString | The bytes for stagingBucket. |
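A small sketch of the point above: the field takes a bare bucket name, not a gs:// URI. The bucket name is a placeholder.

```java
import com.google.cloud.dataproc.v1.VirtualClusterConfig;

public class StagingBucketSketch {
  public static void main(String[] args) {
    VirtualClusterConfig config =
        VirtualClusterConfig.newBuilder()
            // Bucket name only (placeholder) -- not "gs://example-staging-bucket".
            .setStagingBucket("example-staging-bucket")
            .build();

    System.out.println(config.getStagingBucket());                     // String view
    System.out.println(config.getStagingBucketBytes().toStringUtf8()); // ByteString view of the same value
  }
}
```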
getTempBucket()
public abstract String getTempBucket()
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

string temp_bucket = 2 [(.google.api.field_behavior) = OPTIONAL];

Returns

| Type | Description |
| --- | --- |
| String | The tempBucket. |
getTempBucketBytes()
public abstract ByteString getTempBucketBytes()
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.

string temp_bucket = 2 [(.google.api.field_behavior) = OPTIONAL];

Returns

| Type | Description |
| --- | --- |
| ByteString | The bytes for tempBucket. |
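A brief sketch, assuming standard proto3 behavior: when temp_bucket is left unset the getter returns an empty string, and Dataproc chooses and manages the bucket at cluster-creation time.

```java
import com.google.cloud.dataproc.v1.VirtualClusterConfig;

public class TempBucketSketch {
  public static void main(String[] args) {
    VirtualClusterConfig config = VirtualClusterConfig.getDefaultInstance();

    // Unset string fields return "" in proto3, so an empty value here means
    // the service will create and manage the temp bucket itself.
    System.out.println(config.getTempBucket().isEmpty()); // true
  }
}
```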
hasAuxiliaryServicesConfig()
public abstract boolean hasAuxiliaryServicesConfig()
Optional. Configuration of auxiliary services used by this cluster.
.google.cloud.dataproc.v1.AuxiliaryServicesConfig auxiliary_services_config = 7 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the auxiliaryServicesConfig field is set. |
hasKubernetesClusterConfig()
public abstract boolean hasKubernetesClusterConfig()
Required. The configuration for running the Dataproc cluster on Kubernetes.
.google.cloud.dataproc.v1.KubernetesClusterConfig kubernetes_cluster_config = 6 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the kubernetesClusterConfig field is set. |
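A closing sketch of the presence checks, using placeholder values: the has* methods report whether a message field was explicitly set, while the corresponding getters fall back to default instances when it was not.

```java
import com.google.cloud.dataproc.v1.KubernetesClusterConfig;
import com.google.cloud.dataproc.v1.VirtualClusterConfig;

public class PresenceCheckSketch {
  public static void main(String[] args) {
    VirtualClusterConfig config =
        VirtualClusterConfig.newBuilder()
            .setKubernetesClusterConfig(KubernetesClusterConfig.getDefaultInstance())
            .build();

    if (config.hasKubernetesClusterConfig()) {
      System.out.println("Kubernetes config: " + config.getKubernetesClusterConfig());
    }
    if (!config.hasAuxiliaryServicesConfig()) {
      // The getter would still return AuxiliaryServicesConfig.getDefaultInstance() here.
      System.out.println("Auxiliary services not configured");
    }
  }
}
```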