Class ClusterControllerClient (0.7.0)

ClusterControllerClient(
    transport=None,
    channel=None,
    credentials=None,
    client_config=None,
    client_info=None,
    client_options=None,
)

The ClusterControllerService provides methods to manage clusters of Compute Engine instances.

Methods

ClusterControllerClient

ClusterControllerClient(
    transport=None,
    channel=None,
    credentials=None,
    client_config=None,
    client_info=None,
    client_options=None,
)

Constructor.

Parameters
Name | Description
channel grpc.Channel

DEPRECATED. A Channel instance through which to make calls. This argument is mutually exclusive with credentials; providing both will raise an exception.

credentials google.auth.credentials.Credentials

The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. This argument is mutually exclusive with providing a transport instance via transport; passing both will raise an exception.

client_config dict

DEPRECATED. A dictionary of call options for each method. If not specified, the default configuration is used.

client_info google.api_core.gapic_v1.client_info.ClientInfo

The client info used to send a user-agent string along with API requests. If None, then default info will be used. Generally, you only need to set this if you're developing your own client library.

client_options Union[dict, google.api_core.client_options.ClientOptions]

Client options used to set user options on the client. The API endpoint should be set through client_options.
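
For orientation, a minimal construction sketch. Credentials are assumed to come from the environment, and the regional endpoint passed via client_options is an illustrative value, not a required setting:

from google.cloud import dataproc_v1beta2
from google.api_core.client_options import ClientOptions

# Default construction: credentials are discovered from the environment.
client = dataproc_v1beta2.ClusterControllerClient()

# Hypothetical regional endpoint override supplied through client_options.
regional_client = dataproc_v1beta2.ClusterControllerClient(
    client_options=ClientOptions(api_endpoint='us-central1-dataproc.googleapis.com:443')
)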

create_cluster

create_cluster(project_id, region, cluster, request_id=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None)

Creates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#clusteroperationmetadata).

Example

from google.cloud import dataproc_v1beta2

client = dataproc_v1beta2.ClusterControllerClient()

# TODO: Initialize `project_id`:
project_id = ''

# TODO: Initialize `region`:
region = ''

# TODO: Initialize `cluster`:
cluster = {}

response = client.create_cluster(project_id, region, cluster)

def callback(operation_future):
    # Handle result.
    result = operation_future.result()

response.add_done_callback(callback)

# Handle metadata.
metadata = response.metadata()
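
For context, a sketch of what a concrete cluster dict and an explicit request_id might look like; the cluster name, machine types, and worker counts are illustrative assumptions, not defaults:

import uuid

# Hypothetical cluster definition matching the Cluster protobuf message.
cluster = {
    'project_id': project_id,
    'cluster_name': 'my-cluster',
    'config': {
        'master_config': {'num_instances': 1, 'machine_type_uri': 'n1-standard-4'},
        'worker_config': {'num_instances': 2, 'machine_type_uri': 'n1-standard-4'},
    },
}

# A UUID request_id makes the request safe to retry (see request_id below).
response = client.create_cluster(
    project_id, region, cluster, request_id=str(uuid.uuid4()))
result = response.result()  # Block until the cluster has been created.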

Parameters
Name | Description
project_id str

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region str

Required. The Dataproc region in which to handle the request.

cluster Union[dict, Cluster]

Required. The cluster to create. If a dict is provided, it must be of the same form as the protobuf message Cluster

request_id str

Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.

delete_cluster

delete_cluster(project_id, region, cluster_name, cluster_uuid=None, request_id=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None)

Deletes a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#clusteroperationmetadata).

Example

from google.cloud import dataproc_v1beta2

client = dataproc_v1beta2.ClusterControllerClient()

# TODO: Initialize `project_id`:
project_id = ''

# TODO: Initialize `region`:
region = ''

# TODO: Initialize `cluster_name`:
cluster_name = ''

response = client.delete_cluster(project_id, region, cluster_name)

def callback(operation_future):
    # Handle result.
    result = operation_future.result()

response.add_done_callback(callback)

# Handle metadata.
metadata = response.metadata()
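
To illustrate the cluster_uuid guard described below, a hedged sketch that deletes the cluster only if it is still the instance previously looked up (UUID values are server-assigned):

# Capture the UUID of the running cluster, then delete only that exact cluster.
existing = client.get_cluster(project_id, region, cluster_name)
response = client.delete_cluster(
    project_id, region, cluster_name, cluster_uuid=existing.cluster_uuid)
response.result()  # Block until the deletion operation completes.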

Parameters
Name | Description
project_id str

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region str

Required. The Dataproc region in which to handle the request.

cluster_name str

Required. The cluster name.

cluster_uuid str

Optional. Specifying the cluster_uuid means the RPC should fail (with error NOT_FOUND) if a cluster with the specified UUID does not exist.

request_id str

Optional. A unique id used to identify the request. If the server receives two DeleteClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.

diagnose_cluster

diagnose_cluster(project_id, region, cluster_name, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None)

Gets cluster diagnostic information. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#clusteroperationmetadata). After the operation completes, Operation.response contains Empty.

Example

from google.cloud import dataproc_v1beta2

client = dataproc_v1beta2.ClusterControllerClient()

# TODO: Initialize `project_id`:
project_id = ''

# TODO: Initialize `region`:
region = ''

# TODO: Initialize `cluster_name`:
cluster_name = ''

response = client.diagnose_cluster(project_id, region, cluster_name)

def callback(operation_future):
    # Handle result.
    result = operation_future.result()

response.add_done_callback(callback)

# Handle metadata.
metadata = response.metadata()
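
If a callback is not needed, the returned operation future can simply be waited on; a minimal sketch (the timeout value is an arbitrary choice):

# Block until diagnosis finishes. Operation.response is Empty, so successful
# completion itself is the signal; details live in the operation metadata.
response.result(timeout=300)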

Parameters
Name | Description
project_id str

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region str

Required. The Dataproc region in which to handle the request.

cluster_name str

Required. The cluster name.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.

from_service_account_file

from_service_account_file(filename, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameter
Name | Description
filename str

The path to the service account private key json file.

Returns
Type | Description
ClusterControllerClient | The constructed client.
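
A short usage sketch; the key file path is a placeholder for your own downloaded service account key, not a file this library provides:

from google.cloud import dataproc_v1beta2

# 'service-account.json' is a hypothetical path to a service account key file.
client = dataproc_v1beta2.ClusterControllerClient.from_service_account_file(
    'service-account.json')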

from_service_account_json

from_service_account_json(filename, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameter
Name | Description
filename str

The path to the service account private key json file.

Returns
Type | Description
ClusterControllerClient | The constructed client.

get_cluster

get_cluster(project_id, region, cluster_name, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None)

Gets the resource representation for a cluster in a project.

Example

from google.cloud import dataproc_v1beta2

client = dataproc_v1beta2.ClusterControllerClient()

# TODO: Initialize `project_id`:
project_id = ''

# TODO: Initialize `region`:
region = ''

# TODO: Initialize `cluster_name`:
cluster_name = ''

response = client.get_cluster(project_id, region, cluster_name)
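
The returned object is a Cluster message; a few commonly read fields are sketched below (field names follow the v1beta2 Cluster proto):

print(response.cluster_name)
print(response.status.state)                         # Cluster state enum, e.g. RUNNING
print(response.config.worker_config.num_instances)   # Current primary worker count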

Parameters
Name | Description
project_id str

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region str

Required. The Dataproc region in which to handle the request.

cluster_name str

Required. The cluster name.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.

list_clusters

list_clusters(project_id, region, filter_=None, page_size=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None)

Lists all regions/{region}/clusters in a project.

Example

from google.cloud import dataproc_v1beta2

client = dataproc_v1beta2.ClusterControllerClient()

# TODO: Initialize `project_id`:
project_id = ''

# TODO: Initialize `region`:
region = ''

# Iterate over all results.
for element in client.list_clusters(project_id, region):
    # process element
    pass

# Alternatively, iterate over results one page at a time.
for page in client.list_clusters(project_id, region).pages:
    for element in page:
        # process element
        pass
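
As a hedged illustration of the filter_ parameter described below, this sketch lists only active clusters carrying a hypothetical env=staging label:

filter_ = 'status.state = ACTIVE AND labels.env = staging'

for cluster in client.list_clusters(project_id, region, filter_=filter_):
    print(cluster.cluster_name, cluster.status.state)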

Parameters
Name | Description
project_id str

Required. The ID of the Google Cloud Platform project that the cluster belongs to.

region str

Required. The Dataproc region in which to handle the request.

filter_ str

Optional. A filter constraining the clusters to list. Filters are case-sensitive and have the following syntax: field = value [AND [field = value]] ... where field is one of status.state, clusterName, or labels.[KEY], and [KEY] is a label key. value can be * to match all values. status.state can be one of the following: ACTIVE, INACTIVE, CREATING, RUNNING, ERROR, DELETING, or UPDATING. ACTIVE contains the CREATING, UPDATING, and RUNNING states. INACTIVE contains the DELETING and ERROR states. clusterName is the name of the cluster provided at creation time. Only the logical AND operator is supported; space-separated items are treated as having an implicit AND operator. Example filter: status.state = ACTIVE AND clusterName = mycluster AND labels.env = staging AND labels.starred = *

page_size int

The maximum number of resources contained in the underlying API response. If page streaming is performed per-resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.

update_cluster

update_cluster(project_id, region, cluster_name, cluster, update_mask, graceful_decommission_timeout=None, request_id=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None)

Updates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#clusteroperationmetadata).

Example

from google.cloud import dataproc_v1beta2

client = dataproc_v1beta2.ClusterControllerClient()

# TODO: Initialize `project_id`:
project_id = ''

# TODO: Initialize `region`:
region = ''

# TODO: Initialize `cluster_name`:
cluster_name = ''

# TODO: Initialize `cluster`:
cluster = {}

# TODO: Initialize `update_mask`:
update_mask = {}

response = client.update_cluster(project_id, region, cluster_name, cluster, update_mask)

def callback(operation_future):
    # Handle result.
    result = operation_future.result()

response.add_done_callback(callback)

# Handle metadata.
metadata = response.metadata()

Parameters
Name | Description
project_id str

Required. The ID of the Google Cloud Platform project the cluster belongs to.

region str

Required. The Dataproc region in which to handle the request.

cluster_name str

Required. The cluster name.

cluster Union[dict, Cluster]

Required. The changes to the cluster. If a dict is provided, it must be of the same form as the protobuf message Cluster

update_mask Union[dict, FieldMask]

Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value, as follows:

{ "config": { "workerConfig": { "numInstances": "5" } } }

Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would be set as follows:

{ "config": { "secondaryWorkerConfig": { "numInstances": "5" } } }

Note: currently only the following fields can be updated:

Mask | Purpose
labels | Updates labels
config.worker_config.num_instances | Resize primary worker group
config.secondary_worker_config.num_instances | Resize secondary worker group
config.lifecycle_config.auto_delete_ttl | Reset MAX TTL duration
config.lifecycle_config.auto_delete_time | Update MAX TTL deletion timestamp
config.lifecycle_config.idle_delete_ttl | Update Idle TTL duration
config.autoscaling_config.policy_uri | Use, stop using, or change autoscaling policies

If a dict is provided, it must be of the same form as the protobuf message FieldMask. A worked sketch follows the Exceptions table below.

graceful_decommission_timeout Union[dict, Duration]

Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day (see the JSON representation of Duration: https://developers.google.com/protocol-buffers/docs/proto3#json). Only supported on Dataproc image versions 1.2 and higher. If a dict is provided, it must be of the same form as the protobuf message Duration.

request_id str

Optional. A unique id used to identify the request. If the server receives two UpdateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.
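
As referenced in the update_mask description above, a worked sketch of resizing the primary worker group; the cluster name, target size, and decommission timeout are illustrative assumptions:

# Resize the primary worker group of a hypothetical cluster to 5 nodes.
cluster = {'config': {'worker_config': {'num_instances': 5}}}
update_mask = {'paths': ['config.worker_config.num_instances']}

response = client.update_cluster(
    project_id,
    region,
    'my-cluster',
    cluster,
    update_mask,
    # Give in-flight YARN jobs up to 5 minutes before nodes are removed.
    graceful_decommission_timeout={'seconds': 300},
)
response.result()  # Block until the update operation completes.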