After you create a cluster, you can stop it, then restart it when you need it. Stopping an idle cluster avoids incurring VM charges and removes the need to delete the cluster and later recreate it with the same configuration.
Feature Notes:
- The cluster start/stop feature is only supported with the following Dataproc image versions or later:
  - 1.4.35-debian10/ubuntu18
  - 1.5.10-debian10/ubuntu18
  - 2.0.0-RC6-debian10/ubuntu18
- Stopping individual cluster nodes is not recommended since the status of a stopped VM may not be in sync with cluster status, which can result in errors.
Stopping a cluster
Stopping a cluster stops all cluster Compute Engine VMs. You do not pay for these VMs while they are stopped. However, you continue to pay for any associated cluster resources, such as persistent disks.
Notes:
- Running operations: If a cluster has running operations (such as update or diagnose operations), the stop request will fail.
- Running jobs: If a cluster has running jobs, the stop request will succeed, the VMs will stop, and the running jobs will fail (see the example after these notes).
- Stop response: When the stop request returns a stop operation to the user or caller in the response, the cluster will be in a STOPPING state, and no further jobs can be submitted (SubmitJob requests will fail).
- Autoscaling: If you stop a cluster that has autoscaling enabled, the Dataproc autoscaler stops scaling the cluster and resumes scaling after the cluster is started again. If you enable autoscaling on a stopped cluster, the autoscaling policy takes effect only after the cluster is started.
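Because a stop request does not wait for running jobs, you may want to confirm that no jobs are active before stopping. A minimal gcloud sketch, assuming a cluster named my-cluster in region us-central1 (placeholder values):
# List jobs that are still active on the cluster
# (my-cluster and us-central1 are placeholder values).
gcloud dataproc jobs list \
    --cluster=my-cluster \
    --region=us-central1 \
    --state-filter=active

# When no active jobs remain, stop the cluster.
gcloud dataproc clusters stop my-cluster --region=us-central1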
Monitoring the stop operation
You can run gcloud dataproc operations describe operation-id to monitor the long-running cluster stop operation. You can also use the gcloud dataproc clusters describe cluster-name command to monitor the cluster's status as it transitions from RUNNING to STOPPING to STOPPED.
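For example, assuming a cluster named my-cluster in region us-central1 and an operation ID returned by the stop request (all placeholder values):
# Describe the long-running stop operation
# (replace operation-id with the ID returned in the stop response).
gcloud dataproc operations describe operation-id --region=us-central1

# Check the cluster status as it moves from RUNNING to STOPPING to STOPPED.
gcloud dataproc clusters describe my-cluster \
    --region=us-central1 \
    --format="value(status.state)"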
Limitations
You cannot stop:
- clusters with secondary workers
- clusters with local SSDs
After a cluster is stopped, you cannot:
- update the cluster
- submit jobs to the cluster
- access notebooks on the cluster using the Dataproc component gateway
Starting a cluster
When you start a stopped cluster, any initialization actions will not be re-run. Initialization actions are only run on cluster nodes when the cluster is created and when nodes are added when the cluster is scaled up.
After the start operation completes, you can immediately submit jobs to the cluster. However, execution of these jobs can be delayed (approximately 30 seconds) to allow HDFS and YARN to become operational.
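For example, here is a sketch that starts a cluster and immediately submits the SparkPi example bundled with Dataproc images, with my-cluster and us-central1 as placeholder values:
# Start the stopped cluster (my-cluster and us-central1 are placeholder values).
gcloud dataproc clusters start my-cluster --region=us-central1

# Submit a job as soon as the start operation completes; execution may be
# delayed by roughly 30 seconds while HDFS and YARN become operational.
gcloud dataproc jobs submit spark \
    --cluster=my-cluster \
    --region=us-central1 \
    --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    -- 1000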
Using Stop/Start
You can stop and start a cluster using the gcloud CLI, the Dataproc REST API, or the Google Cloud console.
gcloud command
Stop a cluster
gcloud dataproc clusters stop cluster-name \
    --region=region
Start a cluster
gcloud dataproc clusters start cluster-name \
    --region=region
REST API
Stop a cluster
Submit a clusters.stop request.
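For example, a clusters.stop request can be sent with curl, using an empty request body; PROJECT_ID, us-central1, and my-cluster are placeholder values:
# Stop a cluster through the Dataproc REST API
# (PROJECT_ID, us-central1, and my-cluster are placeholder values).
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{}' \
    "https://dataproc.googleapis.com/v1/projects/PROJECT_ID/regions/us-central1/clusters/my-cluster:stop"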
Start a cluster
Submit a clusters.start request.
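The corresponding clusters.start call follows the same pattern with the :start verb (same placeholder values):
# Start a stopped cluster through the Dataproc REST API.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{}' \
    "https://dataproc.googleapis.com/v1/projects/PROJECT_ID/regions/us-central1/clusters/my-cluster:start"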
Console
In the Google Cloud console, click the cluster name on the Dataproc Clusters page, then click STOP to stop the cluster or START to start it.