Dataproc staging and temp buckets

When you create a cluster, Dataproc by default creates a Cloud Storage staging bucket and a Cloud Storage temp bucket in your project, or reuses existing Dataproc-created staging and temp buckets from previous cluster creation requests.

  • Staging bucket: Used to stage cluster job dependencies, job driver output, and cluster config files. It also receives output from the Cloud SDK gcloud dataproc clusters diagnose command (see the example after this list).

  • Temp bucket: Used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files.
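
For example, running the diagnose command writes its diagnostics output to the cluster's staging bucket (cluster-name and region are placeholders):

gcloud dataproc clusters diagnose cluster-name \
    --region=region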

If you do not specify a staging or temp bucket, Dataproc sets a Cloud Storage location in US, ASIA, or EU for your cluster's staging and temp buckets according to the Compute Engine zone where your cluster is deployed, and then creates and manages these project-level, per-location buckets. Dataproc-created staging and temp buckets are shared among clusters in the same region. By default, the temp bucket has a TTL of 90 days.
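
If you want to confirm the location and TTL of a Dataproc-created bucket, you can inspect it with gsutil; the bucket name below is a placeholder for the name shown in your project:

gsutil ls -L -b gs://dataproc-temp-bucket-name       # bucket metadata, including location
gsutil lifecycle get gs://dataproc-temp-bucket-name  # lifecycle rule implementing the 90-day TTL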

Instead of relying on the creation of default staging and temp buckets, you can specify existing Cloud Storage buckets that Dataproc will use as your cluster's staging and temp buckets.

gcloud command

Run the gcloud dataproc clusters create command locally in a terminal window or in Cloud Shell with the --bucket and/or --temp-bucket flag to specify your cluster's staging and/or temp bucket.

gcloud dataproc clusters create cluster-name \
    --region=region \
    --bucket=bucket-name \
    --temp-bucket=bucket-name \
    other args ...

REST API

Use the ClusterConfig.configBucket and ClusterConfig.tempBucket fields in a clusters.create request to specify your cluster's staging and temp buckets.
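
A minimal curl sketch of such a request, assuming placeholder project-id, region, and bucket names (other required cluster configuration is omitted):

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "clusterName": "cluster-name",
          "config": {
            "configBucket": "staging-bucket-name",
            "tempBucket": "temp-bucket-name"
          }
        }' \
    "https://dataproc.googleapis.com/v1/projects/project-id/regions/region/clusters"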

Console

Use the Cloud Storage staging bucket field in the Create a cluster → Advanced options panel of the Google Cloud Console to specify or select your cluster's staging bucket.

Note: Currently, specifying a temp bucket using the Cloud Console is not supported.

Dataproc uses a defined folder structure for Cloud Storage buckets attached to clusters. Dataproc also supports attaching more than one cluster to a Cloud Storage bucket. The folder structure used for saving job driver output in Cloud Storage is:

cloud-storage-bucket-name
  - google-cloud-dataproc-metainfo
    - list of cluster IDs
      - list of job IDs
        - list of output logs for a job
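
For example, you can browse this structure with gsutil; the bucket name is a placeholder:

gsutil ls gs://cloud-storage-bucket-name/google-cloud-dataproc-metainfo/

Each entry is a cluster ID folder; listing a cluster ID and then a job ID within it shows the output logs for that job.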

You can use the gcloud command-line tool, the Dataproc API, or the Google Cloud Console to list the names of a cluster's staging and temp buckets.

gcloud command

Run the gcloud dataproc clusters describe command locally in a terminal window or in Cloud Shell. The staging and temp buckets associated with your cluster are listed in the output.

gcloud dataproc clusters describe cluster-name \
    --region=region
...
clusterName: cluster-name
clusterUuid: daa40b3f-5ff5-4e89-9bf1-bcbfec ...
config:
    configBucket: dataproc-...
    ...
    tempBucket: dataproc-temp...

REST API

Call clusters.get to list the cluster details, including the names of the cluster's staging and temp buckets.

{
  "projectId": "vigilant-sunup-163401",
  "clusterName": "cluster-name",
  "config": {
    "configBucket": "dataproc-...",
    ...
    "tempBucket": "dataproc-temp-...",
    ...
  }
}
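
The same fields can be fetched with a minimal curl sketch of the clusters.get request (project-id, region, and cluster-name are placeholders):

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://dataproc.googleapis.com/v1/projects/project-id/regions/region/clusters/cluster-name"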

Console

View cluster details, including the name of the cluster's staging bucket, in the Cloud Console.

Note: Currently, console display of the temp bucket is not supported.