[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[[["\u003cp\u003eDataproc Serverless utilizes Cloud Storage staging buckets to store workload dependencies, config files, and job driver console output.\u003c/p\u003e\n"],["\u003cp\u003eThese staging buckets are created by Dataproc Serverless within your project, or an existing one is reused, similar to the default bucket used with Dataproc on Compute Engine clusters.\u003c/p\u003e\n"],["\u003cp\u003eDataproc Serverless creates regional staging buckets, which are shared across workloads within the same region, based on the Compute Engine zone where the workload is deployed.\u003c/p\u003e\n"],["\u003cp\u003eThe staging buckets created by Dataproc Serverless can be identified by filtering for the \u003ccode\u003edataproc-staging-\u003c/code\u003e prefix in Cloud Storage, and they are created with a 0-second soft delete retention.\u003c/p\u003e\n"]]],[],null,["# Serverless for Apache Spark staging buckets\n\nThis document provides information about Serverless for Apache Spark staging buckets.\nServerless for Apache Spark creates a Cloud Storage staging bucket in your project\nor reuses an existing staging bucket from previous batch\ncreation requests. This is the default bucket created by\nDataproc on Compute Engine clusters. For more\ninformation, see\n[Dataproc staging and temp buckets](/dataproc/docs/concepts/configuring-clusters/staging-bucket).\n\nServerless for Apache Spark stores workload dependencies, config files, and\njob driver console output in the staging bucket.\n\nServerless for Apache Spark sets regional staging buckets in\n[Cloud Storage locations](/storage/docs/locations#location-r)\naccording to the Compute Engine zone where your workload is deployed,\nand then creates and manages these project-level, per-location buckets.\nServerless for Apache Spark-created staging buckets are shared among\nworkloads in the same region, and are created with a\nCloud Storage [soft delete retention](/storage/docs/soft-delete#retention-duration)\nduration set to 0 seconds.\n| To locate the Dataproc default staging bucket, in the Google Cloud console, go to **[Cloud Storage](https://console.cloud.google.com/storage/browser)** and filter the results using the `dataproc-staging-` prefix."]]