# Serverless for Apache Spark staging buckets
This document provides information about Serverless for Apache Spark staging buckets.
Serverless for Apache Spark creates a Cloud Storage staging bucket in your project
or reuses an existing staging bucket from previous batch creation requests. This is
the same default bucket created by Dataproc on Compute Engine clusters. For more
information, see
[Dataproc staging and temp buckets](/dataproc/docs/concepts/configuring-clusters/staging-bucket).
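As an illustration, here is a minimal sketch of a batch creation request using the `google-cloud-dataproc` Python client. The project ID, region, file URI, and bucket name are placeholders. If you omit `execution_config.staging_bucket`, the service creates or reuses a staging bucket for you, as described above.

```python
# Minimal sketch: submit a Serverless for Apache Spark batch workload.
# If execution_config.staging_bucket is omitted, the service creates or
# reuses a regional dataproc-staging-* bucket in your project.
from google.cloud import dataproc_v1

project_id = "my-project"   # placeholder
region = "us-central1"      # placeholder

client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://my-bucket/job.py"  # placeholder
    ),
    environment_config=dataproc_v1.EnvironmentConfig(
        execution_config=dataproc_v1.ExecutionConfig(
            # Optional: pin an existing bucket instead of letting the
            # service create or reuse one.
            staging_bucket="my-existing-staging-bucket"  # placeholder
        )
    ),
)

operation = client.create_batch(
    parent=f"projects/{project_id}/locations/{region}",
    batch=batch,
)
print(operation.result())  # blocks until the batch reaches a terminal state
```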
Serverless for Apache Spark stores workload dependencies, config files, and
job driver console output in the staging bucket.
Serverless for Apache Spark sets regional staging buckets in
[Cloud Storage locations](/storage/docs/locations#location-r)
according to the Compute Engine zone where your workload is deployed,
and then creates and manages these project-level, per-location buckets.
Staging buckets created by Serverless for Apache Spark are shared among
workloads in the same region, and are created with a Cloud Storage
[soft delete retention](/storage/docs/soft-delete#retention-duration)
duration set to 0 seconds.
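To verify this, a small sketch using the `google-cloud-storage` Python client, assuming a version recent enough to expose soft delete policies (roughly 2.16+); the bucket name below is a placeholder.

```python
# Sketch: inspect the soft delete policy of a staging bucket.
# Assumes google-cloud-storage exposes Bucket.soft_delete_policy (~v2.16+).
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("dataproc-staging-us-central1-example")  # placeholder

# Service-created staging buckets use a 0-second retention, so
# deleted objects are not recoverable via soft delete.
policy = bucket.soft_delete_policy
print(f"soft delete retention: {policy.retention_duration_seconds}s")
```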
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],["Ultimo aggiornamento 2025-09-04 UTC."],[[["\u003cp\u003eDataproc Serverless utilizes Cloud Storage staging buckets to store workload dependencies, config files, and job driver console output.\u003c/p\u003e\n"],["\u003cp\u003eThese staging buckets are created by Dataproc Serverless within your project, or an existing one is reused, similar to the default bucket used with Dataproc on Compute Engine clusters.\u003c/p\u003e\n"],["\u003cp\u003eDataproc Serverless creates regional staging buckets, which are shared across workloads within the same region, based on the Compute Engine zone where the workload is deployed.\u003c/p\u003e\n"],["\u003cp\u003eThe staging buckets created by Dataproc Serverless can be identified by filtering for the \u003ccode\u003edataproc-staging-\u003c/code\u003e prefix in Cloud Storage, and they are created with a 0-second soft delete retention.\u003c/p\u003e\n"]]],[],null,["# Serverless for Apache Spark staging buckets\n\nThis document provides information about Serverless for Apache Spark staging buckets.\nServerless for Apache Spark creates a Cloud Storage staging bucket in your project\nor reuses an existing staging bucket from previous batch\ncreation requests. This is the default bucket created by\nDataproc on Compute Engine clusters. For more\ninformation, see\n[Dataproc staging and temp buckets](/dataproc/docs/concepts/configuring-clusters/staging-bucket).\n\nServerless for Apache Spark stores workload dependencies, config files, and\njob driver console output in the staging bucket.\n\nServerless for Apache Spark sets regional staging buckets in\n[Cloud Storage locations](/storage/docs/locations#location-r)\naccording to the Compute Engine zone where your workload is deployed,\nand then creates and manages these project-level, per-location buckets.\nServerless for Apache Spark-created staging buckets are shared among\nworkloads in the same region, and are created with a\nCloud Storage [soft delete retention](/storage/docs/soft-delete#retention-duration)\nduration set to 0 seconds.\n| To locate the Dataproc default staging bucket, in the Google Cloud console, go to **[Cloud Storage](https://console.cloud.google.com/storage/browser)** and filter the results using the `dataproc-staging-` prefix."]]