# Profile Google Cloud Serverless for Apache Spark resource usage
This document describes how to profile Google Cloud Serverless for Apache Spark resource usage. [Cloud Profiler](/profiler/docs) continuously gathers and reports application CPU usage and memory allocation information. You can enable profiling when you submit a batch or create a session workload by using the profiling properties listed in the following table. Google Cloud Serverless for Apache Spark appends related JVM options to the `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions` configurations used for the workload.
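
| Property | Description |
|---|---|
| `dataproc.profiling.enabled` | Set to `true` to enable profiling for the batch or session workload. |
| `dataproc.profiling.name` | The name of the profile created in the Profiler service. The default format is `spark-WORKLOAD_TYPE-WORKLOAD_ID`. |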
Notes:

- Serverless for Apache Spark sets the profiler version to either the [batch UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch.FIELDS.uuid) or the [session UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.sessions#Session.FIELDS.uuid).
- Profiler supports the following Spark workload types: `Spark`, `PySpark`, `SparkSql`, and `SparkR`.
- A workload must run for more than three minutes to allow Profiler to collect and upload data to a project.
- You can override profiling options submitted with a workload by constructing a `SparkConf` and then setting `extraJavaOptions` in your code, as shown in the sketch after this list. Note that setting `extraJavaOptions` properties when the workload is submitted doesn't override profiling options submitted with the workload.
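
As an illustration of that last note, the following is a minimal PySpark sketch of overriding the options in code; the JVM flags shown are placeholders, not the options that Serverless for Apache Spark appends.

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Building a SparkConf in the workload code and setting extraJavaOptions
# here replaces any profiling options submitted with the workload.
# The flags below are illustrative placeholders only.
conf = (
    SparkConf()
    .set("spark.driver.extraJavaOptions", "-XX:+UseG1GC")
    .set("spark.executor.extraJavaOptions", "-XX:+UseG1GC")
)

spark = SparkSession.builder.config(conf=conf).getOrCreate()
```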
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],["Ultimo aggiornamento 2025-09-04 UTC."],[[["\u003cp\u003eDataproc Serverless for Spark allows profiling of resource usage, including CPU and memory, through Cloud Profiler.\u003c/p\u003e\n"],["\u003cp\u003eProfiling is enabled by setting the \u003ccode\u003edataproc.profiling.enabled\u003c/code\u003e property to \u003ccode\u003etrue\u003c/code\u003e when submitting a batch or creating a session workload.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003edataproc.profiling.name\u003c/code\u003e property sets the profile name in the Profiler service, with a default format of \u003ccode\u003espark-WORKLOAD_TYPE-WORKLOAD_ID\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eWorkloads must run for at least three minutes for Profiler to successfully gather and upload the resource usage data.\u003c/p\u003e\n"],["\u003cp\u003eTo enable profiling, the Profiler must be enabled, the correct roles must be assigned to the custom service accounts if needed, and the profiling properties must be set during workload submission or session creation.\u003c/p\u003e\n"]]],[],null,["# Profile Google Cloud Serverless for Apache Spark resource usage\n\nThis document describes how to profile Google Cloud Serverless for Apache Spark resource\nusage. [Cloud Profiler](/profiler/docs) continuously gathers and reports\napplication CPU usage and memory allocation information. You can enable\nprofiling when you submit a batch or create a session workload\nby using the profiling properties listed in the following table.\nGoogle Cloud Serverless for Apache Spark appends related JVM options to\nthe `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions`\nconfigurations used for the workload.\n\nNotes:\n\n- Serverless for Apache Spark sets the profiler version to either the [batch UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch.FIELDS.uuid) or the [session UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.sessions#Session.FIELDS.uuid).\n- Profiler supports the following Spark workload types: `Spark`, `PySpark`, `SparkSql`, and `SparkR`.\n- A workload must run for more than three minutes to allow Profiler to collect and upload data to a project.\n- You can override profiling options submitted with a workload by constructing a `SparkConf`, and then setting `extraJavaOptions` in your code. Note that setting `extraJavaOptions` properties when the workload is submitted doesn't override profiling options submitted with the workload.\n\nFor an example of profiler options used with a batch submission, see the\n[PySpark batch workload example](#pyspark_batch_workload_example).\n\nEnable profiling\n----------------\n\nComplete the following steps to enable profiling on a workload:\n\n1. [Enable the Profiler](/profiler/docs/profiling-java#enabling-profiler).\n2. If you are using a [custom VM service account](/dataproc-serverless/docs/concepts/service-account), grant the [Cloud Profiler Agent](/profiler/docs/iam#cloudprofiler.agent) role to the custom VM service account. 
View profiles
-------------

You can view profiles from [Profiler](https://console.cloud.google.com/profiler) in the Google Cloud console.

What's next
-----------

- See [Monitor and troubleshoot Serverless for Apache Spark workloads](/dataproc-serverless/docs/guides/monitor-troubleshoot-batches).