# Profile Google Cloud Serverless for Apache Spark resource usage
This document describes how to profile Google Cloud Serverless for Apache Spark resource usage. [Cloud Profiler](/profiler/docs) continuously gathers and reports application CPU usage and memory allocation information. You can enable profiling when you submit a batch or create a session workload by using the profiling properties listed in the following table.

| Property | Description |
|---|---|
| `dataproc.profiling.enabled` | Set to `true` to enable profiling for the workload. |
| `dataproc.profiling.name` | The profile name in the Profiler service. Default: `spark-WORKLOAD_TYPE-WORKLOAD_ID`. |

Google Cloud Serverless for Apache Spark appends related JVM options to the `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions` configurations used for the workload.
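If you want to confirm what was appended, a minimal sketch like the following reads those two properties back from inside a running workload; the property lookup is standard Spark API, and nothing here is specific to profiling:

```python
# Minimal sketch: inspect the JVM options that Serverless for Apache Spark
# appended to the workload configuration. Assumes this runs as the main file
# of a submitted batch; the appended profiling options appear in the values.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
conf = spark.sparkContext.getConf()
print(conf.get("spark.driver.extraJavaOptions", ""))
print(conf.get("spark.executor.extraJavaOptions", ""))
```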
Notes:

- Serverless for Apache Spark sets the profiler version to either the [batch UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch.FIELDS.uuid) or the [session UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.sessions#Session.FIELDS.uuid).
- Profiler supports the following Spark workload types: `Spark`, `PySpark`, `SparkSql`, and `SparkR`.
- A workload must run for more than three minutes to allow Profiler to collect and upload data to a project.
- You can override profiling options submitted with a workload by constructing a `SparkConf` and setting `extraJavaOptions` in your code, as shown in the sketch after this list. Note that setting `extraJavaOptions` properties when the workload is submitted doesn't override profiling options submitted with the workload.
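Here is a minimal sketch of that override, assuming a hypothetical PySpark workload file; the G1GC flag is illustrative only, not the option the service actually sets:

```python
# Minimal sketch: override the profiling options submitted with the workload
# by building a SparkConf in application code. The G1GC flag is a
# placeholder; use whatever JVM options your workload needs.
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf()
conf.set("spark.driver.extraJavaOptions", "-XX:+UseG1GC")
conf.set("spark.executor.extraJavaOptions", "-XX:+UseG1GC")

spark = SparkSession.builder.config(conf=conf).getOrCreate()
# ... run the workload with the profiling options replaced by the values above.
```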
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-09-09 UTC."],[[["\u003cp\u003eDataproc Serverless for Spark allows profiling of resource usage, including CPU and memory, through Cloud Profiler.\u003c/p\u003e\n"],["\u003cp\u003eProfiling is enabled by setting the \u003ccode\u003edataproc.profiling.enabled\u003c/code\u003e property to \u003ccode\u003etrue\u003c/code\u003e when submitting a batch or creating a session workload.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003edataproc.profiling.name\u003c/code\u003e property sets the profile name in the Profiler service, with a default format of \u003ccode\u003espark-WORKLOAD_TYPE-WORKLOAD_ID\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eWorkloads must run for at least three minutes for Profiler to successfully gather and upload the resource usage data.\u003c/p\u003e\n"],["\u003cp\u003eTo enable profiling, the Profiler must be enabled, the correct roles must be assigned to the custom service accounts if needed, and the profiling properties must be set during workload submission or session creation.\u003c/p\u003e\n"]]],[],null,["# Profile Google Cloud Serverless for Apache Spark resource usage\n\nThis document describes how to profile Google Cloud Serverless for Apache Spark resource\nusage. [Cloud Profiler](/profiler/docs) continuously gathers and reports\napplication CPU usage and memory allocation information. You can enable\nprofiling when you submit a batch or create a session workload\nby using the profiling properties listed in the following table.\nGoogle Cloud Serverless for Apache Spark appends related JVM options to\nthe `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions`\nconfigurations used for the workload.\n\nNotes:\n\n- Serverless for Apache Spark sets the profiler version to either the [batch UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch.FIELDS.uuid) or the [session UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.sessions#Session.FIELDS.uuid).\n- Profiler supports the following Spark workload types: `Spark`, `PySpark`, `SparkSql`, and `SparkR`.\n- A workload must run for more than three minutes to allow Profiler to collect and upload data to a project.\n- You can override profiling options submitted with a workload by constructing a `SparkConf`, and then setting `extraJavaOptions` in your code. Note that setting `extraJavaOptions` properties when the workload is submitted doesn't override profiling options submitted with the workload.\n\nFor an example of profiler options used with a batch submission, see the\n[PySpark batch workload example](#pyspark_batch_workload_example).\n\nEnable profiling\n----------------\n\nComplete the following steps to enable profiling on a workload:\n\n1. [Enable the Profiler](/profiler/docs/profiling-java#enabling-profiler).\n2. If you are using a [custom VM service account](/dataproc-serverless/docs/concepts/service-account), grant the [Cloud Profiler Agent](/profiler/docs/iam#cloudprofiler.agent) role to the custom VM service account. 
This role contains required Profiler permissions.\n3. Set profiling properties when you [submit a batch workload](/dataproc-serverless/docs/quickstarts/spark-batch#submit_a_spark_batch_workload) or [create a session template](/dataproc-serverless/docs/quickstarts/jupyterlab-sessions#create_a_serverless_runtime_template).\n\n### PySpark batch workload example\n\nThe following example uses the gcloud CLI to submit a PySpark batch\nworkload with profiling enabled. \n\n```\ngcloud dataproc batches submit pyspark PYTHON_WORKLOAD_FILE \\\n --region=REGION \\\n --properties=dataproc.profiling.enabled=true,dataproc.profiling.name=PROFILE_NAME \\\n -- other args\n```\n\nTwo profiles are created:\n\n- \u003cvar translate=\"no\"\u003ePROFILE_NAME\u003c/var\u003e`-driver` to profile spark driver tasks\n- \u003cvar translate=\"no\"\u003ePROFILE_NAME\u003c/var\u003e`-executor` to profile spark executor tasks\n\nView profiles\n-------------\n\nYou can view profiles from [Profiler](https://console.cloud.google.com/profiler)\nin the Google Cloud console.\n\nWhat's next\n-----------\n\n- See [Monitor and troubleshoot Serverless for Apache Spark workloads](/dataproc-serverless/docs/guides/monitor-troubleshoot-batches)."]]