# Profile Google Cloud Serverless for Apache Spark resource usage

This document describes how to profile Google Cloud Serverless for Apache Spark resource
usage. [Cloud Profiler](/profiler/docs) continuously gathers and reports
application CPU usage and memory allocation information. You can enable
profiling when you submit a batch or create a session workload
by using the profiling properties listed in the following table.
Google Cloud Serverless for Apache Spark appends the related JVM options to
the `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions`
configurations used for the workload.

| Property | Description |
|---|---|
| `dataproc.profiling.enabled` | Set to `true` to enable profiling of the workload. |
| `dataproc.profiling.name` | Name of the profile created in the Profiler service. Default: `spark-WORKLOAD_TYPE-WORKLOAD_ID`. |

Notes:

- Serverless for Apache Spark sets the profiler version to either the
  [batch UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch.FIELDS.uuid)
  or the [session UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.sessions#Session.FIELDS.uuid).
- Profiler supports the following Spark workload types: `Spark`, `PySpark`, `SparkSql`, and `SparkR`.
- A workload must run for more than three minutes to allow Profiler to collect and upload data to a project.
- You can override the profiling options submitted with a workload by constructing a
  `SparkConf` and then setting `extraJavaOptions` in your code, as shown in the
  sketch after this list. Note that setting `extraJavaOptions` properties when the
  workload is submitted doesn't override the profiling options submitted with the workload.
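
The following is a minimal PySpark sketch of that override, assuming a batch submitted
with the profiling properties above; the JVM option value is a hypothetical placeholder,
not a documented profiler flag.

```
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf()
# Replace the executor JVM options that were attached when the workload
# was submitted. The value below is an illustrative placeholder; substitute
# the profiler options you actually want the executors to use.
conf.set("spark.executor.extraJavaOptions", "-Dexample.profiling.option=value")

spark = SparkSession.builder.config(conf=conf).getOrCreate()
```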
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[[["\u003cp\u003eDataproc Serverless for Spark allows profiling of resource usage, including CPU and memory, through Cloud Profiler.\u003c/p\u003e\n"],["\u003cp\u003eProfiling is enabled by setting the \u003ccode\u003edataproc.profiling.enabled\u003c/code\u003e property to \u003ccode\u003etrue\u003c/code\u003e when submitting a batch or creating a session workload.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003edataproc.profiling.name\u003c/code\u003e property sets the profile name in the Profiler service, with a default format of \u003ccode\u003espark-WORKLOAD_TYPE-WORKLOAD_ID\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eWorkloads must run for at least three minutes for Profiler to successfully gather and upload the resource usage data.\u003c/p\u003e\n"],["\u003cp\u003eTo enable profiling, the Profiler must be enabled, the correct roles must be assigned to the custom service accounts if needed, and the profiling properties must be set during workload submission or session creation.\u003c/p\u003e\n"]]],[],null,["# Profile Google Cloud Serverless for Apache Spark resource usage\n\nThis document describes how to profile Google Cloud Serverless for Apache Spark resource\nusage. [Cloud Profiler](/profiler/docs) continuously gathers and reports\napplication CPU usage and memory allocation information. You can enable\nprofiling when you submit a batch or create a session workload\nby using the profiling properties listed in the following table.\nGoogle Cloud Serverless for Apache Spark appends related JVM options to\nthe `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions`\nconfigurations used for the workload.\n\nNotes:\n\n- Serverless for Apache Spark sets the profiler version to either the [batch UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch.FIELDS.uuid) or the [session UUID](/dataproc-serverless/docs/reference/rest/v1/projects.locations.sessions#Session.FIELDS.uuid).\n- Profiler supports the following Spark workload types: `Spark`, `PySpark`, `SparkSql`, and `SparkR`.\n- A workload must run for more than three minutes to allow Profiler to collect and upload data to a project.\n- You can override profiling options submitted with a workload by constructing a `SparkConf`, and then setting `extraJavaOptions` in your code. Note that setting `extraJavaOptions` properties when the workload is submitted doesn't override profiling options submitted with the workload.\n\nFor an example of profiler options used with a batch submission, see the\n[PySpark batch workload example](#pyspark_batch_workload_example).\n\nEnable profiling\n----------------\n\nComplete the following steps to enable profiling on a workload:\n\n1. [Enable the Profiler](/profiler/docs/profiling-java#enabling-profiler).\n2. If you are using a [custom VM service account](/dataproc-serverless/docs/concepts/service-account), grant the [Cloud Profiler Agent](/profiler/docs/iam#cloudprofiler.agent) role to the custom VM service account. 

### PySpark batch workload example

The following example uses the gcloud CLI to submit a PySpark batch
workload with profiling enabled.

```
gcloud dataproc batches submit pyspark PYTHON_WORKLOAD_FILE \
    --region=REGION \
    --properties=dataproc.profiling.enabled=true,dataproc.profiling.name=PROFILE_NAME \
    -- other args
```

Two profiles are created:

- `PROFILE_NAME-driver` to profile Spark driver tasks
- `PROFILE_NAME-executor` to profile Spark executor tasks

View profiles
-------------

You can view profiles from [Profiler](https://console.cloud.google.com/profiler)
in the Google Cloud console.

What's next
-----------

- See [Monitor and troubleshoot Serverless for Apache Spark workloads](/dataproc-serverless/docs/guides/monitor-troubleshoot-batches).