This document shows you how to enable the Dataproc Spark performance enhancements to help your Dataproc Spark jobs process more data in less time and at reduced cost.

Dataproc Spark performance enhancements include:

- Spark Optimizer enhancements:
  - Optimizer rules written for better Spark plans
  - Improved performance of the Dataproc BigQuery connector when used in Spark jobs
- Spark Execution enhancements:
  - Spark execution engine improvements

**Other Dataproc performance improvements:** See Dataproc [cluster caching](/dataproc/docs/concepts/cluster-caching), which helps reduce the amount of time spent accessing data in Cloud Storage.

You can enable Spark performance enhancements on a cluster or on a Spark job:

- Spark performance enhancements [enabled on a cluster](#enable_enhancements_at_cluster_creation) apply, by default, to all Spark jobs run on the cluster, whether submitted to the Dataproc service or [submitted directly to the cluster](/dataproc/docs/guides/submit-job#submit_a_job_directly_on_your_cluster).
- Spark performance enhancements can also be [enabled or disabled on a job](#enable_or_disable_enhancements_at_job_submission) that is submitted to the Dataproc service. Spark performance enhancement settings applied to a job override any conflicting cluster-level settings, for that job only.

## Pricing

Spark performance enhancements don't incur additional charges. Standard [Dataproc on Compute Engine pricing](/dataproc/pricing#on_pricing) applies.

## Considerations

Spark performance enhancements adjust Spark properties, including the following:

- [`spark.sql.shuffle.partitions`](/dataproc/docs/support/spark-job-tuning#configuring_partitions): Spark performance enhancements set this property to `1000` on `2.2` image version clusters. This setting can slow small jobs.
- `spark.dataproc.sql.catalog.file.index.stats.enabled`: This setting can result in driver OOM (out-of-memory) conditions if the Hive partition count is high. Disabling this property can fix the OOM condition, as shown in the sketch after this list.
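For example, if the `1000` shuffle-partition default slows a small job, or a high Hive partition count causes driver OOMs, you can override these properties at job submission. The following is a minimal sketch, assuming a hypothetical cluster `my-cluster` in `us-central1` and an illustrative job jar and class (not from this document):

```
# Override the enhancement-adjusted defaults for a small job:
# lower the shuffle partition count and disable file index stats collection.
gcloud dataproc jobs submit spark \
    --cluster=my-cluster \
    --region=us-central1 \
    --class=org.example.MyJob \
    --jars=gs://my-bucket/my-job.jar \
    --properties=spark.sql.shuffle.partitions=200,spark.dataproc.sql.catalog.file.index.stats.enabled=false
```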
## Enable enhancements at cluster creation

You can use the Google Cloud console, the Google Cloud CLI, or the Dataproc API to enable Dataproc Spark performance enhancements when you create a Dataproc cluster with image version 2.0.69+, 2.1.17+, 2.2.0+, or a later image release.

### Console

1. In the Google Cloud console, open the Dataproc [Create a cluster](https://console.cloud.google.com/dataproc/clustersAdd) page.
2. On the **Create Dataproc cluster** form, click **Create** on the **Cluster on Compute Engine** line.
3. On the **Create a Dataproc cluster on Compute Engine** page, click the **Customize cluster** panel, then scroll to the **Cluster properties** section.
    1. To enable Spark optimization enhancements:
        1. Click **+ ADD PROPERTIES**.
        2. Select **spark** in the **Prefix** list, then add "spark.dataproc.enhanced.optimizer.enabled" in the **Key** field and "true" in the **Value** field.
    2. To enable Spark execution enhancements:
        1. Click **+ ADD PROPERTIES**.
        2. Select **spark** in the **Prefix** list, then add "spark.dataproc.enhanced.execution.enabled" in the **Key** field and "true" in the **Value** field.
4. Fill in or confirm the other cluster creation fields, then click **Create**.

**Note:** You can click **Equivalent Command Line** or **Equivalent REST** at the bottom of the left panel of the [**Create a Dataproc cluster on Compute Engine**](https://console.cloud.google.com/dataproc/clustersAdd) page to have the console construct an equivalent `gcloud` command or API REST request that you can use from the command line or in your code to create the cluster.

### gcloud

1. Run the following [gcloud dataproc clusters create](/sdk/gcloud/reference/dataproc/clusters/create) command locally in a terminal window or in [Cloud Shell](https://console.cloud.google.com/?cloudshell=true):

   ```
   gcloud dataproc clusters create CLUSTER_NAME \
       --project=PROJECT_ID \
       --region=REGION \
       --image-version=IMAGE \
       --properties=PROPERTIES
   ```

   Notes:
   - CLUSTER_NAME: The cluster name, which must be unique within a project. The name must start with a lowercase letter and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
   - PROJECT_ID: The project to associate with the cluster.
   - REGION: The [Compute Engine region](/compute/docs/regions-zones#available) where the cluster will be located, such as `us-central1`.
   - You can add the optional `--zone=ZONE` flag to specify a zone within the specified region, such as `us-central1-a`. If you don't specify a zone, the Dataproc [autozone placement](/dataproc/docs/concepts/configuring-clusters/auto-zone) feature selects a zone within the specified region.
   - IMAGE: The Dataproc Spark optimizer and execution performance enhancements are available in Dataproc image versions `2.0.69+`, `2.1.17+`, `2.2.0+`, and later releases. If you omit this flag, Dataproc selects the latest subminor version of the default Dataproc on Compute Engine image for the cluster (see [Default Dataproc image version](/dataproc/docs/concepts/versioning/dataproc-version-clusters#default_dataproc_image_version)).
   - PROPERTIES:
     - To enable Spark optimization enhancements, specify:

       `spark:spark.dataproc.enhanced.optimizer.enabled=true`

     - To enable Spark execution enhancements, specify:

       `spark:spark.dataproc.enhanced.execution.enabled=true`

     - To enable both Spark optimization and execution enhancements, specify:

       `spark:spark.dataproc.enhanced.optimizer.enabled=true,spark:spark.dataproc.enhanced.execution.enabled=true`
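For example, the following sketch creates a cluster with both enhancements enabled; `my-cluster`, `my-project`, and the `2.2-debian12` image version are illustrative values, not values from this document:

```
# Create a 2.2-image cluster with both Spark performance enhancements enabled.
gcloud dataproc clusters create my-cluster \
    --project=my-project \
    --region=us-central1 \
    --image-version=2.2-debian12 \
    --properties=spark:spark.dataproc.enhanced.optimizer.enabled=true,spark:spark.dataproc.enhanced.execution.enabled=true
```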
### API

1. Specify the following [`SoftwareConfig.properties`](/dataproc/docs/reference/rest/v1/ClusterConfig#SoftwareConfig.FIELDS.properties) as part of a [`clusters.create`](/dataproc/docs/reference/rest/v1/projects.regions.clusters/create) request:

   - To enable Spark optimization enhancements, specify:

     `"spark:spark.dataproc.enhanced.optimizer.enabled": "true"`

   - To enable Spark execution enhancements, specify:

     `"spark:spark.dataproc.enhanced.execution.enabled": "true"`

   - To enable both Spark optimization and execution enhancements, specify:

     `"spark:spark.dataproc.enhanced.optimizer.enabled": "true", "spark:spark.dataproc.enhanced.execution.enabled": "true"`
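As a rough sketch, the same properties can be sent in a `clusters.create` request with `curl`; the project, region, and cluster name below are illustrative:

```
# Sketch: create a cluster with both enhancements through the REST API.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "clusterName": "my-cluster",
        "config": {
          "softwareConfig": {
            "properties": {
              "spark:spark.dataproc.enhanced.optimizer.enabled": "true",
              "spark:spark.dataproc.enhanced.execution.enabled": "true"
            }
          }
        }
      }' \
  "https://dataproc.googleapis.com/v1/projects/my-project/regions/us-central1/clusters"
```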
## Enable or disable enhancements at job submission

You can use the Google Cloud console, the Google Cloud CLI, or the Dataproc API to enable or disable Spark performance enhancements on a Spark job submitted to the Dataproc service.

### Console

1. In the Google Cloud console, open the Dataproc [Jobs](https://console.cloud.google.com/dataproc/jobs) page.
2. On the **Jobs** page, click **Submit job**, then scroll to the job **Properties** section.
    1. To enable Spark optimization enhancements:
        1. Click **+ ADD PROPERTIES**.
        2. Add "spark.dataproc.enhanced.optimizer.enabled" in the **Key** field and "true" in the **Value** field.
    2. To enable Spark execution enhancements:
        1. Click **+ ADD PROPERTIES**.
        2. Add "spark.dataproc.enhanced.execution.enabled" in the **Key** field and "true" in the **Value** field.
3. Fill in or confirm the other job submission fields, then click **Submit**.

### gcloud

1. Run the following [gcloud dataproc jobs submit](/sdk/gcloud/reference/dataproc/jobs/submit) command locally in a terminal window or in [Cloud Shell](https://console.cloud.google.com/?cloudshell=true):

   ```
   gcloud dataproc jobs submit SPARK_JOB_TYPE \
       --cluster=CLUSTER_NAME \
       --region=REGION \
       --properties=PROPERTIES
   ```

   Notes:
   - SPARK_JOB_TYPE: Specify `spark`, `pyspark`, `spark-sql`, or `spark-r`.
   - CLUSTER_NAME: The name of the cluster where the job will run.
   - REGION: The region where the cluster is located.
   - PROPERTIES:
     - To enable Spark optimization enhancements, specify:

       `spark.dataproc.enhanced.optimizer.enabled=true`

     - To enable Spark execution enhancements, specify:

       `spark.dataproc.enhanced.execution.enabled=true`

     - To enable both Spark optimization and execution enhancements, specify:

       `spark.dataproc.enhanced.optimizer.enabled=true,spark.dataproc.enhanced.execution.enabled=true`

### API

1. Specify the following `properties` for a [SparkJob](/dataproc/docs/reference/rest/v1/SparkJob), [PySparkJob](/dataproc/docs/reference/rest/v1/PySparkJob), [SparkSqlJob](/dataproc/docs/reference/rest/v1/SparkSqlJob), or [SparkRJob](/dataproc/docs/reference/rest/v1/SparkRJob) as part of a [`jobs.submit`](/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit) request:

   - To enable Spark optimization enhancements, specify:

     `"spark.dataproc.enhanced.optimizer.enabled": "true"`

   - To enable Spark execution enhancements, specify:

     `"spark.dataproc.enhanced.execution.enabled": "true"`

   - To enable both Spark optimization and execution enhancements, specify:

     `"spark.dataproc.enhanced.optimizer.enabled": "true", "spark.dataproc.enhanced.execution.enabled": "true"`
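As a rough sketch, a `jobs.submit` request for a PySpark job with both enhancements enabled might look like the following; the project, region, cluster name, and `gs://` URI are illustrative:

```
# Sketch: submit a PySpark job with both enhancements through the REST API.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "job": {
          "placement": { "clusterName": "my-cluster" },
          "pysparkJob": {
            "mainPythonFileUri": "gs://my-bucket/my_job.py",
            "properties": {
              "spark.dataproc.enhanced.optimizer.enabled": "true",
              "spark.dataproc.enhanced.execution.enabled": "true"
            }
          }
        }
      }' \
  "https://dataproc.googleapis.com/v1/projects/my-project/regions/us-central1/jobs:submit"
```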