Last updated: 2025-08-27 (UTC)

| **Metadata compared to Labels**
|
| - Custom metadata is available to processes running on your cluster, and can be used by initialization actions.
| - Labels are not readily available to processes running on your cluster, but can be used when searching through resources with the Dataproc API.
|
| If you need a piece of data to be available to your cluster and also used as an API search parameter, add it both as metadata and as a label to your cluster.

Dataproc sets special metadata values for the instances that run in your cluster:

| Metadata key | Value |
|---|---|
| `dataproc-bucket` | Name of the cluster's [staging bucket](/dataproc/docs/concepts/configuring-clusters/staging-bucket) |
| `dataproc-region` | [Region](/dataproc/docs/concepts/regional-endpoints) of the cluster's endpoint |
| `dataproc-worker-count` | Number of worker nodes in the cluster. The value is `0` for [single node clusters](/dataproc/docs/concepts/configuring-clusters/single-node-clusters). |
| `dataproc-cluster-name` | Name of the cluster |
| `dataproc-cluster-uuid` | UUID of the cluster |
| `dataproc-role` | Instance's role, either `Master` or `Worker` |
| `dataproc-master` | Hostname of the first master node. The value is either `[CLUSTER_NAME]-m` in a standard or single node cluster, or `[CLUSTER_NAME]-m-0` in a [high-availability cluster](/dataproc/docs/concepts/configuring-clusters/high-availability), where `[CLUSTER_NAME]` is the name of your cluster. |
| `dataproc-master-additional` | Comma-separated list of hostnames for the additional master nodes in a high-availability cluster, for example, `[CLUSTER_NAME]-m-1,[CLUSTER_NAME]-m-2` in a cluster that has 3 master nodes. |
| `SPARK_BQ_CONNECTOR_VERSION` or `SPARK_BQ_CONNECTOR_URL` | The version or URL of the Spark BigQuery connector to use in Spark applications, for example, `0.42.1` or `gs://spark-lib/bigquery/spark-3.5-bigquery-0.42.1.jar`. A default Spark BigQuery connector version is pre-installed on Dataproc image version `2.1` and later clusters. For more information, see [Use the Spark BigQuery connector](/dataproc/docs/tutorials/bigquery-connector-spark-example). |

You can use these values to customize the behavior of [initialization actions](/dataproc/docs/concepts/configuring-clusters/init-actions).

You can use the `--metadata` flag in the [gcloud dataproc clusters create](/sdk/gcloud/reference/dataproc/clusters/create) command to provide your own metadata:

```
gcloud dataproc clusters create CLUSTER_NAME \
    --region=REGION \
    --metadata=name1=value1,name2=value2... \
    ... other flags ...
```
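Because `--metadata` takes a single comma-separated list of `name=value` pairs, a script that builds the flag dynamically has to join the pairs itself. A minimal sketch, with made-up key names (`environment`, `team`):

```shell
#!/bin/sh
# Join name=value pairs into one --metadata argument (key names are made up).
metadata_arg=$(printf '%s,' environment=staging team=data-eng)
metadata_arg="${metadata_arg%,}"   # drop the trailing comma
echo "$metadata_arg"               # environment=staging,team=data-eng

# The joined string is then passed as a single flag value:
#   gcloud dataproc clusters create my-cluster \
#       --region=us-central1 \
#       --metadata="$metadata_arg"
```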
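Predefined and custom metadata keys alike are served to every cluster VM as Compute Engine instance attributes, which is how an initialization action can branch on them. The following is an illustrative sketch, not Dataproc's own tooling; `get_dataproc_attribute` and `configure_node` are hypothetical helper names:

```shell
#!/bin/sh
# Sketch of an initialization action that branches on Dataproc-set metadata.

# Read one instance attribute from the Compute Engine metadata server
# (hypothetical helper; only works on a GCP VM).
get_dataproc_attribute() {
  curl -fs --max-time 5 -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/$1"
}

# Run different setup depending on the dataproc-role value
# (hypothetical helper).
configure_node() {
  role="$1"  # "Master" or "Worker"
  if [ "$role" = "Master" ]; then
    echo "master-only setup"
  else
    echo "worker setup"
  fi
}

# On a cluster VM this would be invoked as:
#   configure_node "$(get_dataproc_attribute dataproc-role)"
```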