Update a Dataproc cluster by using a template
This page shows you how to use a Google APIs Explorer template to update a Dataproc cluster and change the number of workers in the cluster. Scaling a cluster up to include more workers is a common task when additional workers are needed to process larger jobs.
Request parameters:
Insert your projectId.
Specify the region where your cluster is located (confirm or replace "us-central1"). Your cluster's region is listed on the Dataproc Clusters page in the Google Cloud console.
Specify the clusterName of the existing cluster that you are updating (confirm or replace "example-cluster").
updateMask: "config.worker_config.num_instances". This is the JSON path, relative to the Cluster resource, to the numInstances parameter to be updated (see the Request body instructions).
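For reference, the sketch below shows how these request parameters map onto the clusters.patch REST URL that the Try this API template calls. The project, region, and cluster values are placeholder assumptions; replace them with your own.

```python
# Minimal sketch: how the request parameters map onto the clusters.patch URL.
# The placeholder values below are assumptions; replace them with your own.
project_id = "your-project-id"
region = "us-central1"
cluster_name = "example-cluster"

# updateMask is passed as a query parameter and names the field to change.
url = (
    f"https://dataproc.googleapis.com/v1/projects/{project_id}"
    f"/regions/{region}/clusters/{cluster_name}"
    "?updateMask=config.worker_config.num_instances"
)
print(url)
```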
Request body:
config.workerConfig.numInstances: ("3": the new number of workers). You can change this value to add fewer or more workers. For example, if your standard cluster has the default of 2 workers, specifying "3" adds 1 worker and specifying "4" adds 2.
A standard Dataproc cluster must have at least 2 workers.
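A minimal sketch of the corresponding JSON request body, assuming the new worker count is 3:

```python
import json

# Request body for clusters.patch: set only the field named in updateMask.
request_body = {
    "config": {
        "workerConfig": {
            "numInstances": 3  # the new number of workers
        }
    }
}
print(json.dumps(request_body, indent=2))
```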
Click EXECUTE. The first time you run the API template, you may be asked to choose and sign in to your Google account, and then to authorize the Google APIs Explorer to access your account. If the request is successful, the JSON response shows that the cluster update is pending.
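If you prefer to make the same call outside the APIs Explorer, the following sketch sends the PATCH request with Application Default Credentials and inspects the returned long-running operation. It assumes the google-auth and requests packages are installed and that the placeholder project, region, and cluster values are replaced with your own.

```python
import google.auth
import google.auth.transport.requests
import requests

# Assumed placeholder values; replace with your own.
project_id = "your-project-id"
region = "us-central1"
cluster_name = "example-cluster"

# Obtain an access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

url = (
    f"https://dataproc.googleapis.com/v1/projects/{project_id}"
    f"/regions/{region}/clusters/{cluster_name}"
)
response = requests.patch(
    url,
    params={"updateMask": "config.worker_config.num_instances"},
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"config": {"workerConfig": {"numInstances": 3}}},
)
response.raise_for_status()

# The response is a long-running operation; "done" is false (or absent)
# while the cluster update is still pending.
operation = response.json()
print(operation.get("name"), "done:", operation.get("done", False))
```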
To confirm that the number of workers in the cluster has been updated, open the Dataproc Clusters page in the Google Cloud console and view the cluster's Total worker nodes column. You may need to click REFRESH at the top of the page to view the updated value after the cluster update completes.
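You can also confirm the new worker count programmatically. The sketch below uses the Dataproc Python client library (google-cloud-dataproc) and assumes the same placeholder project, region, and cluster values as above.

```python
from google.cloud import dataproc_v1

# Assumed placeholder values; replace with your own.
project_id = "your-project-id"
region = "us-central1"
cluster_name = "example-cluster"

# Use the regional endpoint that matches the cluster's region.
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = client.get_cluster(
    project_id=project_id, region=region, cluster_name=cluster_name
)
print("Worker count:", cluster.config.worker_config.num_instances)
```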
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[[["\u003cp\u003eThis guide details how to update a Dataproc cluster's worker count using the Google APIs Explorer template.\u003c/p\u003e\n"],["\u003cp\u003eUpdating the worker count is done through the \u003ccode\u003econfig.workerConfig.numInstances\u003c/code\u003e parameter, where you can specify the desired number of workers.\u003c/p\u003e\n"],["\u003cp\u003eBefore making an update, you must specify your project ID, the region of your cluster, and the name of the existing cluster to modify.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eupdateMask\u003c/code\u003e parameter with a value of "config.worker_config.num_instances" is required to successfully update the worker count.\u003c/p\u003e\n"],["\u003cp\u003eAfter executing the update, you can confirm the change by checking the "Total worker nodes" column on the Dataproc Clusters page.\u003c/p\u003e\n"]]],[],null,["Update a Dataproc cluster by using a template This page shows you how to use an [Google APIs Explorer](https://developers.google.com/apis-explorer/#p/) template to\nupdate a Dataproc cluster to change the number of workers in a\ncluster. [Scaling a cluster](/dataproc/docs/concepts/configuring-clusters/scaling-clusters)\nup to include more workers is a common task when additional workers are needed\nto process larger jobs.\n\nFor other ways to update a Dataproc cluster, see:\n\n- [Create a Dataproc cluster by using the Google Cloud console](/dataproc/docs/quickstarts/create-cluster-console#update_a_cluster)\n- [Create a Dataproc cluster by using the Google Cloud CLI](/dataproc/docs/quickstarts/create-cluster-gcloud#update_a_cluster)\n- [Create a Dataproc cluster by using client libraries](/dataproc/docs/quickstarts/create-cluster-client-libraries)\n\nBefore you begin This quickstart assumes you have already created a Dataproc cluster. You can use the [APIs Explorer](/dataproc/docs/quickstarts/create-cluster-template), the [Google Cloud console](/dataproc/docs/quickstarts/update-cluster-console#create_a_cluster), the gcloud CLI [gcloud](/dataproc/docs/quickstarts/update-cluster-gcloud#create_a_cluster) command-line tool, or the [Quickstarts using Cloud Client Libraries](/dataproc/docs/quickstarts/create-cluster-client-libraries) to create a cluster.\n\n\u003cbr /\u003e\n\nUpdate a cluster\n\nTo update the number of workers in your cluster, fill in and execute the\nGoogle APIs Explorer **Try this API** template.\n| **Note:** The `region`, `clusterName` and `updateMask` and `config.workerConfig.numInstances` parameter values are filled in for you. Confirm or replace the `region` and`clusterName` parameter values to match your cluster's region and name. The `updateMask` parameter value is required to update the number of workers in your cluster. You can accept or change the `config.workerConfig.numInstances` parameter value.\n\n1. **Request parameters:**\n\n 1. Insert your [**projectId**](https://console.cloud.google.com/).\n 2. Specify the [**region**](/compute/docs/regions-zones/regions-zones#available) where your cluster is located (confirm or replace \"us-central1\"). 
What's next
- You can use this quickstart template to restore the cluster to its previous workerConfig.numInstances value.
- Learn how to write and run a Spark Scala job.