job.sparkJob.jarFileUris: "file:///usr/lib/spark/examples/jars/spark-examples.jar". This is the local file path on the Dataproc cluster's master node to the jar that contains the Spark Scala job code.
job.sparkJob.mainClass: "org.apache.spark.examples.SparkPi". This is the main class of the job's pi-calculating Scala application. (A client-library sketch of the assembled request follows this list.)
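The same request can also be issued outside the APIs Explorer with the Cloud Client Libraries. Below is a minimal sketch, assuming the google-cloud-dataproc Python package and application default credentials; the project ID, region, and cluster name are placeholders to replace with your own values.

```
# Minimal sketch: submit the same SparkPi job with the Python client library.
from google.cloud import dataproc_v1  # pip install google-cloud-dataproc

project_id = "your-project-id"    # placeholder: replace with your project ID
region = "us-central1"            # must match your cluster's region
cluster_name = "example-cluster"  # must match your cluster's name

# The job controller must be addressed through the cluster's regional endpoint.
job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# Mirrors the request body fields described in the list above.
job = {
    "placement": {"cluster_name": cluster_name},
    "spark_job": {
        "main_class": "org.apache.spark.examples.SparkPi",
        "jar_file_uris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        "args": ["1000"],
    },
}

operation = job_client.submit_job_as_operation(
    request={"project_id": project_id, "region": region, "job": job}
)
result = operation.result()  # blocks until the job finishes
print(f"Job {result.reference.job_id} finished: {result.status.state.name}")
```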
Click EXECUTE. The first time you run the API template, you may be asked to choose a Google account, sign in, and then authorize the Google APIs Explorer to access your account. If the request is successful, the JSON response shows that the job submission request is pending.
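If you script the submission instead of blocking on operation.result(), the pending job can be polled until it reaches a terminal state. A minimal sketch, assuming the job_client, project_id, and region from the previous example and a job ID copied from the submit response (hypothetical here):

```
import time

job_id = "your-job-id"  # placeholder: copy from the submit response

while True:
    job = job_client.get_job(
        request={"project_id": project_id, "region": region, "job_id": job_id}
    )
    state = job.status.state.name  # PENDING -> RUNNING -> DONE (or ERROR/CANCELLED)
    print(f"Job {job_id} is {state}")
    if state in ("DONE", "ERROR", "CANCELLED"):
        break
    time.sleep(10)  # poll every 10 seconds
```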
To view the job output, open the Dataproc Jobs page in the Google Cloud console, then click the top (most recent) Job ID.
Set LINE WRAP to ON to bring lines that exceed the right margin into view.
```
...
Pi is roughly 3.141804711418047
...
```
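The driver output shown above can also be fetched programmatically: the finished Job resource carries a driverOutputResourceUri that points at Cloud Storage. A minimal sketch, assuming the result object from the earlier submit example and the google-cloud-storage package; the ".000000000" suffix for the first output chunk is an assumption about how Dataproc names driver log files:

```
import re
from google.cloud import storage  # pip install google-cloud-storage

# `result` is the finished Job returned by operation.result() above.
matches = re.match(r"gs://(.*?)/(.*)", result.driver_output_resource_uri)
bucket_name, blob_prefix = matches.group(1), matches.group(2)

# Assumption: the first chunk of driver output is stored with a
# ".000000000" suffix; larger outputs are split across further chunks.
output = (
    storage.Client()
    .get_bucket(bucket_name)
    .blob(f"{blob_prefix}.000000000")
    .download_as_bytes()
    .decode("utf-8")
)
print(output)  # includes the "Pi is roughly ..." line
```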
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[[["\u003cp\u003eThis guide demonstrates how to submit a Spark job to an existing Dataproc cluster using a Google APIs Explorer template.\u003c/p\u003e\n"],["\u003cp\u003eBefore submitting a job, a Dataproc cluster must be created using methods like the APIs Explorer, Google Cloud console, gcloud CLI, or Cloud Client Libraries.\u003c/p\u003e\n"],["\u003cp\u003eThe Spark job example provided calculates a rough value for pi, and requires parameters such as projectId, region, clusterName, and specific job details like task count, jar file path, and main class.\u003c/p\u003e\n"],["\u003cp\u003eAfter submitting the job through the API, you can view the job output in the Dataproc Jobs page in the Google Cloud console.\u003c/p\u003e\n"],["\u003cp\u003eTo avoid incurring charges, delete the Dataproc cluster using one of the provided methods if it is no longer needed.\u003c/p\u003e\n"]]],[],null,["Submit a Spark job by using a template This page shows you how to use an [Google APIs Explorer](https://developers.google.com/apis-explorer/#p/) template to\nrun a simple Spark job on an existing Dataproc cluster.\n\nFor other ways to submit a job to a Dataproc cluster, see:\n\n- [Create a Dataproc cluster by using the Google Cloud console](/dataproc/docs/quickstarts/create-cluster-console#submit_a_job)\n- [Create a Dataproc cluster by using the Google Cloud CLI](/dataproc/docs/quickstarts/create-cluster-gcloud#submit_a_job)\n- [Create a Dataproc cluster by using client libraries](/dataproc/docs/quickstarts/create-cluster-client-libraries)\n\nBefore you begin Before you can run a Dataproc job, you must create a cluster of one or more virtual machines (VMs) to run it on. You can use the [APIs Explorer](/dataproc/docs/quickstarts/create-cluster-template), the [Google Cloud console](/dataproc/docs/quickstarts/update-cluster-console#create_a_cluster), the gcloud CLI [gcloud](/dataproc/docs/quickstarts/update-cluster-gcloud#create_a_cluster) command-line tool, or the [Quickstarts using Cloud Client Libraries](/dataproc/docs/quickstarts/create-cluster-client-libraries) to create a cluster.\n\n\u003cbr /\u003e\n\nSubmit a job\n\nTo submit a sample [Apache Spark](http://spark.apache.org/)\njob that calculates a rough value for\n[pi](https://en.wikipedia.org/wiki/Pi), fill in and\nexecute the Google APIs Explorer **Try this API** template.\n| **Note:** The `region`, `clusterName` and `job` parameter values are filled in for you. Confirm or replace the `region` and `clusterName` parameter values to match your cluster's region and name. The `job` parameter values are required to run the a Spark job that is pre-installed on the Dataproc cluster's master node.\n\n1. **Request parameters:**\n\n 1. Insert your [**projectId**](https://console.cloud.google.com/).\n 2. Specify the [**region**](/compute/docs/regions-zones/regions-zones#available) where your cluster is located (confirm or replace \"us-central1\"). Your cluster's region is listed on the Dataproc [**Clusters**](https://console.cloud.google.com/dataproc/clusters) page in the Google Cloud console.\n2. **Request body:**\n\n 1. 
[**job.placement.clusterName**](/dataproc/docs/reference/rest/v1/SparkJob#JobPlacement.FIELDS.cluster_name): The name of the cluster where the job will run (confirm or replace \"example-cluster\").\n 2. [**job.sparkJob.args**](/dataproc/docs/reference/rest/v1/SparkJob#FIELDS.args): \"1000\", the number of job tasks.\n 3. [**job.sparkJob.jarFileUris**](/dataproc/docs/reference/rest/v1/SparkJob#FIELDS.jar_file_uris): \"file:///usr/lib/spark/examples/jars/spark-examples.jar\". This is the local file path on the Dataproc cluster's master node where the jar that contains the Spark Scala job code is installed.\n 4. [**job.sparkJob.mainClass**](/dataproc/docs/reference/rest/v1/SparkJob#FIELDS.main_class): \"org.apache.spark.examples.SparkPi\". The is the main method of the job's pi calculation Scala application.\n3. Click **EXECUTE**. The first time you\n run the API template, you may be asked to choose and sign into\n your Google account, then authorize the Google APIs Explorer to access your\n account. If the request is successful, the JSON response\n shows that job submission request is pending.\n\n4. To view job output, open the\n [Dataproc Jobs](https://console.cloud.google.com/dataproc/jobs) page\n in the Google Cloud console, then click the top (most recent) Job ID.\n Click \"LINE WRAP\" to ON to bring lines that exceed the right margin into view.\n\n ```\n ...\n Pi is roughly 3.141804711418047\n ...\n ```\n\nClean up\n\n\nTo avoid incurring charges to your Google Cloud account for\nthe resources used on this page, follow these steps.\n\n1. If you don't need the cluster to explore the other quickstarts or to run other jobs, use the [APIs Explorer](/dataproc/docs/quickstarts/quickstart-explorer-delete), the [Google Cloud console](/dataproc/docs/quickstarts/update-cluster-console#delete_a_cluster), the gcloud CLI [gcloud](/dataproc/docs/quickstarts/update-cluster-gcloud#delete_a_cluster) command-line tool, or the [Cloud Client Libraries](/dataproc/docs/quickstarts/create-cluster-client-libraries) to delete the cluster.\n\nWhat's next\n\n- Learn how to [update a Dataproc cluster by using a template](/dataproc/docs/quickstarts/update-cluster-template)."]]