# Run a pipeline against an existing Dataproc cluster

*Last updated: 2025-09-04 (UTC)*

This page describes how to run a pipeline in Cloud Data Fusion against an existing Dataproc cluster.

By default, Cloud Data Fusion creates ephemeral clusters for each pipeline: it creates a cluster at the beginning of the pipeline run, and then deletes it after the pipeline run completes. While this behavior saves costs by ensuring that resources are only created when required, it might not be desirable in the following scenarios:

- The time it takes to create a new cluster for every pipeline is prohibitive for your use case.

- Your organization requires cluster creation to be managed centrally; for example, when you want to enforce certain policies for all Dataproc clusters.

In these scenarios, run pipelines against an existing cluster instead, using the following steps.

Before you begin
----------------

You need the following:

- A Cloud Data Fusion instance.

  [Create a Cloud Data Fusion instance](/data-fusion/docs/how-to/create-instance)

- An existing Dataproc cluster.

  [Create a Dataproc cluster](/dataproc/docs/guides/create-cluster)

- If you run your pipelines in Cloud Data Fusion version 6.2, use an older [Dataproc image](/dataproc/docs/concepts/versioning/dataproc-versions) that runs with Hadoop 2.x (for example, 1.5-debian10), or [upgrade to the latest Cloud Data Fusion version](/data-fusion/docs/how-to/upgrading).

Connect to the existing cluster
-------------------------------

In Cloud Data Fusion versions 6.2.1 and later, you can connect to an existing Dataproc cluster when you create a new Compute Engine profile.

1. Go to your instance:

   1. In the Google Cloud console, go to the Cloud Data Fusion page.

   2. To open the instance in the Cloud Data Fusion Studio, click **Instances**, and then click **View instance**.

   [Go to Instances](https://console.cloud.google.com/data-fusion/locations/-/instances)

2. Click **System admin**.

3. Click the **Configuration** tab.

4. Click **System compute profiles**.

5. Click **Create new profile**. A page of provisioners opens.

6. Click **Existing Dataproc**.

7. Enter the profile, cluster, and monitoring information.

8. Click **Create**.
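If you manage profiles outside the UI, the same profile can also be created through the CDAP REST API that a Cloud Data Fusion instance exposes. The following is a sketch, not the documented procedure: the provisioner name `gcp-existing-dataproc`, the property keys, the profile name `existing-dataproc-profile`, and the cluster, project, and region values are assumptions to verify against your instance and Cloud Data Fusion version.

```shell
#!/usr/bin/env bash
# Sketch: create an "Existing Dataproc" system compute profile via the CDAP
# REST API instead of the Studio UI. The provisioner name and property keys
# below are assumptions based on CDAP's Existing Dataproc provisioner --
# verify them against your Cloud Data Fusion version.
set -euo pipefail

# Write the profile definition. The cluster name, project ID, and region are
# placeholders for your existing Dataproc cluster.
cat > profile.json <<'EOF'
{
  "label": "existing-dataproc-profile",
  "description": "Runs pipelines against a pre-created Dataproc cluster",
  "provisioner": {
    "name": "gcp-existing-dataproc",
    "properties": [
      {"name": "clusterName", "value": "my-existing-cluster"},
      {"name": "projectId",   "value": "my-project"},
      {"name": "region",      "value": "us-central1"}
    ]
  }
}
EOF

# Sanity-check the payload before sending it.
python3 -m json.tool profile.json > /dev/null && echo "profile.json is valid JSON"

# Set CDAP_ENDPOINT and AUTH_TOKEN for your instance before sending the
# request; otherwise only the payload is written locally.
if [ -n "${CDAP_ENDPOINT:-}" ]; then
  curl -s -X PUT \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d @profile.json \
    "${CDAP_ENDPOINT}/v3/profiles/existing-dataproc-profile"
fi
```

The `curl` call is guarded so the script can be dry-run locally; in practice you would obtain `CDAP_ENDPOINT` and `AUTH_TOKEN` the same way as for any other CDAP API call against your instance.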
Configure your pipeline to use the custom profile
-------------------------------------------------

1. Go to your instance:

   1. In the Google Cloud console, go to the Cloud Data Fusion page.

   2. To open the instance in the Cloud Data Fusion Studio, click **Instances**, and then click **View instance**.

   [Go to Instances](https://console.cloud.google.com/data-fusion/locations/-/instances)

2. Go to your pipeline on the **Studio** page.

3. Click **Configure**.

4. Click **Compute config**.

5. Click the profile that you created.

   **Figure 1**: Click the custom profile

6. Run the pipeline. It runs against the existing Dataproc cluster.

What's next
-----------

- Learn more about [configuring clusters](/data-fusion/docs/concepts/configure-clusters).
- Troubleshoot [deleting clusters](/data-fusion/docs/troubleshoot-deleting-clusters).
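The Studio steps select the profile in the pipeline's compute config; the same choice can also be made per run by passing the profile as a runtime argument when starting a deployed pipeline. This sketch rests on assumptions: the `system.profile.name` argument, the `SYSTEM:` prefix for system profiles, and the `DataPipelineWorkflow` program name follow CDAP conventions, and `my-pipeline` is a placeholder — confirm all of them for your instance.

```shell
#!/usr/bin/env bash
# Sketch: start a deployed batch pipeline against the custom profile by
# passing the profile as a runtime argument. "system.profile.name", the
# "SYSTEM:" prefix, and "DataPipelineWorkflow" follow CDAP conventions;
# verify them for your Cloud Data Fusion version.
set -euo pipefail

# Runtime arguments for this run: point it at the system compute profile
# created earlier (a user-scoped profile would use the "USER:" prefix).
cat > runtime-args.json <<'EOF'
{
  "system.profile.name": "SYSTEM:existing-dataproc-profile"
}
EOF

python3 -m json.tool runtime-args.json > /dev/null && echo "runtime-args.json is valid JSON"

# Set CDAP_ENDPOINT and AUTH_TOKEN for your instance before sending the
# request; otherwise only the arguments file is written locally.
if [ -n "${CDAP_ENDPOINT:-}" ]; then
  curl -s -X POST \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d @runtime-args.json \
    "${CDAP_ENDPOINT}/v3/namespaces/default/apps/my-pipeline/workflows/DataPipelineWorkflow/start"
fi
```

Passing the profile per run leaves the pipeline's saved compute config untouched, which can be useful when only some runs should target the existing cluster.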