Last updated 2025-09-04 UTC.

# Run a pipeline against an existing Dataproc cluster

This page describes how to run a pipeline in Cloud Data Fusion against an existing Dataproc cluster.

By default, Cloud Data Fusion creates an ephemeral cluster for each pipeline: it creates the cluster at the beginning of the pipeline run, and then deletes it after the run completes.
While this behavior saves costs by ensuring that resources are created only when required, it might not be desirable in the following scenarios:

- The time it takes to create a new cluster for every pipeline is prohibitive for your use case.

- Your organization requires cluster creation to be managed centrally; for example, when you want to enforce certain policies for all Dataproc clusters.

In these scenarios, run pipelines against an existing cluster instead, using the following steps.

Before you begin
----------------

You need the following:

- A Cloud Data Fusion instance.

  [Create a Cloud Data Fusion instance](/data-fusion/docs/how-to/create-instance)

- An existing Dataproc cluster.

  [Create a Dataproc cluster](/dataproc/docs/guides/create-cluster)

- If you run your pipelines in Cloud Data Fusion version 6.2, use an older [Dataproc image](/dataproc/docs/concepts/versioning/dataproc-versions) that runs with Hadoop 2.x (for example, 1.5-debian10), or [upgrade to the latest Cloud Data Fusion version](/data-fusion/docs/how-to/upgrading).

Connect to the existing cluster
-------------------------------

In Cloud Data Fusion versions 6.2.1 and later, you can connect to an existing Dataproc cluster when you create a new Compute Engine profile.

1. Go to your instance:

   1. In the Google Cloud console, go to the Cloud Data Fusion page.

   2. To open the instance in the Cloud Data Fusion Studio, click **Instances**, and then click **View instance**.

      [Go to Instances](https://console.cloud.google.com/data-fusion/locations/-/instances)

2. Click **System admin**.

3. Click the **Configuration** tab.

4. Click **System compute profiles**.

5. Click **Create new profile**. A page of provisioners opens.

6. Click **Existing Dataproc**.

7. Enter the profile, cluster, and monitoring information.

8. Click **Create**.

Configure your pipeline to use the custom profile
-------------------------------------------------

1. Go to your instance:

   1. In the Google Cloud console, go to the Cloud Data Fusion page.

   2. To open the instance in the Cloud Data Fusion Studio, click **Instances**, and then click **View instance**.

      [Go to Instances](https://console.cloud.google.com/data-fusion/locations/-/instances)

2. Go to your pipeline on the **Studio** page.

3. Click **Configure**.

4. Click **Compute config**.

5. Click the profile that you created.

   **Figure 1**: Click the custom profile

6. Run the pipeline. It runs against the existing Dataproc cluster.

What's next
-----------

- Learn more about [configuring clusters](/data-fusion/docs/concepts/configure-clusters).
- Troubleshoot [deleting clusters](/data-fusion/docs/troubleshoot-deleting-clusters).
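The "Before you begin" section requires an existing Dataproc cluster. If you prefer the command line over the console for that prerequisite, the following sketch uses the `gcloud` CLI. The cluster name, region, and worker count are placeholder values, not part of this guide; the pinned image version matters only if your instance runs Cloud Data Fusion version 6.2.

```shell
# Create a Dataproc cluster that pipelines can run against.
# "my-existing-cluster" and "us-central1" are placeholder values.
# --image-version=1.5-debian10 pins a Hadoop 2.x image, which is only
# needed for Cloud Data Fusion version 6.2; omit it otherwise.
gcloud dataproc clusters create my-existing-cluster \
    --region=us-central1 \
    --image-version=1.5-debian10 \
    --num-workers=2
```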
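After you create the custom profile, a run can also be started outside the Studio through the instance's CDAP REST API, selecting the profile with the `system.profile.name` runtime argument. This is a hedged sketch, not a definitive recipe: the instance name, namespace, pipeline name, and profile name are assumptions, and it presumes a deployed batch pipeline (whose underlying CDAP workflow is named `DataPipelineWorkflow`).

```shell
# Look up the instance's CDAP API endpoint and an access token.
# "my-instance" and "us-central1" are placeholder values.
CDAP_ENDPOINT=$(gcloud beta data-fusion instances describe my-instance \
    --location=us-central1 --format='value(apiEndpoint)')
AUTH_TOKEN=$(gcloud auth print-access-token)

# Start the deployed pipeline "my-pipeline" in the "default" namespace,
# selecting the custom system compute profile as a runtime argument.
curl -X POST \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    "${CDAP_ENDPOINT}/v3/namespaces/default/apps/my-pipeline/workflows/DataPipelineWorkflow/start" \
    -d '{"system.profile.name": "SYSTEM:my-dataproc-profile"}'
```

The `SYSTEM:` prefix refers to a system-level compute profile, which matches the **System compute profiles** page used in the steps above; a profile created at the namespace level would use the `USER:` prefix instead.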