# Learn how to request Google Cloud machine resources in Vertex AI Pipelines

You can run your Python component on Vertex AI Pipelines by using the Google Cloud-specific machine resources offered by Vertex AI custom training.

You can use the [`create_custom_training_job_from_component`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component) method from the [Google Cloud Pipeline Components](/vertex-ai/docs/pipelines/gcpc-list) to transform a Python component into a Vertex AI custom training job. [Learn how to create a custom job](/vertex-ai/docs/training/create-custom-job).

Create a custom training job from a component using Vertex AI Pipelines
-----------------------------------------------------------------------

The following sample shows how to use the [`create_custom_training_job_from_component`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component) method to transform a Python component into a custom training job with user-defined Google Cloud machine resources, and then run the compiled pipeline on Vertex AI Pipelines:
```python
import kfp
from kfp import dsl
from google_cloud_pipeline_components.v1.custom_job import create_custom_training_job_from_component

# Create a Python component
@dsl.component
def my_python_component():
    import time
    time.sleep(1)

# Convert the above component into a custom training job
custom_training_job = create_custom_training_job_from_component(
    my_python_component,
    display_name='DISPLAY_NAME',
    machine_type='MACHINE_TYPE',
    accelerator_type='ACCELERATOR_TYPE',
    accelerator_count='ACCELERATOR_COUNT',
    boot_disk_type='BOOT_DISK_TYPE',
    boot_disk_size_gb='BOOT_DISK_SIZE',
    network='NETWORK',
    reserved_ip_ranges='RESERVED_IP_RANGES',
    nfs_mounts='NFS_MOUNTS',
    persistent_resource_id='PERSISTENT_RESOURCE_ID'
)

# Define a pipeline that runs the custom training job
@dsl.pipeline(
    name="resource-spec-request",
    description="A simple pipeline that requests a Google Cloud machine resource",
    pipeline_root='PIPELINE_ROOT',
)
def pipeline():
    training_job_task = custom_training_job(
        project='PROJECT_ID',
        location='LOCATION',
    ).set_display_name('training-job-task')
```
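For reference, the resource-related arguments accept plain Python values, and numeric parameters such as `accelerator_count` and `boot_disk_size_gb` take integers rather than strings. A hypothetical concrete configuration might look like the following sketch; all values are illustrative, not defaults, and the NFS mount key names are assumed from the API's NfsMount message, so check them against the parameter reference:

```python
# Illustrative resource configuration for create_custom_training_job_from_component.
# All values are examples, not defaults; valid machine and accelerator types are
# listed in the Vertex AI documentation.
resource_kwargs = {
    "machine_type": "n1-standard-8",        # Compute Engine machine type
    "accelerator_type": "NVIDIA_TESLA_T4",  # GPU attached to each machine
    "accelerator_count": 1,                 # integer, not a string
    "boot_disk_type": "pd-ssd",
    "boot_disk_size_gb": 100,               # size in GB, as an integer
    # One NFS mount, given as a JSON-style dict (key names assumed; see the
    # nfs_mounts parameter reference).
    "nfs_mounts": [{"server": "10.0.0.2", "path": "/exports", "mountPoint": "/mnt/nfs"}],
}

# The dict can then be splatted into the call, for example:
# custom_training_job = create_custom_training_job_from_component(
#     my_python_component, display_name="my-job", **resource_kwargs)
```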
Replace the following:
- `DISPLAY_NAME`: The name of the custom job. If you don't specify a name, the component name is used by default.
- `MACHINE_TYPE`: The type of machine for running the custom job, for example `e2-standard-4`. For more information about machine types, see [Machine types](/vertex-ai/docs/training/configure-compute#machine-types). If you specified a TPU as the `accelerator_type`, set this to `cloud-tpu`. For more information, see the [`machine_type` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.machine_type).
- `ACCELERATOR_TYPE`: The type of accelerator attached to the machine. For more information about the available GPUs and how to configure them, see [GPUs](/vertex-ai/docs/training/configure-compute#specifying_gpus). For more information about the available TPU types and how to configure them, see [TPUs](/vertex-ai/docs/training/configure-compute#tpu). For more information, see the [`accelerator_type` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.accelerator_type).
- `ACCELERATOR_COUNT`: The number of accelerators attached to the machine running the custom job. If you specify an accelerator type, the accelerator count is set to `1` by default.

- `BOOT_DISK_TYPE`: The type of boot disk. For more information, see the [`boot_disk_type` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.boot_disk_type).

- `BOOT_DISK_SIZE`: The size of the boot disk in GB. For more information, see the [`boot_disk_size_gb` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.boot_disk_size_gb).
- `NETWORK`: If the custom job is peered to a Compute Engine network that has private services access configured, specify the full name of the network, in the form `projects/PROJECT_NUMBER/global/networks/NETWORK_NAME`. For more information, see the [`network` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.network).
- `RESERVED_IP_RANGES`: A list of names for the reserved IP ranges under the VPC network used to deploy the custom job. For more information, see the [`reserved_ip_ranges` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.reserved_ip_ranges).
- `NFS_MOUNTS`: A list of NFS mount resources in JSON dict format. For more information, see the [`nfs_mounts` parameter reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component.nfs_mounts).
- `PERSISTENT_RESOURCE_ID` (preview): The ID of the persistent resource to run the pipeline on. If you specify a persistent resource, the pipeline runs on existing machines associated with the persistent resource, instead of on-demand, short-lived machine resources. Note that the network and CMEK configuration for the pipeline must match the configuration specified for the persistent resource. For more information about persistent resources and how to create them, see [Create a persistent resource](/vertex-ai/docs/training/persistent-resource-create#create-persistent-resource-gcloud).
- `PIPELINE_ROOT`: Specify a Cloud Storage URI that your pipeline's service account can access. The artifacts of your pipeline runs are stored within the pipeline root.

- `PROJECT_ID`: The Google Cloud project that this pipeline runs in.

- `LOCATION`: The location or region that this pipeline runs in.
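After the placeholders are replaced, the pipeline can be compiled and submitted to Vertex AI Pipelines. Below is a minimal sketch, not part of the original sample, assuming the `kfp` and `google-cloud-aiplatform` SDKs are installed, credentials are configured, and the `pipeline` function from the sample above is in scope:

```python
from kfp import compiler
from google.cloud import aiplatform

# Compile the pipeline defined above into a pipeline spec file.
compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")

# Submit the compiled pipeline to Vertex AI Pipelines.
aiplatform.init(project="PROJECT_ID", location="LOCATION")
job = aiplatform.PipelineJob(
    display_name="resource-spec-request",
    template_path="pipeline.json",
    pipeline_root="PIPELINE_ROOT",  # the same Cloud Storage URI as above
)
job.submit()
```

The `submit()` call returns immediately after the run is created; use `job.wait()` instead if you want to block until the run completes.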
API Reference
-------------

For a complete list of arguments supported by the [`create_custom_training_job_from_component`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/custom_job.html#v1.custom_job.create_custom_training_job_from_component) method, see the [Google Cloud Pipeline Components SDK Reference](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/index.html).

Last updated: 2025-06-23 (UTC)