In the previous step, you created a CustomTrainingJob named job. To create and train your model, call the run method on your CustomTrainingJob object and pass it the following parameters:
dataset - The tabular dataset you created earlier in this tutorial. This parameter can be a tabular, image, video, or text dataset.
model_display_name - A name for your model.
bigquery_destination - A string that specifies the location of your BigQuery dataset.
args - The command-line arguments that are passed to the Python training script.
To start training your data and create your model, run the following code in your notebook:
MODEL_DISPLAY_NAME = "penguins_model_unique"
# Start the training and create your model
model = job.run(
    dataset=dataset,
    model_display_name=MODEL_DISPLAY_NAME,
    bigquery_destination=f"bq://{project_id}",
    args=CMDARGS,
)
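The run call blocks until training finishes and returns an aiplatform.Model object that the rest of the tutorial uses. As an optional sanity check, you can inspect the returned model; this is a minimal sketch, not part of the tutorial, and the printed values depend on your project:

# Inspect the model returned by job.run(). display_name is the name you
# passed in; resource_name is the fully qualified Vertex AI resource ID.
print(model.display_name)   # "penguins_model_unique"
print(model.resource_name)  # e.g. "projects/<project>/locations/<region>/models/<id>"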
Before continuing to the next step, verify that the job is done by confirming that the following appears in the job.run command's output:
CustomTrainingJob run completed.
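If the output has scrolled out of view, you can also check completion programmatically. The following is a minimal sketch, assuming the SDK's state property on training jobs and the PipelineState enum exposed by google.cloud.aiplatform_v1:

from google.cloud.aiplatform_v1 import PipelineState

# A CustomTrainingJob is backed by a training pipeline, and the
# pipeline's state is exposed on the job object.
if job.state == PipelineState.PIPELINE_STATE_SUCCEEDED:
    print("Training job finished successfully.")
else:
    print("Job not complete; current state:", job.state)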
After the training job completes, you can deploy your model.
Deploy your model
When you deploy your model, you also create an Endpoint resource that's used to make predictions. To deploy your model and create an endpoint, run the following code in your notebook:
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[],[],null,["# Train and deploy your model\n\nIn previous steps in this tutorial, you prepared your data for training and created a script that Vertex AI uses to train your model. You're now ready to use the Vertex AI SDK for Python to create a [`CustomTrainingJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob).\n\n\u003cbr /\u003e\n\nWhen you create a `CustomTrainingJob`, you define a training pipeline in the\nbackground. Vertex AI uses the training pipeline and the code in your\nPython training script to train and create your model. For more information, see\n[Create training pipelines](/vertex-ai/docs/training/create-training-pipeline).\n\nDefine your training pipeline\n-----------------------------\n\nTo create a training pipeline, you create a `CustomTrainingJob` object. In the\nnext step, you use the `CustomTrainingJob`'s `run` command to create and train\nyour model. To create a `CustomTrainingJob`, you pass the following parameters\nto its constructor:\n\n- `display_name` - The `JOB_NAME` variable you created when you defined the\n command arguments for the Python training script.\n\n- `script_path` - The path to the Python training script you created earlier in this\n tutorial.\n\n- `container_url` - The URI of a Docker container image that's used to train\n your model.\n\n- `requirements` - The list of the script's Python package dependencies.\n\n- `model_serving_container_image_uri` - The URI of a Docker container image that\n serves predictions for your model. This container can be prebuilt or your own\n custom image. This tutorial uses a prebuilt container.\n\nRun the following code to create your training pipeline. The\n[`CustomTrainingJob`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob)\nmethod uses the Python training script in the `task.py` file to construct a [`CustomTrainingJob`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob). \n\n job = aiplatform.CustomTrainingJob(\n display_name=JOB_NAME,\n script_path=\"task.py\",\n container_uri=\"us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest\",\n requirements=[\"google-cloud-bigquery\u003e=2.20.0\", \"db-dtypes\", \"protobuf\u003c3.20.0\"],\n model_serving_container_image_uri=\"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest\",\n )\n\nCreate and train your model\n---------------------------\n\nIn the previous step you created a\n[`CustomTrainingJob`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob)\nnamed `job`. To create and train your model, call the `run` method on your\n`CustomTrainingJob` object and pass it the following parameters:\n\n- `dataset` - The [tabular dataset you created\n earlier](/vertex-ai/docs/tutorials/tabular-bq-prediction/create-dataset#create-tabular-dataset)\n in this tutorial. 
This parameter can be a tabular, image, video, or text\n dataset.\n\n- `model_display_name` - A name for your model.\n\n- `bigquery_destination` - A string that specifies the location of your\n BigQuery dataset.\n\n- `args` - The command-line arguments that are passed to the Python training script.\n\nTo start training your data and create your model, run the following code in\nyour notebook: \n\n MODEL_DISPLAY_NAME = \"penguins_model_unique\"\n\n # Start the training and create your model\n model = job.run(\n dataset=dataset,\n model_display_name=MODEL_DISPLAY_NAME,\n bigquery_destination=f\"bq://{project_id}\",\n args=CMDARGS,\n )\n\nBefore continuing with the next step, make sure the following appears in the\n`job.run` command's output to verify it's done:\n\n`CustomTrainingJob run completed`.\n\nAfter the training job completes, you can deploy your model.\n\nDeploy your model\n-----------------\n\nWhen you deploy your model, you also create an `Endpoint` resource that's used\nto make predictions. To deploy your model and create an endpoint, run the\nfollowing code in your notebook: \n\n DEPLOYED_NAME = \"penguins_deployed_unique\"\n\n endpoint = model.deploy(deployed_model_display_name=DEPLOYED_NAME)\n\nWait until your model deploys before you continue to the next step. After your\nmodel deploys, the output includes the text, `Endpoint model deployed`.\n\nTo view the status of your deployment in the Google Cloud console, do\nthe following:\n\n1. In the Google Cloud console, go to the **Endpoints** page.\n\n [Go to Endpoints](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints)\n2. Monitor the value under **Models** . The value is `0` after the endpoint is\n created and before the model is deployed. After the model deploys,\n the value updates to `1`.\n\n The following shows an endpoint after it's created and before a model is\n deployed to it.\n\n The following shows an endpoint after it's created and after a model is deployed\n to it."]]