[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[],[],null,["# Train and deploy your model\n\nIn previous steps in this tutorial, you prepared your data for training and created a script that Vertex AI uses to train your model. You're now ready to use the Vertex AI SDK for Python to create a [`CustomTrainingJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob).\n\n\u003cbr /\u003e\n\nWhen you create a `CustomTrainingJob`, you define a training pipeline in the\nbackground. Vertex AI uses the training pipeline and the code in your\nPython training script to train and create your model. For more information, see\n[Create training pipelines](/vertex-ai/docs/training/create-training-pipeline).\n\nDefine your training pipeline\n-----------------------------\n\nTo create a training pipeline, you create a `CustomTrainingJob` object. In the\nnext step, you use the `CustomTrainingJob`'s `run` command to create and train\nyour model. To create a `CustomTrainingJob`, you pass the following parameters\nto its constructor:\n\n- `display_name` - The `JOB_NAME` variable you created when you defined the\n command arguments for the Python training script.\n\n- `script_path` - The path to the Python training script you created earlier in this\n tutorial.\n\n- `container_url` - The URI of a Docker container image that's used to train\n your model.\n\n- `requirements` - The list of the script's Python package dependencies.\n\n- `model_serving_container_image_uri` - The URI of a Docker container image that\n serves predictions for your model. This container can be prebuilt or your own\n custom image. This tutorial uses a prebuilt container.\n\nRun the following code to create your training pipeline. The\n[`CustomTrainingJob`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob)\nmethod uses the Python training script in the `task.py` file to construct a [`CustomTrainingJob`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob). \n\n job = aiplatform.CustomTrainingJob(\n display_name=JOB_NAME,\n script_path=\"task.py\",\n container_uri=\"us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest\",\n requirements=[\"google-cloud-bigquery\u003e=2.20.0\", \"db-dtypes\", \"protobuf\u003c3.20.0\"],\n model_serving_container_image_uri=\"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest\",\n )\n\nCreate and train your model\n---------------------------\n\nIn the previous step you created a\n[`CustomTrainingJob`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob)\nnamed `job`. To create and train your model, call the `run` method on your\n`CustomTrainingJob` object and pass it the following parameters:\n\n- `dataset` - The [tabular dataset you created\n earlier](/vertex-ai/docs/tutorials/tabular-bq-prediction/create-dataset#create-tabular-dataset)\n in this tutorial. 
Create and train your model
---------------------------

In the previous step, you created a [`CustomTrainingJob`](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomTrainingJob) named `job`. To create and train your model, call the `run` method on your `CustomTrainingJob` object and pass it the following parameters:

- `dataset` - The [tabular dataset you created earlier](/vertex-ai/docs/tutorials/tabular-bq-prediction/create-dataset#create-tabular-dataset) in this tutorial. This parameter can be a tabular, image, video, or text dataset.

- `model_display_name` - A name for your model.

- `bigquery_destination` - A string that specifies the location of your BigQuery dataset.

- `args` - The command-line arguments that are passed to the Python training script.

To start training and create your model, run the following code in your notebook:

    MODEL_DISPLAY_NAME = "penguins_model_unique"

    # Start the training and create your model
    model = job.run(
        dataset=dataset,
        model_display_name=MODEL_DISPLAY_NAME,
        bigquery_destination=f"bq://{project_id}",
        args=CMDARGS,
    )

Before continuing with the next step, make sure the following text appears in the `job.run` command's output to verify that the job is done:

`CustomTrainingJob run completed`.

After the training job completes, you can deploy your model.

Deploy your model
-----------------

When you deploy your model, you also create an `Endpoint` resource that's used to make predictions. To deploy your model and create an endpoint, run the following code in your notebook:

    DEPLOYED_NAME = "penguins_deployed_unique"

    endpoint = model.deploy(deployed_model_display_name=DEPLOYED_NAME)

Wait until your model deploys before you continue to the next step. After your model deploys, the output includes the text `Endpoint model deployed`.

To view the status of your deployment in the Google Cloud console, do the following:

1. In the Google Cloud console, go to the **Endpoints** page.

   [Go to Endpoints](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints)

2. Monitor the value under **Models**. The value is `0` after the endpoint is created and before the model is deployed. After the model deploys, the value updates to `1`.

   The console shows an endpoint with `0` models immediately after the endpoint is created, and with `1` model after the model is deployed to it.
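If you prefer to confirm the deployment from your notebook instead of the console, the Vertex AI SDK can list the models deployed to an endpoint. The following is a small sketch of that check; it assumes the `endpoint` object from the deployment step above is still in scope.

    # Sketch: confirm the deployment from the notebook (assumes `endpoint` from the
    # deployment step above). An empty list means no model is deployed yet.
    deployed_models = endpoint.list_models()
    print(f"Models deployed to {endpoint.display_name}: {len(deployed_models)}")
    for deployed_model in deployed_models:
        print(deployed_model.display_name, deployed_model.id)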