[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2024-11-21。"],[],[],null,["# Add pipeline run to experiment\n\nYou can use either the Google Cloud console or the Vertex AI SDK for Python to\nadd a pipeline run to either an experiment or an\n. \n\n### Google Cloud console\n\nUse the following instructions to run an ML pipeline and associate the pipeline with an experiment and, optionally, an experiment run using Google Cloud console. Experiment runs can only be created through the Vertex AI SDK for Python (see [Create and manage experiment runs](/vertex-ai/docs/experiments/create-manage-exp-run#vertex-ai-sdk-for-python)).\n\n1. In the Google Cloud console, in the Vertex AI section, go to the **Pipelines** page. \n [Go to Pipelines](https://console.cloud.google.com/vertex-ai/pipelines)\n2. In the **Region** drop-down list, select the region that you want to create a pipeline run in.\n3. Click **add_box\n Create run** to open the **Create pipeline run** pane.\n4. Specify the following **Run** details.\n - In the **File** field, click **Choose** to open the file selector. Navigate to the compiled pipeline JSON file that you want to run, select the pipeline, and click **Open**.\n - The **Pipeline name** defaults to the name that you specified in the pipeline definition. Optionally, specify a different **Pipeline name**.\n - Specify a **Run name** to uniquely identify this pipeline run.\n5. To specify that this pipeline run uses a custom service account, a customer-managed encryption key, or a peered VPC network, click **Advanced options** (Optional). \n Use the following instructions to configure advanced options such as a custom service account.\n - To specify a , select a service account from the **Service account** drop-down list. \n If you do not specify a service account, Vertex AI Pipelines runs your pipeline using the default Compute Engine service account. \n Learn more about [configuring a service account for use with\n Vertex AI Pipelines](/vertex-ai/docs/pipelines/configure-project#service-account).\n - To use a (CMEK), select **Use a\n customer-managed encryption key** . The **Select a customer-managed\n key** drop-down list appears. In the **Select a customer-managed\n key** drop-down list, select the key that you want to use.\n - To use a peered VPC network in this pipeline run, enter the VPC network name in the **Peered VPC network** box.\n6. Click **Continue** . \n The **Cloud Storage** location and **Pipeline parameters** pane appears.\n - Required: Enter the Cloud Storage output directory, for example: gs://*location_of_directory*.\n - Optional: Specify the parameters that you want to use for this pipeline run.\n7. Click **Submit** to create your pipeline run.\n8. After the Pipeline is submitted, it appears in the Pipeline's Google Cloud console table.\n9. In the row associated with your pipeline click more_vert **View more \\\u003e Add to Experiment**\n - Select an existing Experiment or create a new one.\n - Optional: If Experiment runs are associated with the Experiment, they show up in the drop-down. Select an existing Experiment run.\n10. Click **Save**.\n\n#### Compare a pipeline run with experiment runs using the Google Cloud console\n\n1. In the Google Cloud console, go to the **Experiments** page. \n [Go to Experiments](https://console.cloud.google.com/vertex-ai/experiments). \n A list of experiments appears in the **Experiments** page.\n2. Select the experiment you want to add your pipeline run to. \n A list of runs appears.\n3. Select the runs you want to compare, then click **Compare** \n4. Click the **Add run** button. A list of runs appears\n5. Select the pipeline run you want to add. The run is added.\n\n### Vertex AI SDK for Python {:#sdk-add-pipeline-run}\n\nThe following samples use the [PipelineJob](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob) API. \n\n### Associate pipeline run with an experiment\n\nThis sample shows how to associate a pipeline run with an experiment. When you want to compare\nPipeline runs, you should associate your pipeline run(s) to an experiment. See\n[`init`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#google_cloud_aiplatform_init)\nin the Vertex AI SDK for Python reference documentation.\n\n### Python\n\n from typing import Any, Dict, Optional\n\n from google.cloud import aiplatform\n\n\n def log_pipeline_job_to_experiment_sample(\n experiment_name: str,\n pipeline_job_display_name: str,\n template_path: str,\n pipeline_root: str,\n project: str,\n location: str,\n parameter_values: Optional[Dict[str, Any]] = None,\n ):\n aiplatform.init(project=project, location=location)\n\n pipeline_job = aiplatform.PipelineJob(\n display_name=pipeline_job_display_name,\n template_path=template_path,\n pipeline_root=pipeline_root,\n parameter_values=parameter_values,\n )\n\n pipeline_job.submit(experiment=experiment_name)\n\n- `experiment_name`: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `pipeline_job_display_name`: The user-defined name of this Pipeline.\n- `template_path`: The path of PipelineJob or PipelineSpec JSON or YAML file. It can be a local path or a Cloud Storage URI. Example: \"gs://project.name\"\n- `pipeline_root`: The root of the pipeline outputs. Default to be staging bucket.\n- `parameter_values`: The mapping from runtime parameter names to its values that control the pipeline run.\n- `project`: . You can find these IDs in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available\n locations](/vertex-ai/docs/general/locations).\n\n### Associate pipeline run with experiment run\n\nThe sample provided includes associating a pipeline run with an\n.\n\n\nUse cases:\n\n- When doing local model training and then running evaluation on that model (evaluation is done by using a pipeline). In this case you'd want to write the eval metrics from your pipeline run to an ExperimentRun\n- When re-running the same pipeline multiple times. For example, if you change the input parameters, or if one component fails and you need to run it again.\n\n\nWhen associating a pipeline run to an experiment run, parameters and metrics are not\nautomatically surfaced and need to be logged manually using the\n[logging APIs](/vertex-ai/docs/experiments/log-data).\n\nNote: When the optional `resume` parameter is specified as `TRUE`,\nthe previously started run resumes. When not specified, `resume` defaults to\n`FALSE` and a new run is created.\n\nSee\n[`init`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#google_cloud_aiplatform_init),\n[`start_run`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatformm#google_cloud_aiplatform_start_run), and\n[`log`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#google_cloud_aiplatform_log)\nin the Vertex AI SDK for Python reference documentation.\n\n### Python\n\n from google.cloud import aiplatform\n\n\n def log_pipeline_job_sample(\n experiment_name: str,\n run_name: str,\n pipeline_job: aiplatform.PipelineJob,\n project: str,\n location: str,\n ):\n aiplatform.init(experiment=experiment_name, project=project, location=location)\n\n aiplatform.start_run(run=run_name, resume=True)\n\n aiplatform.log(pipeline_job=pipeline_job)\n\n- `experiment_name`: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `run_name`: Specify a run name.\n- `pipeline_job`: A Vertex AI PipelineJob\n- `project`: . You can find these in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations)\n\nView list of pipeline runs in Google Cloud console\n--------------------------------------------------\n\n1. In the Google Cloud console, in the Vertex AI section, go to the\n **Pipelines** page.\n\n [Go to the Pipelines page](https://console.cloud.google.com/vertex-ai/pipelines)\n2. Check to be sure you are in the correct project.\n\n3. A list of experiments and runs associated with your project's pipeline\n runs appears in the **Experiment** and **Experiment run** columns,\n respectively. \n\nCodelab\n-------\n\n- [Make the Most of Experimentation: Manage Machine Learning Experiments with Vertex AI](https://codelabs.developers.google.com/vertex_experiments_pipelines_intro#0)\n\n This codelab involves using Vertex AI to build a pipeline that trains\n a custom Keras Model in TensorFlow. Vertex AI Experiments is used to\n track and compare experiment runs in order to identify which combination of\n hyperparameters results in the best performance.\n\nWhat's next\n-----------\n\n- [Log data to an experiment run](/vertex-ai/docs/experiments/log-data)\n\nRelevant notebook sample\n------------------------\n\n- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)"]]