[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Introduction to Vertex AI Experiments\n\n| To see an example of getting started with Vertex AI Experiments,\n| run the \"Get started with Vertex AI Experiments\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)\n\nVertex AI Experiments is a tool that helps you track and analyze different\nmodel architectures, hyperparameters, and training environments,\nletting you track the steps, inputs, and outputs of\nan experiment run. Vertex AI Experiments can also evaluate how your model\nperformed in aggregate,\nagainst test datasets, and during the training run. You can then use this\ninformation to select the best model for your particular use case.\n\nExperiment runs don't incur additional charges. You're only charged for\nresources that you use during your experiment as described in\n[Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing).\n\nTrack steps, inputs, and outputs\n--------------------------------\n\nVertex AI Experiments lets you track:\n\n- steps of an , for example, preprocessing, training,\n- inputs, for example, algorithm, parameters, datasets,\n- outputs of those steps, for example, models, checkpoints, metrics.\n\nYou can then figure out what worked and what didn't, and identify further\navenues for experimentation.\n\nFor user journey examples, check out:\n\n- [Model training](/vertex-ai/docs/experiments/user-journey/uj-model-training)\n- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)\n\nAnalyze model performance\n-------------------------\n\nVertex AI Experiments lets you track and evaluate how\nthe model performed in aggregate, against test datasets, and during\nthe training run. 
Analyze model performance
-------------------------

Vertex AI Experiments lets you track and evaluate how the model performed in
aggregate, against test datasets, and during the training run. This ability
helps you understand the performance characteristics of your models: how well
a particular model works overall, where it fails, and where it excels.

For user journey examples, check out:

- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)

Compare model performance
-------------------------

Vertex AI Experiments lets you group and compare multiple models across
experiment runs. Each model has its own specified parameters, modeling
techniques, architectures, and inputs. This approach helps you select the best
model.

For user journey examples, check out:

- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)

Search experiments
------------------

The Google Cloud console provides a centralized view of experiments, a
cross-sectional view of the experiment runs, and the details of each run. The
Vertex AI SDK for Python provides APIs to consume experiments, experiment runs,
experiment run parameters, metrics, and artifacts.

Vertex AI Experiments, along with
[Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction), provides a way
to find the artifacts tracked in an experiment. This lets you quickly view an
artifact's lineage and the artifacts consumed and produced by the steps in a
run.
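As a sketch of how the SDK exposes this data, the snippet below reads back the
runs of an experiment for side-by-side comparison. The project, region, and
experiment name are placeholders, and the exact columns in the returned
DataFrame depend on what was logged in your runs.

```python
# Minimal sketch: read back every run of an experiment for comparison.
# "my-project", "us-central1", and "fraud-detection" are placeholder values.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# One row per experiment run; logged parameters and metrics appear as columns.
runs_df = aiplatform.get_experiment_df(experiment="fraud-detection")
print(runs_df.head())

# Individual runs can also be listed and inspected directly.
for run in aiplatform.ExperimentRun.list(experiment="fraud-detection"):
    print(run.name, run.get_metrics())
```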
Scope of support
----------------

Vertex AI Experiments supports development of models using Vertex AI custom
training, Vertex AI Workbench notebooks, Notebooks, and any Python ML
framework. For some ML frameworks, such as TensorFlow, Vertex AI Experiments
provides deep integration with the framework that makes the user experience
nearly automatic. For other ML frameworks, Vertex AI Experiments provides a
framework-neutral Vertex AI SDK for Python that you can use. (See
[Prebuilt containers](/vertex-ai/docs/training/pre-built-containers) for
TensorFlow, scikit-learn, PyTorch, and XGBoost.)

Data models and concepts
------------------------

Vertex AI Experiments is a context in
[Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction) where an
experiment can contain *n* experiment runs in addition to *n* pipeline runs. An
experiment run consists of parameters, summary metrics, time series metrics, and
[`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob),
[`Artifact`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Artifact), and
[`Execution`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Execution)
Vertex AI resources.
[Vertex AI TensorBoard](/vertex-ai/docs/experiments/tensorboard-introduction), a
managed version of open source TensorBoard, is used for time series metrics
storage. Executions and artifacts of a pipeline run are viewable in the
[Google Cloud console](/vertex-ai/docs/pipelines/visualize-pipeline#visualize_pipeline_runs_using).

Vertex AI Experiments terms
---------------------------

### Experiment, experiment run, and pipeline run

##### **experiment**

- An experiment is a context that can contain a set of *n* experiment runs in addition to pipeline runs, in which a user can investigate, as a group, different configurations such as input artifacts or hyperparameters.

See [Create an experiment](/vertex-ai/docs/experiments/create-experiment).

##### **experiment run**

- An experiment run is a specific, trackable execution within a Vertex AI experiment. It logs inputs (like algorithm, parameters, and datasets) and outputs (like models, checkpoints, and metrics) so that you can monitor and compare ML development iterations.

See [Create and manage experiment runs](/vertex-ai/docs/experiments/create-manage-exp-run).

##### **pipeline run**

- One or more Vertex AI `PipelineJob` resources can be associated with an experiment, where each `PipelineJob` is represented as a single run. In this context, the parameters of the run are inferred from the parameters of the `PipelineJob`, the metrics are inferred from the `system.Metric` artifacts produced by that `PipelineJob`, and the artifacts of the run are inferred from the artifacts produced by that `PipelineJob`.

One or more Vertex AI [`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob) resources can also be associated with an [`ExperimentRun`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.ExperimentRun) resource. In this context, the parameters, metrics, and artifacts are not inferred.

See [Associate a pipeline with an experiment](/vertex-ai/docs/experiments/add-pipelinerun-experiment).
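To make the pipeline-run association concrete, here is a sketch of submitting a
compiled pipeline to an experiment with the Vertex AI SDK for Python. The
pipeline spec path, pipeline root bucket, display name, project, region, and
experiment name are all placeholders.

```python
# Minimal sketch: associate a pipeline run with an experiment so that its
# parameters, metrics, and artifacts show up as a run of that experiment.
# All names, paths, and buckets below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="training-pipeline",
    template_path="pipeline.json",                    # compiled pipeline spec
    pipeline_root="gs://my-bucket/pipeline-root",     # placeholder GCS path
    parameter_values={"learning_rate": 0.01},
)

# Submitting with experiment=... records the PipelineJob as a run of the
# "fraud-detection" experiment.
job.submit(experiment="fraud-detection")
```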
### Parameters and metrics

##### **parameters**

- Parameters are keyed input values that configure a run and affect its results, for example, the learning rate or the number of training steps.

See [Log parameters](/vertex-ai/docs/experiments/log-data#parameters).

##### **summary metrics**

- Summary metrics are a single value for each metric key in an experiment run. For example, the test accuracy of an experiment is the accuracy calculated against a test dataset at the end of training, which can be captured as a single-value summary metric.

See [Log summary metrics](/vertex-ai/docs/experiments/log-data#summary-metrics).

##### **time series metrics**

- Time series metrics are longitudinal metric values where each value represents a step in the training routine portion of a run. Time series metrics are stored in Vertex AI TensorBoard; Vertex AI Experiments stores a reference to the Vertex AI TensorBoard resource.

See [Log time series metrics](/vertex-ai/docs/experiments/log-data#time-series-metrics).

### Resource types

##### **pipeline job**

- A pipeline job, or pipeline run, corresponds to the `PipelineJob` resource in the Vertex AI API. It's an execution instance of your ML pipeline definition, which is defined as a set of ML tasks interconnected by input-output dependencies.

##### **artifact**

- An artifact is a discrete entity or piece of data produced and consumed by a machine learning workflow. Examples of artifacts include datasets, models, input files, and training logs.

Vertex AI Experiments lets you use a schema to define the type of an artifact.
For example, supported schema types include `system.Dataset`, `system.Model`,
and `system.Artifact`. For more information, see
[System schemas](/vertex-ai/docs/ml-metadata/system-schemas).

Notebook tutorial
-----------------

- [Get started with Vertex AI Experiments](/vertex-ai/docs/experiments/user-journey/uj-get-started-vertex-ai-experiments)

What's next
-----------

- [Set up to get started with Vertex AI Experiments](/vertex-ai/docs/experiments/setup)