# Introduction to Vertex AI Experiments

| To see an example of getting started with Vertex AI Experiments, run the "Get started with Vertex AI Experiments" notebook in one of the following environments:
|
| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb) | [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb) | [Open in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb) | [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)

Vertex AI Experiments is a tool that helps you track and analyze different model architectures, hyperparameters, and training environments, letting you track the steps, inputs, and outputs of an experiment run. Vertex AI Experiments can also evaluate how your model performed in aggregate, against test datasets, and during the training run. You can then use this information to select the best model for your particular use case.
Experiment runs don't incur additional charges. You're only charged for the resources that you use during your experiment, as described in [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing).

Track steps, inputs, and outputs
--------------------------------

Vertex AI Experiments lets you track:

- the steps of an experiment run, for example, preprocessing and training,
- the inputs, for example, algorithm, parameters, and datasets,
- the outputs of those steps, for example, models, checkpoints, and metrics.

You can then figure out what worked and what didn't, and identify further avenues for experimentation; a short SDK sketch follows the user journey links below.

For user journey examples, check out:

- [Model training](/vertex-ai/docs/experiments/user-journey/uj-model-training)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)
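The following is a minimal sketch of logging a run with the Vertex AI SDK for Python; the project ID, region, experiment name, run name, and metric values are placeholders, not values from this page.

```python
from google.cloud import aiplatform

# Placeholder project, region, and experiment name.
aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="fraud-detection-experiment",
)

# Start a run, then log the inputs and outputs you want to track.
aiplatform.start_run("run-lr-0-01")
aiplatform.log_params({"learning_rate": 0.01, "optimizer": "adam", "epochs": 10})

# ... training code goes here ...

aiplatform.log_metrics({"test_accuracy": 0.92, "test_loss": 0.31})
aiplatform.end_run()
```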
Analyze model performance
-------------------------

Vertex AI Experiments lets you track and evaluate how the model performed in aggregate, against test datasets, and during the training run. This helps you understand the performance characteristics of your models: how well a particular model works overall, where it fails, and where it excels.

For user journey examples, check out:

- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)

Compare model performance
-------------------------

Vertex AI Experiments lets you group and compare multiple models across experiment runs. Each model has its own specified parameters, modeling techniques, architectures, and inputs. This approach helps you select the best model, as shown in the sketch after the links below.

For user journey examples, check out:

- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)
- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)
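As a hedged illustration of comparing runs, the Vertex AI SDK for Python can return an experiment's logged parameters and metrics as a pandas DataFrame; the experiment name and the `metric.test_accuracy` column are placeholders that assume a metric logged with that key.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Returns one row per run, with "param." and "metric." columns
# for the values logged in each run.
df = aiplatform.get_experiment_df("fraud-detection-experiment")

# Example: sort runs by a summary metric to find the best-performing one
# (assumes a metric named "test_accuracy" was logged).
print(df.sort_values("metric.test_accuracy", ascending=False).head())
```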
Search experiments
------------------

The Google Cloud console provides a centralized view of your experiments, a cross-sectional view of the experiment runs, and the details of each run. The Vertex AI SDK for Python provides APIs to consume experiments, experiment runs, experiment run parameters, metrics, and artifacts.
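For example, a sketch of reading an existing run back through the SDK; the experiment and run names are placeholders, and the `ExperimentRun` accessors shown are the ones the SDK exposes for logged data.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Look up an existing run by name within an experiment.
run = aiplatform.ExperimentRun(
    run_name="run-lr-0-01",
    experiment="fraud-detection-experiment",
)

print(run.get_params())   # e.g. {"learning_rate": 0.01, "optimizer": "adam"}
print(run.get_metrics())  # e.g. {"test_accuracy": 0.92}
```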
Vertex AI Experiments, along with [Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction), provides a way to find the artifacts tracked in an experiment. This lets you quickly view an artifact's lineage and the artifacts consumed and produced by the steps in a run.
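A minimal lineage sketch, assuming the SDK's `start_execution` and `Artifact.create` helpers and placeholder Cloud Storage URIs; the schema titles come from the Vertex ML Metadata system schemas.

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="fraud-detection-experiment",
)
aiplatform.start_run("run-lineage-demo")

# Placeholder URIs for the artifacts this step consumes and produces.
dataset = aiplatform.Artifact.create(
    schema_title="system.Dataset", uri="gs://my-bucket/data/train.csv"
)
model = aiplatform.Artifact.create(
    schema_title="system.Model", uri="gs://my-bucket/models/model-1"
)

# Record the step and attach its input and output artifacts,
# which is what the lineage view surfaces.
with aiplatform.start_execution(
    schema_title="system.ContainerExecution", display_name="train-step"
) as execution:
    execution.assign_input_artifacts([dataset])
    # ... training code goes here ...
    execution.assign_output_artifacts([model])

aiplatform.end_run()
```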
Scope of support
----------------
Vertex AI Experiments supports model development using Vertex AI custom training, Vertex AI Workbench notebooks, and Notebooks, across most Python ML frameworks. For some ML frameworks, such as TensorFlow, Vertex AI Experiments provides deep integrations with the framework that make the user experience automatic. For other ML frameworks, Vertex AI Experiments provides a framework-neutral Vertex AI SDK for Python that you can use. (See [Prebuilt containers](/vertex-ai/docs/training/pre-built-containers) for TensorFlow, scikit-learn, PyTorch, and XGBoost.)
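As one example of the deeper framework integrations, recent versions of the Vertex AI SDK for Python include an `autolog()` helper (installed with the SDK's autologging extra); this sketch assumes that extra is available, and the experiment name is a placeholder.

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="fraud-detection-experiment",
)

# Automatically capture parameters and metrics from supported frameworks
# (for example, TensorFlow/Keras or scikit-learn) into the experiment.
aiplatform.autolog()

# ... subsequent model.fit(...) or estimator.fit(...) calls are logged ...
```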
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[],[],null,["# Introduction to Vertex AI Experiments\n\n| To see an example of getting started with Vertex AI Experiments,\n| run the \"Get started with Vertex AI Experiments\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)\n\nVertex AI Experiments is a tool that helps you track and analyze different\nmodel architectures, hyperparameters, and training environments,\nletting you track the steps, inputs, and outputs of\nan experiment run. Vertex AI Experiments can also evaluate how your model\nperformed in aggregate,\nagainst test datasets, and during the training run. You can then use this\ninformation to select the best model for your particular use case.\n\nExperiment runs don't incur additional charges. You're only charged for\nresources that you use during your experiment as described in\n[Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing).\n\nTrack steps, inputs, and outputs\n--------------------------------\n\nVertex AI Experiments lets you track:\n\n- steps of an , for example, preprocessing, training,\n- inputs, for example, algorithm, parameters, datasets,\n- outputs of those steps, for example, models, checkpoints, metrics.\n\nYou can then figure out what worked and what didn't, and identify further\navenues for experimentation.\n\nFor user journey examples, check out:\n\n- [Model training](/vertex-ai/docs/experiments/user-journey/uj-model-training)\n- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)\n\nAnalyze model performance\n-------------------------\n\nVertex AI Experiments lets you track and evaluate how\nthe model performed in aggregate, against test datasets, and during\nthe training run. 
Data models and concepts
------------------------

Vertex AI Experiments is a context in [Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction) where an experiment can contain *n* experiment runs in addition to *n* pipeline runs. An experiment run consists of parameters, summary metrics, time series metrics, and [`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob), [`Artifact`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Artifact), and [`Execution`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Execution) Vertex AI resources. [Vertex AI TensorBoard](/vertex-ai/docs/experiments/tensorboard-introduction), a managed version of open source TensorBoard, is used for time series metrics storage. Executions and artifacts of a pipeline run are viewable in the [Google Cloud console](/vertex-ai/docs/pipelines/visualize-pipeline#visualize_pipeline_runs_using).
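A hedged sketch of adding a pipeline run to an experiment; the pipeline template path, pipeline root, parameter values, and experiment name are placeholders for your own resources.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Placeholder compiled pipeline spec and staging location.
job = aiplatform.PipelineJob(
    display_name="training-pipeline",
    template_path="gs://my-bucket/pipelines/training_pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
    parameter_values={"learning_rate": 0.01},
)

# Associating the job with an experiment surfaces it as a pipeline run
# in Vertex AI Experiments.
job.submit(experiment="fraud-detection-experiment")
```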
Vertex AI Experiments terms
---------------------------

### Experiment, experiment run, and pipeline run

##### **experiment**

An experiment is a context that can contain a set of *n* experiment runs in addition to pipeline runs, where a user can investigate, as a group, different configurations such as input artifacts or hyperparameters.

See [Create an experiment](/vertex-ai/docs/experiments/create-experiment).

##### **experiment run**

An experiment run is a specific, trackable execution within a Vertex AI experiment, which logs inputs (such as algorithm, parameters, and datasets) and outputs (such as models, checkpoints, and metrics) to monitor and compare ML development iterations.

See [Create and manage experiment runs](/vertex-ai/docs/experiments/create-manage-exp-run).

##### **pipeline run**

One or more Vertex AI `PipelineJob` resources can be associated with an experiment, where each `PipelineJob` is represented as a single run. In this context, the parameters of the run are inferred from the parameters of the `PipelineJob`, the metrics are inferred from the `system.Metric` artifacts produced by that `PipelineJob`, and the artifacts of the run are inferred from the artifacts produced by that `PipelineJob`.

One or more Vertex AI [`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob) resources can also be associated with an [`ExperimentRun`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.ExperimentRun) resource. In this context, the parameters, metrics, and artifacts are not inferred.

See [Associate a pipeline with an experiment](/vertex-ai/docs/experiments/add-pipelinerun-experiment).

### Parameters and metrics

##### **parameters**

Parameters are the keyed input values that configure an experiment run, such as the learning rate or the dataset used.

See [Log parameters](/vertex-ai/docs/experiments/log-data#parameters).

##### **summary metrics**

Summary metrics are a single value for each metric key in an experiment run. For example, the test accuracy of an experiment is the accuracy calculated against a test dataset at the end of training, which can be captured as a single-value summary metric.

See [Log summary metrics](/vertex-ai/docs/experiments/log-data#summary-metrics).

##### **time series metrics**

Time series metrics are longitudinal metric values where each value represents a step in the training routine portion of a run. Time series metrics are stored in Vertex AI TensorBoard; Vertex AI Experiments stores a reference to the Vertex AI TensorBoard resource.

See [Log time series metrics](/vertex-ai/docs/experiments/log-data#time-series-metrics).

### Resource types

##### **pipeline job**

A pipeline job, or pipeline run, corresponds to the `PipelineJob` resource in the Vertex AI API. It's an execution instance of your ML pipeline definition, which is defined as a set of ML tasks interconnected by input-output dependencies.

##### **artifact**

An artifact is a discrete entity or piece of data produced and consumed by a machine learning workflow. Examples of artifacts include datasets, models, input files, and training logs.

Vertex AI Experiments lets you use a schema to define the type of the artifact.
For example, supported schema types include `system.Dataset`, `system.Model`, and `system.Artifact`. For more information, see [System schemas](/vertex-ai/docs/ml-metadata/system-schemas).

Notebook tutorial
-----------------

- [Get started with Vertex AI Experiments](/vertex-ai/docs/experiments/user-journey/uj-get-started-vertex-ai-experiments)

What's next
-----------

- [Set up to get started with Vertex AI Experiments](/vertex-ai/docs/experiments/setup)