Vertex AI Experiments is a tool that helps you track and analyze different model architectures, hyperparameters, and training environments, letting you track the steps, inputs, and outputs of an experiment run. Vertex AI Experiments can also evaluate how your model performed in aggregate, against test datasets, and during the training run. You can then use this information to select the best model for your particular use case.
Experiment runs don't incur additional charges. You're charged only for the resources that you use during your experiment, as described in Vertex AI pricing.
Vertex AI Experiments lets you track and evaluate how a model performed in aggregate, against test datasets, and during the training run. This capability helps you understand a model's performance characteristics: how well a particular model works overall, where it fails, and where it excels.
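For example, a minimal sketch of logging run parameters and summary metrics with the Vertex AI SDK for Python might look like the following; the project ID, region, experiment name, and metric values are placeholder assumptions:

```python
from google.cloud import aiplatform

# Placeholder project, region, and experiment name (assumptions).
aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="fraud-detection-experiment",
)

# Each run captures the inputs and outputs of one training iteration.
aiplatform.start_run(run="run-1")
aiplatform.log_params({"learning_rate": 0.01, "batch_size": 64})

# ... train and evaluate the model here ...

# Summary metrics: a single value per metric key for this run.
aiplatform.log_metrics({"test_accuracy": 0.94, "test_loss": 0.21})
aiplatform.end_run()
```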
The Google Cloud console provides a centralized view of your experiments, a cross-sectional view of the experiment runs, and the details of each run.
The Vertex AI SDK for Python provides APIs for working with experiments, experiment runs, experiment run parameters, metrics, and artifacts.
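As an illustration of consuming runs programmatically, the following sketch pulls an experiment's runs into a DataFrame for side-by-side comparison; it assumes the placeholder names from the previous sketch and the SDK's `param.`/`metric.` column-prefixing convention:

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",              # placeholder (assumption)
    location="us-central1",
    experiment="fraud-detection-experiment",
)

# Returns one row per experiment run, with parameter and metric columns,
# so runs can be sorted and compared side by side. The column name below
# assumes metrics are prefixed with "metric.".
df = aiplatform.get_experiment_df()
print(df.sort_values("metric.test_accuracy", ascending=False))
```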
Vertex AI Experiments, together with Vertex ML Metadata, provides a way to find the artifacts tracked in an experiment, letting you quickly view an artifact's lineage and the artifacts consumed and produced by the steps in a run.
Scope of support
Vertex AI Experiments supports model development using Vertex AI custom training, Vertex AI Workbench notebooks, and Notebooks, and works with most Python ML frameworks.
For some ML frameworks, such as TensorFlow, Vertex AI Experiments integrates deeply with the framework so that tracking happens automatically. For other ML frameworks, Vertex AI Experiments provides a framework-neutral Vertex AI SDK for Python that you can use.
See Prebuilt containers for TensorFlow, scikit-learn, PyTorch, and XGBoost.
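The following is a hedged sketch of that integrated path using the SDK's autologging; it assumes a recent SDK version with autologging support (which also requires the mlflow package) and reuses the placeholder project and experiment names:

```python
from google.cloud import aiplatform
import tensorflow as tf

aiplatform.init(
    project="my-project",              # placeholder (assumption)
    location="us-central1",
    experiment="fraud-detection-experiment",
)

# Autologging captures parameters and metrics from supported frameworks,
# such as TensorFlow/Keras, without explicit log_params/log_metrics calls.
aiplatform.autolog()

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
# A subsequent model.fit(x, y) call would be recorded as an experiment run.
```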
Pipeline run
One or more Vertex AI PipelineJobs can be associated with an experiment, where each PipelineJob is represented as a single run. In this context, the run's parameters are inferred from the parameters of the PipelineJob, its metrics are inferred from the system.Metric artifacts produced by that PipelineJob, and its artifacts are inferred from the artifacts produced by that PipelineJob.
Alternatively, one or more Vertex AI PipelineJob resources can be associated directly with an ExperimentRun resource. In this context, the parameters, metrics, and artifacts are not inferred.
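A minimal sketch of the first case, associating a PipelineJob with an experiment at submission time; the template path, pipeline root, bucket, and names are placeholder assumptions:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Placeholder compiled pipeline spec and staging bucket (assumptions).
job = aiplatform.PipelineJob(
    display_name="training-pipeline",
    template_path="pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
    parameter_values={"learning_rate": 0.01},
)

# Submitting with an experiment name represents this PipelineJob as a
# single run of that experiment; its parameters, metrics, and artifacts
# are then inferred as described above.
job.submit(experiment="fraud-detection-experiment")
```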
Pipeline job
A pipeline job, or pipeline run, corresponds to the PipelineJob resource in the Vertex AI API. It is an execution instance of your ML pipeline definition, which is defined as a set of ML tasks interconnected by input-output dependencies.
Artifact
An artifact is a discrete entity or piece of data produced and consumed by a machine learning workflow. Examples of artifacts include datasets, models, input files, and training logs.
Vertex AI Experiments lets you use a schema to define an artifact's type. For example, supported schema types include system.Dataset, system.Model, and system.Artifact. For more information, see System schemas.
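For instance, a minimal sketch of creating a typed artifact with the Vertex AI SDK for Python; the display name, URI, and metadata values are placeholder assumptions:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# The schema_title determines the artifact's type; the URI and names
# below are placeholder assumptions.
dataset_artifact = aiplatform.Artifact.create(
    schema_title="system.Dataset",
    display_name="training-data",
    uri="gs://my-bucket/data/train.csv",
    metadata={"rows": 10000},
)
```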
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-07-09(UTC)"],[],[],null,["# Introduction to Vertex AI Experiments\n\n| To see an example of getting started with Vertex AI Experiments,\n| run the \"Get started with Vertex AI Experiments\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fexperiments%2Fget_started_with_vertex_experiments.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/experiments/get_started_with_vertex_experiments.ipynb)\n\nVertex AI Experiments is a tool that helps you track and analyze different\nmodel architectures, hyperparameters, and training environments,\nletting you track the steps, inputs, and outputs of\nan experiment run. Vertex AI Experiments can also evaluate how your model\nperformed in aggregate,\nagainst test datasets, and during the training run. You can then use this\ninformation to select the best model for your particular use case.\n\nExperiment runs don't incur additional charges. You're only charged for\nresources that you use during your experiment as described in\n[Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing).\n\nTrack steps, inputs, and outputs\n--------------------------------\n\nVertex AI Experiments lets you track:\n\n- steps of an , for example, preprocessing, training,\n- inputs, for example, algorithm, parameters, datasets,\n- outputs of those steps, for example, models, checkpoints, metrics.\n\nYou can then figure out what worked and what didn't, and identify further\navenues for experimentation.\n\nFor user journey examples, check out:\n\n- [Model training](/vertex-ai/docs/experiments/user-journey/uj-model-training)\n- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)\n\nAnalyze model performance\n-------------------------\n\nVertex AI Experiments lets you track and evaluate how\nthe model performed in aggregate, against test datasets, and during\nthe training run. 
This ability helps to understand the performance\ncharacteristics of the models -- how well a particular model works overall,\nwhere it fails, and where the model excels.\n\nFor user journey examples, check out:\n\n- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)\n- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)\n\nCompare model performance\n-------------------------\n\nVertex AI Experiments lets you group and compare multiple models\nacross\n.\nEach model has its own specified parameters, modeling techniques, architectures,\nand input. This approach helps select the best model.\n\nFor user journey examples, check out:\n\n- [Compare pipeline runs](/vertex-ai/docs/experiments/user-journey/uj-compare-pipeline-runs)\n- [Compare models](/vertex-ai/docs/experiments/user-journey/uj-compare-models)\n\nSearch experiments\n------------------\n\nThe Google Cloud console provides a centralized view of experiments,\na cross-sectional view of the experiment runs, and the details for each run.\nThe Vertex AI SDK for Python provides APIs to consume experiments, experiment runs,\nexperiment run parameters, metrics, and artifacts.\n\nVertex AI Experiments, along with\n[Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction), provides a way\nto find the artifacts tracked in an experiment. This lets you quickly view the\nartifact's lineage and the artifacts consumed and produced by steps in a run.\n\nScope of support\n----------------\n\nVertex AI Experiments supports development of models using\nVertex AI custom training, Vertex AI Workbench\nnotebooks, Notebooks, and all Python ML Frameworks across most ML Frameworks.\nFor some ML frameworks, such as TensorFlow, Vertex AI Experiments\nprovides deep integrations into the framework that makes the user experience\nautomagical. For other ML frameworks, Vertex AI Experiments provides\na framework neutral Vertex AI SDK for Python that you can use.\n(see: [Prebuilt containers](/vertex-ai/docs/training/pre-built-containers) for\nTensorFlow, scikit-learn, PyTorch, XGBoost).\n\nData models and concepts\n------------------------\n\nVertex AI Experiments is a\nin [Vertex ML Metadata](/vertex-ai/docs/ml-metadata/introduction) where an experiment\ncan contain *n* experiment runs in addition to *n* pipeline runs. An experiment\nrun consists of parameters, summary metrics, time series metrics, and\n[`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob), [`Artifact`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Artifact),\nand [`Execution`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Execution) Vertex AI resources.\n[Vertex AI TensorBoard](/vertex-ai/docs/experiments/tensorboard-introduction), a\nmanaged version of open source TensorBoard, is used for time-series metrics\nstorage. 
Executions and of a pipeline run are viewable\nin the [Google Cloud console](/vertex-ai/docs/pipelines/visualize-pipeline#visualize_pipeline_runs_using).\n\nVertex AI Experiments terms\n---------------------------\n\n### Experiment, experiment run, and pipeline run\n\n\n##### **experiment**\n\n- An experiment is a context that can contain a set of n experiment runs in addition to pipeline runs where a user can investigate, as a group, different configurations such as input artifacts or hyperparameters.\n\nSee [Create an experiment](/vertex-ai/docs/experiments/create-experiment).\n\n\u003cbr /\u003e\n\n\n##### **experiment run**\n\n- A specific, trackable execution within a Vertex AI Experiment, which logs inputs (like algorithm, parameters, and datasets) and outputs (like models, checkpoints, and metrics) to monitor and compare ML development iterations. For more information, see [Create and manage experiment runs](https://cloud.google.com/vertex-ai/docs/experiments/create-manage-exp-run).\n\nSee [Create and manage experiment runs](/vertex-ai/docs/experiments/create-manage-exp-run).\n\n\u003cbr /\u003e\n\n\n##### **pipeline run**\n\n- One or more Vertex PipelineJobs can be associated with an experiment where each PipelineJob is represented as a single run. In this context, the parameters of the run are inferred by the parameters of the PipelineJob. The metrics are inferred from the system.Metric artifacts produced by that PipelineJob. The artifacts of the run are inferred from artifacts produced by that PipelineJob.\n\nOne or more Vertex AI [`PipelineJob`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.PipelineJob) resource can be associated with an [`ExperimentRun`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.ExperimentRun) resource. In this context, the parameters, metrics, and artifacts are not inferred.\n\n\u003cbr /\u003e\n\nSee [Associate a pipeline with an experiment](/vertex-ai/docs/experiments/add-pipelinerun-experiment).\n\n### Parameters and metrics\n\n\nSee [Log parameters](/vertex-ai/docs/experiments/log-data#parameters).\n\n\n##### **summary metrics**\n\n- Summary metrics are a single value for each metric key in an experiment run. For example, the test accuracy of an experiment is the accuracy calculated against a test dataset at the end of training that can be captured as a single value summary metric.\n\n\u003cbr /\u003e\n\nSee [Log summary metrics](/vertex-ai/docs/experiments/log-data#summary-metrics).\n\n\n##### **time series metrics**\n\n- Time series metrics are longitudinal metric values where each value represents a step in the training routine portion of a run. Time series metrics are stored in Vertex AI TensorBoard. Vertex AI Experiments stores a reference to the Vertex TensorBoard resource.\n\n\u003cbr /\u003e\n\nSee [Log time series metrics](/vertex-ai/docs/experiments/log-data#time-series-metrics).\n\n### Resource types\n\n\n##### **pipeline job**\n\n- A pipeline job or a pipeline run corresponds to the PipelineJob resource in the Vertex AI API. It's an execution instance of your ML pipeline definition, which is defined as a set of ML tasks interconnected by input-output dependencies.\n\n\u003cbr /\u003e\n\n\n##### **artifact**\n\n- An artifact is a discrete entity or piece of data produced and consumed by a machine learning workflow. Examples of artifacts include datasets, models, input files, and training logs.\n\n\u003cbr /\u003e\n\nVertex AI Experiments lets you use a schema to define the type of\nartifact. 
For example, supported schema types include `system.Dataset`,\n`system.Model`, and `system.Artifact`. For more information, see\n[System schemas](/vertex-ai/docs/ml-metadata/system-schemas).\n\nNotebook tutorial\n-----------------\n\n- [Get started with Vertex AI Experiments](/vertex-ai/docs/experiments/user-journey/uj-get-started-vertex-ai-experiments)\n\nWhat's next\n-----------\n\n- [Set up to get started with Vertex AI Experiments](/vertex-ai/docs/experiments/setup)"]]