You can use the Vertex AI SDK for Python to view Vertex AI Experiments run data and to compare runs. Python samples that get run metrics, run parameters, time series metrics, artifacts, and classification metrics for a particular experiment run, and that retrieve the run data for an experiment as a DataFrame, appear in the Get experiment runs data and Compare runs sections later on this page.
The Google Cloud console provides a visualization of the data associated with these runs. Use it to view the details of experiment runs and to compare runs with each other.
View experiment run data
In the Google Cloud console, go to the Experiments page (https://console.cloud.google.com/vertex-ai/experiments).
A list of the experiments associated with a project appears.
Select the experiment that contains the run you want to examine.
A list of runs, time series data charts, and a table of metrics and parameters appear. Note that in this example three runs are selected, but only two lines appear in the time series charts; there is no third line because the third experiment run has no time series data to display. (A sketch that logs time series metrics for a run appears at the end of this section.)
Click the name of a run to go to its details page.
The navigation bar and the time series data charts appear.
To view the metrics, parameters, artifacts, and details of the selected run, click the corresponding buttons in the navigation bar. (The Get experiment runs data samples later on this page retrieve the same information with the SDK.)
Metrics
Parameters
Artifacts
To view artifact lineage, click the Open artifact in Metadata Store link. The lineage graph associated with the run appears. (A sketch that lists a run's artifacts programmatically follows this list.)
Details
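The Artifacts view has a programmatic counterpart. The following sketch lists a run's artifacts with the Vertex AI SDK for Python; the run, experiment, project, and location values are placeholders, and the lineage_console_uri property used to print a link to the lineage graph is an assumption about the SDK's Artifact class rather than something shown on this page.

from google.cloud import aiplatform

# Placeholder identifiers; replace them with your own values.
run = aiplatform.ExperimentRun(
    run_name="example-run",
    experiment="example-experiment",
    project="example-project",
    location="us-central1",
)

# List the artifacts associated with the run.
for run_artifact in run.get_artifacts():
    print(run_artifact.display_name, run_artifact.uri)
    # Assumption: Artifact exposes lineage_console_uri, a link to the
    # lineage graph in the Google Cloud console.
    print(run_artifact.lineage_console_uri)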
To share the data with others, use the URLs associated with these views; for example, you can share the URL of the list of experiment runs associated with an experiment.
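A run only shows lines in the time series charts if time series metrics were logged for it. The following sketch, with placeholder names, logs per-step metrics with the Vertex AI SDK for Python; it assumes the experiment has, or is assigned, a backing Vertex AI TensorBoard instance, which time series logging requires.

from google.cloud import aiplatform

# Placeholder identifiers; replace them with your own values.
aiplatform.init(
    experiment="example-experiment",
    project="example-project",
    location="us-central1",
)

aiplatform.start_run("example-run")
for step in range(1, 4):
    # Each call adds one point per metric at the given step; these values
    # populate the run's time series charts.
    aiplatform.log_time_series_metrics({"loss": 1.0 / step}, step=step)
aiplatform.end_run()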
Compare experiment runs
You can select runs to compare both within a single experiment and across experiments.
In the Google Cloud console, go to the Experiments page (https://console.cloud.google.com/vertex-ai/experiments).
A list of experiments appears.
Select the experiment that contains the runs you want to compare. A list of runs appears.
Select the experiment runs that you want to compare, and then click Compare.
By default, charts appear that compare the time series metrics of the selected experiment runs.
To add more runs from any experiment in your project, click Add run.
To share the data with others, use the URLs associated with these views; for example, you can share the URL of the comparison view of the time series metrics data. (A sketch that compares runs programmatically with the SDK follows.)
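The console comparison has a rough SDK counterpart: retrieve the experiment DataFrame and keep only the runs you want to compare. This minimal sketch builds on aiplatform.get_experiment_df, shown in the Compare runs sample below; the experiment, project, and run names are placeholders, and it assumes the returned DataFrame includes a run_name column.

from google.cloud import aiplatform

# Placeholder identifiers; replace them with your own values.
aiplatform.init(
    experiment="example-experiment",
    project="example-project",
    location="us-central1",
)

# One row per run, with parameter and metric columns.
runs_df = aiplatform.get_experiment_df()

# Keep only the runs selected for comparison, mirroring the console selection.
selected = runs_df[runs_df["run_name"].isin(["example-run-1", "example-run-2"])]
print(selected)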
[[["Fácil de comprender","easyToUnderstand","thumb-up"],["Resolvió mi problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Información o código de muestra incorrectos","incorrectInformationOrSampleCode","thumb-down"],["Faltan la información o los ejemplos que necesito","missingTheInformationSamplesINeed","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Otro","otherDown","thumb-down"]],["Última actualización: 2025-09-04 (UTC)"],[],[],null,["# Compare and analyze runs\n\nYou can use the Vertex AI SDK for Python to view Vertex AI Experiments\nruns data and compare the runs.\n\n- [Get runs data](/vertex-ai/docs/experiments/compare-analyze-runs#api-analyze-runs)\n- [Compare runs](/vertex-ai/docs/experiments/compare-analyze-runs#api-compare-runs)\n\nThe Google Cloud console provides a visualization of the data\nassociated with these runs.\n\n- [View experiment run data](/vertex-ai/docs/experiments/compare-analyze-runs#view-experiment-run-data)\n- [Compare experiment runs](/vertex-ai/docs/experiments/compare-analyze-runs#compare-experiment-runs)\n\nGet experiment runs data\n------------------------\n\nThese samples involve getting run metrics, run parameters, runtime series\nmetrics, artifacts, and classification metrics for a particular experiment run. \n\n### Summary metrics\n\n### Python\n\n from typing import Dict, Union\n\n from google.cloud import aiplatform\n\n\n def get_experiment_run_metrics_sample(\n run_name: str,\n experiment: Union[str, aiplatform.Experiment],\n project: str,\n location: str,\n ) -\u003e Dict[str, Union[float, int]]:\n experiment_run = aiplatform.ExperimentRun(\n run_name=run_name, experiment=experiment, project=project, location=location\n )\n\n return experiment_run.get_metrics()\n\n- `run_name`: Specify the appropriate run name for this session.\n- `experiment`: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `project`: . You can find these in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations).\n\n### Parameters\n\n### Python\n\n from typing import Dict, Union\n\n from google.cloud import aiplatform\n\n\n def get_experiment_run_params_sample(\n run_name: str,\n experiment: Union[str, aiplatform.Experiment],\n project: str,\n location: str,\n ) -\u003e Dict[str, Union[float, int, str]]:\n experiment_run = aiplatform.ExperimentRun(\n run_name=run_name, experiment=experiment, project=project, location=location\n )\n\n return experiment_run.get_params()\n\n- `run_name`: Specify the appropriate run name for this session.\n- `experiment`: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `project`: . 
You can find these in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations).\n\n### Time series metrics\n\n### Python\n\n from typing import Union\n\n from google.cloud import aiplatform\n\n\n def get_experiment_run_time_series_metric_data_frame_sample(\n run_name: str,\n experiment: Union[str, aiplatform.Experiment],\n project: str,\n location: str,\n ) -\u003e \"pd.DataFrame\": # noqa: F821\n experiment_run = aiplatform.ExperimentRun(\n run_name=run_name, experiment=experiment, project=project, location=location\n )\n\n return experiment_run.get_time_series_data_frame()\n\n- `run_name`: Specify the appropriate run name for this session.\n- `experiment`: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `project`: . You can find these in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations).\n\n### Artifacts\n\n### Python\n\n from typing import List, Union\n\n from google.cloud import aiplatform\n from google.cloud.aiplatform.metadata import artifact\n\n\n def get_experiment_run_artifacts_sample(\n run_name: str,\n experiment: Union[str, aiplatform.Experiment],\n project: str,\n location: str,\n ) -\u003e List[artifact.Artifact]:\n experiment_run = aiplatform.ExperimentRun(\n run_name=run_name,\n experiment=experiment,\n project=project,\n location=location,\n )\n\n return experiment_run.get_artifacts()\n\n- `run_name`: Specify the appropriate run name for this session.\n- `experiment`: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `project`: . You can find these in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations).\n\n### Classification metrics\n\n### Python\n\n from typing import Dict, List, Union\n\n from google.cloud import aiplatform\n\n\n def get_experiment_run_classification_metrics_sample(\n run_name: str,\n experiment: Union[str, aiplatform.Experiment],\n project: str,\n location: str,\n ) -\u003e List[Dict[str, Union[str, List]]]:\n experiment_run = aiplatform.ExperimentRun(\n run_name=run_name, experiment=experiment, project=project, location=location\n )\n\n return experiment_run.get_classification_metrics()\n\n- `run_name`: Specify the appropriate run name for this session.\n- `experiment`: The name or instance of this experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `project`: . You can find these in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations).\n\nCompare runs\n------------\n\nUsing the Vertex AI SDK for Python, you can retrieve the data associated with\nyour experiment. The data for the experiment runs is returned in a DataFrame. 
\n\n### Compare runs\n\n\nThe data for the experiment runs is returned in a DataFrame.\n\n### Python\n\n from google.cloud import aiplatform\n\n\n def get_experiments_data_frame_sample(\n experiment: str,\n project: str,\n location: str,\n ):\n aiplatform.init(experiment=experiment, project=project, location=location)\n\n experiments_df = aiplatform.get_experiment_df()\n\n return experiments_df\n\n- `experiment_name`: Provide a name for the experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `project`: . You can find these IDs in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations).\n\nGoogle Cloud console\n--------------------\n\nUse the Google Cloud console to view details of your\n\nand compare the experiment runs to each other.\n\n### View experiment run data\n\n1. In the Google Cloud console, go to the **Experiments** page. \n [Go to Experiments](https://console.cloud.google.com/vertex-ai/experiments). \n A list of experiments associated with a project appears.\n2. Select the experiment containing the run that you want to check. \n A list of runs, timeseries data charts, and a metrics and parameters data table appear. Notice, in this case, three runs are selected, but only two lines appear in the timeseries data charts. There is no third line because the third experiment run does not have any timeseries data to display. \n\n3. Click the name of the run to navigate to its details page. \n\n The navigation bar and timeseries data charts appear. \n4. To view metrics, parameters, artifacts, and details for your selected run, click the respective buttons in the navigation bar.\n - Metrics \n - Parameters \n - Artifacts \n\n To view artifact lineage, click the **Open artifact in Metadata Store** link. The lineage graph associated with the run appears. \n - Details \n\nTo share the data with others, use the URLs associated with the views. For example, share\nthe list of experiment runs associated with an experiment:\n\n### Compare experiment runs\n\nYou can select runs to compare both within an experiment and across experiments.\n\n1. In the Google Cloud console, go to the **Experiments** page. \n [Go to Experiments](https://console.cloud.google.com/vertex-ai/experiments). \n A list of experiments appears.\n2. Select the experiment containing the runs that you want to compare. A list of runs appears.\n3. Select the experiment runs that you want to compare. Click **Compare** . \n\n By default, charts appear comparing timeseries metrics of the selected experiment runs.\n4. To add additional runs from any experiment in your project, click **Add run** .\n\nTo share the data with others, use the URLs associated with the views. For example, share\nthe comparison view of timeseries metrics data:\n\n\nSee [Create and manage experiment runs](/vertex-ai/docs/experiments/create-manage-exp-run)\nfor how to update the status of a run.\n\nWhat's next\n-----------\n\n- [Track executions and artifacts](/vertex-ai/docs/experiments/track-executions-artifacts)"]]