# Autolog data to an experiment run

*Last updated: 2025-09-10 (UTC).*

Autologging is a feature in the Vertex AI SDK that automatically logs parameters and metrics from model-training runs to Vertex AI Experiments. This can save time and effort by eliminating the need to manually log this data. Autologging supports only parameter and metric logging.

Autolog data
------------

There are two options for autologging data to Vertex AI Experiments:

1. Let the Vertex AI SDK automatically create [ExperimentRun](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.ExperimentRun) resources for you.
2. Specify the ExperimentRun resource that you'd like autologged parameters and metrics to be written to.

### Auto-created

The Vertex AI SDK for Python handles creating ExperimentRun resources for you. Automatically created ExperimentRun resources have a run name in the following format: `{ml-framework-name}-{timestamp}-{uid}`, for example: `tensorflow-2023-01-04-16-09-20-86a88`.
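The exact composition of these names is internal to the SDK, but the documented shape can be sketched with a small helper (`make_autocreated_run_name` below is a hypothetical illustration, not an SDK function):

```python
import re
import uuid
from datetime import datetime, timezone


def make_autocreated_run_name(framework: str) -> str:
    """Build a name in the documented {ml-framework-name}-{timestamp}-{uid} shape."""
    # UTC timestamp formatted as YYYY-MM-DD-HH-MM-SS, as in the example above.
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M-%S")
    # Short unique suffix, like the "86a88" in the documented example.
    uid = uuid.uuid4().hex[:5]
    return f"{framework}-{timestamp}-{uid}"


name = make_autocreated_run_name("tensorflow")
# Shape check: e.g. "tensorflow-2023-01-04-16-09-20-86a88"
assert re.fullmatch(r"tensorflow-\d{4}(-\d{2}){5}-[0-9a-f]{5}", name)
```

Knowing this shape is mainly useful when filtering or sorting auto-created runs later, for example by framework prefix.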
The following sample uses the [`init`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#google_cloud_aiplatform_init) method from the `aiplatform` [package functions](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#functions).

### Python

```python
from typing import Optional, Union

from google.cloud import aiplatform


def autologging_with_auto_run_creation_sample(
    experiment_name: str,
    project: str,
    location: str,
    experiment_tensorboard: Optional[Union[str, aiplatform.Tensorboard]] = None,
):
    aiplatform.init(
        experiment=experiment_name,
        project=project,
        location=location,
        experiment_tensorboard=experiment_tensorboard,
    )

    aiplatform.autolog()

    # Your model training code goes here

    aiplatform.autolog(disable=True)
```

- `experiment_name`: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.
- `experiment_tensorboard`: (Optional) Provide a name for your Vertex AI TensorBoard instance.
- `project`: Your project ID. You can find these project IDs on the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.
- `location`: See the [list of available locations](/vertex-ai/docs/general/locations).

### User-specified

Provide your own ExperimentRun names to have metrics and parameters from multiple model-training runs logged to the same ExperimentRun. Any metrics and parameters from model training are logged to the current run, which is set by calling `aiplatform.start_run("your-run-name")` and remains current until `aiplatform.end_run()` is called.
The following sample uses the [`init`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#google_cloud_aiplatform_init) method from the `aiplatform` [package functions](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#functions).

### Python

```python
from typing import Optional, Union

from google.cloud import aiplatform


def autologging_with_manual_run_creation_sample(
    experiment_name: str,
    run_name: str,
    project: str,
    location: str,
    experiment_tensorboard: Optional[Union[str, aiplatform.Tensorboard]] = None,
):
    aiplatform.init(
        experiment=experiment_name,
        project=project,
        location=location,
        experiment_tensorboard=experiment_tensorboard,
    )

    aiplatform.autolog()

    aiplatform.start_run(run=run_name)

    # Your model training code goes here

    aiplatform.end_run()

    aiplatform.autolog(disable=True)
```

- `experiment_name`: Provide the name of your experiment.
- `run_name`: Provide a name for your experiment run. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.
- `project`: Your project ID. You can find these project IDs on the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.
- `location`: See the [list of available locations](/vertex-ai/docs/general/locations).
- `experiment_tensorboard`: (Optional) Provide a name for your Vertex AI TensorBoard instance.

Vertex AI SDK autologging uses MLflow's autologging in its implementation. Evaluation metrics and parameters from the following frameworks are logged to your ExperimentRun when autologging is enabled:

- Fastai
- Gluon
- Keras
- LightGBM
- PyTorch Lightning
- Scikit-learn
- Spark
- Statsmodels
- XGBoost

View autologged parameters and metrics
--------------------------------------

Use the Vertex AI SDK for Python to [compare runs](/vertex-ai/docs/experiments/compare-analyze-runs#compare-runs) and get run data. The [Google Cloud console](/vertex-ai/docs/experiments/compare-analyze-runs#console-compare-analyze-runs) provides an easy way to compare these runs.

Relevant notebook sample
------------------------

- [Autolog data](/vertex-ai/docs/experiments/user-journey/uj-autologging)

Blog post
---------

- [How you can automate ML experiment tracking with Vertex AI Experiments autologging](https://cloud.google.com/blog/products/ai-machine-learning/effortless-tracking-of-your-vertex-ai-model-training)
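Once runs are retrieved, comparing them is ordinary DataFrame work. The sketch below uses made-up run data; the `param.*`/`metric.*` column naming mirrors the convention used by `aiplatform.get_experiment_df`, which is an assumption to verify against your SDK version:

```python
import pandas as pd

# Hypothetical autologged run data. In practice, a DataFrame like this would
# come from aiplatform.get_experiment_df("my-experiment") after runs complete.
runs = pd.DataFrame(
    [
        {"run_name": "sklearn-2023-01-04-16-09-20-86a88", "param.alpha": 0.10, "metric.rmse": 4.2},
        {"run_name": "sklearn-2023-01-05-09-12-44-b91c2", "param.alpha": 0.01, "metric.rmse": 3.7},
    ]
)

# Pick the run with the lowest RMSE.
best = runs.loc[runs["metric.rmse"].idxmin(), "run_name"]
print(best)  # sklearn-2023-01-05-09-12-44-b91c2
```

The same DataFrame can be sorted, filtered by parameter values, or plotted, which is often faster than clicking through individual runs in the console.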