[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Autolog data to an experiment run\n\nAutologging is a feature in the Vertex AI SDK that automatically\nlogs parameters and metrics from model-training runs to\nVertex AI Experiments. This can save time and effort by eliminating the\nneed to manually log this data. Autologging only supports parameter\nand metric logging.\n\nAutolog data\n------------\n\nThere are two options for autologging data to Vertex AI Experiments.\n\n1. Let the Vertex AI SDK automatically create [ExperimentRun](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.ExperimentRun?_gl=1*m6thp1*_ga*NDY0ODI5NjQ5LjE2ODIxMjU5Nzc.*_ga_4LYFWVHBEB*MTY4MjM2MDE5NC40LjEuMTY4MjM2MDcxNy4wLjAuMA..#google_cloud_aiplatform_ExperimentRun) resources for you. \n2. Specify the ExperimentRun resource that you'd like autologged parameters and metrics to be written to. \n\n### Auto-created\n\n\nThe Vertex AI SDK for Python handles creating ExperimentRun resources for you.\nAutomatically created ExperimentRun resources will have a run name in the following format:\n`{ml-framework-name}-{timestamp}-{uid}`,\nfor example: \"tensorflow-2023-01-04-16-09-20-86a88\". \n\nThe following sample uses the [`init`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#google_cloud_aiplatform_init) method,\nfrom the `aiplatform` [Package\nfunctions](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#functions).\n\n### Python\n\n from typing import Optional, Union\n\n from google.cloud import aiplatform\n\n\n def autologging_with_auto_run_creation_sample(\n experiment_name: str,\n project: str,\n location: str,\n experiment_tensorboard: Optional[Union[str, aiplatform.Tensorboard]] = None,\n ):\n aiplatform.init(\n experiment=experiment_name,\n project=project,\n location=location,\n experiment_tensorboard=experiment_tensorboard,\n )\n\n aiplatform.autolog()\n\n # Your model training code goes here\n\n aiplatform.autolog(disable=True)\n\n- `experiment_name`: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting **Experiments** in the section nav.\n- `experiment_tensorboard`: (Optional) Provide a name for your Vertex AI TensorBoard instance.\n- `project`: . You can find these Project IDs in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.\n- `location`: See [List of available locations](/vertex-ai/docs/general/locations)\n\n### User-specified\n\n\nProvide your own ExperimentRun names and have metrics and parameters\nfrom multiple model-training runs logged to the same ExperimentRun. Any metrics from model\nto the current run set by calling `aiplatform.start_run(\"your-run-name\")` until\n`aiplatform.end_run()` is called. 
### User-specified

Provide your own ExperimentRun names to have metrics and parameters from multiple model-training runs logged to the same ExperimentRun. Metrics and parameters from model training are logged to the current run, set by calling `aiplatform.start_run("your-run-name")`, until `aiplatform.end_run()` is called.

The following sample uses the [`init`](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#google_cloud_aiplatform_init) method from the `aiplatform` [Package functions](/python/docs/reference/aiplatform/latest/google.cloud.aiplatform#functions).

### Python

    from typing import Optional, Union

    from google.cloud import aiplatform


    def autologging_with_manual_run_creation_sample(
        experiment_name: str,
        run_name: str,
        project: str,
        location: str,
        experiment_tensorboard: Optional[Union[str, aiplatform.Tensorboard]] = None,
    ):
        aiplatform.init(
            experiment=experiment_name,
            project=project,
            location=location,
            experiment_tensorboard=experiment_tensorboard,
        )

        aiplatform.autolog()

        aiplatform.start_run(run=run_name)

        # Your model training code goes here

        aiplatform.end_run()

        aiplatform.autolog(disable=True)

- `experiment_name`: Provide the name of your experiment.
- `run_name`: Provide a name for your experiment run. You can find your list of experiments and their runs in the Google Cloud console by selecting **Experiments** in the section nav.
- `project`: Your project ID. You can find these project IDs in the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.
- `location`: See the [List of available locations](/vertex-ai/docs/general/locations).
- `experiment_tensorboard`: (Optional) Provide a name for your Vertex AI TensorBoard instance.

Vertex AI SDK autologging uses MLflow's autologging in its implementation. Evaluation metrics and parameters from the following frameworks are logged to your ExperimentRun when autologging is enabled.

- Fastai
- Gluon
- Keras
- LightGBM
- PyTorch Lightning
- Scikit-learn
- Spark
- Statsmodels
- XGBoost

View autologged parameters and metrics
--------------------------------------

Use the Vertex AI SDK for Python to [compare runs](/vertex-ai/docs/experiments/compare-analyze-runs#compare-runs) and get run data. The [Google Cloud console](/vertex-ai/docs/experiments/compare-analyze-runs#console-compare-analyze-runs) provides an easy way to compare these runs.

Relevant notebook sample
------------------------

- [Autolog data](/vertex-ai/docs/experiments/user-journey/uj-autologging)

Blog post
---------

- [How you can automate ML experiment tracking with Vertex AI Experiments autologging](https://cloud.google.com/blog/products/ai-machine-learning/effortless-tracking-of-your-vertex-ai-model-training)
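Retrieve autologged data programmatically
-----------------------------------------

As a rough complement to the run-comparison links above, the following sketch shows one way to pull autologged parameters and metrics with the SDK after a run completes. It assumes the `get_experiment_df` helper and the `ExperimentRun` `get_params()` and `get_metrics()` getters are available in your Vertex AI SDK version; the project, experiment, and run names are placeholders.

    from google.cloud import aiplatform

    aiplatform.init(project="your-project-id", location="us-central1")

    # All runs in the experiment as a pandas DataFrame, with one column per
    # logged parameter and metric.
    runs_df = aiplatform.get_experiment_df(experiment="your-experiment-name")
    print(runs_df.head())

    # Or inspect a single run directly.
    run = aiplatform.ExperimentRun(
        run_name="your-run-name",
        experiment="your-experiment-name",
    )
    print(run.get_params())
    print(run.get_metrics())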