[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-18。"],[],[],null,["| To see an example of TensorBoard integration with pipelines,\n| run the \"Vertex AI TensorBoard integration with Vertex AI Pipelines\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/tensorboard/tensorboard_vertex_ai_pipelines_integration.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Ftensorboard%2Ftensorboard_vertex_ai_pipelines_integration.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Ftensorboard%2Ftensorboard_vertex_ai_pipelines_integration.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/tensorboard/tensorboard_vertex_ai_pipelines_integration.ipynb)\n\nYour training code can be packaged into a custom training component and run\nin a pipeline job. 
TensorBoard logs are streamed automatically to your
Vertex AI TensorBoard experiment. You can use this integration to monitor
your training in near real time, because Vertex AI TensorBoard streams in
the logs as they are written to Cloud Storage.

For initial setup, see
[Set up for Vertex AI TensorBoard](/vertex-ai/docs/experiments/tensorboard-setup).

## Changes to your training script

Your training script must be configured to write TensorBoard logs to a
Cloud Storage bucket. The Vertex AI training service automatically makes this
location available through the predefined environment variable
`AIP_TENSORBOARD_LOG_DIR`.

This is usually done by providing `os.environ['AIP_TENSORBOARD_LOG_DIR']`
as the log directory to the open source TensorBoard log-writing APIs. The
location of `AIP_TENSORBOARD_LOG_DIR` is typically set with the
`staging_bucket` variable.

To configure your training script in TensorFlow 2.x, create a TensorBoard
callback and set its `log_dir` parameter to `os.environ['AIP_TENSORBOARD_LOG_DIR']`.
Then include the TensorBoard callback in the TensorFlow `model.fit` callbacks
list.

```python
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=os.environ['AIP_TENSORBOARD_LOG_DIR'],
    histogram_freq=1
)

model.fit(
    x=x_train,
    y=y_train,
    epochs=epochs,
    validation_data=(x_test, y_test),
    callbacks=[tensorboard_callback],
)
```

Learn more about [how Vertex AI sets environment variables](/vertex-ai/docs/training/code-requirements#environment-variables)
in your custom training environment.

## Build and run a pipeline

The following example shows how to build and run a pipeline using the
Kubeflow Pipelines DSL package. For more examples and additional details, see
the [Vertex AI Pipelines documentation](/vertex-ai/docs/pipelines).

### Create a training component

Package your training code into a custom component, making sure that the code
is configured to write TensorBoard logs to a Cloud Storage bucket.
For more examples, see
[Build your own pipeline components](/vertex-ai/docs/pipelines/build-own-components).

```python
from kfp.v2.dsl import component

@component(
    base_image="tensorflow/tensorflow:latest",
    packages_to_install=["tensorflow_datasets"],
)
def train_tensorflow_model_with_tensorboard():
    import os
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    def create_model():
        return tf.keras.models.Sequential(
            [
                tf.keras.layers.Flatten(input_shape=(28, 28)),
                tf.keras.layers.Dense(512, activation="relu"),
                # Output layer: one unit per MNIST class.
                tf.keras.layers.Dense(10, activation="softmax"),
            ]
        )

    model = create_model()
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"]
    )

    # Write TensorBoard logs to the directory that Vertex AI provides.
    tensorboard_callback = tf.keras.callbacks.TensorBoard(
        log_dir=os.environ['AIP_TENSORBOARD_LOG_DIR'],
        histogram_freq=1
    )

    model.fit(
        x=x_train,
        y=y_train,
        epochs=5,
        validation_data=(x_test, y_test),
        callbacks=[tensorboard_callback],
    )
```

### Build and compile a pipeline

Create a custom
training job from the component you've created by specifying
the component spec in `create_custom_training_job_op_from_component`.
Set `tensorboard` to the resource name of your TensorBoard instance,
and `base_output_directory` to the Cloud Storage location used to stage
artifacts during API calls, including TensorBoard logs.

Then, build a pipeline that includes this job, and compile the pipeline to a
JSON file.

For more examples and information, see
[Custom job components](/vertex-ai/docs/pipelines/customjob-component) and
[Build a pipeline](/vertex-ai/docs/pipelines/build-pipeline).

```python
from kfp.v2 import compiler
from kfp.v2 import dsl
from google_cloud_pipeline_components.v1.custom_job.utils import \
    create_custom_training_job_op_from_component

def create_tensorboard_pipeline_sample(
    project, location, staging_bucket, display_name, service_account, experiment, tensorboard_resource_name
):

    @dsl.pipeline(
        pipeline_root=f"{staging_bucket}/pipeline_root",
        name=display_name,
    )
    def pipeline():
        custom_job_op = create_custom_training_job_op_from_component(
            component_spec=train_tensorflow_model_with_tensorboard,
            tensorboard=tensorboard_resource_name,
            base_output_directory=staging_bucket,
            service_account=service_account,
        )
        custom_job_op(project=project, location=location)

    compiler.Compiler().compile(
        pipeline_func=pipeline, package_path=f"{display_name}.json"
    )
```

### Submit a Vertex AI pipeline

Submit your pipeline using the Vertex AI SDK for Python.
For more information,
see [Run a pipeline](/vertex-ai/docs/pipelines/run-pipeline#vertex-ai-sdk-for-python).

```python
from typing import Any, Dict, Optional

from google.cloud import aiplatform


def log_pipeline_job_to_experiment_sample(
    experiment_name: str,
    pipeline_job_display_name: str,
    template_path: str,
    pipeline_root: str,
    project: str,
    location: str,
    parameter_values: Optional[Dict[str, Any]] = None,
):
    aiplatform.init(project=project, location=location)

    pipeline_job = aiplatform.PipelineJob(
        display_name=pipeline_job_display_name,
        template_path=template_path,
        pipeline_root=pipeline_root,
        parameter_values=parameter_values,
    )

    pipeline_job.submit(experiment=experiment_name)
```

- `experiment_name`: Provide a name for your experiment.
- `pipeline_job_display_name`: The display name for the pipeline job.
- `template_path`: The path to the compiled pipeline template.
- `pipeline_root`: Specify a Cloud Storage URI that your pipelines service account can access. The artifacts of your pipeline runs are stored within the pipeline root.
- `parameter_values`: The pipeline parameters to pass to this run. For example, create a `dict()` with the parameter names as the dictionary keys and the parameter values as the dictionary values.
- `project`: The Google Cloud project to run the pipeline in. You can find your project ID on the Google Cloud console [welcome](https://console.cloud.google.com/welcome) page.
- `location`: The location to run the pipeline in. This should be the same location as the TensorBoard instance you're using.

## What's next

- View your results: [View TensorBoard for Vertex AI Pipelines](/vertex-ai/docs/experiments/tensorboard-view#vertex-ai-pipelines).
- Learn how to optimize the performance of your custom training jobs using [Cloud Profiler](/vertex-ai/docs/training/tensorboard-profiler).
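As a quick end-to-end sketch, the values that the submission helper expects can be assembled as shown below. All names here (project ID, bucket, display name, parameter names) are placeholders for illustration, not real resources:

```python
# Hypothetical values for illustration only; substitute your own
# project ID, staging bucket, display name, and pipeline parameters.
project = "example-project"                     # placeholder project ID
location = "us-central1"                        # same region as your TensorBoard instance
staging_bucket = "gs://example-staging-bucket"  # placeholder Cloud Storage bucket
display_name = "tensorboard-pipeline-sample"

# Artifacts of pipeline runs are stored under the pipeline root.
pipeline_root = f"{staging_bucket}/pipeline_root"

# Pipeline parameters as a plain dict of name -> value pairs.
parameter_values = {"epochs": 5}

# Compiling with create_tensorboard_pipeline_sample above writes the
# template to "<display_name>.json".
template_path = f"{display_name}.json"
```

With these values in hand, passing them to `log_pipeline_job_to_experiment_sample` submits the run and associates it with the named experiment.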