# Train a model with Tabular Workflow for Forecasting

*Last updated: 2025-09-04 (UTC).*

| To see an example of how to train a forecasting model, run the
| "Tabular Workflow for Forecasting" notebook in one of the following
| environments:
|
| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/automl_forecasting_on_vertex_pipelines.ipynb)
| \|
| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fautoml%2Fautoml_forecasting_on_vertex_pipelines.ipynb)
| \|
| [Open in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fautoml%2Fautoml_forecasting_on_vertex_pipelines.ipynb)
| \|
| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/automl_forecasting_on_vertex_pipelines.ipynb)

This page shows you how to train a forecasting model from a
tabular dataset with Tabular Workflow for Forecasting.

To learn about the service accounts this workflow uses, see
[Service accounts for Tabular Workflows](/vertex-ai/docs/tabular-data/tabular-workflows/service-accounts#forecasting).

If you receive a quota error while running Tabular Workflow for
Forecasting, you might need to request a higher quota.
To learn more, see
[Manage quotas for Tabular Workflows](/vertex-ai/docs/tabular-data/tabular-workflows/quotas).

Tabular Workflow for Forecasting doesn't support model export.

Workflow APIs
-------------

This workflow uses the following APIs:

- Vertex AI
- Dataflow
- Compute Engine
- Cloud Storage

Get the URI of the previous hyperparameter tuning result
--------------------------------------------------------

If you previously completed a Tabular Workflow for Forecasting run, you
can use the hyperparameter tuning result from that run to save training
time and resources. Find the previous hyperparameter tuning result by
using the Google Cloud console or by loading it programmatically with the API.

### Google Cloud console

To find the hyperparameter tuning result URI by using the Google Cloud console,
perform the following steps:

1. In the Google Cloud console, in the Vertex AI section, go to
   the **Pipelines** page.

   [Go to the Pipelines page](https://console.cloud.google.com/vertex-ai/pipelines)

2. Select the **Runs** tab.
3. Select the pipeline run you want to use.
4. Select **Expand Artifacts**.
5. Click component **exit-handler-1**.
6. Click component **stage_1_tuning_result_artifact_uri_empty**.
7. Find component **automl-forecasting-stage-1-tuner**.
8. Click the associated artifact **tuning_result_output**.
9. Select the **Node Info** tab.
10. Copy the URI for use in the [Train a model](#train-model) step.

### API: Python

The following sample code demonstrates how to load the hyperparameter
tuning result by using the API. The variable `job` refers to the previous
model training pipeline run.
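The sample below assumes `job` is already loaded in your session. If you no longer have a handle to the previous run, you can reattach to it by its full resource name. The following sketch uses a hypothetical project, location, and pipeline job ID; `parse_pipeline_job_name` is a convenience helper defined here for illustration, not part of the Vertex AI SDK:

```python
from typing import Dict


def parse_pipeline_job_name(resource_name: str) -> Dict[str, str]:
    """Split a full pipeline job resource name into its components.

    Convenience helper (not part of the Vertex AI SDK) for recovering the
    project, location, and job ID of a previous run.
    """
    parts = resource_name.split("/")
    if (
        len(parts) != 6
        or parts[0] != "projects"
        or parts[2] != "locations"
        or parts[4] != "pipelineJobs"
    ):
        raise ValueError(f"Not a pipeline job resource name: {resource_name}")
    return {"project": parts[1], "location": parts[3], "job_id": parts[5]}


# Hypothetical resource name for illustration:
components = parse_pipeline_job_name(
    "projects/my-project/locations/us-central1/pipelineJobs/forecasting-run-1"
)
```

With the components in hand, you can initialize the SDK for the right project and location, for example `aiplatform.init(project=components["project"], location=components["location"])`, and then reattach to the run with `job = aiplatform.PipelineJob.get(resource_name)`.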
    from typing import Any, Dict, List

    def get_task_detail(
        task_details: List[Dict[str, Any]], task_name: str
    ) -> Dict[str, Any]:
        for task_detail in task_details:
            if task_detail.task_name == task_name:
                return task_detail

    pipeline_task_details = job.gca_resource.job_detail.task_details

    stage_1_tuner_task = get_task_detail(
        pipeline_task_details, "automl-forecasting-stage-1-tuner"
    )
    stage_1_tuning_result_artifact_uri = (
        stage_1_tuner_task.outputs["tuning_result_output"].artifacts[0].uri
    )

Train a model
-------------

The following sample code demonstrates how to run a model training pipeline:

    job = aiplatform.PipelineJob(
        ...
        template_path=template_path,
        parameter_values=parameter_values,
        ...
    )
    job.run(service_account=SERVICE_ACCOUNT)

The optional `service_account` parameter in `job.run()` lets you set the
Vertex AI Pipelines service account to an account of your choice.

Vertex AI supports the following methods for training your model:

- **Time series Dense Encoder (TiDE)**. To use this model training method,
  define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_time_series_dense_encoder_forecasting_pipeline_and_parameters(...)

- **Temporal Fusion Transformer (TFT)**. To use this model training method,
  define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_temporal_fusion_transformer_forecasting_pipeline_and_parameters(...)

- **AutoML (L2L)**. To use this model training method,
  define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_learn_to_learn_forecasting_pipeline_and_parameters(...)

- **Seq2Seq+**.
  To use this model training method,
  define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_sequence_to_sequence_forecasting_pipeline_and_parameters(...)

To learn more, see [Model training methods](/vertex-ai/docs/tabular-data/forecasting-parameters#training-methods).

The training data can be either a CSV file in Cloud Storage or a table in
BigQuery.

The following is a subset of model training parameters:

**Transformations**

You can provide a dictionary that maps transformation types to lists of feature
columns. The supported types are `auto`, `numeric`, `categorical`, `text`, and
`timestamp`.

The following code provides a helper function for populating the `transformations`
parameter. It also demonstrates how you can use this function to apply automatic
transformations to a set of columns defined by a `features` variable.

    from typing import Dict, List, Optional

    def generate_transformation(
        auto_column_names: Optional[List[str]] = None,
        numeric_column_names: Optional[List[str]] = None,
        categorical_column_names: Optional[List[str]] = None,
        text_column_names: Optional[List[str]] = None,
        timestamp_column_names: Optional[List[str]] = None,
    ) -> Dict[str, List[str]]:
        if auto_column_names is None:
            auto_column_names = []
        if numeric_column_names is None:
            numeric_column_names = []
        if categorical_column_names is None:
            categorical_column_names = []
        if text_column_names is None:
            text_column_names = []
        if timestamp_column_names is None:
            timestamp_column_names = []
        return {
            "auto": auto_column_names,
            "numeric": numeric_column_names,
            "categorical": categorical_column_names,
            "text": text_column_names,
            "timestamp": timestamp_column_names,
        }

    transformations = generate_transformation(auto_column_names=features)

To learn more about transformations, see [Data types and transformations](/vertex-ai/docs/datasets/data-types-tabular).

**Workflow customization options**

You can customize the Tabular Workflow for Forecasting by defining argument
values that are passed in during pipeline definition. You can customize your
workflow in the following ways:

- Configure hardware
- Skip architecture search

**Configure hardware**

The following model training parameter lets you configure the machine types and
the number of machines for training. This option is a good choice if you have a
large dataset and want to optimize the machine hardware accordingly.

The following code demonstrates how to set the `n1-standard-8` machine type for
the TensorFlow chief node and the `n1-standard-4` machine type for the
TensorFlow evaluator node:

    worker_pool_specs_override = [
        {"machine_spec": {"machine_type": "n1-standard-8"}},  # override for TF chief node
        {},  # override for TF worker node; it's not used, so leave it empty
        {},  # override for TF parameter server node; it's not used, so leave it empty
        {
            "machine_spec": {
                "machine_type": "n1-standard-4"  # override for TF evaluator node
            }
        },
    ]

**Skip architecture search**

The following model training parameter lets you run the pipeline without the
architecture search and provide a set of
[hyperparameters from a previous pipeline run](#previous-result) instead.

What's next
-----------

- Learn about [batch inferences](/vertex-ai/docs/tabular-data/tabular-workflows/forecasting-batch-predictions) for forecasting models.
- Learn about [pricing for model training](/vertex-ai/docs/tabular-data/tabular-workflows/pricing).