By default, Vertex AI minimizes the root mean squared error (RMSE). If you want to use a different optimization objective for your forecasting model, choose one of the options in Optimization objectives for forecasting models. If you choose to minimize quantile loss, you must also specify a value for `quantiles`.
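To make the minimize-quantile-loss objective concrete, the following is a minimal, self-contained sketch of the quantile (pinball) loss that such an objective minimizes. It is illustrative only and not part of the Vertex AI API.

```python
from typing import Sequence


def quantile_loss(y_true: Sequence[float], y_pred: Sequence[float], q: float) -> float:
    """Mean pinball loss at quantile q: under-forecasts are weighted by q,
    over-forecasts by (1 - q)."""
    if not 0.0 < q < 1.0:
        raise ValueError("q must be in (0, 1)")
    losses = []
    for y, y_hat in zip(y_true, y_pred):
        diff = y - y_hat
        losses.append(max(q * diff, (q - 1.0) * diff))
    return sum(losses) / len(losses)


# At q=0.5 the pinball loss is half the absolute error; at q=0.9,
# under-forecasting is penalized 9x more heavily than over-forecasting.
print(quantile_loss([10.0], [8.0], 0.5))  # 1.0
print(quantile_loss([10.0], [8.0], 0.9))  # 1.8
```

Training against several `quantiles` (for example, `[0.1, 0.5, 0.9]`) averages this loss across the requested quantiles, which is why those values must be supplied alongside the objective.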
**`enable_probabilistic_inference`** (Boolean)

If set to `true`, Vertex AI models the probability distribution of the forecast. Probabilistic inference can improve model quality by handling noisy data and quantifying uncertainty. If `quantiles` are specified, Vertex AI also returns the quantiles of the distribution. Probabilistic inference is compatible only with the Time series Dense Encoder (TiDE) and AutoML (L2L) training methods. It is incompatible with the `minimize-quantile-loss` optimization objective.
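The compatibility rules above can be captured in a small validation helper. This is an illustrative sketch, not part of any Vertex AI SDK; the method identifiers and default objective name are assumptions chosen to mirror the wording in this document.

```python
from typing import List, Optional

# Training methods that support probabilistic inference, per the
# description above: TiDE and AutoML (L2L) only. The identifiers are
# hypothetical labels, not SDK constants.
PROBABILISTIC_METHODS = {"time_series_dense_encoder", "l2l"}


def validate_forecasting_config(
    training_method: str,
    optimization_objective: str = "minimize-rmse",  # assumed default objective
    enable_probabilistic_inference: bool = False,
    quantiles: Optional[List[float]] = None,
) -> None:
    """Raise ValueError if the configuration violates the documented constraints."""
    if optimization_objective == "minimize-quantile-loss":
        if not quantiles:
            raise ValueError("minimize-quantile-loss requires values for quantiles")
        if enable_probabilistic_inference:
            raise ValueError(
                "probabilistic inference is incompatible with minimize-quantile-loss"
            )
    if enable_probabilistic_inference and training_method not in PROBABILISTIC_METHODS:
        raise ValueError(
            "probabilistic inference supports only TiDE and AutoML (L2L)"
        )


# A valid combination passes silently:
validate_forecasting_config("time_series_dense_encoder", enable_probabilistic_inference=True)
```

Running such a check before submitting the pipeline surfaces an invalid combination immediately, instead of after the pipeline job has been launched.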
Last updated (UTC): 2025-09-04.

# Train a model with Tabular Workflow for Forecasting

| To see an example of how to train a forecasting model, run the "Tabular Workflow for Forecasting" notebook in one of the following environments:
|
| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/automl_forecasting_on_vertex_pipelines.ipynb)
| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fautoml%2Fautoml_forecasting_on_vertex_pipelines.ipynb)
| [Open in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fautoml%2Fautoml_forecasting_on_vertex_pipelines.ipynb)
| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/automl_forecasting_on_vertex_pipelines.ipynb)

This page shows you how to train a forecasting model from a tabular dataset with Tabular Workflow for Forecasting.

To learn about the service accounts this workflow uses, see [Service accounts for Tabular Workflows](/vertex-ai/docs/tabular-data/tabular-workflows/service-accounts#forecasting).

If you receive a quota error while running Tabular Workflow for Forecasting, you might need to request a higher quota.
To learn more, see [Manage quotas for Tabular Workflows](/vertex-ai/docs/tabular-data/tabular-workflows/quotas).

Tabular Workflow for Forecasting doesn't support model export.

Workflow APIs
-------------

This workflow uses the following APIs:

- Vertex AI
- Dataflow
- Compute Engine
- Cloud Storage

Get the URI of the previous hyperparameter tuning result
--------------------------------------------------------

If you previously completed a Tabular Workflow for Forecasting run, you can use the hyperparameter tuning result from that run to save training time and resources. Find the previous hyperparameter tuning result by using the Google Cloud console or by loading it programmatically with the API.

### Google Cloud console

To find the hyperparameter tuning result URI by using the Google Cloud console, perform the following steps:

1. In the Google Cloud console, in the Vertex AI section, go to the **Pipelines** page.

   [Go to the Pipelines page](https://console.cloud.google.com/vertex-ai/pipelines)

2. Select the **Runs** tab.
3. Select the pipeline run you want to use.
4. Select **Expand Artifacts**.
5. Click component **exit-handler-1**.
6. Click component **stage_1_tuning_result_artifact_uri_empty**.
7. Find component **automl-forecasting-stage-1-tuner**.
8. Click the associated artifact **tuning_result_output**.
9. Select the **Node Info** tab.
10. Copy the URI for use in the [Train a model](#train-model) step.

### API: Python

The following sample code demonstrates how to load the hyperparameter tuning result by using the API. The variable `job` refers to the previous model training pipeline run.
    def get_task_detail(
        task_details: List[Dict[str, Any]], task_name: str
    ) -> Dict[str, Any]:
        # Return the first task detail whose name matches task_name.
        for task_detail in task_details:
            if task_detail.task_name == task_name:
                return task_detail

    pipeline_task_details = job.gca_resource.job_detail.task_details

    stage_1_tuner_task = get_task_detail(
        pipeline_task_details, "automl-forecasting-stage-1-tuner"
    )
    stage_1_tuning_result_artifact_uri = (
        stage_1_tuner_task.outputs["tuning_result_output"].artifacts[0].uri
    )

Train a model
-------------

The following sample code demonstrates how to run a model training pipeline:

    job = aiplatform.PipelineJob(
        ...
        template_path=template_path,
        parameter_values=parameter_values,
        ...
    )
    job.run(service_account=SERVICE_ACCOUNT)

The optional `service_account` parameter in `job.run()` lets you set the Vertex AI Pipelines service account to an account of your choice.

Vertex AI supports the following methods for training your model:

- **Time series Dense Encoder (TiDE)**. To use this model training method, define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_time_series_dense_encoder_forecasting_pipeline_and_parameters(...)

- **Temporal Fusion Transformer (TFT)**. To use this model training method, define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_temporal_fusion_transformer_forecasting_pipeline_and_parameters(...)

- **AutoML (L2L)**. To use this model training method, define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_learn_to_learn_forecasting_pipeline_and_parameters(...)

- **Seq2Seq+**.
To use this model training method, define your pipeline and parameter values by using the following function:

      template_path, parameter_values = automl_forecasting_utils.get_sequence_to_sequence_forecasting_pipeline_and_parameters(...)

To learn more, see [Model training methods](/vertex-ai/docs/tabular-data/forecasting-parameters#training-methods).

The training data can be either a CSV file in Cloud Storage or a table in BigQuery.

The following is a subset of the model training parameters:

**Transformations**

You can provide a dictionary that maps transformation types (automatic or an explicit type) to lists of feature columns. The supported types are: auto, numeric, categorical, text, and timestamp.

The following code provides a helper function for populating the `transformations` parameter. It also demonstrates how you can use this function to apply automatic transformations to a set of columns defined by a `features` variable.

    def generate_transformation(
        auto_column_names: Optional[List[str]] = None,
        numeric_column_names: Optional[List[str]] = None,
        categorical_column_names: Optional[List[str]] = None,
        text_column_names: Optional[List[str]] = None,
        timestamp_column_names: Optional[List[str]] = None,
    ) -> Dict[str, List[str]]:
        if auto_column_names is None:
            auto_column_names = []
        if numeric_column_names is None:
            numeric_column_names = []
        if categorical_column_names is None:
            categorical_column_names = []
        if text_column_names is None:
            text_column_names = []
        if timestamp_column_names is None:
            timestamp_column_names = []
        return {
            "auto": auto_column_names,
            "numeric": numeric_column_names,
            "categorical": categorical_column_names,
            "text": text_column_names,
            "timestamp": timestamp_column_names,
        }

    transformations = generate_transformation(auto_column_names=features)

To learn more about transformations, see [Data types and transformations](/vertex-ai/docs/datasets/data-types-tabular).

**Workflow customization options**

You can customize the Tabular Workflow for Forecasting by defining argument values that are passed in during pipeline definition. You can customize your workflow in the following ways:

- Configure hardware
- Skip architecture search

**Configure hardware**

The following model training parameter lets you configure the machine types and the number of machines for training. This option is a good choice if you have a large dataset and want to optimize the machine hardware accordingly.

The following code demonstrates how to set the `n1-standard-8` machine type for the TensorFlow chief node and the `n1-standard-4` machine type for the TensorFlow evaluator node:

    worker_pool_specs_override = [
        {"machine_spec": {"machine_type": "n1-standard-8"}},  # override for the TF chief node
        {},  # override for the TF worker node; it isn't used, so leave it empty
        {},  # override for the TF ps node; it isn't used, so leave it empty
        {
            "machine_spec": {
                "machine_type": "n1-standard-4"  # override for the TF evaluator node
            }
        },
    ]

**Skip architecture search**

The following model training parameter lets you run the pipeline without the architecture search and provide a set of [hyperparameters from a previous pipeline run](#previous-result) instead.

What's next
-----------

- Learn about [batch inferences](/vertex-ai/docs/tabular-data/tabular-workflows/forecasting-batch-predictions) for forecasting models.
- Learn about [pricing for model training](/vertex-ai/docs/tabular-data/tabular-workflows/pricing).
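To illustrate how positional overrides like `worker_pool_specs_override` behave (one entry each for the chief, worker, parameter server, and evaluator pools, with empty entries leaving that pool's defaults untouched), here is a hedged sketch of the merge semantics. The actual merging happens inside the pipeline, and the default specs shown are assumptions for illustration only.

```python
import copy
from typing import Any, Dict, List

# Hypothetical defaults for the four worker pools, in pipeline order:
# chief, worker, parameter server, evaluator. The real defaults are
# internal to the pipeline; these values are illustrative assumptions.
DEFAULT_WORKER_POOL_SPECS: List[Dict[str, Any]] = [
    {"machine_spec": {"machine_type": "n1-standard-4"}, "replica_count": 1},
    {},
    {},
    {"machine_spec": {"machine_type": "n1-standard-4"}, "replica_count": 1},
]


def apply_overrides(
    defaults: List[Dict[str, Any]], overrides: List[Dict[str, Any]]
) -> List[Dict[str, Any]]:
    """Merge each positional override into the matching default spec.
    An empty override dict leaves that pool's defaults unchanged."""
    merged = copy.deepcopy(defaults)
    for spec, override in zip(merged, overrides):
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(spec.get(key), dict):
                spec[key].update(value)  # merge nested dicts such as machine_spec
            else:
                spec[key] = value
    return merged


worker_pool_specs_override = [
    {"machine_spec": {"machine_type": "n1-standard-8"}},  # chief
    {},  # worker (unused)
    {},  # ps (unused)
    {"machine_spec": {"machine_type": "n1-standard-4"}},  # evaluator
]
specs = apply_overrides(DEFAULT_WORKER_POOL_SPECS, worker_pool_specs_override)
print(specs[0]["machine_spec"]["machine_type"])  # n1-standard-8
```

Note that only the keys you specify are replaced: in this sketch the chief pool keeps its default `replica_count` while its `machine_type` is upgraded.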