[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[[["\u003cp\u003eThis step involves modifying the \u003ccode\u003econfig.json\u003c/code\u003e file in the Cortex Framework Data Foundation repository to configure the deployment according to your specific needs, and is designed specifically for deployments from the official GitHub repository.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003econfig.json\u003c/code\u003e file controls the deployment's behavior, allowing you to set global configurations and workload-specific parameters such as deploying test data, SAP, Salesforce, marketing sources, Oracle EBS, and Data Mesh.\u003c/p\u003e\n"],["\u003cp\u003eYou can enable \u003ccode\u003eturboMode\u003c/code\u003e in the \u003ccode\u003econfig.json\u003c/code\u003e file for faster deployment by executing all view builds in parallel, which is recommended when using test data or after resolving mismatches between reporting columns and source data.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003ereporting_settings.yaml\u003c/code\u003e files, located within each workload's directory, manage the creation of BigQuery objects (tables, views, and scripts) for reporting datasets, allowing customization of objects in two categories: independent and dependent objects, which can also specify materialization details.\u003c/p\u003e\n"],["\u003cp\u003eTable partitioning and clustering can be configured within the \u003ccode\u003ereporting_settings.yaml\u003c/code\u003e files to improve query performance for large datasets, by specifying the \u003ccode\u003epartition_details\u003c/code\u003e and \u003ccode\u003ecluster_details\u003c/code\u003e properties, respectively, applicable to \u003ccode\u003eSAP cdc_settings.yaml\u003c/code\u003e and all \u003ccode\u003ereporting_settings.yaml\u003c/code\u003e files.\u003c/p\u003e\n"]]],[],null,["# Step 5: Configure deployment\n============================\n\nThis page describes the fifth step to deploy Cortex Framework Data Foundation,\nthe core of Cortex Framework. In this step, you modify the configuration\nfile in the Cortex Framework Data Foundation repository to match your requirements.\n| **Note:** The steps outlined in this page are specifically designed for deploying Cortex Framework Data Foundation from the [official GitHub repository](https://github.com/GoogleCloudPlatform/cortex-data-foundation).\n\nConfiguration file\n------------------\n\nThe behavior of the deployment is controlled by the configuration file `config.json`\nin the [Cortex Framework Data Foundation](https://github.com/GoogleCloudPlatform/cortex-data-foundation). This file contains global configuration, specific configuration to each workload.\nEdit the `config.json` file according to your needs with the following steps:\n\n1. Open the file [`config.json`](https://github.com/GoogleCloudPlatform/cortex-data-foundation/blob/main/config/config.json) from Cloud Shell.\n2. 
Performance optimization for reporting views
--------------------------------------------

Reporting artifacts can be created as views, or as tables that are refreshed
regularly through DAGs. Views compute the data on each execution of a
query, which keeps the results always fresh. Tables run the computation
once, so the results can be queried multiple times without incurring higher
compute costs, and with faster runtimes. Each customer creates their own
configuration according to their needs.

Materialized results are updated into a table. These tables can be further
fine-tuned by adding [partitioning](#table_partition) and
[clustering](#cluster_settings).

The configuration files for each workload are located in the following paths
within the Cortex Framework Data Foundation repository:

Customizing reporting settings file
-----------------------------------

The `reporting_settings` files drive how the BigQuery objects
(tables or views) are created for reporting datasets. Customize your file with
the following parameter descriptions. This file contains two sections:

1. `bq_independent_objects`: All BigQuery objects that can be created independently, without any other dependencies. When [`Turbo mode`](/cortex/docs/optional-step-turbo-mode) is enabled, these BigQuery objects are created in parallel at deployment time, speeding up the deployment process.
2. `bq_dependent_objects`: All BigQuery objects that need to be created in a specific order due to dependencies on other BigQuery objects. [`Turbo mode`](/cortex/docs/optional-step-turbo-mode) doesn't apply to this section.

The deployer first creates all the BigQuery objects listed
in `bq_independent_objects`, and then all the objects listed in
`bq_dependent_objects`. Define the following properties for each object:

1. `sql_file`: Name of the SQL file that creates a given object.
2. `type`: Type of BigQuery object. Possible values:
   - `view`: If you want the object to be a BigQuery view.
   - `table`: If you want the object to be a BigQuery table.
   - `script`: This is to create other types of objects (for example, BigQuery functions and stored procedures).
3. If `type` is set to `table`, the following optional properties can be defined, as shown in the sketch after this list:
   - `load_frequency`: Frequency at which a Composer DAG is executed to refresh this table. See the [Airflow documentation](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/cron.html) for details on possible values.
   - `partition_details`: How the table should be partitioned. This value is **optional**. For more information, see the [Table partition](#table_partition) section.
   - `cluster_details`: How the table should be clustered. This value is **optional**. For more information, see the [Cluster settings](#cluster_settings) section.
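A minimal sketch of how these sections and properties fit together in a `reporting_settings` file. The SQL file names and column values used here (`CurrencyConversion.sql`, `SalesOrders.sql`, `SalesOrdersDaily.sql`, `erdat`, `vkorg`) are illustrative placeholders, not actual files in the repository:

    bq_independent_objects:
      # No dependencies: in Turbo mode, these objects are built in parallel.
      - sql_file: CurrencyConversion.sql
        type: script
      - sql_file: SalesOrders.sql
        type: view
    bq_dependent_objects:
      # Depends on objects above, so it is created after them.
      - sql_file: SalesOrdersDaily.sql
        type: table
        load_frequency: "@daily"
        partition_details: {
          column: "erdat", partition_type: "time", time_grain: "day" }
        cluster_details: {columns: ["vkorg"]}

The `partition_details` and `cluster_details` values follow the formats described in the sections below.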
| **Note:** If you enabled Task dependent DAGs by setting the `enableTaskDependencies` field to `True`, make sure to create a dedicated reporting settings file with the suffix `_task_dep.yaml` for each data source requiring task dependencies. For more information, see [Task dependent DAGs](/cortex/docs/optional-step-task-dependent-dags).

### Table partition

Certain settings files let you configure materialized tables with custom
partitioning and clustering options, which can significantly improve query
performance for large datasets. This option applies only to the SAP
`cdc_settings.yaml` file and all `reporting_settings.yaml` files.

Table partitioning can be enabled by specifying the following `partition_details`:

    - base_table: vbap
      load_frequency: "@daily"
      partition_details: {
        column: "erdat", partition_type: "time", time_grain: "day" }

Use the following parameters to control partitioning details for a given table:

For more information about options and related limitations, see [BigQuery Table Partition](/bigquery/docs/partitioned-tables).

### Cluster settings

Table clustering can be enabled by specifying `cluster_details`:

    - base_table: vbak
      load_frequency: "@daily"
      cluster_details: {columns: ["vkorg"]}

Use the following parameters to control cluster details for a given table:

For more information about options and related limitations,
see [Table cluster documentation](/bigquery/docs/clustered-tables).

Next steps
----------

After you complete this step, move on to the following deployment step:

1. [Establish workloads](/cortex/docs/deployment-step-one).
2. [Clone repository](/cortex/docs/deployment-step-two).
3. [Determine integration mechanism](/cortex/docs/deployment-step-three).
4. [Set up components](/cortex/docs/deployment-step-four).
5. [Configure deployment](/cortex/docs/deployment-step-five) (this page).
6. [Execute deployment](/cortex/docs/deployment-step-six).