**Cloud Composer 3** | [Cloud Composer 2](/composer/docs/composer-2/environment-architecture "View this page for Cloud Composer 2") | [Cloud Composer 1](/composer/docs/composer-1/environment-architecture "View this page for Cloud Composer 1")

*Last updated: 2025-09-03 (UTC).*

Key points:

- Cloud Composer 3 environments distribute resources between a customer project, where environments are created, and a Google-managed tenant project, which provides access control and data security.
- An environment's bucket, located in the customer project, stores DAGs, plugins, data dependencies, and Airflow logs, and synchronizes uploaded DAGs with the environment's Airflow components.
- Airflow components, including the web server, schedulers, triggerers, DAG processors, and workers, run in the tenant project in Cloud Composer 3, with the Airflow database hosted in a Cloud SQL instance in the same tenant project.
- Cloud Composer 3 integrates with Cloud Logging and Cloud Monitoring in the customer project to centralize Airflow and DAG logs and provide insights through dashboards and charts.
- The customer project in Cloud Composer 3 hosts the environment's bucket and provides access to manage the environment through the Google Cloud console, Monitoring, and Logging, while also allowing the attachment of a custom VPC network for the environment.
This page describes the architecture of Cloud Composer environments.

## Environment architecture configurations

Cloud Composer 3 environments have a single configuration that doesn't depend on the networking type:

- [Cloud Composer 3 architecture](#composer-3-architecture)

## Customer and tenant projects

When you create an environment, Cloud Composer distributes the environment's resources between a tenant project and a customer project:

- The *customer project* is a Google Cloud project where you create your environments. You can create more than one environment in a single customer project.

- The *tenant project* is a Google-managed [tenant project](/service-infrastructure/docs/glossary#tenant) that belongs to the Google.com organization. The tenant project provides unified access control and an additional layer of data security for your environment. Each Cloud Composer environment has its own tenant project.

## Environment components

A Cloud Composer environment consists of environment components.

An *environment component* is an element of the managed Airflow infrastructure that runs on Google Cloud as a part of your environment. Environment components run either in the tenant project or in the customer project of your environment.

### Environment's bucket

The *environment's bucket* is a [Cloud Storage bucket](/composer/docs/composer-3/cloud-storage) that stores DAGs, plugins, data dependencies, and Airflow logs.
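Each of these kinds of content lives under its own top-level folder in the bucket. As a minimal sketch of that layout (the bucket name below is a hypothetical example, not a real environment's bucket), the `gs://` URI of a file in one of these folders can be built as:

```python
# Illustrative sketch of the environment bucket's folder layout.
# Folder names follow the content types listed above (DAGs, plugins,
# data dependencies, Airflow logs); the bucket name is hypothetical.
FOLDERS = {"dags", "plugins", "data", "logs"}

def bucket_uri(bucket: str, folder: str, filename: str) -> str:
    """Return the gs:// URI for a file in one of the bucket's top-level folders."""
    if folder not in FOLDERS:
        raise ValueError(f"unknown folder: {folder}")
    return f"gs://{bucket}/{folder}/{filename}"

# A DAG file uploaded under the dags/ folder is synchronized to the
# environment's Airflow components.
print(bucket_uri("example-environment-bucket", "dags", "my_dag.py"))
# gs://example-environment-bucket/dags/my_dag.py
```

In practice you don't construct these paths yourself; you upload files to the bucket and Cloud Composer handles synchronization, as described next.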
The bucket is located in the customer project.

When you [upload your DAG files](/composer/docs/composer-3/manage-dags) to the `/dags` folder in your environment's bucket, Cloud Composer synchronizes the DAGs to the Airflow components of your environment.

### Airflow web server

The *Airflow web server* runs the Airflow UI of your environment.

Cloud Composer provides access to the interface based on user identities and the IAM policy bindings defined for users.

### Airflow database

The *Airflow database* is a [Cloud SQL instance](/sql/docs/introduction) that runs in the tenant project of your environment. It hosts the Airflow metadata database.

To protect sensitive connection and workflow information, Cloud Composer allows database access only to the [service account](/composer/docs/composer-3/access-control#service-account) of your environment.

### Other Airflow components

Other Airflow components that run in your environment are:

- *Airflow schedulers* parse DAG definition files, schedule DAG runs based on the schedule interval, and queue tasks for execution by Airflow workers.

- *Airflow triggerers* asynchronously monitor all deferred tasks in your environment.
  If you set the number of triggerers in your environment above zero, then you can use [deferrable operators in your DAGs](/composer/docs/composer-3/use-deferrable-operators).

- *Airflow DAG processors* process DAG files and turn them into DAG objects. In Cloud Composer 3, DAG processors run as separate environment components.

- *Airflow workers* execute tasks that are scheduled by Airflow schedulers.

  The number of workers in your environment changes dynamically, between the configured minimum and maximum, depending on the number of tasks in the queue.

## Cloud Composer 3 environment architecture

![Cloud Composer 3 environment architecture](/static/composer/docs/images/composer-3-architecture.png)

**Figure 1.** Cloud Composer 3 environment architecture

In Cloud Composer 3 environments:

- The tenant project hosts a Cloud SQL instance with the Airflow database.
- All Airflow resources run in the tenant project.
- The customer project hosts the environment's bucket.
- A VPC network attachment in the customer project can be used to attach the environment to a custom VPC network. You can use an existing attachment, or Cloud Composer can create one automatically on demand. It's also possible to detach an environment from a VPC network.
- The Google Cloud console, Monitoring, and Logging in the customer project provide ways to manage the environment, DAGs, and DAG runs, and to access the environment's metrics and logs. You can also use the Airflow UI, Google Cloud CLI, Cloud Composer API, and Terraform for the same purposes.

In highly resilient Cloud Composer 3 environments:

- The Cloud SQL instance of your environment is configured for high availability (it is a regional instance).
  Within a regional instance, the configuration consists of a primary instance and a standby instance.

- Your environment runs the following Airflow components in separate zones:

  - Two Airflow schedulers
  - Two web servers
  - At least two DAG processors (up to 10 in total)
  - At least two triggerers, if triggerers are used (up to 10 in total)

- The minimum number of workers is set to two, and your environment's cluster distributes worker instances between zones. In case of a zonal outage, affected worker instances are rescheduled in a different zone.

## Integration with Cloud Logging and Cloud Monitoring

Cloud Composer integrates with Cloud Logging and Cloud Monitoring of your Google Cloud project, so that you have a central place to view [Airflow and DAG logs](/composer/docs/composer-3/view-logs).

Cloud Monitoring collects and ingests metrics, events, and metadata from Cloud Composer to [generate insights through dashboards and charts](/composer/docs/composer-3/monitor-environments).

Because of the streaming nature of Cloud Logging, you can view logs emitted by Airflow components immediately, instead of waiting for Airflow logs to appear in the Cloud Storage bucket of your environment.

To limit the number of logs in your Google Cloud project, you can [stop all logs ingestion](/logging/docs/exclusions#stop-logs). Do not disable Logging.

## What's next

- [Create an environment](/composer/docs/composer-3/create-environments)
- [Versioning overview](/composer/docs/composer-versioning-overview)