Cloud Composer environment architecture


This page describes the architecture of Cloud Composer 1 environments.

Environment architecture configurations

Cloud Composer 1 environments can have the following architecture configurations:

  • Public IP
  • Private IP
  • Private IP with DRS

Each configuration slightly alters the architecture of environment resources.

Customer and tenant projects

When you create an environment, Cloud Composer distributes the environment's resources between a tenant and a customer project.

The customer project is the Google Cloud project where you create your environments. You can create more than one environment in a single customer project.

The tenant project is a Google-managed project that provides unified access control and an additional layer of data security for your environment. Each Cloud Composer environment has its own tenant project.

Environment components

A Cloud Composer environment consists of environment components.

An environment component is an element of the managed Airflow infrastructure that runs on Google Cloud as part of your environment.

Environment components run either in the tenant or in the customer project of your environment.

Some of your environment's components are based on standalone Google Cloud products. Quotas and limits for these products also apply to your environments. For example, Cloud Composer environments use VPC peerings. Quotas on the maximum number of VPC peerings apply to your customer project, so once your project reaches this maximum number of peerings, you cannot create additional environments.

Environment's cluster

Environment's cluster is a Standard mode VPC-native or Routes-based Google Kubernetes Engine cluster of your environment:

  • Environment nodes are VMs in the environment's cluster.

  • Pods in the environment's cluster run containers with other environment components, such as Airflow workers and schedulers. Pods run on environment nodes.

  • Workload resources of your environment's cluster manage sets of pods in your environment's cluster. Many components of your environment are implemented as different types of workload resources. For example, Airflow workers run as Deployments. In addition to Deployments, your environment also has StatefulSets, DaemonSets, and Jobs workload types.

By default, Cloud Composer enables node auto-upgrades and node auto-repair to protect your environment's cluster from security vulnerabilities. These operations happen during maintenance windows that you specify for your environment.

Airflow schedulers, workers, and Redis queue

Airflow schedulers control the scheduling of DAG runs and individual tasks from DAGs. Schedulers distribute tasks to Airflow workers by using a Redis queue, which runs as an application in your environment's cluster. Airflow schedulers run as Deployments in your environment's cluster.

Airflow workers execute individual tasks from DAGs by taking them from the Redis queue. Airflow workers run as Deployments in your environment's cluster.

Redis queue holds a queue of individual tasks from your DAGs. Airflow schedulers fill the queue; Airflow workers take their tasks from it. Redis queue runs as a StatefulSet application in your environment's cluster, so that messages persist across container restarts.
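Conceptually, the scheduler-queue-worker flow is a producer-consumer pattern. The following sketch uses Python's in-process queue.Queue in place of Redis and plain functions in place of Airflow tasks (all names are illustrative, not Composer or Airflow APIs) to show the shape of that flow:

```python
import queue
import threading

# Stand-in for the Redis queue: the scheduler puts task IDs in,
# workers take them out.
task_queue = queue.Queue()

def scheduler(task_ids):
    """Producer: enqueue tasks that are ready to run (the Airflow scheduler role)."""
    for task_id in task_ids:
        task_queue.put(task_id)

def worker(results):
    """Consumer: pull tasks off the queue and execute them (the Airflow worker role)."""
    while True:
        task_id = task_queue.get()
        if task_id is None:  # sentinel: no more work
            break
        results.append(f"ran {task_id}")
        task_queue.task_done()

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
scheduler(["extract", "transform", "load"])
task_queue.put(None)  # signal the worker to stop
t.join()
print(results)  # -> ['ran extract', 'ran transform', 'ran load']
```

In the real environment, Redis plays the role of the queue so that the schedulers and workers, which run in separate pods, can share it, and its StatefulSet deployment keeps queued messages across container restarts.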

Airflow web server

Airflow web server runs the Airflow UI of your environment.

In Cloud Composer 1, Airflow web server is an App Engine Flex instance that runs in the tenant project of your environment.

The Airflow web server is integrated with Identity-Aware Proxy. Cloud Composer hides the IAP integration details, and provides access to the web server based on user identities and IAM policy bindings defined for users.

The Airflow web server runs under a different service account than Airflow workers and Airflow schedulers. The service account for the web server is auto-generated during environment creation and is derived from the web server domain. For example, if the domain is example.appspot.com, the service account is example@appspot.gserviceaccount.com.
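As a rough illustration of that derivation, the project part of the domain becomes the local part of the service account name. This is a hypothetical helper mirroring the example above, not a Cloud Composer API:

```python
def web_server_service_account(domain: str) -> str:
    """Derive the web server service account name from its App Engine domain.

    Illustrative only: follows the example in the text, where
    "example.appspot.com" maps to "example@appspot.gserviceaccount.com".
    """
    project = domain.split(".")[0]  # "example" from "example.appspot.com"
    return f"{project}@appspot.gserviceaccount.com"

print(web_server_service_account("example.appspot.com"))
# -> example@appspot.gserviceaccount.com
```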

Airflow database

Airflow database is a Cloud SQL instance that runs in the tenant project of your environment. It hosts the Airflow metadata database.

To protect sensitive connection and workflow information, Cloud Composer allows database access only to the service account of your environment.

Environment's bucket

Environment's bucket is a Cloud Storage bucket that stores DAGs, plugins, data dependencies, and Airflow logs. Environment's bucket resides in the customer project.

When you upload your DAG files to the dags/ folder in your environment's bucket, Cloud Composer synchronizes the DAGs to the workers, schedulers, and web server of your environment. You can store your workflow artifacts in the data/ and logs/ folders without worrying about size limitations, and retain full access control over your data.
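The well-known folders sit at fixed paths under the bucket. The following sketch builds their gs:// URIs; the bucket name is hypothetical, and the helper is illustrative rather than a Composer API:

```python
def bucket_paths(bucket_name: str) -> dict:
    """Return the gs:// URIs of the standard folders in a Composer environment bucket."""
    folders = ["dags", "data", "logs", "plugins"]
    return {folder: f"gs://{bucket_name}/{folder}/" for folder in folders}

# Hypothetical bucket name for illustration.
paths = bucket_paths("us-central1-example-bucket")
print(paths["dags"])  # -> gs://us-central1-example-bucket/dags/
```

Uploading a file under the dags/ prefix of this bucket (for example with gsutil or the Cloud Storage client libraries) is what triggers the synchronization described above.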

Other environment components

A Cloud Composer environment has several additional environment components:

  • Cloud SQL Storage. Stores the Airflow database backups. Cloud Composer backs up the Airflow metadata daily to minimize potential data loss.

    Cloud SQL Storage runs in the tenant project of your environment. You cannot access the Cloud SQL Storage contents.

  • Cloud SQL Proxy. Connects other components of your environment to the Airflow database.

    Your Public IP environment can have one or more Cloud SQL Proxy instances, depending on the volume of traffic to the Airflow database.

    In Public IP environments, Cloud SQL Proxy runs as a Deployment in your environment's cluster.

    When deployed in your environment's cluster, Cloud SQL Proxy also authorizes access to your Cloud SQL instance from an application, client, or other Google Cloud service.

  • HAProxy. Load balances traffic to the Cloud SQL instance between two Cloud SQL Proxy instances that run in the tenant project. In Cloud Composer 1, this component is used in Private IP environments and runs as a container in the Cloud SQL Proxy deployment.
  • Airflow monitoring. Reports environment metrics to Cloud Monitoring and triggers the airflow_monitoring DAG. The airflow_monitoring DAG reports the environment health data, which is later used, for example, on the monitoring dashboard of your environment. Airflow monitoring runs as a Deployment in your environment's cluster.

  • Composer Agent. Performs environment operations such as creating, updating, upgrading, and deleting environments. In general, this component is responsible for introducing changes to your environment. Runs as a Job in your environment's cluster.

  • Airflow InitDB. Creates a Cloud SQL instance and the initial database schema. Runs as a Job in your environment's cluster.

  • FluentD. Collects logs from all environment components and uploads the logs to Cloud Logging. Runs as a DaemonSet in your environment's cluster.

  • Pub/Sub subscriptions. Your environment communicates with its GKE service agent through Pub/Sub subscriptions. It relies on Pub/Sub's default behavior to manage messages. Do not delete Pub/Sub topics that match the .*-composer-.* pattern. Pub/Sub supports a maximum of 10,000 topics per project.

  • Bucket syncing. This process syncs environment buckets in the customer and tenant projects. This component is used in the Private IP with DRS environment architecture. This component runs as a container in the pods of other components that use environment buckets.
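Because Composer-managed topics match the .*-composer-.* pattern mentioned above, any topic-cleanup script should filter them out before deleting anything. A minimal sketch of that guard (the topic names are illustrative):

```python
import re

# Composer-managed topics match this pattern and must not be deleted.
COMPOSER_TOPIC = re.compile(r".*-composer-.*")

def deletable_topics(topic_names):
    """Return only the topics that are safe to delete (names are illustrative)."""
    return [name for name in topic_names if not COMPOSER_TOPIC.fullmatch(name)]

topics = ["my-app-events", "env-composer-agent", "billing-export"]
print(deletable_topics(topics))  # -> ['my-app-events', 'billing-export']
```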

Public IP environment architecture

Figure 1. Public IP environment architecture: environment resources in the tenant project and the customer project

In a Public IP environment architecture for Cloud Composer 1:

  • The tenant project hosts a Cloud SQL instance, Cloud SQL storage, and an App Engine Flex instance that runs the Airflow web server.
  • The customer project hosts all other components of the environment.
  • Airflow schedulers and workers in the customer project communicate with the Airflow database through Cloud SQL Proxy instances located in the customer project.
  • The Airflow web server in the tenant project communicates with the Airflow database through a Cloud SQL Proxy instance located in the tenant project.

Private IP environment architecture

Figure 2. Private IP environment architecture: environment resources in the tenant project and the customer project

In a Private IP environment architecture:

  • The tenant project hosts a Cloud SQL instance, Cloud SQL storage, and two App Engine instances that run the Airflow web server.
  • The customer project hosts all other components of the environment.
  • Airflow schedulers and workers connect to the Airflow database through the HAProxy process in the environment's cluster.
  • The HAProxy process load balances traffic to the Cloud SQL instance between two Cloud SQL Proxy instances that are located in the tenant project. Private IP environments use two Cloud SQL Proxy instances because the customer project does not access the database directly due to network limitations. Two instances are needed to ensure that components of your environment have access to the database at all times.
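Conceptually, that load balancing is a rotation over the two proxy endpoints, so the database stays reachable if one proxy is temporarily unavailable. A toy round-robin sketch (the addresses and port are illustrative, not real environment values):

```python
import itertools

# Two Cloud SQL Proxy endpoints in the tenant project (illustrative addresses).
PROXIES = ["10.0.0.2:3306", "10.0.0.3:3306"]

# Round-robin selection, roughly what HAProxy's default balancing does.
next_proxy = itertools.cycle(PROXIES).__next__

print([next_proxy() for _ in range(4)])
# -> ['10.0.0.2:3306', '10.0.0.3:3306', '10.0.0.2:3306', '10.0.0.3:3306']
```

The real HAProxy also performs health checks, so a failed proxy instance is taken out of the rotation rather than blindly cycled to.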

Private IP with DRS environment architecture

Figure 3. Private IP with DRS environment architecture: environment resources in the tenant project and the customer project

If the Domain Restricted Sharing (DRS) organization policy is turned on in your project, then Cloud Composer uses the Private IP with DRS environment architecture.

In the Private IP with DRS environment architecture:

  • The tenant project hosts a Cloud SQL instance, Cloud SQL storage, and two App Engine instances that run the Airflow web server.
  • The tenant project hosts an additional environment's bucket. The Airflow web server accesses this bucket directly.
  • The customer project hosts all other components of the environment.
  • The customer project hosts the Bucket Syncing process in the environment's cluster. This process synchronizes the two environment buckets.
  • Airflow schedulers and workers connect to the Airflow database through the HAProxy process in the environment's cluster.
  • The HAProxy process load balances traffic to the Cloud SQL instance between two Cloud SQL Proxy instances that are located in the tenant project. Private IP environments use two Cloud SQL Proxy instances because the customer project does not access the database directly due to network limitations. Two instances are needed to ensure that components of your environment have access to the database at all times.

Integration with Cloud Logging and Cloud Monitoring

Cloud Composer integrates with Cloud Logging and Cloud Monitoring of your Google Cloud project, so that you have a central place to view the Airflow service and workflow logs.

Cloud Monitoring collects and ingests metrics, events, and metadata from Cloud Composer to generate insights through dashboards and charts.

Because of the streaming nature of Cloud Logging, you can view the logs that the Airflow scheduler and workers emit immediately, instead of waiting for Airflow logs to appear in the Cloud Storage bucket of your environment. Because the Cloud Logging logs for Cloud Composer are based on google-fluentd, you have access to all logs produced by Airflow schedulers and workers.

To limit the number of logs in your Google Cloud project, you can stop logs ingestion by using log exclusions. Do not disable Logging itself.

What's next