This page explains how to create a Cloud Composer environment.
- For more information about environments, see Environment architecture.
- For more information about creating an environment with Terraform, see Create environments (Terraform).
Before you begin
Enable the Cloud Composer API. For the full list of services used by Cloud Composer, see Services required by Cloud Composer.
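If you use the gcloud CLI, you can enable the API with the following command (run it in the project where you plan to create environments):
gcloud services enable composer.googleapis.com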
The approximate time to create an environment is 25 minutes.
If you create an environment with Terraform, the service account used by Terraform must have a role with the composer.environments.create permission enabled. For more information about the service account for Terraform, see Google Provider Configuration Reference.
For more information about using Terraform to create a Cloud Composer environment, see Terraform documentation.
For more information about additional parameters, see Terraform Argument Reference.
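For orientation, a minimal provider configuration for the Terraform examples on this page might look like the following sketch; the project ID is a placeholder, and credentials are assumed to come from your application default credentials:
provider "google-beta" {
# Placeholder project ID; authentication falls back to application default credentials.
project = "example-project"
region = "us-central1"
}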
VPC SC: To deploy Cloud Composer environments inside a security perimeter, see Configuring VPC SC. When used with Cloud Composer, VPC Service Controls have several known limitations.
Step 1. Create or choose an environment's service account
When you create an environment, you specify a service account. This service account is called the environment's service account. Your environment uses this service account to perform most of the operations.
The service account for your environment is not a user account. A service account is a special kind of account used by an application or a virtual machine (VM) instance, not a person.
You can't change the service account of your environment later.
If you don't have a service account for Cloud Composer environments in your project yet, create it.
See Create environments (Terraform) for an extended example of creating a service account for your environment in Terraform.
To create a new service account for your environment:
Create a new service account as described in the Identity and Access Management documentation.
Grant a role to it, as described in the Identity and Access Management documentation. The required role is Composer Worker (composer.worker).
To access other resources in your Google Cloud project, grant this service account extra permissions to access those resources. The Composer Worker (composer.worker) role provides the required set of permissions in most cases. Add extra permissions to this service account only when it's necessary for the operation of your DAGs.
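For illustration, the following gcloud commands sketch these two steps; the account name composer-env-account and the project ID example-project are placeholders:
# Create a service account for Cloud Composer environments (name is a placeholder).
gcloud iam service-accounts create composer-env-account \
--display-name "Service account for Cloud Composer environments"

# Grant the Composer Worker role to the new service account.
gcloud projects add-iam-policy-binding example-project \
--member "serviceAccount:composer-env-account@example-project.iam.gserviceaccount.com" \
--role "roles/composer.worker"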
Step 2. Basic setup
This step creates a Cloud Composer environment with default parameters in the specified location.
Console
In the Google Cloud console, go to the Create environment page.
In the Name field, enter a name for your environment.
The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
In the Location drop-down list, choose a location for your environment.
A location is the region where the environment is located.
In the Image version drop-down list, select a Cloud Composer image with the required version of Airflow.
In the Service account drop-down list, select a service account for your environment.
If you don't have a service account for your environment yet, see Create or choose an environment's service account.
gcloud
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version IMAGE_VERSION \
--service-account "SERVICE_ACCOUNT"
Replace:
- ENVIRONMENT_NAME with the name of the environment. The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
- LOCATION with the region for the environment. A location is the region where the environment is located.
- SERVICE_ACCOUNT with the service account for your environment.
- IMAGE_VERSION with the name of a Cloud Composer image.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com"
API
Construct an environments.create API request. Specify the configuration in the Environment resource.
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"config": {
"softwareConfig": {
"imageVersion": "IMAGE_VERSION"
},
"nodeConfig": {
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Replace:
- PROJECT_ID with the Project ID.
- LOCATION with the region for the environment. A location is the region where the environment is located.
- ENVIRONMENT_NAME with the environment name. The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
- IMAGE_VERSION with the name of a Cloud Composer image.
- SERVICE_ACCOUNT with the service account for your environment.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"softwareConfig": {
"imageVersion": "composer-3-airflow-2.9.3-build.7"
},
"nodeConfig": {
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Terraform
To create an environment with default parameters in a specified location, add the following resource block to your Terraform configuration and run terraform apply.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
software_config {
image_version = "IMAGE_VERSION"
}
node_config {
service_account = "SERVICE_ACCOUNT"
}
}
}
Replace:
- ENVIRONMENT_NAME with the name of the environment. The name must start with a lowercase letter followed by up to 62 lowercase letters, numbers, or hyphens, and can't end with a hyphen. The environment name is used to create subcomponents for the environment, so you must provide a name that is also valid as a Cloud Storage bucket name. See Bucket naming guidelines for a list of restrictions.
- LOCATION with the region for the environment. A location is the region where the environment is located.
- IMAGE_VERSION with the name of a Cloud Composer image.
- SERVICE_ACCOUNT with the service account for your environment.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
software_config {
image_version = "composer-3-airflow-2.9.3-build.7"
}
node_config {
service_account = "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Step 3. (Optional) Configure environment scale and performance parameters
To specify the scale and performance configuration for your environment, select the environment size and workloads configuration.
You can change all performance and scale parameters after you create an environment.
The following parameters control the scale and performance:
Environment size. Controls the performance parameters of the managed Cloud Composer infrastructure that includes the Airflow database. Consider selecting a larger environment size if you want to run a large number of DAGs and tasks with higher infrastructure performance. For example, a larger environment size increases the amount of Airflow task log entries that your environment can process with minimal delay.
Workloads configuration. Controls the scale and performance of your environment components that run in a GKE cluster: Airflow schedulers, Airflow web server, and Airflow workers.
Airflow scheduler. Parses DAG definition files, schedules DAG runs based on the schedule interval, and queues tasks for execution by Airflow workers.
Your environment can run more than one Airflow scheduler at the same time. Use multiple schedulers to distribute load between several scheduler instances for better performance and reliability.
Increasing the number of schedulers does not always improve Airflow performance. For example, having only one scheduler might provide better performance than having two. This might happen when the extra scheduler is not utilized, and thus consumes resources of your environment without contributing to overall performance. The actual scheduler performance depends on the number of Airflow workers, the number of DAGs and tasks that run in your environment, and the configuration of both Airflow and the environment.
We recommend starting with two schedulers and then monitoring the performance of your environment. If you change the number of schedulers, you can always scale your environment back to the original number of schedulers.
For more information about configuring multiple schedulers, see Airflow documentation.
Airflow triggerer. Asynchronously monitors all deferred tasks in your environment. If you have at least one triggerer instance in your environment (or at least two in highly resilient environments), you can use deferrable operators in your DAGs.
Airflow DAG processor. Processes DAG files and turns them into DAG objects. In Cloud Composer 3, this part of the scheduler runs as a separate environment component.
Airflow web server. Runs the Airflow web interface where you can monitor, manage, and visualize your DAGs.
Airflow workers. Execute tasks that are scheduled by Airflow schedulers. The number of workers in your environment changes dynamically between the configured minimum and maximum, depending on the number of tasks in the queue.
Console
You can select a preset for your environment. When you select a preset, the scale and performance parameters for that preset are automatically selected. You also have an option to select a custom preset and specify all scale and performance parameters for your environment.
To select the scale and performance configuration for your environment, on the Create environment page:
To use predefined values, in the Environment resources section, click Small, Medium, or Large.
To specify custom values for the scale and performance parameters:
In the Environment resources section, click Custom.
In the Scheduler section, set the number of schedulers you want to use, and the resource allocation for their CPU, memory, and storage.
In the Triggerer section, use the Number of triggerers field to enter the number of triggerers in your environment. You can set this number to 0 if you don't want to use deferrable operators in your DAGs.
If you set at least one triggerer for your environment, use the CPU and Memory fields to configure resource allocation for your triggerers.
In the DAG processor section, specify the number of DAG processors in your environment and the amount of CPUs, memory, and storage for each DAG processor.
In the Web server section, specify the amount of CPUs, memory, and storage for the web server.
In the Worker section, specify:
- The minimum and maximum number of workers for autoscaling limits in your environment.
- The CPU, memory, and storage allocation for your workers
In the Core infrastructure section, in the Environment size drop-down list, select the environment size.
gcloud
When you create an environment, the following arguments control the scale and performance parameters of your environment.
- --environment-size specifies the environment size.
- --scheduler-count specifies the number of schedulers.
- --scheduler-cpu specifies the number of CPUs for an Airflow scheduler.
- --scheduler-memory specifies the amount of memory for an Airflow scheduler.
- --scheduler-storage specifies the amount of disk space for an Airflow scheduler.
- --triggerer-count specifies the number of Airflow triggerers in your environment. The default value for this flag is 0. You need triggerers if you want to use deferrable operators in your DAGs.
  - For standard resilience environments, use a value between 0 and 10.
  - For highly resilient environments, use 0 or a value between 2 and 10.
- --triggerer-cpu specifies the number of CPUs for an Airflow triggerer, in vCPU units. Allowed values: 0.5, 0.75, 1. The default value is 0.5.
- --triggerer-memory specifies the amount of memory for an Airflow triggerer, in GB. The default value is 0.5. The minimum required memory is equal to the number of CPUs allocated for the triggerers. The maximum allowed value is equal to the number of triggerer CPUs multiplied by 6.5. For example, if you set the --triggerer-cpu flag to 1, the minimum value for --triggerer-memory is 1 and the maximum value is 6.5.
- --dag-processor-cpu specifies the number of CPUs for the DAG processor.
- --dag-processor-memory specifies the amount of memory for the DAG processor.
- --dag-processor-storage specifies the amount of disk space for the DAG processor.
- --web-server-cpu specifies the number of CPUs for the Airflow web server.
- --web-server-memory specifies the amount of memory for the Airflow web server.
- --web-server-storage specifies the amount of disk space for the Airflow web server.
- --worker-cpu specifies the number of CPUs for an Airflow worker.
- --worker-memory specifies the amount of memory for an Airflow worker.
- --worker-storage specifies the amount of disk space for an Airflow worker.
- --min-workers specifies the minimum number of Airflow workers. Your environment's cluster runs at least this number of workers.
- --max-workers specifies the maximum number of Airflow workers. Your environment's cluster runs at most this number of workers.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--environment-size ENVIRONMENT_SIZE \
--scheduler-count SCHEDULER_COUNT \
--scheduler-cpu SCHEDULER_CPU \
--scheduler-memory SCHEDULER_MEMORY \
--scheduler-storage SCHEDULER_STORAGE \
--triggerer-count TRIGGERER_COUNT \
--triggerer-cpu TRIGGERER_CPU \
--triggerer-memory TRIGGERER_MEMORY \
--dag-processor-cpu DAG_PROCESSOR_CPU \
--dag-processor-memory DAG_PROCESSOR_MEMORY \
--dag-processor-storage DAG_PROCESSOR_STORAGE \
--web-server-cpu WEB_SERVER_CPU \
--web-server-memory WEB_SERVER_MEMORY \
--web-server-storage WEB_SERVER_STORAGE \
--worker-cpu WORKER_CPU \
--worker-memory WORKER_MEMORY \
--worker-storage WORKER_STORAGE \
--min-workers WORKERS_MIN \
--max-workers WORKERS_MAX
Replace:
- ENVIRONMENT_SIZE with small, medium, or large.
- SCHEDULER_COUNT with the number of schedulers.
- SCHEDULER_CPU with the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY with the amount of memory for a scheduler.
- SCHEDULER_STORAGE with the disk size for a scheduler.
- TRIGGERER_COUNT with the number of triggerers.
- TRIGGERER_CPU with the number of CPUs for a triggerer, in vCPU units.
- TRIGGERER_MEMORY with the amount of memory for a triggerer, in GB.
- DAG_PROCESSOR_CPU with the number of CPUs for the DAG processor.
- DAG_PROCESSOR_MEMORY with the amount of memory for the DAG processor.
- DAG_PROCESSOR_STORAGE with the amount of disk space for the DAG processor.
- WEB_SERVER_CPU with the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY with the amount of memory for the web server.
- WEB_SERVER_STORAGE with the disk size for the web server.
- WORKER_CPU with the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY with the amount of memory for a worker.
- WORKER_STORAGE with the disk size for a worker.
- WORKERS_MIN with the minimum number of Airflow workers that your environment can run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- WORKERS_MAX with the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--environment-size small \
--scheduler-count 1 \
--scheduler-cpu 0.5 \
--scheduler-memory 2.5GB \
--scheduler-storage 2GB \
--triggerer-count 1 \
--triggerer-cpu 0.5 \
--triggerer-memory 0.5GB \
--dag-processor-cpu 0.5 \
--dag-processor-memory 2GB \
--dag-processor-storage 1GB \
--web-server-cpu 1 \
--web-server-memory 2.5GB \
--web-server-storage 2GB \
--worker-cpu 1 \
--worker-memory 2GB \
--worker-storage 2GB \
--min-workers 2 \
--max-workers 4
API
When you create an environment, in the Environment > EnvironmentConfig > WorkloadsConfig resource, specify environment scale and performance parameters.
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"config": {
"workloadsConfig": {
"scheduler": {
"cpu": SCHEDULER_CPU,
"memoryGb": SCHEDULER_MEMORY,
"storageGb": SCHEDULER_STORAGE,
"count": SCHEDULER_COUNT
},
"triggerer": {
"count": TRIGGERER_COUNT,
"cpu": TRIGGERER_CPU,
"memoryGb": TRIGGERER_MEMORY
},
"dagProcessor": {
"count": 1,
"cpu": DAG_PROCESSOR_CPU,
"memoryGb": DAG_PROCESSOR_MEMORY,
"storageGb": DAG_PROCESSOR_STORAGE
},
"webServer": {
"cpu": WEB_SERVER_CPU,
"memoryGb": WEB_SERVER_MEMORY,
"storageGb": WEB_SERVER_STORAGE
},
"worker": {
"cpu": WORKER_CPU,
"memoryGb": WORKER_MEMORY,
"storageGb": WORKER_STORAGE,
"minCount": WORKERS_MIN,
"maxCount": WORKERS_MAX
}
},
"environmentSize": "ENVIRONMENT_SIZE",
"nodeConfig": {
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Replace:
- SCHEDULER_CPU with the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY with the amount of memory for a scheduler, in GB.
- SCHEDULER_STORAGE with the disk size for a scheduler, in GB.
- SCHEDULER_COUNT with the number of schedulers.
- TRIGGERER_COUNT with the number of triggerers. The default value is 0. You need triggerers if you want to use deferrable operators in your DAGs.
  - For standard resilience environments, use a value between 0 and 10.
  - For highly resilient environments, use 0 or a value between 2 and 10.
  If you use at least one triggerer, you must also specify the TRIGGERER_CPU and TRIGGERER_MEMORY parameters.
- TRIGGERER_CPU with the number of CPUs for a triggerer, in vCPU units. Allowed values: 0.5, 0.75, 1.
- TRIGGERER_MEMORY with the amount of memory for a triggerer. The minimum required memory is equal to the number of CPUs allocated for the triggerers. The maximum allowed value is equal to the number of triggerer CPUs multiplied by 6.5. For example, if you set TRIGGERER_CPU to 1, the minimum value for TRIGGERER_MEMORY is 1 and the maximum value is 6.5.
- DAG_PROCESSOR_CPU with the number of CPUs for the DAG processor, in vCPU units.
- DAG_PROCESSOR_MEMORY with the amount of memory for the DAG processor, in GB.
- DAG_PROCESSOR_STORAGE with the amount of disk space for the DAG processor, in GB.
- WEB_SERVER_CPU with the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY with the amount of memory for the web server, in GB.
- WEB_SERVER_STORAGE with the disk size for the web server, in GB.
- WORKER_CPU with the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY with the amount of memory for a worker, in GB.
- WORKER_STORAGE with the disk size for a worker, in GB.
- WORKERS_MIN with the minimum number of Airflow workers that your environment can run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- WORKERS_MAX with the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
- ENVIRONMENT_SIZE with the environment size: ENVIRONMENT_SIZE_SMALL, ENVIRONMENT_SIZE_MEDIUM, or ENVIRONMENT_SIZE_LARGE.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"workloadsConfig": {
"scheduler": {
"cpu": 2.5,
"memoryGb": 2.5,
"storageGb": 2,
"count": 1
},
"triggerer": {
"cpu": 0.5,
"memoryGb": 0.5,
"count": 1
},
"dagProcessor": {
"count": 1,
"cpu": 0.5,
"memoryGb": 2,
"storageGb": 1
},
"webServer": {
"cpu": 1,
"memoryGb": 2.5,
"storageGb": 2
},
"worker": {
"cpu": 1,
"memoryGb": 2,
"storageGb": 2,
"minCount": 2,
"maxCount": 4
}
},
"environmentSize": "ENVIRONMENT_SIZE_SMALL",
"nodeConfig": {
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Terraform
When you create an environment, the following arguments control the scale and performance parameters of your environment.

In the config block:
- The environment_size field controls the environment size.

In the workloads_config block:
- The scheduler.cpu field specifies the number of CPUs for an Airflow scheduler.
- The scheduler.memory_gb field specifies the amount of memory for an Airflow scheduler.
- The scheduler.storage_gb field specifies the amount of disk space for a scheduler.
- The scheduler.count field specifies the number of schedulers in your environment.
- The triggerer.cpu field specifies the number of CPUs for an Airflow triggerer.
- The triggerer.memory_gb field specifies the amount of memory for an Airflow triggerer.
- The triggerer.count field specifies the number of triggerers in your environment.
- The dag_processor.cpu field specifies the number of CPUs for a DAG processor.
- The dag_processor.memory_gb field specifies the amount of memory for a DAG processor.
- The dag_processor.storage_gb field specifies the amount of disk space for a DAG processor.
- The dag_processor.count field specifies the number of DAG processors.
- The web_server.cpu field specifies the number of CPUs for the Airflow web server.
- The web_server.memory_gb field specifies the amount of memory for the Airflow web server.
- The web_server.storage_gb field specifies the amount of disk space for the Airflow web server.
- The worker.cpu field specifies the number of CPUs for an Airflow worker.
- The worker.memory_gb field specifies the amount of memory for an Airflow worker.
- The worker.storage_gb field specifies the amount of disk space for an Airflow worker.
- The worker.min_count field specifies the minimum number of workers in your environment.
- The worker.max_count field specifies the maximum number of workers in your environment.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
workloads_config {
scheduler {
cpu = SCHEDULER_CPU
memory_gb = SCHEDULER_MEMORY
storage_gb = SCHEDULER_STORAGE
count = SCHEDULER_COUNT
}
triggerer {
count = TRIGGERER_COUNT
cpu = TRIGGERER_CPU
memory_gb = TRIGGERER_MEMORY
}
web_server {
cpu = WEB_SERVER_CPU
memory_gb = WEB_SERVER_MEMORY
storage_gb = WEB_SERVER_STORAGE
}
worker {
cpu = WORKER_CPU
memory_gb = WORKER_MEMORY
storage_gb = WORKER_STORAGE
min_count = WORKERS_MIN
max_count = WORKERS_MAX
}
}
environment_size = "ENVIRONMENT_SIZE"
node_config {
service_account = "SERVICE_ACCOUNT"
}
}
}
Replace:
- ENVIRONMENT_NAME with the name of the environment.
- LOCATION with the region where the environment is located.
- SERVICE_ACCOUNT with the service account for your environment.
- SCHEDULER_CPU with the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY with the amount of memory for a scheduler, in GB.
- SCHEDULER_STORAGE with the disk size for a scheduler, in GB.
- SCHEDULER_COUNT with the number of schedulers.
- TRIGGERER_COUNT with the number of triggerers.
- TRIGGERER_CPU with the number of CPUs for a triggerer, in vCPU units.
- TRIGGERER_MEMORY with the amount of memory for a triggerer, in GB.
- WEB_SERVER_CPU with the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY with the amount of memory for the web server, in GB.
- WEB_SERVER_STORAGE with the disk size for the web server, in GB.
- WORKER_CPU with the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY with the amount of memory for a worker, in GB.
- WORKER_STORAGE with the disk size for a worker, in GB.
- WORKERS_MIN with the minimum number of Airflow workers that your environment can run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- WORKERS_MAX with the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
- ENVIRONMENT_SIZE with the environment size: ENVIRONMENT_SIZE_SMALL, ENVIRONMENT_SIZE_MEDIUM, or ENVIRONMENT_SIZE_LARGE.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
workloads_config {
scheduler {
cpu = 2.5
memory_gb = 2.5
storage_gb = 2
count = 1
}
triggerer {
count = 1
cpu = 0.5
memory_gb = 0.5
}
web_server {
cpu = 1
memory_gb = 2.5
storage_gb = 2
}
worker {
cpu = 1
memory_gb = 2
storage_gb = 2
min_count = 2
max_count = 4
}
}
environment_size = "ENVIRONMENT_SIZE_SMALL"
node_config {
service_account = "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Step 4. (Optional) Specify a zone for the environment's database
You can specify a preferred Cloud SQL zone for your environment.
Console
On the Create environment page:
In the Advanced configuration section, expand the Show advanced configuration item.
In the Airflow database zone list, select a preferred Cloud SQL zone.
gcloud
When you create an environment, the --cloud-sql-preferred-zone
argument
specifies a preferred Cloud SQL zone.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--cloud-sql-preferred-zone SQL_ZONE
Replace the following:
- SQL_ZONE: preferred Cloud SQL zone. This zone must be located in the region where the environment is located.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--cloud-sql-preferred-zone us-central1-a
API
When you create an environment, in the Environment > DatabaseConfig resource, specify the preferred Cloud SQL zone.
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"config": {
"databaseConfig": {
"zone": "SQL_ZONE"
},
"nodeConfig": {
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Replace the following:
- SQL_ZONE: preferred Cloud SQL zone. This zone must be located in the region where the environment is located.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"databaseConfig": {
"zone": "us-central1-a"
},
"nodeConfig": {
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Terraform
When you create an environment, the zone
field in the database_config
block specifies the preferred Cloud SQL zone.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
database_config {
zone = "SQL_ZONE"
}
node_config {
service_account = "SERVICE_ACCOUNT"
}
}
}
Replace the following:
- SQL_ZONE: preferred Cloud SQL zone. This zone must be located in the region where the environment is located.
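Example (the environment name, service account, and zone below are illustrative values for the template above):
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
database_config {
zone = "us-central1-a"
}
node_config {
service_account = "example-account@example-project.iam.gserviceaccount.com"
}
}
}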
Step 5. (Optional) Configure your environment's networking
You can configure Cloud Composer 3 networking in the following ways:
- In a Public IP environment, your environment's Airflow components can access the internet.
- In a Private IP environment, your environment's Airflow components do not have access to the internet.
- Both Private IP and Public IP environments can optionally connect to your VPC network.
- You can specify the internal IP range of your environment. This range can't be changed later.
You can enable access to the internet when installing PyPI packages. For example, your Private IP environment can still install PyPI packages from Python Package Index if you enable this option.
For a Shared VPC environment, you must do additional networking setup for the host project, then create a Public or a Private IP environment in a service project. Follow the instructions on the Configuring Shared VPC page.
Console
Make sure that your networking is configured for the type of environment that you want to create.
In the Network configuration section, expand the Show network configuration item.
If you want to connect your environment to a VPC network, in the Network attachment field, select a network attachment. You can also create a new network attachment. For more information, see Connect an environment to a VPC network.
If you want to create a Private IP environment, in the Networking type section, select the Private IP environment option.
If you want to add network tags, see Add network tags for more information.
gcloud
Make sure that your networking is configured for the type of environment that you want to create.
When you create an environment, the following arguments control the networking parameters. If you omit a parameter, the default value is used.
- --enable-private-environment enables a Private IP environment.
- --network specifies your VPC network ID.
- --subnetwork specifies your VPC subnetwork ID.
- --composer-internal-ipv4-cidr-block specifies the environment's internal IP range. This range is used by Cloud Composer in the tenant project of your environment.
Example (Private IP environment with a connected VPC network)
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--enable-private-environment \
--network NETWORK_ID \
--subnetwork SUBNETWORK_ID
Replace:
- NETWORK_ID with your VPC network ID.
- SUBNETWORK_ID with your VPC subnetwork ID.
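Example (the network and subnetwork names are illustrative; full resource paths are one accepted format for these IDs):
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--enable-private-environment \
--network projects/example-project/global/networks/example-network \
--subnetwork projects/example-project/regions/us-central1/subnetworks/example-subnetwork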
Step 6. (Optional) Add network tags
Network tags are applied to all node VMs in your environment's cluster. Tags are used to identify valid sources or targets for network firewalls. Each tag within the list must comply with RFC 1035.
For example, you might want to add network tags if you plan to restrict traffic for a Private IP environment with firewall rules.
Console
On the Create environment page:
- Locate the Network configuration section.
- In the Network tags field, enter network tags for your environment.
gcloud
When you create an environment, the following argument controls network tags:
- --tags specifies a comma-separated list of network tags applied to all node VMs.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--tags TAGS
Replace:
- TAGS with a comma-separated list of network tags.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--tags group1,production
API
When you create an environment, in the Environment > EnvironmentConfig resource, specify network tags for your environment.
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"config": {
"nodeConfig": {
"tags": [
"TAG"
],
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Replace:
- TAG with a network tag.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"nodeConfig": {
"tags": [
"group1",
"production"
],
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Terraform
When you create an environment, the following field defines network tags for your environment:
- The tags field in the node_config block specifies a list of network tags applied to all node VMs.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
node_config {
tags = ["TAGS"]
service_account = "SERVICE_ACCOUNT"
}
}
}
Replace:
- TAGS with a comma-separated list of network tags.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
node_config {
tags = ["group1","production"]
service_account = "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Step 7. (Optional) Configure web server network access
The Airflow web server access parameters do not depend on the type of your environment. Instead, you can configure web server access separately. For example, a Private IP environment can still have the Airflow UI accessible from the internet.
You can't configure the allowed IP ranges using private IP addresses.
Console
On the Create environment page:
In the Network configuration section, expand the Show network configuration item.
In the Web server network access control section:
To provide access to the Airflow web server from all IP addresses, select Allow access from all IP addresses.
To restrict access only to specific IP ranges, select Allow access only from specific IP addresses. In the IP range field, specify an IP range in the CIDR notation. In the Description field, specify an optional description for this range. If you want to specify more than one range, click Add IP range.
To forbid access for all IP addresses, select Allow access only from specific IP addresses and click Delete item next to the empty range entry.
gcloud
When you create an environment, the following arguments control the web server access level:
- --web-server-allow-all provides access to Airflow from all IP addresses. This is the default option.
- --web-server-allow-ip restricts access only to specific source IP ranges. To specify several IP ranges, use this argument multiple times.
- --web-server-deny-all forbids access for all IP addresses.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--web-server-allow-ip ip_range=WS_IP_RANGE,description=WS_RANGE_DESCRIPTION
Replace:
- WS_IP_RANGE with the IP range, in CIDR notation, that can access the Airflow UI.
- WS_RANGE_DESCRIPTION with the description of the IP range.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--web-server-allow-ip ip_range=192.0.2.0/24,description="office net 1" \
--web-server-allow-ip ip_range=192.0.4.0/24,description="office net 3"
API
When you create an environment, in the Environment > EnvironmentConfig resource, specify web server access parameters.
- To provide access to the Airflow web server from all IP addresses, omit webServerNetworkAccessControl.
- To restrict access only to specific IP ranges, specify one or more ranges in allowedIpRanges.
- To forbid access for all IP addresses, add allowedIpRanges and make it an empty list. Do not specify IP ranges in it.
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"config": {
"webServerNetworkAccessControl": {
"allowedIpRanges": [
{
"value": "WS_IP_RANGE",
"description": "WS_RANGE_DESCRIPTION"
}
]
},
"nodeConfig": {
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Replace:
- WS_IP_RANGE with the IP range, in CIDR notation, that can access the Airflow UI.
- WS_RANGE_DESCRIPTION with the description of the IP range.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"webServerNetworkAccessControl": {
"allowedIpRanges": [
{
"value": "192.0.2.0/24",
"description": "office net 1"
},
{
"value": "192.0.4.0/24",
"description": "office net 3"
}
]
},
"nodeConfig": {
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}
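By contrast, a request that forbids access from all IP addresses specifies allowedIpRanges as an empty list, as described above:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"webServerNetworkAccessControl": {
"allowedIpRanges": []
},
"nodeConfig": {
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}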
Terraform
When you create an environment, the allowed_ip_range block in the web_server_network_access_control block contains IP ranges that can access the web server.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
web_server_network_access_control {
allowed_ip_range {
value = "WS_IP_RANGE"
description = "WS_RANGE_DESCRIPTION"
}
}
node_config {
service_account = "SERVICE_ACCOUNT"
}
}
}
Replace:
- WS_IP_RANGE with the IP range, in CIDR notation, that can access the Airflow UI.
- WS_RANGE_DESCRIPTION with the description of the IP range.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
web_server_network_access_control {
allowed_ip_range {
value = "192.0.2.0/24"
description = "office net 1"
}
allowed_ip_range {
value = "192.0.4.0/24"
description = "office net 3"
}
}
node_config {
service_account = "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Step 8. (Optional) Specify Airflow configuration overrides and environment variables
You can set up Airflow configuration overrides and environment variables when you create an environment. As an alternative, you can do it later, after your environment is created.
Some Airflow configuration options are blocked and you can't override them.
For the list of available Airflow configuration options, see Configuration reference for Airflow 2 and Airflow 1.10.*
To specify Airflow configuration overrides and environment variables:
Console
On the Create environment page:
In the Environment variables section, click Add environment variable.
Enter the Name and Value for the environment variable.
In the Airflow configuration overrides section, click Add Airflow configuration override.
Enter the Section, Key, and Value for the configuration option override.
For example:
Section: webserver, Key: dag_orientation, Value: TB
gcloud
When you create an environment, the following arguments control environment variables and Airflow configuration overrides:
- --env-variables specifies a comma-separated list of environment variables. Variable names may contain upper and lowercase letters, digits, and underscores, but they may not begin with a digit.
- --airflow-configs specifies a comma-separated list of keys and values for Airflow configuration overrides.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--env-variables ENV_VARS \
--airflow-configs CONFIG_OVERRIDES
Replace:
- ENV_VARS with a list of comma-separated NAME=VALUE pairs for environment variables.
- CONFIG_OVERRIDES with a list of comma-separated SECTION-KEY=VALUE pairs for configuration overrides. Separate the name of the configuration section with a - symbol, followed by the key name. For example: core-dags_are_paused_at_creation.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--env-variables SENDGRID_MAIL_FROM=user@example.com,SENDGRID_API_KEY=example-key \
--airflow-configs core-dags_are_paused_at_creation=True,webserver-dag_orientation=TB
API
When you create an environment, in the Environment > EnvironmentConfig resource, specify environment variables and Airflow configuration overrides.
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"config": {
"softwareConfig": {
"airflowConfigOverrides": {
"SECTION-KEY": "OVERRIDE_VALUE"
},
"envVariables": {
"VAR_NAME": "VAR_VALUE"
}
},
"nodeConfig": {
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Replace:
- SECTION with the section in the configuration file where the Airflow configuration option is located.
- KEY with the name of the Airflow configuration option.
- OVERRIDE_VALUE with a value of the Airflow configuration option.
- VAR_NAME with the name of the environment variable.
- VAR_VALUE with the value of the environment variable.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"softwareConfig": {
"airflowConfigOverrides": {
"core-dags_are_paused_at_creation": "True",
"webserver-dag_orientation": "TB"
},
"envVariables": {
"SENDGRID_MAIL_FROM": "user@example.com",
"SENDGRID_API_KEY": "example-key"
}
},
"nodeConfig": {
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Terraform
When you create an environment, the following blocks control environment variables and Airflow configuration overrides:
- The env_variables block in the software_config block specifies environment variables. Variable names may contain upper and lowercase letters, digits, and underscores, but they may not begin with a digit.
- The airflow_config_overrides block in the software_config block specifies Airflow configuration overrides.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
software_config {
airflow_config_overrides = {
SECTION-KEY = "OVERRIDE_VALUE"
}
env_variables = {
VAR_NAME = "VAR_VALUE"
}
}
node_config {
service_account = "SERVICE_ACCOUNT"
}
}
}
Replace:
- SECTION with the section in the configuration file where the Airflow configuration option is located.
- KEY with the name of the Airflow configuration option.
- OVERRIDE_VALUE with a value of the Airflow configuration option.
- VAR_NAME with the name of the environment variable.
- VAR_VALUE with the value of the environment variable.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
software_config {
airflow_config_overrides = {
core-dags_are_paused_at_creation = "True"
webserver-dag_orientation = "TB"
}
env_variables = {
SENDGRID_MAIL_FROM = "user@example.com"
SENDGRID_API_KEY = "example-key"
}
}
node_config {
service_account = "example-account@example-project.iam.gserviceaccount.com"
}
}
}
Step 9. (Optional) Specify maintenance windows
Default maintenance windows in Cloud Composer 3 are defined in the following way:
- All times are in the local time zone of the region where your environment is located, but with daylight saving time ignored.
- On Tuesday, Wednesday, Thursday, and Friday maintenance windows are from 00:00:00 to 02:00:00.
- On Saturday, Sunday, and Monday maintenance windows are from 00:00:00 to 04:00:00.
To specify custom maintenance windows for your environment:
Console
On the Create environment page:
Locate the Maintenance windows section.
In the Timezone drop-down list, choose a time zone for maintenance windows.
Set Start time, Days, and Length, so that combined time for the specified schedule is at least 12 hours in a 7-day rolling window. For example, a period of 4 hours every Monday, Wednesday, and Friday provides the required amount of time.
gcloud
The following arguments define maintenance window parameters:
- --maintenance-window-start sets the start time of a maintenance window.
- --maintenance-window-end sets the end time of a maintenance window.
- --maintenance-window-recurrence sets the maintenance window recurrence.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--maintenance-window-start 'DATETIME_START' \
--maintenance-window-end 'DATETIME_END' \
--maintenance-window-recurrence 'MAINTENANCE_RECURRENCE'
Replace:
- ENVIRONMENT_NAME with the name of the environment.
- DATETIME_START with the start date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored.
- DATETIME_END with the end date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored. The specified date and time must be after the start date.
- MAINTENANCE_RECURRENCE with an RFC 5545 RRULE for maintenance window recurrence. Cloud Composer supports two formats:
  - The FREQ=DAILY format specifies a daily recurrence.
  - The FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA format specifies a recurrence on selected days of the week.
The following example specifies a 6-hour maintenance window between 01:00 and 07:00 (UTC) on Wednesdays, Saturdays, and Sundays. The 1 January, 2023 date is ignored.
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--maintenance-window-start '2023-01-01T01:00:00Z' \
--maintenance-window-end '2023-01-01T07:00:00Z' \
--maintenance-window-recurrence 'FREQ=WEEKLY;BYDAY=SU,WE,SA'
API
When you create an environment, in the Environment > EnvironmentConfig resource, specify maintenance window parameters:
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"config": {
"maintenanceWindow": {
"startTime": "DATETIME_START",
"endTime": "DATETIME_END",
"recurrence": "MAINTENANCE_RECURRENCE"
},
"nodeConfig": {
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Replace:
- DATETIME_START with the start date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored.
- DATETIME_END with the end date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored. The specified date and time must be after the start date.
- MAINTENANCE_RECURRENCE with an RFC 5545 RRULE for maintenance window recurrence. Cloud Composer supports two formats:
  - The FREQ=DAILY format specifies a daily recurrence.
  - The FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA format specifies a recurrence on selected days of the week.
The following example specifies a 6-hour maintenance window between 01:00 and 07:00 (UTC) on Wednesdays, Saturdays, and Sundays. The 1 January, 2023 date is ignored.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"maintenanceWindow": {
"startTime": "2023-01-01T01:00:00Z",
"endTime": "2023-01-01T07:00:00Z",
"recurrence": "FREQ=WEEKLY;BYDAY=SU,WE,SA"
},
"nodeConfig": {
"serviceAccount": "SERVICE_ACCOUNT"
}
}
}
Terraform
The maintenance_window
block specifies the maintenance windows for your
environment:
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
maintenance_window {
start_time = "DATETIME_START"
end_time = "DATETIME_END"
recurrence = "MAINTENANCE_RECURRENCE"
}
node_config {
service_account = "SERVICE_ACCOUNT"
}
}
}
Replace:
- DATETIME_START with the start date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored.
- DATETIME_END with the end date and time in the date/time input format. Only the specified time of the day is used; the specified date is ignored. The specified date and time must be after the start date.
- MAINTENANCE_RECURRENCE with an RFC 5545 RRULE for maintenance window recurrence. Cloud Composer supports two formats:
  - The FREQ=DAILY format specifies a daily recurrence.
  - The FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA format specifies a recurrence on selected days of the week.
The following example specifies a 6-hour maintenance window between 01:00 and 07:00 (UTC) on Wednesdays, Saturdays, and Sundays. The 1 January, 2023 date is ignored.
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
maintenance_window {
start_time = "2023-01-01T01:00:00Z"
end_time = "2023-01-01T07:00:00Z"
recurrence = "FREQ=WEEKLY;BYDAY=SU,WE,SA"
}
}
}
Step 10. (Optional) Data lineage integration
Data lineage is a Dataplex feature that lets you track data movement.
Data lineage integration is available in all versions of Cloud Composer 3.
Data lineage integration is automatically enabled in a new Cloud Composer environment if the following conditions are met:
Data Lineage API is enabled in your project. For more information, see Enabling Data Lineage API in Dataplex documentation.
A custom Lineage Backend is not configured in Airflow.
You can disable data lineage integration when you create an environment, for example, if you want to override the automatic behavior, or if you prefer to enable data lineage later, after the environment is created.
Console
To disable Data lineage integration, on the Create environment page:
In the Advanced configuration section, expand the Show advanced configuration item.
In the Dataplex data lineage integration section, select Disable integration with Dataplex data lineage.
gcloud
When you create an environment, the --disable-cloud-data-lineage-integration
argument disables the data lineage integration.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--disable-cloud-data-lineage-integration
Replace:
- ENVIRONMENT_NAME with the name of the environment.
- LOCATION with the region where the environment is located.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--disable-cloud-data-lineage-integration
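API
To disable data lineage integration through the API, specify it in the Environment > EnvironmentConfig resource. The following request body is a sketch that assumes the softwareConfig.cloudDataLineageIntegration field of the Cloud Composer API; verify the field name in the API reference before relying on it.
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
// Assumption: cloudDataLineageIntegration is the SoftwareConfig field that controls this integration.
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"config": {
"softwareConfig": {
"cloudDataLineageIntegration": {
"enabled": false
}
},
"nodeConfig": {
"serviceAccount": "example-account@example-project.iam.gserviceaccount.com"
}
}
}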
Step 11. (Optional) Configure data encryption (CMEK)
By default, data in your environment is encrypted with a key provided by Google.
To use customer-managed encryption keys (CMEK) to encrypt data in your environment, follow the instructions outlined in Using customer-managed encryption keys.
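For reference, the gcloud CLI exposes this option through the --kms-key argument. The following command is a sketch with an illustrative key name; the key must be located in the same region as the environment:
# The key ring and key names below are placeholders.
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--kms-key projects/example-project/locations/us-central1/keyRings/example-ring/cryptoKeys/example-key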
Step 12. (Optional) Use a custom environment's bucket
When you create an environment, Cloud Composer creates a bucket for your environment automatically.
As an alternative, you can specify a custom Cloud Storage bucket from your project. Your environment uses this bucket in the same way as the automatically created bucket.
To use a custom environment bucket, follow the instructions outlined in Use a custom environment's bucket.
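For reference, the gcloud CLI exposes this option through the --storage-bucket argument. The following command is a sketch that assumes an existing bucket named example-custom-bucket in your project:
# The bucket name below is a placeholder for an existing bucket.
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--storage-bucket example-custom-bucket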
Step 13. (Optional) Specify environment labels
You can assign labels to your environments to break down billing costs based on these labels.
Console
On the Create environment page, in the Labels section:
Click Add label.
In Key and Value fields, specify key and value pairs for the environment labels.
gcloud
When you create an environment, the --labels
argument specifies a comma-separated list of keys and values with environment labels.
gcloud composer environments create ENVIRONMENT_NAME \
--location LOCATION \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "SERVICE_ACCOUNT" \
--labels LABELS
Replace:
- LABELS with a list of comma-separated KEY=VALUE pairs for environment labels.
Example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-3-airflow-2.9.3-build.7 \
--service-account "example-account@example-project.iam.gserviceaccount.com" \
--labels owner=engineering-team,env=production
API
When you create an environment, in the Environment resource, specify labels for your environment.
{
"name": "projects/PROJECT_ID/locations/LOCATION/environments/ENVIRONMENT_NAME",
"labels": {
"LABEL_KEY": "LABEL_VALUE"
}
}
Replace:
- LABEL_KEY with a key of the environment label.
- LABEL_VALUE with a value of the environment label.
Example:
// POST https://composer.googleapis.com/v1/{parent=projects/*/locations/*}/environments
{
"name": "projects/example-project/locations/us-central1/environments/example-environment",
"labels": {
"owner": "engineering-team",
"env": "production"
}
}
Terraform
When you create an environment, specify labels in the labels block (outside of the config block).
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
labels = {
LABEL_KEY = "LABEL_VALUE"
}
}
Replace:
- LABEL_KEY with a key of the environment label.
- LABEL_VALUE with a value of the environment label.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
labels = {
owner = "engineering-team"
env = "production"
}
}
What's next
- Troubleshooting environment creation
- Configuring Shared VPC
- Configuring VPC Service Controls
- Adding and updating DAGs
- Accessing Airflow UI
- Updating and deleting environments
- About Cloud Composer versions