Update environments


This page explains how to update an environment.

About update operations

When you change parameters of your environment, such as specifying new scaling and performance parameters, or installing custom PyPI packages, your environment updates.

After this operation is completed, changes become available in your environment.

For a single Cloud Composer environment, you can start only one update operation at a time. You must wait for an update operation to complete before starting another environment operation.

How updates affect running Airflow tasks

When you run an update operation, such as installing custom PyPI packages, all Airflow schedulers and workers in your environment restart, and all currently running tasks are terminated. After the update operation is completed, Airflow schedules these tasks for a retry, depending on the way you configure retries for your DAGs.

Updating with Terraform

Run terraform plan before terraform apply to see if Terraform creates a new environment instead of updating it.
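As a sketch, the check looks like the following; the plan output markers shown in the comments are standard Terraform conventions:

```shell
# Preview the changes Terraform would make before applying them.
terraform plan

# In the plan output, look for resources marked with "-/+" or annotated
# with "forces replacement" -- these indicate that Terraform would delete
# and recreate the environment instead of updating it in place.

# Apply only after confirming the environment is updated in place:
terraform apply
```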

Before you begin

  • Check that your account, the service account of your environment, and the Cloud Composer Service Agent account in your project have the required permissions.

  • The gcloud composer environments update command terminates when the operation is finished. You can use the --async flag to avoid waiting for the operation to complete.
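For example, a minimal sketch of an asynchronous update; the package name and version are illustrative, not a recommendation:

```shell
# Start an update without waiting for the operation to complete.
# The PyPI package and version here are illustrative.
gcloud composer environments update ENVIRONMENT_NAME \
  --location LOCATION \
  --update-pypi-package "scikit-learn==1.4.2" \
  --async

# Check on pending and completed operations later:
gcloud composer operations list --locations LOCATION
```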

Update environments

For more information about updating your environment, see the documentation pages about specific update operations.

View environment details

Console

  1. In Google Cloud console, go to the Environments page.

    Go to Environments

  2. In the list of environments, click the name of your environment. The Environment details page opens.

gcloud

Run the following gcloud command:

gcloud composer environments describe ENVIRONMENT_NAME \
  --location LOCATION


Replace the following:

  • ENVIRONMENT_NAME with the name of the environment.
  • LOCATION with the region where the environment is located.
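A filled-in invocation might look like the following; the environment name and region are illustrative:

```shell
# Illustrative values; substitute your own environment name and region.
gcloud composer environments describe example-environment \
  --location us-central1

# To extract a single field, such as the environment's state:
gcloud composer environments describe example-environment \
  --location us-central1 \
  --format="value(state)"
```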

API

Construct an environments.get API request.


GET https://composer.googleapis.com/v1/projects/example-project/
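The request can be issued with curl, for example; the project, location, and environment segments below are placeholders:

```shell
# Illustrative request; replace the project, location, and environment
# path segments with your own values.
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://composer.googleapis.com/v1/projects/example-project/locations/us-central1/environments/example-environment"
```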

Terraform

Run the terraform state show command for your environment's resource.

The name of your environment's Terraform resource might be different than the name of your environment.

terraform state show google_composer_environment.RESOURCE_NAME


Replace RESOURCE_NAME with the name of your environment's resource.
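If you are unsure of the resource name, you can list the resources in your state first; the resource name in the second command is illustrative:

```shell
# List Composer environment resources tracked in the Terraform state:
terraform state list | grep google_composer_environment

# Then inspect the one that matches your environment:
terraform state show google_composer_environment.example_environment
```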

Upgrading the machine type for GKE nodes

You can manually upgrade the machine type for your environment's GKE cluster by deleting the existing default-pool and creating a new default-pool with the desired machine type.

We recommend that you specify a machine type suited to the type of computing that occurs in your Cloud Composer environment when you create the environment.

If you are running jobs that perform resource-intensive computation, you might want to use GKE Operators.

After an upgrade, the previous machine type is still listed in your environment's details. For example, the Environment details page does not reflect the new machine type.


To upgrade the machine type:

  1. In Google Cloud console, go to the Environments page.

    Go to Environments

  2. In the list of environments, click the name of your environment. The Environment details page opens.

  3. Get information about the default node pool:

    1. Go to the Environment configuration tab.

    2. Click the view cluster details link.

    3. On the Clusters page in the Nodes section, click default-pool.

    4. Note all the information for default-pool on the Node pool details page. You use this information to create a new default node pool for your environment.

  4. To delete default-pool:

    1. On the Node pool details page, click the back arrow to return to the Clusters page for your environment.

    2. In the Node Pools section, click the trash icon for the default-pool. Then click Delete to confirm the operation.

  5. To create the new default-pool:

    1. On the Clusters page, click Add node pool.

    2. For Name, enter default-pool. You must use the default-pool name so that workflows in your environment can run in this pool.

    3. Enter the Size and Nodes settings.

    4. (Only for default Compute Engine service accounts) For Access scopes, select Allow full access to all Cloud APIs.

    5. Click Save.

  6. If you notice that workloads are distributed unevenly, scale down the airflow-worker deployment to zero and scale up again.
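Scaling the deployment down and back up can be done with kubectl against the environment's GKE cluster. The following is a sketch; the cluster name, zone, namespace, and replica count are assumptions to adapt to your environment:

```shell
# Connect kubectl to the environment's cluster
# (cluster name and zone are illustrative).
gcloud container clusters get-credentials example-cluster \
  --zone us-central1-a

# Find the namespace that contains the airflow-worker deployment:
kubectl get deployments --all-namespaces | grep airflow-worker

# Scale the workers down to zero, then back up. The replica count is
# illustrative; restore your environment's original worker count.
kubectl scale deployment airflow-worker --replicas=0 \
  --namespace NAMESPACE
kubectl scale deployment airflow-worker --replicas=3 \
  --namespace NAMESPACE
```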

What's next