This page describes how to scale Cloud Composer environments.
Other pages about scaling:
- For a guide about selecting optimal scale and performance parameters for your environment, see Optimize environment performance and costs.
- For information about how environment scaling works, see Environment scaling.
Scale vertically and horizontally
Options for horizontal scaling:
- Adjust the minimum and maximum number of workers.
- Adjust the number of schedulers, DAG processors, and triggerers.
Options for vertical scaling:
- Adjust worker, scheduler, triggerer, DAG processor, and web server scale and performance parameters.
- Adjust the environment size.
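These adjustments can generally be combined in a single update. The following sketch uses the gcloud flags described later on this page, together with the example environment name and region used throughout this page, to change worker autoscaling limits, the scheduler count, and the environment size in one command; adjust the values to your own environment.

```bash
# Combined horizontal and vertical scaling in one update operation
# (each update command triggers a single environment update).
gcloud composer environments update example-environment \
    --location us-central1 \
    --min-workers 2 \
    --max-workers 6 \
    --scheduler-count 2 \
    --environment-size medium
```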
Resource limits
Component | Minimum count | Maximum count | Minimum vCPU | Maximum vCPU | vCPU minimum step | Minimum memory (GB) | Maximum memory (GB) | Memory minimum step (GB) | Minimum memory per 1 vCPU (GB) | Maximum memory per 1 vCPU (GB) | Minimum storage (GB) | Maximum storage (GB) | Storage minimum step (GB) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Schedulers | 1 | 3 | 0.5 | 1 | 0.5 | 0.5 | 8 | 0.25 | 1 | 8 | 0 | 100 | 1 |
Triggerers | 0 | 10 | 0.5 | 1 | 0.5 | 0.5 | 8 | 0.25 | 1 | 8 | - | - | - |
Web server | - | - | 0.5 | 32 | 0.5, 1, or a multiple of 2 | 1 | 256 | 0.25 | 1 | 8 | 0 | 100 | 1 |
Workers | 1 | 100 | 0.5 | 32 | 0.5, 1, or a multiple of 2 | 1 | 256 | 0.25 | 1 | 8 | 0 | 100 | 1 |
DAG processors | 1 | 3 | 0.5 | 32 | 0.5, 1, or a multiple of 2 | 1 | 256 | 0.25 | 1 | 8 | 0 | 100 | 1 |
Adjust worker parameters
You can set the minimum and maximum number of workers for your environment. Cloud Composer automatically scales your environment within the set limits. You can adjust these limits at any time.
You can specify the amount of CPUs, memory, and disk space used by Airflow workers in your environment. In this way, you can increase performance of your environment, in addition to horizontal scaling provided by using multiple workers.
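Before changing these limits, you might want to check the current worker configuration. As a minimal sketch, assuming the example environment name and region used on this page, you can read the current values from the environment's configuration with gcloud:

```bash
# Print the current worker workload configuration
# (CPU, memory, storage, minimum and maximum worker count).
gcloud composer environments describe example-environment \
    --location us-central1 \
    --format="value(config.workloadsConfig.worker)"
```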
Console
In the Google Cloud console, go to the Environments page.
In the list of environments, click the name of your environment. The Environment details page opens.
Go to the Environment configuration tab.
In the Resources > Workloads configuration item, click Edit.
In the Workloads configuration pane, adjust the parameters for Airflow workers:
In the Minimum number of workers field, specify the number of Airflow workers that your environment must always run. The number of workers in your environment does not go below this number during the regular operation of the environment, even if a lower number of workers can handle the load.
In the Maximum number of workers field, specify the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
In the CPU, Memory, and Storage fields, specify the number of CPUs, memory, and storage for Airflow workers. Each worker uses the specified amount of resources.
Click Save.
gcloud
The following Airflow worker parameters are available:
- --min-workers: the number of Airflow workers that your environment must always run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- --max-workers: the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
- --worker-cpu: the number of CPUs for an Airflow worker.
- --worker-memory: the amount of memory for an Airflow worker.
- --worker-storage: the amount of disk space for an Airflow worker.
Run the following Google Cloud CLI command:
gcloud composer environments update ENVIRONMENT_NAME \
--location LOCATION \
--min-workers WORKERS_MIN \
--max-workers WORKERS_MAX \
--worker-cpu WORKER_CPU \
--worker-memory WORKER_MEMORY \
--worker-storage WORKER_STORAGE
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- WORKERS_MIN: the minimum number of Airflow workers.
- WORKERS_MAX: the maximum number of Airflow workers.
- WORKER_CPU: the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY: the amount of memory for a worker.
- WORKER_STORAGE: the disk size for a worker.
Example:
gcloud composer environments update example-environment \
--location us-central1 \
--min-workers 2 \
--max-workers 6 \
--worker-cpu 1 \
--worker-memory 2 \
--worker-storage 2
API
Construct an environments.patch API request. In this request:

- In the updateMask parameter, specify the fields that you want to update. For example, to update all parameters for workers, specify the config.workloadsConfig.worker.cpu,config.workloadsConfig.worker.memoryGb,config.workloadsConfig.worker.storageGb,config.workloadsConfig.worker.minCount,config.workloadsConfig.worker.maxCount mask.
- In the request body, specify the new worker parameters.
"config": {
"workloadsConfig": {
"worker": {
"minCount": WORKERS_MIN,
"maxCount": WORKERS_MAX,
"cpu": WORKER_CPU,
"memoryGb": WORKER_MEMORY,
"storageGb": WORKER_STORAGE
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- WORKERS_MIN: the minimum number of Airflow workers.
- WORKERS_MAX: the maximum number of Airflow workers.
- WORKER_CPU: the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY: the amount of memory for a worker, in GB.
- WORKER_STORAGE: the disk size for a worker, in GB.
Example:
// PATCH https://composer.googleapis.com/v1/projects/example-project/
// locations/us-central1/environments/example-environment?updateMask=
// config.workloadsConfig.worker.minCount,
// config.workloadsConfig.worker.maxCount,
// config.workloadsConfig.worker.cpu,
// config.workloadsConfig.worker.memoryGb,
// config.workloadsConfig.worker.storageGb
"config": {
"workloadsConfig": {
"worker": {
"minCount": 2,
"maxCount": 6,
"cpu": 1,
"memoryGb": 2,
"storageGb": 2
}
}
}
Terraform
The following fields in the workloads_config.worker block control the Airflow worker parameters. Each worker uses the specified amount of resources.

- worker.min_count: the number of Airflow workers that your environment must always run. The number of workers in your environment does not go below this number, even if a lower number of workers can handle the load.
- worker.max_count: the maximum number of Airflow workers that your environment can run. The number of workers in your environment does not go above this number, even if a higher number of workers is required to handle the load.
- worker.cpu: the number of CPUs for an Airflow worker.
- worker.memory_gb: the amount of memory for an Airflow worker.
- worker.storage_gb: the amount of disk space for an Airflow worker.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
workloads_config {
worker {
min_count = WORKERS_MIN
max_count = WORKERS_MAX
cpu = WORKER_CPU
memory_gb = WORKER_MEMORY
storage_gb = WORKER_STORAGE
}
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- WORKERS_MIN: the minimum number of Airflow workers.
- WORKERS_MAX: the maximum number of Airflow workers.
- WORKER_CPU: the number of CPUs for a worker, in vCPU units.
- WORKER_MEMORY: the amount of memory for a worker, in GB.
- WORKER_STORAGE: the disk size for a worker, in GB.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
workloads_config {
worker {
min_count = 2
max_count = 6
cpu = 1
memory_gb = 2
storage_gb = 2
}
}
}
}
Adjust scheduler parameters
Your environment can run more than one Airflow scheduler at the same time. Use multiple schedulers to distribute load between several scheduler instances for better performance and reliability.
You can have up to 3 schedulers in your environment.
Increasing the number of schedulers does not always improve Airflow performance. For example, having only one scheduler might provide better performance than having two. This might happen when the extra scheduler is not utilized, and thus consumes resources of your environment without contributing to overall performance. The actual scheduler performance depends on the number of Airflow workers, the number of DAGs and tasks that run in your environment, and the configuration of both Airflow and the environment.
We recommend starting with two schedulers and then monitoring the performance of your environment. If you change the number of schedulers, you can always scale your environment back to the original number of schedulers.
For more information about configuring multiple schedulers, see Airflow documentation.
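One rough way to verify that all scheduler instances are alive after changing their number is to query Airflow's job records through the Airflow CLI. The following sketch runs the Airflow CLI through gcloud and assumes the example environment name and region used elsewhere on this page.

```bash
# Check recent SchedulerJob heartbeats; --allow-multiple permits
# more than one alive scheduler job in the result.
gcloud composer environments run example-environment \
    --location us-central1 \
    jobs check -- --job-type SchedulerJob --allow-multiple --limit 100
```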
You can specify the amount of CPUs, memory, and disk space used by Airflow schedulers in your environment. In this way, you can increase performance of your environment, in addition to horizontal scaling provided by using multiple schedulers.
Console
In the Google Cloud console, go to the Environments page.
In the list of environments, click the name of your environment. The Environment details page opens.
Go to the Environment configuration tab.
In the Resources > Workloads configuration item, click Edit.
In the Workloads configuration pane, adjust the parameters for Airflow schedulers:
In the Number of schedulers drop-down list, select the number of schedulers for your environment.
In the CPU, Memory, and Storage fields, specify the number of CPUs, memory, and storage for Airflow schedulers. Each scheduler uses the specified amount of resources.
Click Save.
gcloud
The following Airflow scheduler parameters are available:
- --scheduler-count: the number of schedulers in your environment.
- --scheduler-cpu: the number of CPUs for an Airflow scheduler.
- --scheduler-memory: the amount of memory for an Airflow scheduler.
- --scheduler-storage: the amount of disk space for an Airflow scheduler.
Run the following Google Cloud CLI command:
gcloud composer environments update ENVIRONMENT_NAME \
--location LOCATION \
--scheduler-cpu SCHEDULER_CPU \
--scheduler-memory SCHEDULER_MEMORY \
--scheduler-storage SCHEDULER_STORAGE \
--scheduler-count SCHEDULER_COUNT
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- SCHEDULER_CPU: the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY: the amount of memory for a scheduler.
- SCHEDULER_STORAGE: the disk size for a scheduler.
- SCHEDULER_COUNT: the number of schedulers.
Example:
gcloud composer environments update example-environment \
--location us-central1 \
--scheduler-cpu 0.5 \
--scheduler-memory 2.5 \
--scheduler-storage 2 \
--scheduler-count 2
API
Construct an environments.patch API request. In this request:

- In the updateMask parameter, specify the config.workloadsConfig.scheduler mask to update all scheduler parameters or only the number of schedulers. You can also update individual scheduler parameters, except count, by specifying a mask. For example, config.workloadsConfig.scheduler.cpu.
- In the request body, specify the new scheduler parameters.
"config": {
"workloadsConfig": {
"scheduler": {
"cpu": SCHEDULER_CPU,
"memoryGb": SCHEDULER_MEMORY,
"storageGb": SCHEDULER_STORAGE,
"count": SCHEDULER_COUNT
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- SCHEDULER_CPU: the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY: the amount of memory for a scheduler, in GB.
- SCHEDULER_STORAGE: the disk size for a scheduler, in GB.
- SCHEDULER_COUNT: the number of schedulers.
Example:
// PATCH https://composer.googleapis.com/v1/projects/example-project/
// locations/us-central1/environments/example-environment?updateMask=
// config.workloadsConfig.scheduler
"config": {
"workloadsConfig": {
"scheduler": {
"cpu": 0.5,
"memoryGb": 2.5,
"storageGb": 2,
"count": 2
}
}
}
Terraform
The following fields in the workloads_config.scheduler block control the Airflow scheduler parameters. Each scheduler uses the specified amount of resources.

- scheduler.count: the number of schedulers in your environment.
- scheduler.cpu: the number of CPUs for an Airflow scheduler.
- scheduler.memory_gb: the amount of memory for an Airflow scheduler.
- scheduler.storage_gb: the amount of disk space for a scheduler.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
workloads_config {
scheduler {
cpu = SCHEDULER_CPU
memory_gb = SCHEDULER_MEMORY
storage_gb = SCHEDULER_STORAGE
count = SCHEDULER_COUNT
}
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- SCHEDULER_CPU: the number of CPUs for a scheduler, in vCPU units.
- SCHEDULER_MEMORY: the amount of memory for a scheduler, in GB.
- SCHEDULER_STORAGE: the disk size for a scheduler, in GB.
- SCHEDULER_COUNT: the number of schedulers.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
workloads_config {
scheduler {
cpu = 0.5
memory_gb = 1.875
storage_gb = 1
count = 2
}
}
}
}
Adjust triggerer parameters
You can set the number of triggerers to zero, but you need at least one triggerer instance in your environment (or at least two in highly resilient environments) to use deferrable operators in your DAGs.
Depending on your environment's resilience mode, there are different possible configurations for the number of triggerers:
- Standard resilience: you can run up to 10 triggerers.
- High resilience: at least 2 triggerers, up to a maximum of 10.
Even if the number of triggerers is set to zero, a triggerer pod definition is created and visible in your environment's cluster, but no actual triggerer workloads are run.
You can also specify the amount of CPUs, memory, and disk space used by Airflow triggerers in your environment. In this way, you can increase performance of your environment, in addition to horizontal scaling provided by using multiple triggerers.
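Because the allowed triggerer count depends on the environment's resilience mode, it can help to check the current mode and triggerer configuration first. A minimal sketch, assuming the example environment name and region used on this page (the resilienceMode field may be empty for standard resilience environments):

```bash
# Print the resilience mode and the current triggerer workload configuration.
gcloud composer environments describe example-environment \
    --location us-central1 \
    --format="value(config.resilienceMode,config.workloadsConfig.triggerer)"
```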
Console
In the Google Cloud console, go to the Environments page.
In the list of environments, click the name of your environment. The Environment details page opens.
Go to the Environment configuration tab.
In the Resources > Workloads configuration item, click Edit.
In the Workloads configuration pane, adjust the parameters for Airflow triggerers:
In the Triggerer section, in the Number of triggerers field, enter the number of triggerers in your environment.
If you set at least one triggerer for your environment, also use the CPU and Memory fields to configure resource allocation for your triggerers.
In the CPU and Memory fields, specify the number of CPUs and the amount of memory for Airflow triggerers. Each triggerer uses the specified amount of resources.
Click Save.
gcloud
The following Airflow triggerer parameters are available:
- --triggerer-count: the number of triggerers in your environment.
  - For standard resilience environments, use a value between 0 and 10.
  - For highly resilient environments, use 0, or a value between 2 and 10.
- --triggerer-cpu: the number of CPUs for an Airflow triggerer.
- --triggerer-memory: the amount of memory for an Airflow triggerer.
Run the following Google Cloud CLI command:
gcloud composer environments update ENVIRONMENT_NAME \
--location LOCATION \
--triggerer-count TRIGGERER_COUNT \
--triggerer-cpu TRIGGERER_CPU \
--triggerer-memory TRIGGERER_MEMORY
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- TRIGGERER_COUNT: the number of triggerers.
- TRIGGERER_CPU: the number of CPUs for a triggerer, in vCPU units.
- TRIGGERER_MEMORY: the amount of memory for a triggerer.
Examples:

- Scale to four triggerer instances:

```bash
gcloud composer environments update example-environment \
    --location us-central1 \
    --triggerer-count 4 \
    --triggerer-cpu 1 \
    --triggerer-memory 1
```

- Disable triggerers by setting triggerer count to `0`. This operation doesn't require specifying CPU or memory for the triggerers.

```bash
gcloud composer environments update example-environment \
    --location us-central1 \
    --triggerer-count 0
```
API
Construct an environments.patch API request. In this request:

- In the updateMask query parameter, specify the config.workloadsConfig.triggerer mask.
- In the request body, specify all three parameters for triggerers.
"config": {
"workloadsConfig": {
"triggerer": {
"count": TRIGGERER_COUNT,
"cpu": TRIGGERER_CPU,
"memoryGb": TRIGGERER_MEMORY
}
}
}
Replace the following:
- TRIGGERER_COUNT: the number of triggerers.
  - For standard resilience environments, use a value between 0 and 10.
  - For highly resilient environments, use 0, or a value between 2 and 10.
- TRIGGERER_CPU: the number of CPUs for a triggerer, in vCPU units.
- TRIGGERER_MEMORY: the amount of memory for a triggerer.
Examples:
- Disable triggerers by setting triggerer count to 0. This operation doesn't require specifying CPU or memory for the triggerers.
// PATCH https://composer.googleapis.com/v1/projects/example-project/
// locations/us-central1/environments/example-environment?updateMask=
// config.workloadsConfig.triggerer
"config": {
"workloadsConfig": {
"triggerer": {
"count": 0
}
}
}
- Scale to four triggerer instances:
// PATCH https://composer.googleapis.com/v1/projects/example-project/
// locations/us-central1/environments/example-environment?updateMask=
// config.workloadsConfig.triggerer
"config": {
"workloadsConfig": {
"triggerer": {
"count": 4,
"cpu": 1,
"memoryGb": 1
}
}
}
Terraform
The following fields in the workloads_config.triggerer block control the Airflow triggerer parameters. Each triggerer uses the specified amount of resources.

- triggerer.count: the number of triggerers in your environment.
  - For standard resilience environments, use a value between 0 and 10.
  - For highly resilient environments, use 0, or a value between 2 and 10.
- triggerer.cpu: the number of CPUs for an Airflow triggerer.
- triggerer.memory_gb: the amount of memory for an Airflow triggerer.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
workloads_config {
triggerer {
count = TRIGGERER_COUNT
cpu = TRIGGERER_CPU
memory_gb = TRIGGERER_MEMORY
}
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- TRIGGERER_COUNT: the number of triggerers.
- TRIGGERER_CPU: the number of CPUs for a triggerer, in vCPU units.
- TRIGGERER_MEMORY: the amount of memory for a triggerer, in GB.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
workloads_config {
triggerer {
count = 1
cpu = 0.5
memory_gb = 0.5
}
}
}
}
Adjust DAG processor parameters
You can specify the number of DAG processors in your environment and the amount of CPUs, memory, and disk space used by each DAG processor.
Console
In the Google Cloud console, go to the Environments page.
In the list of environments, click the name of your environment. The Environment details page opens.
Go to the Environment configuration tab.
In the Resources > Workloads configuration item, click Edit.
In the Workloads configuration pane, adjust the parameters for Airflow DAG processors:
In the Number of DAG processors drop-down list, select the number of DAG processors for your environment.
In the CPU, Memory, and Storage fields, specify the number of CPUs, memory, and storage for Airflow DAG processors. Each DAG processor uses the specified amount of resources.
Click Save.
gcloud
The following Airflow DAG processor parameters are available:
- --dag-processor-count: the number of DAG processors.
- --dag-processor-cpu: the number of CPUs for the DAG processor.
- --dag-processor-memory: the amount of memory for the DAG processor.
- --dag-processor-storage: the amount of disk space for the DAG processor.
Run the following Google Cloud CLI command:
gcloud composer environments update ENVIRONMENT_NAME \
--location LOCATION \
--dag-processor-count DAG_PROCESSOR_COUNT \
--dag-processor-cpu DAG_PROCESSOR_CPU \
--dag-processor-memory DAG_PROCESSOR_MEMORY \
--dag-processor-storage DAG_PROCESSOR_STORAGE
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- DAG_PROCESSOR_COUNT: the number of DAG processors.
- DAG_PROCESSOR_CPU: the number of CPUs for the DAG processor.
- DAG_PROCESSOR_MEMORY: the amount of memory for the DAG processor.
- DAG_PROCESSOR_STORAGE: the amount of disk space for the DAG processor.
Example:
gcloud composer environments update example-environment \
--location us-central1 \
--dag-processor-count 2 \
--dag-processor-cpu 0.5 \
--dag-processor-memory 2 \
--dag-processor-storage 1
API
Construct an environments.patch API request. In this request:

- In the updateMask parameter, specify the config.workloadsConfig.dagProcessor mask to update all DAG processor parameters, including the number of DAG processors. You can also update individual DAG processor parameters by specifying a mask. For example, config.workloadsConfig.dagProcessor.cpu,config.workloadsConfig.dagProcessor.memoryGb,config.workloadsConfig.dagProcessor.storageGb.
- In the request body, specify the new DAG processor parameters.
"config": {
"workloadsConfig": {
"dagProcessor": {
"count": DAG_PROCESSOR_COUNT,
"cpu": DAG_PROCESSOR_CPU,
"memoryGb": DAG_PROCESSOR_MEMORY,
"storageGb": DAG_PROCESSOR_STORAGE
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- DAG_PROCESSOR_COUNT: the number of DAG processors.
- DAG_PROCESSOR_CPU: the number of CPUs for the DAG processor, in vCPU units.
- DAG_PROCESSOR_MEMORY: the amount of memory for the DAG processor, in GB.
- DAG_PROCESSOR_STORAGE: the amount of disk space for the DAG processor, in GB.
Example:
// PATCH https://composer.googleapis.com/v1/projects/example-project/
// locations/us-central1/environments/example-environment?updateMask=
// config.workloadsConfig.dagProcessor
"config": {
"workloadsConfig": {
"scheduler": {
"count": 2
"cpu": 0.5,
"memoryGb": 2.5,
"storageGb": 2
}
}
}
Terraform
The following fields in the workloads_config.dag_processor block control the Airflow DAG processor parameters. Each DAG processor uses the specified amount of resources.

- dag_processor.count: the number of DAG processors in your environment.
- dag_processor.cpu: the number of CPUs for a DAG processor.
- dag_processor.memory_gb: the amount of memory for a DAG processor.
- dag_processor.storage_gb: the amount of disk space for a DAG processor.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
workloads_config {
dag_processor {
count = DAG_PROCESSOR_COUNT
cpu = DAG_PROCESSOR_CPU
memory_gb = DAG_PROCESSOR_MEMORY
storage_gb = DAG_PROCESSOR_STORAGE
}
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- DAG_PROCESSOR_COUNT: the number of DAG processors.
- DAG_PROCESSOR_CPU: the number of CPUs for the DAG processor, in vCPU units.
- DAG_PROCESSOR_MEMORY: the amount of memory for the DAG processor, in GB.
- DAG_PROCESSOR_STORAGE: the amount of disk space for the DAG processor, in GB.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
workloads_config {
dag_processor {
count = 2
cpu = 0.5
memory_gb = 2
storage_gb = 1
}
}
}
}
Adjust web server parameters
You can specify the amount of CPUs, memory, and disk space used by the Airflow web server in your environment. In this way, you can scale the performance of the Airflow UI, for example, to match the demand coming from a large number of users or a large number of managed DAGs.
Console
In the Google Cloud console, go to the Environments page.
In the list of environments, click the name of your environment. The Environment details page opens.
Go to the Environment configuration tab.
In the Resources > Workloads configuration item, click Edit.
In the Workloads configuration pane, adjust the parameters for the web server. In the CPU, Memory, and Storage fields, specify the number of CPUs, memory, and storage for the web server.
Click Save.
gcloud
The following Airflow web server parameters are available:
- --web-server-cpu: the number of CPUs for the Airflow web server.
- --web-server-memory: the amount of memory for the Airflow web server.
- --web-server-storage: the amount of disk space for the Airflow web server.
Run the following Google Cloud CLI command:
gcloud composer environments update ENVIRONMENT_NAME \
--location LOCATION \
--web-server-cpu WEB_SERVER_CPU \
--web-server-memory WEB_SERVER_MEMORY \
--web-server-storage WEB_SERVER_STORAGE
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- WEB_SERVER_CPU: the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY: the amount of memory for the web server.
- WEB_SERVER_STORAGE: the amount of disk space for the web server.
Example:
gcloud composer environments update example-environment \
--location us-central1 \
--web-server-cpu 1 \
--web-server-memory 2.5 \
--web-server-storage 2
API
Construct an environments.patch API request. In this request:

- In the updateMask parameter, specify the config.workloadsConfig.webServer mask to update all web server parameters. You can also update individual web server parameters by specifying a mask for those parameters: config.workloadsConfig.webServer.cpu, config.workloadsConfig.webServer.memoryGb, config.workloadsConfig.webServer.storageGb.
- In the request body, specify the new web server parameters.
"config": {
"workloadsConfig": {
"webServer": {
"cpu": WEB_SERVER_CPU,
"memoryGb": WEB_SERVER_MEMORY,
"storageGb": WEB_SERVER_STORAGE
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- WEB_SERVER_CPU: the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY: the amount of memory for the web server, in GB.
- WEB_SERVER_STORAGE: the disk size for the web server, in GB.
Example:
// PATCH https://composer.googleapis.com/v1/projects/example-project/
// locations/us-central1/environments/example-environment?updateMask=
// config.workloadsConfig.webServer.cpu,
// config.workloadsConfig.webServer.memoryGb,
// config.workloadsConfig.webServer.storageGb
"config": {
"workloadsConfig": {
"webServer": {
"cpu": 0.5,
"memoryGb": 2.5,
"storageGb": 2
}
}
}
Terraform
The following fields in the workloads_config.web_server block control the web server parameters.

- web_server.cpu: the number of CPUs for the web server.
- web_server.memory_gb: the amount of memory for the web server.
- web_server.storage_gb: the amount of disk space for the web server.
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
workloads_config {
web_server {
cpu = WEB_SERVER_CPU
memory_gb = WEB_SERVER_MEMORY
storage_gb = WEB_SERVER_STORAGE
}
}
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- WEB_SERVER_CPU: the number of CPUs for the web server, in vCPU units.
- WEB_SERVER_MEMORY: the amount of memory for the web server, in GB.
- WEB_SERVER_STORAGE: the disk size for the web server, in GB.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
workloads_config {
web_server {
cpu = 0.5
memory_gb = 1.875
storage_gb = 1
}
}
}
}
Adjust the environment size
The environment size controls the performance parameters of the managed Cloud Composer infrastructure that includes, for example, the Airflow database.
Consider selecting a larger environment size if you want to run a large number of DAGs and tasks.
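To see which size your environment currently uses, you can read it from the environment's configuration. A minimal sketch, assuming the example environment name and region used on this page:

```bash
# Print the current environment size
# (ENVIRONMENT_SIZE_SMALL, ENVIRONMENT_SIZE_MEDIUM, or ENVIRONMENT_SIZE_LARGE).
gcloud composer environments describe example-environment \
    --location us-central1 \
    --format="value(config.environmentSize)"
```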
Console
In the Google Cloud console, go to the Environments page.
In the list of environments, click the name of your environment. The Environment details page opens.
Go to the Environment configuration tab.
In the Resources > Core infrastructure item, click Edit.
In the Core infrastructure pane, in the Environment size field, specify the environment size.
Click Save.
gcloud
The --environment-size
argument controls the environment size:
gcloud composer environments update ENVIRONMENT_NAME \
--location LOCATION \
--environment-size ENVIRONMENT_SIZE
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- ENVIRONMENT_SIZE: small, medium, or large.
Example:
gcloud composer environments update example-environment \
--location us-central1 \
--environment-size medium
API
Construct an environments.patch API request. In this request:

- In the updateMask parameter, specify the config.environmentSize mask.
- In the request body, specify the environment size.
"config": {
"environmentSize": "ENVIRONMENT_SIZE"
}
Replace the following:
- ENVIRONMENT_SIZE: the environment size, ENVIRONMENT_SIZE_SMALL, ENVIRONMENT_SIZE_MEDIUM, or ENVIRONMENT_SIZE_LARGE.
Example:
// PATCH https://composer.googleapis.com/v1/projects/example-project/
// locations/us-central1/environments/example-environment?updateMask=
// config.environmentSize
"config": {
"environmentSize": "ENVIRONMENT_SIZE_MEDIUM"
}
Terraform
The environment_size
field in the config
block controls the environment
size:
resource "google_composer_environment" "example" {
provider = google-beta
name = "ENVIRONMENT_NAME"
region = "LOCATION"
config {
environment_size = "ENVIRONMENT_SIZE"
}
}
Replace the following:
- ENVIRONMENT_NAME: the name of the environment.
- LOCATION: the region where the environment is located.
- ENVIRONMENT_SIZE: the environment size, ENVIRONMENT_SIZE_SMALL, ENVIRONMENT_SIZE_MEDIUM, or ENVIRONMENT_SIZE_LARGE.
Example:
resource "google_composer_environment" "example" {
provider = google-beta
name = "example-environment"
region = "us-central1"
config {
environment_size = "ENVIRONMENT_SIZE_SMALL"
}
}
What's next
- Environment scaling and performance
- Cloud Composer pricing
- Update environments
- Environment architecture