On September 15, 2026, all Cloud Composer 1 and Cloud Composer 2 version 2.0.x environments will reach their planned end of life, and you won't be able to use them. We recommend that you plan a migration to Cloud Composer 3.
For information about scaling your environments, see Scale environments.
Autoscaling environments
Cloud Composer 2 environments automatically scale in response to the demands of your executed DAGs and tasks:
If your environment experiences a heavy load, Cloud Composer automatically increases the number of workers in your environment.
If your environment does not use some of its workers, those workers are removed to save environment resources and costs.
You can set the minimum and maximum number of workers for your environment. Cloud Composer automatically scales your environment within the set limits. You can adjust these limits at any time.
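As a minimal illustration (not Cloud Composer's actual implementation), the effect of the worker limits can be sketched as a clamp applied to whatever worker count the autoscaler would otherwise propose:

```python
def clamp_workers(proposed: int, min_workers: int, max_workers: int) -> int:
    """Keep a proposed worker count within the configured scaling limits."""
    return max(min_workers, min(proposed, max_workers))

# Under heavy load, scaling up stops at the maximum limit.
print(clamp_workers(10, 1, 6))
# An idle environment never drops below the minimum limit.
print(clamp_workers(0, 1, 6))
```

The function name and logic here are illustrative assumptions; the real scaling decisions are made by the autoscalers described below.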
The number of workers is adjusted based on the Scaling Factor Target metric. This metric is calculated based on the following:
Current number of workers
Number of Celery tasks in the Celery queue that are not assigned to a worker
Number of idle workers
The celery.worker_concurrency Airflow configuration option
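The exact formula behind the metric is internal to Cloud Composer, but a hypothetical sketch of how these four inputs could combine into a worker target might look like the following (the function name and logic are illustrative assumptions, not the real metric):

```python
import math

def estimate_target_workers(current_workers: int, queued_tasks: int,
                            idle_workers: int, worker_concurrency: int) -> int:
    """Illustrative estimate of worker demand; NOT Cloud Composer's real formula."""
    busy_workers = current_workers - idle_workers
    # Additional workers needed to drain the unassigned Celery queue,
    # given how many tasks each worker can run concurrently.
    extra_needed = math.ceil(queued_tasks / worker_concurrency)
    return busy_workers + extra_needed

# 2 busy workers plus a 32-task backlog at concurrency 16 suggests 4 workers.
print(estimate_target_workers(current_workers=3, queued_tasks=32,
                              idle_workers=1, worker_concurrency=16))
```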
Cloud Composer autoscaling uses three different autoscalers provided by GKE:

Horizontal Pod Autoscaler (HPA)
Cluster Autoscaler (CA)
Node auto-provisioning (NAP)

Cloud Composer configures these autoscalers in the environment's cluster. This automatically scales the number of nodes in the cluster, the machine type, and the number of workers.
Scale and performance parameters
In addition to autoscaling, you can control the scale and performance parameters of your environment by adjusting the CPU, memory, and disk limits for schedulers, the web server, and workers. By doing so, you can scale your environment vertically, in addition to the horizontal scaling provided by the autoscaling feature. You can adjust the scale and performance parameters of Airflow schedulers, the web server, and workers at any time.
The Environment size performance parameter of your environment controls the performance parameters of the managed Cloud Composer infrastructure, which includes the Airflow database. Consider selecting a larger environment size if you want to run a large number of DAGs and tasks with higher infrastructure performance. For example, a larger environment size increases the amount of Airflow task log entries that your environment can process with minimal delay.

Note: Environment size is different from environment presets. Environment presets, which you can select when you create an environment, determine all limits, scale, and performance parameters of your environment, including the environment size. Environment size determines only the performance parameters of the managed Cloud Composer infrastructure of your environment.
Multiple schedulers
Airflow 2 can use more than one Airflow scheduler at the same time. This Airflow feature is also known as the HA scheduler. In Cloud Composer 2, you can set the number of schedulers for your environment and adjust it at any time. Cloud Composer does not automatically scale the number of schedulers in your environment.
For more information about configuring the number of schedulers for your environment, see Scale environments.
Database disk space
Disk space for the Airflow database automatically increases to accommodate demand.
What's next

Scale environments
Cloud Composer pricing
Create environments
Environment architecture

Last updated: 2025-08-28 (UTC).