On September 15, 2026, all Cloud Composer 1 and Cloud Composer 2 version 2.0.x environments will reach their planned end of life and you will no longer be able to use them. We recommend that you plan your migration to Cloud Composer 3.
For information about scaling your environments, see
Scale environments.
Autoscaling environments
Cloud Composer 2 environments automatically scale in response to the demands
of your executed DAGs and tasks:
If your environment experiences a heavy load, Cloud Composer automatically increases the number of workers in your environment.
If your environment does not use some of its workers, those workers are removed to save environment resources and costs.
You can set the minimum and maximum number of workers for your environment.
Cloud Composer automatically scales your environment within the set limits, and you can adjust these limits at any time (a sketch of setting these limits follows this list).
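As an illustration of the worker limits, the following sketch sets the minimum and maximum worker counts with the Cloud Composer API Python client (google-cloud-orchestration-airflow). This is a sketch under stated assumptions, not a definitive recipe: the project, location, and environment names are placeholders, and the field mask paths should be verified against the client library reference.

```python
from google.cloud.orchestration.airflow import service_v1
from google.protobuf import field_mask_pb2

client = service_v1.EnvironmentsClient()

# Placeholder resource name for the environment to update.
name = "projects/my-project/locations/us-central1/environments/my-environment"

# Only the worker autoscaling bounds are set here; Cloud Composer keeps the
# number of workers between min_count and max_count.
environment = service_v1.Environment(
    name=name,
    config=service_v1.EnvironmentConfig(
        workloads_config=service_v1.WorkloadsConfig(
            worker=service_v1.WorkloadsConfig.WorkerResource(
                min_count=2,
                max_count=6,
            )
        )
    ),
)

# The field mask paths below are assumptions; check the API reference for the
# exact form expected by your client version.
operation = client.update_environment(
    request={
        "name": name,
        "environment": environment,
        "update_mask": field_mask_pb2.FieldMask(
            paths=[
                "config.workloads_config.worker.min_count",
                "config.workloads_config.worker.max_count",
            ]
        ),
    }
)
operation.result()  # update_environment returns a long-running operation
```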
The number of workers is adjusted based on the Scaling Factor Target metric. This metric is
calculated based on the following (a simplified illustration follows this list):
The current number of workers
The number of Celery tasks in the Celery queue that are not assigned to a worker
The number of idle workers
The celery.worker_concurrency Airflow configuration option
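The exact formula behind the Scaling Factor Target is not spelled out on this page, so the sketch below is purely illustrative: it only shows how the inputs listed above could combine into a worker target. The function suggested_worker_count and all of its parameters are hypothetical names, not part of the Cloud Composer API.

```python
import math

def suggested_worker_count(
    current_workers: int,
    queued_celery_tasks: int,   # Celery tasks in the queue not assigned to a worker
    idle_workers: int,
    worker_concurrency: int,    # celery.worker_concurrency
    min_workers: int,
    max_workers: int,
) -> int:
    """Illustrative only: combine the inputs listed above into a worker target."""
    # Extra workers needed to absorb the queued tasks at the configured concurrency.
    needed_for_queue = math.ceil(queued_celery_tasks / worker_concurrency)
    # Scale up for queued work, scale down when workers sit idle.
    target = current_workers + needed_for_queue - idle_workers
    # Cloud Composer only scales within the configured limits.
    return max(min_workers, min(max_workers, target))
```

For example, with 3 current workers, 20 queued tasks, no idle workers, and a worker concurrency of 8, this sketch suggests scaling to 6 workers, subject to the configured maximum.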
Cloud Composer autoscaling uses three different autoscalers
provided by GKE:
Horizontal Pod Autoscaler (HPA)
Cluster Autoscaler (CA)
Node auto-provisioning (NAP)
Cloud Composer configures these autoscalers in the environment's cluster. This automatically scales the number of nodes in the cluster, the machine type, and the number of workers.
Scale and performance parameters
In addition to autoscaling, you can control the scale and performance parameters of your environment by adjusting the CPU, memory, and disk limits for the schedulers, web server, and workers. By doing so, you scale your environment vertically, in addition to the horizontal scaling provided by the autoscaling feature. You can adjust the scale and performance parameters of the Airflow
schedulers, web server, and workers at any time.
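Sticking with the same hypothetical client pattern as the earlier worker-limits sketch, vertical scaling amounts to setting the per-component resource fields of the workloads configuration. The values below are arbitrary examples, and the field names and mask path are assumptions to verify against the WorkloadsConfig reference.

```python
# Reuses `service_v1` and the update pattern from the earlier sketch; only the
# workloads_config body changes. Values are arbitrary examples.
workloads_config = service_v1.WorkloadsConfig(
    scheduler=service_v1.WorkloadsConfig.SchedulerResource(
        cpu=1.0, memory_gb=2.0, storage_gb=1.0,
    ),
    web_server=service_v1.WorkloadsConfig.WebServerResource(
        cpu=1.0, memory_gb=2.0, storage_gb=1.0,
    ),
    worker=service_v1.WorkloadsConfig.WorkerResource(
        cpu=1.0, memory_gb=4.0, storage_gb=2.0, min_count=2, max_count=6,
    ),
)
# Pair this with an update mask such as ["config.workloads_config"]
# (exact path to verify against the API reference).
```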
The environment size performance parameter of your environment controls the
performance parameters of the managed Cloud Composer infrastructure,
which includes the Airflow database. Consider selecting a larger environment size if you want to run a large number of DAGs and tasks with higher infrastructure performance. For example, a larger environment size increases the number of Airflow task log entries that your environment can process with minimal delay.
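As a sketch under the same assumptions as the earlier examples, changing the environment size comes down to updating a single enum field; the field path and enum names should be checked against the API reference for your environment version.

```python
# Illustrative: move the environment to the medium size tier.
environment = service_v1.Environment(
    name=name,  # same placeholder resource name as in the earlier sketch
    config=service_v1.EnvironmentConfig(
        environment_size=(
            service_v1.EnvironmentConfig.EnvironmentSize.ENVIRONMENT_SIZE_MEDIUM
        )
    ),
)
# Update mask path to verify: "config.environment_size".
```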
Multiple schedulers
Airflow 2 can use more than one Airflow scheduler at the same time. This Airflow feature is also known as the HA scheduler. In Cloud Composer 2, you can set the number of schedulers for your environment and adjust it at any time. Cloud Composer does not automatically scale the number of
schedulers in your environment.
For more information about configuring the number of schedulers for your environment, see Scale environments.
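Using the same hypothetical client pattern as the earlier sketches, the scheduler count is one more field on the workloads configuration; the value below is an example and the mask path is an assumption to verify.

```python
# Illustrative: run two Airflow schedulers (HA scheduler). Cloud Composer does
# not autoscale this count; you set it explicitly.
workloads_config = service_v1.WorkloadsConfig(
    scheduler=service_v1.WorkloadsConfig.SchedulerResource(count=2)
)
# Update mask path to verify: "config.workloads_config.scheduler.count".
```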
Database disk space
Disk space for the Airflow database automatically increases to accommodate demand.
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-08-28 UTC."],[[["\u003cp\u003eCloud Composer 2 environments automatically scale the number of workers based on the demands of your DAGs and tasks, increasing workers during heavy loads and removing them during low usage.\u003c/p\u003e\n"],["\u003cp\u003eCloud Composer 2 uses GKE's Horizontal Pod Autoscaler (HPA), Cluster Autoscaler (CA), and Node auto-provisioning (NAP) to manage autoscaling, adjusting the number of nodes, machine types, and workers.\u003c/p\u003e\n"],["\u003cp\u003eYou can manually adjust CPU, memory, and disk limits for schedulers, web servers, and workers, providing vertical scaling in addition to the horizontal autoscaling.\u003c/p\u003e\n"],["\u003cp\u003eThe environment size parameter affects the managed Cloud Composer infrastructure's performance, impacting how many Airflow task log entries your environment can process efficiently.\u003c/p\u003e\n"],["\u003cp\u003eCloud Composer 2 allows for multiple Airflow schedulers, which you can configure, but the number of schedulers does not automatically scale, and the Airflow database's disk space scales automatically to adapt to the demand.\u003c/p\u003e\n"]]],[],null,["**Cloud Composer 3** \\| [Cloud Composer 2](/composer/docs/composer-2/environment-scaling \"View this page for Cloud Composer 2\") \\| [Cloud Composer 1](/composer/docs/composer-1/environment-scaling \"View this page for Cloud Composer 1\")\n\n\u003cbr /\u003e\n\n| **Note:** This page is **not yet revised for Cloud Composer 3** and displays content for Cloud Composer 2.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nThis page describes how environment scaling works in Cloud Composer 2.\n\nOther pages about scaling:\n\n- For a guide about selecting optimal scale and performance parameters for your environment, see [Optimize environment performance and costs](/composer/docs/composer-2/optimize-environments).\n- For information about scaling your environments, see [Scale environments](/composer/docs/composer-2/scale-environments).\n\nAutoscaling environments\n\nCloud Composer 2 environments automatically scale in response to the demands\nof your executed DAGs and tasks:\n\n- If your environment experiences a heavy load, Cloud Composer automatically increases the number of workers in your environment.\n- If your environment does not use some of its workers, these workers are removed to save environment resources and costs.\n- You can set the minimum and maximum number of workers for your environment. Cloud Composer automatically scales your environment within the set limits. You can adjust these limits at any time.\n\nThe number of workers is adjusted based on\nthe [Scaling Factor Target](/composer/docs/composer-2/monitor-environments#worker-metrics) metric. 
This metric is\ncalculated based on:\n\n- Current number of workers\n- Number of Celery tasks in the Celery queue, that are not assigned to a worker\n- Number of idle workers\n- `celery.worker_concurrency` Airflow configuration option\n\nCloud Composer autoscaling uses three different autoscalers\nprovided by GKE:\n\n- [Horizontal Pod Autoscaler (HPA)](/kubernetes-engine/docs/concepts/horizontalpodautoscaler)\n- [Cluster Autoscaler (CA)](/kubernetes-engine/docs/concepts/cluster-autoscaler)\n- [Node auto-provisioning (NAP)](/kubernetes-engine/docs/how-to/node-auto-provisioning)\n\nCloud Composer configures these autoscalers in the environment's\ncluster. This automatically scales the number of nodes in the cluster, the\nmachine type and the number of workers.\n\nScale and performance parameters\n\nIn addition to autoscaling, you can control the scale and performance\nparameters of your environment by adjusting the CPU, memory, and disk limits\nfor schedulers, web server, and workers. By doing so you can scale your\nenvironment vertically, in addition to the horizontal scaling provided by the\nautoscaling feature. You can adjust the scale and performance parameters of\nAirflow schedulers, web server, and workers at any time.\n\nThe *environment size* performance parameter of your environment controls the\nperformance parameters of the managed Cloud Composer infrastructure\nthat includes the Airflow database. Consider selecting a larger environment\nsize if you want to run a large number of DAGs and tasks with higher\ninfrastructure performance. For example, larger environment's size increases\nthe amount of Airflow task log entries that your environment can process with\nminimal delay.\n| **Note:** Environment size is different from the environment presets. Environment presets, which you can select when you create an environment, determine all limits, scale, and performance parameters of your environment, including the environment size. Environment size determines only the performance parameters of the managed Cloud Composer infrastructure of your environment.\n\nMultiple schedulers\n\nAirflow 2 can use more than one Airflow scheduler at the same time. This\nAirflow feature is also known as the **HA scheduler**. In Cloud Composer 2,\nyou can set the number of schedulers for your environment and adjust it at any\ntime. Cloud Composer does not automatically scale the number of\nschedulers in your environment.\n\nFor more information about configuring the number of schedulers for your\nenvironment, see [Scale environments](/composer/docs/composer-2/scale-environments#scheduler-count).\n\nDatabase disk space\n\nDisk space for the Airflow database automatically increases to accommodate the\ndemand.\n\n\nWhat's next\n\n- [Scale environments](/composer/docs/composer-2/scale-environments)\n- [Cloud Composer pricing](/composer/pricing)\n- [Create environments](/composer/docs/composer-2/create-environments)\n- [Environment architecture](/composer/docs/composer-2/environment-architecture)"]]