This page explains how to enable CeleryKubernetesExecutor in Cloud Composer and how to use KubernetesExecutor in your DAGs.
About CeleryKubernetesExecutor
[CeleryKubernetesExecutor](https://airflow.apache.org/docs/apache-airflow/stable/executor/celery_kubernetes.html) is a type of executor that can use CeleryExecutor and KubernetesExecutor at the same time. Airflow selects the executor based on the queue that you define for the task. In one DAG, you can run some tasks with CeleryExecutor, and other tasks with KubernetesExecutor:

- CeleryExecutor is optimized for fast and scalable execution of tasks.
- KubernetesExecutor is designed for execution of resource-intensive tasks and for running tasks in isolation.
CeleryKubernetesExecutor in Cloud Composer

CeleryKubernetesExecutor in Cloud Composer provides the ability to use KubernetesExecutor for your tasks. It is not possible to use KubernetesExecutor in Cloud Composer separately from CeleryKubernetesExecutor.
Cloud Composer runs tasks that you execute with KubernetesExecutor in your environment's cluster, in the same namespace as Airflow workers. Such tasks have the same [bindings](/composer/docs/composer-3/access-control#composer-sa) as Airflow workers and can access resources in your project.
Tasks that you execute with KubernetesExecutor use the [Cloud Composer pricing model](/composer/pricing), since pods with these tasks run in your environment's cluster. Cloud Composer Compute SKUs (for CPU, memory, and storage) apply to these pods.
We recommend running tasks with CeleryExecutor when:

- Task start-up time is important.
- Tasks do not require runtime isolation and are not resource-intensive.
We recommend running tasks with KubernetesExecutor when:

- Tasks require runtime isolation. For example, tasks do not compete for memory and CPU because each task runs in its own pod.
- Tasks are resource-intensive and you want to control the available CPU and memory resources.
KubernetesExecutor compared to KubernetesPodOperator

Running tasks with KubernetesExecutor is similar to [running tasks using KubernetesPodOperator](/composer/docs/composer-3/use-kubernetes-pod-operator). Tasks are executed in pods, thus providing pod-level task isolation and better resource management. However, there are some key differences (see the sketch after this list):

- KubernetesExecutor runs tasks only in the versioned Cloud Composer namespace of your environment. It is not possible to change this namespace in Cloud Composer. With KubernetesPodOperator, you can specify the namespace where pod tasks run.
- KubernetesExecutor can use any built-in Airflow operator. KubernetesPodOperator executes only a provided script defined by the entrypoint of the container.
- KubernetesExecutor uses the default Cloud Composer Docker image, with the same Python version, Airflow configuration option overrides, environment variables, and PyPI packages that are defined in your Cloud Composer environment.
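To illustrate the difference, the following minimal sketch contrasts the two approaches inside a DAG definition. It is a hypothetical fragment: the import path assumes the `cncf.kubernetes` provider package, and the namespace and image values are placeholders.

    from airflow.operators.bash import BashOperator
    # Assumes the apache-airflow-providers-cncf-kubernetes package is installed.
    from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

    # KubernetesExecutor: any built-in operator runs in its own pod in the
    # environment's Cloud Composer namespace when routed to the kubernetes queue.
    builtin_in_pod = BashOperator(
        task_id='builtin-in-pod',
        bash_command='echo "runs with KubernetesExecutor"',
        queue='kubernetes',
    )

    # KubernetesPodOperator: runs a container's entrypoint in a namespace you choose.
    container_task = KubernetesPodOperator(
        task_id='container-task',
        name='container-task',
        namespace='example-namespace',  # placeholder namespace
        image='example-image:latest',   # placeholder container image
        cmds=['echo', 'runs with KubernetesPodOperator'],
    )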
About Docker images
By default, KubernetesExecutor launches tasks using the same Docker image that Cloud Composer uses for Celery workers. This is the [Cloud Composer image](/composer/docs/composer-versions) for your environment, with all changes that you specified for your environment, such as custom PyPI packages or environment variables.

**Warning:** Cloud Composer does not support using custom images with KubernetesExecutor.
Before you begin
- You can use CeleryKubernetesExecutor in Cloud Composer 3.
- It is not possible to use any executor other than CeleryKubernetesExecutor in Cloud Composer 3. This means that you can run tasks using CeleryExecutor, KubernetesExecutor, or both in one DAG, but it is not possible to configure your environment to use only KubernetesExecutor or only CeleryExecutor.
Configure CeleryKubernetesExecutor
You might want to [override](/composer/docs/composer-3/override-airflow-configurations) existing Airflow configuration options that are related to KubernetesExecutor:
- `[kubernetes]worker_pods_creation_batch_size`

  This option defines the number of Kubernetes worker pod creation calls per scheduler loop. The default value is `1`, so only a single pod is launched per scheduler heartbeat. If you use KubernetesExecutor heavily, we recommend increasing this value.
- `[kubernetes]worker_pods_pending_timeout`

  This option defines, in seconds, how long a worker can stay in the `Pending` state (the pod is being created) before it is considered failed. The default value is 5 minutes.
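As an illustration, both options can be overridden with the `gcloud composer environments update` command. This is a sketch; `example-environment` and `us-central1` are placeholder values:

    # Raise the pod creation batch size and the pending timeout (in seconds).
    gcloud composer environments update example-environment \
        --location us-central1 \
        --update-airflow-configs=kubernetes-worker_pods_creation_batch_size=4,kubernetes-worker_pods_pending_timeout=600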
Run tasks with KubernetesExecutor or CeleryExecutor
You can run tasks using CeleryExecutor, KubernetesExecutor, or both in one DAG:

- To run a task with KubernetesExecutor, specify the `kubernetes` value in the `queue` parameter of a task.
- To run a task with CeleryExecutor, omit the `queue` parameter.

**Note:** The default value of the `[celery_kubernetes_executor]kubernetes_queue` Airflow configuration option is `kubernetes`. You do not need to override this value.
The following example runs the `task-kubernetes` task using KubernetesExecutor and the `task-celery` task using CeleryExecutor:
    import datetime

    import airflow
    from airflow.operators.python_operator import PythonOperator

    with airflow.DAG(
            "composer_sample_celery_kubernetes",
            start_date=datetime.datetime(2022, 1, 1),
            schedule_interval="@daily") as dag:

        def kubernetes_example():
            print("This task runs using KubernetesExecutor")

        def celery_example():
            print("This task runs using CeleryExecutor")

        # To run with KubernetesExecutor, set queue to kubernetes
        task_kubernetes = PythonOperator(
            task_id='task-kubernetes',
            python_callable=kubernetes_example,
            dag=dag,
            queue='kubernetes')

        # To run with CeleryExecutor, omit the queue argument
        task_celery = PythonOperator(
            task_id='task-celery',
            python_callable=celery_example,
            dag=dag)

        task_kubernetes >> task_celery
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-08-29(UTC)"],[[["\u003cp\u003eCeleryKubernetesExecutor in Cloud Composer allows for running tasks with either CeleryExecutor or KubernetesExecutor, based on the designated queue, enabling resource-intensive and isolated tasks alongside fast, scalable ones within the same DAG.\u003c/p\u003e\n"],["\u003cp\u003eTasks run with KubernetesExecutor in Cloud Composer use the environment's cluster and share the same bindings as Airflow workers, applying Cloud Composer Compute SKUs for pricing and running in the same namespace.\u003c/p\u003e\n"],["\u003cp\u003eKubernetesExecutor tasks utilize the same Docker image as Celery workers by default, and custom images are not supported, limiting configuration changes to the environment.\u003c/p\u003e\n"],["\u003cp\u003eYou can run tasks with KubernetesExecutor by setting the \u003ccode\u003equeue\u003c/code\u003e parameter to \u003ccode\u003ekubernetes\u003c/code\u003e, and run them with CeleryExecutor by omitting it, offering flexibility in task execution within a DAG.\u003c/p\u003e\n"],["\u003cp\u003eCustomizing worker pod specifications, like CPU and memory requirements, is possible within specific parameters and ranges when using KubernetesExecutor, ensuring tasks have adequate resources.\u003c/p\u003e\n"]]],[],null,["\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\n**Cloud Composer 3** \\| Cloud Composer 2 \\| Cloud Composer 1\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nThis page explains how to enable CeleryKubernetesExecutor in\nCloud Composer and how to use KubernetesExecutor in your DAGs.\n\nAbout CeleryKubernetesExecutor\n\n[CeleryKubernetesExecutor](https://airflow.apache.org/docs/apache-airflow/stable/executor/celery_kubernetes.html) is a\ntype of executor that can use CeleryExecutor and KubernetesExecutor at the same\ntime. Airflow selects the executor based on the queue that you define for the\ntask. In one DAG, you can run some tasks with CeleryExecutor, and other tasks\nwith KubernetesExecutor:\n\n- CeleryExecutor is optimized for fast and scalable execution of tasks.\n- KubernetesExecutor is designed for execution of resource-intensive tasks and running tasks in isolation.\n\nCeleryKubernetesExecutor in Cloud Composer\n\nCeleryKubernetesExecutor in Cloud Composer provides the ability to\nuse KubernetesExecutor for your tasks. It is not possible to use\nKubernetesExecutor in Cloud Composer separately from\nCeleryKubernetesExecutor.\n\nCloud Composer runs tasks that you execute with KubernetesExecutor\nin your environment's cluster, in the same namespace with Airflow workers. Such\ntasks have the same [bindings](/composer/docs/composer-3/access-control#composer-sa) as Airflow\nworkers and can access resources in your project.\n\nTasks that you execute with KubernetesExecutor use the\n[Cloud Composer pricing model](/composer/pricing), since pods with these\ntasks run in your environment's cluster. 
Customize worker pod spec

You can customize worker pod specs by passing them in the `executor_config` parameter of a task. You can use this to define custom CPU and memory requirements.

You can override the entire worker pod spec that is used to run a task. To retrieve the pod spec of a task used by KubernetesExecutor, you can run the `kubernetes generate-dag-yaml` Airflow CLI command.

For more information about customizing the worker pod spec, see the [Airflow documentation](https://airflow.apache.org/docs/apache-airflow/stable/executor/kubernetes.html#pod-override).

**Caution:** Always use `base` as the name of the container in the overridden pod spec.

Cloud Composer 3 supports the following values for resource requirements:

| Resource | Minimum | Maximum   | Step |
|----------|---------|-----------|------|
| CPU      | 0.25    | 32        | Step values: 0.25, 0.5, 1, 2, 4, 6, 8, 10, ..., 32. Requested values are rounded up to the closest supported step value (for example, 5 to 6). |
| Memory   | 2G (GB) | 128G (GB) | Step values: 2, 3, 4, 5, ..., 128. Requested values are rounded up to the closest supported step value (for example, 3.5G to 4G). |
| Storage  | -       | 100G (GB) | Any value. If more than 100 GB is requested, only 100 GB is provided. |

For more information about resource units in Kubernetes, see [Resource units in Kubernetes](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes).

The following example demonstrates a task that uses a custom worker pod spec:
    # The pod override classes come from the Kubernetes Python client.
    from kubernetes.client import models as k8s

    PythonOperator(
        task_id='custom-spec-example',
        python_callable=f,
        dag=dag,
        queue='kubernetes',
        executor_config={
            'pod_override': k8s.V1Pod(
                spec=k8s.V1PodSpec(
                    containers=[
                        k8s.V1Container(
                            name='base',
                            resources=k8s.V1ResourceRequirements(requests={
                                'cpu': '0.5',
                                'memory': '2G',
                            })
                        ),
                    ],
                ),
            )
        },
    )

View task logs

Logs of tasks executed by KubernetesExecutor are available in the **Logs** tab, together with logs of tasks run by CeleryExecutor:
1. In the Google Cloud console, go to the **Environments** page.

   [Go to Environments](https://console.cloud.google.com/composer/environments)

2. In the list of environments, click the name of your environment. The **Environment details** page opens.

3. Go to the **Logs** tab.

4. Navigate to **All logs** > **Airflow logs** > **Workers**.

5. Workers named `airflow-k8s-worker` execute KubernetesExecutor tasks. To look for logs of a specific task, you can use a DAG ID or a task ID as a keyword in the search.

What's next

- [Troubleshooting KubernetesExecutor](/composer/docs/composer-3/troubleshooting-kubernetes-executor)
- [Using KubernetesPodOperator](/composer/docs/composer-3/use-kubernetes-pod-operator)
- [Using GKE operators](/composer/docs/composer-3/use-gke-operator)
- [Overriding Airflow configuration options](/composer/docs/composer-3/override-airflow-configurations)