This page describes how to use the KubernetesPodOperator to deploy Kubernetes Pods from Cloud Composer into the Google Kubernetes Engine cluster that is part of your Cloud Composer environment, and how to make sure that your environment has the appropriate resources.
The KubernetesPodOperator launches Kubernetes Pods in your environment's cluster. In comparison, Google Kubernetes Engine operators run Kubernetes Pods in a specified cluster, which can be a separate cluster that is not related to your environment. You can also create and delete clusters by using Google Kubernetes Engine operators.
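For comparison, here is a minimal sketch of a GKEStartPodOperator task that targets a separate, existing cluster. The project ID, location, and cluster name are placeholder values, and the sketch assumes that the apache-airflow-providers-google package is installed.
from airflow.providers.google.cloud.operators.kubernetes_engine import (
    GKEStartPodOperator,
)

# Runs a Pod in an existing GKE cluster that is independent of the
# Cloud Composer environment. All identifiers below are placeholders.
gke_pod_example = GKEStartPodOperator(
    task_id="gke-pod-example",
    project_id="example-project",
    location="us-central1-a",
    cluster_name="example-cluster",
    name="gke-pod-example",
    namespace="default",
    image="gcr.io/gcp-runtimes/ubuntu_18_0_4",
    cmds=["echo"],
)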
The KubernetesPodOperator is a good option if you require:
Custom Python dependencies that are not available through the public PyPI repository.
Binary dependencies that are not available in the stock Cloud Composer worker image.
This page walks you through an example Airflow DAG that includes the following KubernetesPodOperator configurations: minimal configuration, template configuration, secret variables configuration, Pod affinity configuration, and full configuration.
We recommend that you use the latest version of Cloud Composer.
At a minimum, the version must be supported as part of the deprecation and support policy.
Make sure that your environment has sufficient resources.
Launching Pods into a resource-starved environment can cause Airflow worker and scheduler errors.
Set up Cloud Composer environment resources
When you create a Cloud Composer environment, you specify its performance parameters, including the performance parameters of the environment's cluster. Launching Kubernetes Pods into the environment's cluster can cause competition for cluster resources, such as CPU or memory. Because the Airflow scheduler and workers are in the same GKE cluster, the scheduler and workers don't work properly if this competition results in resource starvation.
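One way to limit this competition is to cap what each launched Pod may consume. The following is a minimal sketch that sets explicit resource requests and limits on the Pod; the full configuration example later on this page sets container_resources in the same way, and the values here are placeholders.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)
from kubernetes.client import models as k8s_models

# Caps the CPU and memory that the Pod may use so that it cannot starve the
# Airflow scheduler and workers running in the same cluster.
bounded_pod = KubernetesPodOperator(
    task_id="bounded-pod-example",
    name="bounded-pod-example",
    namespace="default",
    image="gcr.io/gcp-runtimes/ubuntu_18_0_4",
    cmds=["echo"],
    container_resources=k8s_models.V1ResourceRequirements(
        requests={"memory": "64Mi", "cpu": "100m"},
        limits={"memory": "128Mi", "cpu": "200m"},
    ),
)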
The following sections discuss each KubernetesPodOperator configuration in the example. For information about each configuration variable, see the Airflow reference.
Airflow 2
import datetime
from airflow import models
from airflow.kubernetes.secret import Secret
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
KubernetesPodOperator,
)
from kubernetes.client import models as k8s_models
# A Secret is an object that contains a small amount of sensitive data such as
# a password, a token, or a key. Such information might otherwise be put in a
# Pod specification or in an image; putting it in a Secret object allows for
# more control over how it is used, and reduces the risk of accidental
# exposure.
secret_env = Secret(
# Expose the secret as environment variable.
deploy_type="env",
# The name of the environment variable, since deploy_type is `env` rather
# than `volume`.
deploy_target="SQL_CONN",
# Name of the Kubernetes Secret
secret="airflow-secrets",
# Key of a secret stored in this Secret object
key="sql_alchemy_conn",
)
secret_volume = Secret(
deploy_type="volume",
# Path where we mount the secret as volume
deploy_target="/var/secrets/google",
# Name of Kubernetes Secret
secret="service-account",
# Key in the form of service account file name
key="service-account.json",
)
# If you are running Airflow in more than one time zone
# see https://airflow.apache.org/docs/apache-airflow/stable/timezone.html
# for best practices
YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)
# If a Pod fails to launch, or has an error occur in the container, Airflow
# will show the task as failed, as well as contain all of the task logs
# required to debug.
with models.DAG(
dag_id="composer_sample_kubernetes_pod",
schedule_interval=datetime.timedelta(days=1),
start_date=YESTERDAY,
) as dag:
# Only name, namespace, image, and task_id are required to create a
# KubernetesPodOperator. In Cloud Composer, currently the operator defaults
# to using the config file found at `/home/airflow/composer_kube_config` if
# no `config_file` parameter is specified. By default it will contain the
# credentials for Cloud Composer's Google Kubernetes Engine cluster that is
# created upon environment creation.
kubernetes_min_pod = KubernetesPodOperator(
# The ID specified for the task.
task_id="pod-ex-minimum",
# Name of task you want to run, used to generate Pod ID.
name="pod-ex-minimum",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# The namespace to run within Kubernetes, default namespace is
# `default`. In Composer 1 there is the potential for
# the resource starvation of Airflow workers and scheduler
# within the Cloud Composer environment,
# the recommended solution is to increase the amount of nodes in order
# to satisfy the computing requirements. Alternatively, launching pods
# into a custom namespace will stop fighting over resources,
# and using Composer 2 will mean the environment will autoscale.
namespace="default",
# Docker image specified. Defaults to hub.docker.com, but any fully
# qualified URLs will point to a custom repository. Supports private
# gcr.io images if the Composer Environment is under the same
# project-id as the gcr.io images and the service account that Composer
# uses has permission to access the Google Container Registry
# (the default service account has permission)
image="gcr.io/gcp-runtimes/ubuntu_18_0_4",
)
kubernetes_template_ex = KubernetesPodOperator(
task_id="ex-kube-templates",
name="ex-kube-templates",
namespace="default",
image="bash",
# All parameters below are able to be templated with jinja -- cmds,
# arguments, env_vars, and config_file. For more information visit:
# https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# DS in jinja is the execution date as YYYY-MM-DD, this docker image
# will echo the execution date. Arguments to the entrypoint. The docker
# image's CMD is used if this is not provided. The arguments parameter
# is templated.
arguments=["{{ ds }}"],
# The var template variable allows you to access variables defined in
# Airflow UI. In this case we are getting the value of my_value and
# setting the environment variable `MY_VALUE`. The pod will fail if
# `my_value` is not set in the Airflow UI.
env_vars={"MY_VALUE": "{{ var.value.my_value }}"},
# Sets the config file to a kubernetes config file specified in
# airflow.cfg. If the configuration file does not exist or does
# not provide valid credentials the pod will fail to launch. If not
# specified, config_file defaults to ~/.kube/config
config_file="{{ conf.get('core', 'kube_config') }}",
)
kubernetes_secret_vars_ex = KubernetesPodOperator(
task_id="ex-kube-secrets",
name="ex-kube-secrets",
namespace="default",
image="ubuntu",
startup_timeout_seconds=300,
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[secret_env, secret_volume],
# env_vars allows you to specify environment variables for your
# container to use. env_vars is templated.
env_vars={
"EXAMPLE_VAR": "/example/value",
"GOOGLE_APPLICATION_CREDENTIALS": "/var/secrets/google/service-account.json ",
},
)
# Pod affinity with the KubernetesPodOperator
# is not supported with Composer 2
# instead, create a cluster and use the GKEStartPodOperator
# https://cloud.google.com/composer/docs/using-gke-operator
kubernetes_affinity_ex = KubernetesPodOperator(
task_id="ex-pod-affinity",
name="ex-pod-affinity",
namespace="default",
image="perl:5.34.0",
cmds=["perl"],
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# affinity allows you to constrain which nodes your pod is eligible to
# be scheduled on, based on labels on the node. In this case, if the
# label 'cloud.google.com/gke-nodepool' with value
# 'pool-0' or 'pool-1' is not found on any
# nodes, it will fail to schedule.
affinity={
"nodeAffinity": {
# requiredDuringSchedulingIgnoredDuringExecution means in order
# for a pod to be scheduled on a node, the node must have the
# specified labels. However, if labels on a node change at
# runtime such that the affinity rules on a pod are no longer
# met, the pod will still continue to run on the node.
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
# When nodepools are created in Google Kubernetes
# Engine, the nodes inside of that nodepool are
# automatically assigned the label
# 'cloud.google.com/gke-nodepool' with the value of
# the nodepool's name.
"key": "cloud.google.com/gke-nodepool",
"operator": "In",
# The label key's value that pods can be scheduled
# on.
"values": [
"pool-0",
"pool-1",
],
}
]
}
]
}
}
},
)
kubernetes_full_pod = KubernetesPodOperator(
task_id="ex-all-configs",
name="pi",
namespace="default",
image="perl:5.34.0",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["perl"],
# Arguments to the entrypoint. The docker image's CMD is used if this
# is not provided. The arguments parameter is templated.
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[],
# Labels to apply to the Pod.
labels={"pod-label": "label-name"},
# Timeout to start up the Pod, default is 120.
startup_timeout_seconds=120,
# The environment variables to be initialized in the container
# env_vars are templated.
env_vars={"EXAMPLE_VAR": "/example/value"},
# If true, logs stdout output of container. Defaults to True.
get_logs=True,
# Determines when to pull a fresh image, if 'IfNotPresent' will cause
# the Kubelet to skip pulling an image if it already exists. If you
# want to always pull a new image, set it to 'Always'.
image_pull_policy="Always",
# Annotations are non-identifying metadata you can attach to the Pod.
# Can be a large range of data, and can include characters that are not
# permitted by labels.
annotations={"key1": "value1"},
# Optional resource specifications for Pod, this will allow you to
# set both cpu and memory limits and requirements.
# Prior to Airflow 2.3 and the cncf providers package 5.0.0
# resources were passed as a dictionary. This change was made in
# https://github.com/apache/airflow/pull/27197
# Additionally, "memory" and "cpu" were previously named
# "limit_memory" and "limit_cpu"
# resources={'limit_memory': "250M", 'limit_cpu': "100m"},
container_resources=k8s_models.V1ResourceRequirements(
limits={"memory": "250M", "cpu": "100m"},
),
# Specifies path to kubernetes config. If no config is specified will
# default to '~/.kube/config'. The config_file is templated.
config_file="/home/airflow/composer_kube_config",
# If true, the content of /airflow/xcom/return.json from container will
# also be pushed to an XCom when the container ends.
do_xcom_push=False,
# List of Volume objects to pass to the Pod.
volumes=[],
# List of VolumeMount objects to pass to the Pod.
volume_mounts=[],
# Affinity determines which nodes the Pod can run on based on the
# config. For more information see:
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
# Pod affinity with the KubernetesPodOperator
# is not supported with Composer 2
# instead, create a cluster and use the GKEStartPodOperator
# https://cloud.google.com/composer/docs/using-gke-operator
affinity={},
)
Airflow 1
import datetime
from airflow import models
from airflow.contrib.kubernetes import secret
from airflow.contrib.operators import kubernetes_pod_operator
# A Secret is an object that contains a small amount of sensitive data such as
# a password, a token, or a key. Such information might otherwise be put in a
# Pod specification or in an image; putting it in a Secret object allows for
# more control over how it is used, and reduces the risk of accidental
# exposure.
secret_env = secret.Secret(
# Expose the secret as environment variable.
deploy_type="env",
# The name of the environment variable, since deploy_type is `env` rather
# than `volume`.
deploy_target="SQL_CONN",
# Name of the Kubernetes Secret
secret="airflow-secrets",
# Key of a secret stored in this Secret object
key="sql_alchemy_conn",
)
secret_volume = secret.Secret(
deploy_type="volume",
# Path where we mount the secret as volume
deploy_target="/var/secrets/google",
# Name of Kubernetes Secret
secret="service-account",
# Key in the form of service account file name
key="service-account.json",
)
# If you are running Airflow in more than one time zone
# see https://airflow.apache.org/docs/apache-airflow/stable/timezone.html
# for best practices
YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)
# If a Pod fails to launch, or has an error occur in the container, Airflow
# will show the task as failed, as well as contain all of the task logs
# required to debug.
with models.DAG(
dag_id="composer_sample_kubernetes_pod",
schedule_interval=datetime.timedelta(days=1),
start_date=YESTERDAY,
) as dag:
# Only name, namespace, image, and task_id are required to create a
# KubernetesPodOperator. In Cloud Composer, currently the operator defaults
# to using the config file found at `/home/airflow/composer_kube_config` if
# no `config_file` parameter is specified. By default it will contain the
# credentials for Cloud Composer's Google Kubernetes Engine cluster that is
# created upon environment creation.
kubernetes_min_pod = kubernetes_pod_operator.KubernetesPodOperator(
# The ID specified for the task.
task_id="pod-ex-minimum",
# Name of task you want to run, used to generate Pod ID.
name="pod-ex-minimum",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# The namespace to run within Kubernetes, default namespace is
# `default`. There is the potential for the resource starvation of
# Airflow workers and scheduler within the Cloud Composer environment,
# the recommended solution is to increase the amount of nodes in order
# to satisfy the computing requirements. Alternatively, launching pods
# into a custom namespace will stop fighting over resources.
namespace="default",
# Docker image specified. Defaults to hub.docker.com, but any fully
# qualified URLs will point to a custom repository. Supports private
# gcr.io images if the Composer Environment is under the same
# project-id as the gcr.io images and the service account that Composer
# uses has permission to access the Google Container Registry
# (the default service account has permission)
image="gcr.io/gcp-runtimes/ubuntu_18_0_4",
)
kubernetes_template_ex = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-kube-templates",
name="ex-kube-templates",
namespace="default",
image="bash",
# All parameters below are able to be templated with jinja -- cmds,
# arguments, env_vars, and config_file. For more information visit:
# https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# DS in jinja is the execution date as YYYY-MM-DD, this docker image
# will echo the execution date. Arguments to the entrypoint. The docker
# image's CMD is used if this is not provided. The arguments parameter
# is templated.
arguments=["{{ ds }}"],
# The var template variable allows you to access variables defined in
# Airflow UI. In this case we are getting the value of my_value and
# setting the environment variable `MY_VALUE`. The pod will fail if
# `my_value` is not set in the Airflow UI.
env_vars={"MY_VALUE": "{{ var.value.my_value }}"},
# Sets the config file to a kubernetes config file specified in
# airflow.cfg. If the configuration file does not exist or does
# not provide valid credentials the pod will fail to launch. If not
# specified, config_file defaults to ~/.kube/config
config_file="{{ conf.get('core', 'kube_config') }}",
)
kubernetes_secret_vars_ex = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-kube-secrets",
name="ex-kube-secrets",
namespace="default",
image="ubuntu",
startup_timeout_seconds=300,
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[secret_env, secret_volume],
# env_vars allows you to specify environment variables for your
# container to use. env_vars is templated.
env_vars={
"EXAMPLE_VAR": "/example/value",
"GOOGLE_APPLICATION_CREDENTIALS": "/var/secrets/google/service-account.json ",
},
)
kubernetes_affinity_ex = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-pod-affinity",
name="ex-pod-affinity",
namespace="default",
image="perl:5.34.0",
cmds=["perl"],
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# affinity allows you to constrain which nodes your pod is eligible to
# be scheduled on, based on labels on the node. In this case, if the
# label 'cloud.google.com/gke-nodepool' with value
# 'pool-0' or 'pool-1' is not found on any
# nodes, it will fail to schedule.
affinity={
"nodeAffinity": {
# requiredDuringSchedulingIgnoredDuringExecution means in order
# for a pod to be scheduled on a node, the node must have the
# specified labels. However, if labels on a node change at
# runtime such that the affinity rules on a pod are no longer
# met, the pod will still continue to run on the node.
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
# When nodepools are created in Google Kubernetes
# Engine, the nodes inside of that nodepool are
# automatically assigned the label
# 'cloud.google.com/gke-nodepool' with the value of
# the nodepool's name.
"key": "cloud.google.com/gke-nodepool",
"operator": "In",
# The label key's value that pods can be scheduled
# on.
"values": [
"pool-0",
"pool-1",
],
}
]
}
]
}
}
},
)
kubernetes_full_pod = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-all-configs",
name="pi",
namespace="default",
image="perl:5.34.0",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["perl"],
# Arguments to the entrypoint. The docker image's CMD is used if this
# is not provided. The arguments parameter is templated.
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[],
# Labels to apply to the Pod.
labels={"pod-label": "label-name"},
# Timeout to start up the Pod, default is 120.
startup_timeout_seconds=120,
# The environment variables to be initialized in the container
# env_vars are templated.
env_vars={"EXAMPLE_VAR": "/example/value"},
# If true, logs stdout output of container. Defaults to True.
get_logs=True,
# Determines when to pull a fresh image, if 'IfNotPresent' will cause
# the Kubelet to skip pulling an image if it already exists. If you
# want to always pull a new image, set it to 'Always'.
image_pull_policy="Always",
# Annotations are non-identifying metadata you can attach to the Pod.
# Can be a large range of data, and can include characters that are not
# permitted by labels.
annotations={"key1": "value1"},
# Optional resource specifications for Pod, this will allow you to
# set both cpu and memory limits and requirements.
# Prior to Airflow 1.10.4, resource specifications were
# passed as a Pod Resources Class object,
# If using this example on a version of Airflow prior to 1.10.4,
# import the "pod" package from airflow.contrib.kubernetes and use
# resources = pod.Resources() instead passing a dict
# For more info see:
# https://github.com/apache/airflow/pull/4551
resources={"limit_memory": "250M", "limit_cpu": "100m"},
# Specifies path to kubernetes config. If no config is specified will
# default to '~/.kube/config'. The config_file is templated.
config_file="/home/airflow/composer_kube_config",
# If true, the content of /airflow/xcom/return.json from container will
# also be pushed to an XCom when the container ends.
do_xcom_push=False,
# List of Volume objects to pass to the Pod.
volumes=[],
# List of VolumeMount objects to pass to the Pod.
volume_mounts=[],
# Affinity determines which nodes the Pod can run on based on the
# config. For more information see:
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
affinity={},
)
Minimal configuration
To create a KubernetesPodOperator, you only need to provide the Pod's name, the namespace to run the Pod in, the image to use, and the task_id.
When you place the following code snippet in a DAG, the configuration uses the defaults in /home/airflow/composer_kube_config. You don't need to modify the code for the pod-ex-minimum task to succeed.
Airflow 2
kubernetes_min_pod = KubernetesPodOperator(
# The ID specified for the task.
task_id="pod-ex-minimum",
# Name of task you want to run, used to generate Pod ID.
name="pod-ex-minimum",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# The namespace to run within Kubernetes, default namespace is
# `default`. In Composer 1 there is the potential for
# the resource starvation of Airflow workers and scheduler
# within the Cloud Composer environment,
# the recommended solution is to increase the amount of nodes in order
# to satisfy the computing requirements. Alternatively, launching pods
# into a custom namespace will stop fighting over resources,
# and using Composer 2 will mean the environment will autoscale.
namespace="default",
# Docker image specified. Defaults to hub.docker.com, but any fully
# qualified URLs will point to a custom repository. Supports private
# gcr.io images if the Composer Environment is under the same
# project-id as the gcr.io images and the service account that Composer
# uses has permission to access the Google Container Registry
# (the default service account has permission)
image="gcr.io/gcp-runtimes/ubuntu_18_0_4",
)
Airflow 1
kubernetes_min_pod = kubernetes_pod_operator.KubernetesPodOperator(
# The ID specified for the task.
task_id="pod-ex-minimum",
# Name of task you want to run, used to generate Pod ID.
name="pod-ex-minimum",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# The namespace to run within Kubernetes, default namespace is
# `default`. There is the potential for the resource starvation of
# Airflow workers and scheduler within the Cloud Composer environment,
# the recommended solution is to increase the amount of nodes in order
# to satisfy the computing requirements. Alternatively, launching pods
# into a custom namespace will stop fighting over resources.
namespace="default",
# Docker image specified. Defaults to hub.docker.com, but any fully
# qualified URLs will point to a custom repository. Supports private
# gcr.io images if the Composer Environment is under the same
# project-id as the gcr.io images and the service account that Composer
# uses has permission to access the Google Container Registry
# (the default service account has permission)
image="gcr.io/gcp-runtimes/ubuntu_18_0_4",
)
Template configuration
Airflow supports using Jinja templating.
You must declare the required variables (task_id, name, namespace, and image) with the operator. As the following example shows, you can template all other parameters with Jinja: cmds, arguments, env_vars, and config_file.
Airflow 2
kubernetes_template_ex = KubernetesPodOperator(
task_id="ex-kube-templates",
name="ex-kube-templates",
namespace="default",
image="bash",
# All parameters below are able to be templated with jinja -- cmds,
# arguments, env_vars, and config_file. For more information visit:
# https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# DS in jinja is the execution date as YYYY-MM-DD, this docker image
# will echo the execution date. Arguments to the entrypoint. The docker
# image's CMD is used if this is not provided. The arguments parameter
# is templated.
arguments=["{{ ds }}"],
# The var template variable allows you to access variables defined in
# Airflow UI. In this case we are getting the value of my_value and
# setting the environment variable `MY_VALUE`. The pod will fail if
# `my_value` is not set in the Airflow UI.
env_vars={"MY_VALUE": "{{ var.value.my_value }}"},
# Sets the config file to a kubernetes config file specified in
# airflow.cfg. If the configuration file does not exist or does
# not provide valid credentials the pod will fail to launch. If not
# specified, config_file defaults to ~/.kube/config
config_file="{{ conf.get('core', 'kube_config') }}",
)
Airflow 1
kubernetes_template_ex = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-kube-templates",
name="ex-kube-templates",
namespace="default",
image="bash",
# All parameters below are able to be templated with jinja -- cmds,
# arguments, env_vars, and config_file. For more information visit:
# https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["echo"],
# DS in jinja is the execution date as YYYY-MM-DD, this docker image
# will echo the execution date. Arguments to the entrypoint. The docker
# image's CMD is used if this is not provided. The arguments parameter
# is templated.
arguments=["{{ ds }}"],
# The var template variable allows you to access variables defined in
# Airflow UI. In this case we are getting the value of my_value and
# setting the environment variable `MY_VALUE`. The pod will fail if
# `my_value` is not set in the Airflow UI.
env_vars={"MY_VALUE": "{{ var.value.my_value }}"},
# Sets the config file to a kubernetes config file specified in
# airflow.cfg. If the configuration file does not exist or does
# not provide valid credentials the pod will fail to launch. If not
# specified, config_file defaults to ~/.kube/config
config_file="{{ conf.get('core', 'kube_config') }}",
)
If you don't change the DAG or your environment, the ex-kube-templates task fails because of two errors. The logs show that the task fails because the appropriate variable (my_value) does not exist. The second error, which you can get after fixing the first error, shows that the task fails because core/kube_config is not found in config.
To fix both errors, follow these steps:
Set my_value by using gcloud or the Airflow UI.
To reference a custom config_file (a Kubernetes configuration file), override the kube_config Airflow configuration option with a valid Kubernetes configuration:
Section: core, Key: kube_config, Value: /home/airflow/composer_kube_config
Wait a few minutes until your environment updates. Then run the ex-kube-templates task again and verify that it succeeds.
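As an alternative sketch, the my_value variable can also be created from Python code that runs inside the Airflow environment (for example, from a one-off maintenance DAG); the value shown is a placeholder.
from airflow.models import Variable

# Creates (or updates) the Airflow variable that the templated
# env_vars entry `{{ var.value.my_value }}` resolves at run time.
Variable.set("my_value", "example_value")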
Secret variables configuration
A Kubernetes Secret is an object that contains sensitive data. You can pass Secrets to Kubernetes Pods by using the KubernetesPodOperator.
If a Secret is not defined in Kubernetes, the Pod fails to launch.
This example shows two ways of using Kubernetes Secrets: as an environment variable and as a volume mounted by the Pod.
The first Secret, airflow-secrets, is set to a Kubernetes environment variable named SQL_CONN (as opposed to an Airflow or Cloud Composer environment variable).
The second Secret, service-account, mounts service-account.json, a file with a service account token, to /var/secrets/google.
Here's what the Secrets look like:
Airflow 2
secret_env = Secret(
# Expose the secret as environment variable.
deploy_type="env",
# The name of the environment variable, since deploy_type is `env` rather
# than `volume`.
deploy_target="SQL_CONN",
# Name of the Kubernetes Secret
secret="airflow-secrets",
# Key of a secret stored in this Secret object
key="sql_alchemy_conn",
)
secret_volume = Secret(
deploy_type="volume",
# Path where we mount the secret as volume
deploy_target="/var/secrets/google",
# Name of Kubernetes Secret
secret="service-account",
# Key in the form of service account file name
key="service-account.json",
)
Airflow 1
secret_env = secret.Secret(
# Expose the secret as environment variable.
deploy_type="env",
# The name of the environment variable, since deploy_type is `env` rather
# than `volume`.
deploy_target="SQL_CONN",
# Name of the Kubernetes Secret
secret="airflow-secrets",
# Key of a secret stored in this Secret object
key="sql_alchemy_conn",
)
secret_volume = secret.Secret(
deploy_type="volume",
# Path where we mount the secret as volume
deploy_target="/var/secrets/google",
# Name of Kubernetes Secret
secret="service-account",
# Key in the form of service account file name
key="service-account.json",
)
The name of the first Kubernetes Secret is defined in the secret variable.
This particular Secret is named airflow-secrets. It is exposed as an environment variable, as dictated by the deploy_type. The environment variable that it sets, deploy_target, is SQL_CONN. Finally, the key of the secret stored in the deploy_target is sql_alchemy_conn.
The name of the second Kubernetes Secret is defined in the secret variable.
This particular Secret is named service-account. It is exposed as a volume, as dictated by the deploy_type. The path of the file to mount, deploy_target, is /var/secrets/google. Finally, the key of the secret stored in the deploy_target is service-account.json.
Here's what the operator configuration looks like:
Airflow 2
kubernetes_secret_vars_ex = KubernetesPodOperator(
task_id="ex-kube-secrets",
name="ex-kube-secrets",
namespace="default",
image="ubuntu",
startup_timeout_seconds=300,
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[secret_env, secret_volume],
# env_vars allows you to specify environment variables for your
# container to use. env_vars is templated.
env_vars={
"EXAMPLE_VAR": "/example/value",
"GOOGLE_APPLICATION_CREDENTIALS": "/var/secrets/google/service-account.json ",
},
)
Airflow 1
kubernetes_secret_vars_ex = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-kube-secrets",
name="ex-kube-secrets",
namespace="default",
image="ubuntu",
startup_timeout_seconds=300,
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[secret_env, secret_volume],
# env_vars allows you to specify environment variables for your
# container to use. env_vars is templated.
env_vars={
"EXAMPLE_VAR": "/example/value",
"GOOGLE_APPLICATION_CREDENTIALS": "/var/secrets/google/service-account.json ",
},
)
If you don't change the DAG or your environment, the ex-kube-secrets task fails. If you look at the logs, the task fails because of a Pod took too long to start error. This error occurs because Airflow cannot find the secret specified in the configuration, secret_env.
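A minimal sketch of creating the two missing Secrets with the official Kubernetes Python client follows. It assumes that your kubeconfig already points at the environment's GKE cluster, and the secret values are placeholders; creating the Secrets with kubectl works just as well.
from kubernetes import client, config

# Load credentials for the environment's cluster from the local kubeconfig.
config.load_kube_config()
v1 = client.CoreV1Api()

# Secret exposed as the SQL_CONN environment variable in the example.
v1.create_namespaced_secret(
    namespace="default",
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(name="airflow-secrets"),
        string_data={"sql_alchemy_conn": "test_value"},  # placeholder value
    ),
)
# Secret mounted as a volume at /var/secrets/google in the example.
v1.create_namespaced_secret(
    namespace="default",
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(name="service-account"),
        string_data={"service-account.json": "{}"},  # placeholder content
    ),
)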
After you set the Secrets, run the ex-kube-secrets task again in the Airflow UI.
Verify that the ex-kube-secrets task succeeds.
Pod affinity configuration
When you configure the affinity parameter in the KubernetesPodOperator, you control which nodes to schedule Pods on, such as nodes only in a particular node pool. In this example, the operator runs only on node pools named pool-0 and pool-1. Your Cloud Composer 1 environment nodes are in the default-pool, so your Pods do not run on the nodes in your environment.
Airflow 2
# Pod affinity with the KubernetesPodOperator
# is not supported with Composer 2
# instead, create a cluster and use the GKEStartPodOperator
# https://cloud.google.com/composer/docs/using-gke-operator
kubernetes_affinity_ex = KubernetesPodOperator(
task_id="ex-pod-affinity",
name="ex-pod-affinity",
namespace="default",
image="perl:5.34.0",
cmds=["perl"],
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# affinity allows you to constrain which nodes your pod is eligible to
# be scheduled on, based on labels on the node. In this case, if the
# label 'cloud.google.com/gke-nodepool' with value
# 'pool-0' or 'pool-1' is not found on any
# nodes, it will fail to schedule.
affinity={
"nodeAffinity": {
# requiredDuringSchedulingIgnoredDuringExecution means in order
# for a pod to be scheduled on a node, the node must have the
# specified labels. However, if labels on a node change at
# runtime such that the affinity rules on a pod are no longer
# met, the pod will still continue to run on the node.
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
# When nodepools are created in Google Kubernetes
# Engine, the nodes inside of that nodepool are
# automatically assigned the label
# 'cloud.google.com/gke-nodepool' with the value of
# the nodepool's name.
"key": "cloud.google.com/gke-nodepool",
"operator": "In",
# The label key's value that pods can be scheduled
# on.
"values": [
"pool-0",
"pool-1",
],
}
]
}
]
}
}
},
)
Airflow 1
kubernetes_affinity_ex = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-pod-affinity",
name="ex-pod-affinity",
namespace="default",
image="perl:5.34.0",
cmds=["perl"],
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# affinity allows you to constrain which nodes your pod is eligible to
# be scheduled on, based on labels on the node. In this case, if the
# label 'cloud.google.com/gke-nodepool' with value
# 'pool-0' or 'pool-1' is not found on any
# nodes, it will fail to schedule.
affinity={
"nodeAffinity": {
# requiredDuringSchedulingIgnoredDuringExecution means in order
# for a pod to be scheduled on a node, the node must have the
# specified labels. However, if labels on a node change at
# runtime such that the affinity rules on a pod are no longer
# met, the pod will still continue to run on the node.
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
# When nodepools are created in Google Kubernetes
# Engine, the nodes inside of that nodepool are
# automatically assigned the label
# 'cloud.google.com/gke-nodepool' with the value of
# the nodepool's name.
"key": "cloud.google.com/gke-nodepool",
"operator": "In",
# The label key's value that pods can be scheduled
# on.
"values": [
"pool-0",
"pool-1",
],
}
]
}
]
}
}
},
)
As the example is currently configured, the task fails. If you look at the logs, the task fails because node pools pool-0 and pool-1 do not exist.
To make sure that the node pools in values exist, make any of the following configuration changes:
If you created a node pool earlier, replace pool-0 and pool-1 with the names of your node pools and upload your DAG again.
Create a node pool named pool-0 or pool-1. You can create both, but only one is needed for the task to succeed.
Replace pool-0 and pool-1 with default-pool, the default pool that Airflow uses, and upload your DAG again.
After you make the changes, wait a few minutes for your environment to update.
Then run the ex-pod-affinity task again and verify that it succeeds.
Full configuration
This example shows all the variables that you can configure in the KubernetesPodOperator. You don't need to modify the code for the ex-all-configs task to succeed.
Airflow 2
kubernetes_full_pod = KubernetesPodOperator(
task_id="ex-all-configs",
name="pi",
namespace="default",
image="perl:5.34.0",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["perl"],
# Arguments to the entrypoint. The docker image's CMD is used if this
# is not provided. The arguments parameter is templated.
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[],
# Labels to apply to the Pod.
labels={"pod-label": "label-name"},
# Timeout to start up the Pod, default is 120.
startup_timeout_seconds=120,
# The environment variables to be initialized in the container
# env_vars are templated.
env_vars={"EXAMPLE_VAR": "/example/value"},
# If true, logs stdout output of container. Defaults to True.
get_logs=True,
# Determines when to pull a fresh image, if 'IfNotPresent' will cause
# the Kubelet to skip pulling an image if it already exists. If you
# want to always pull a new image, set it to 'Always'.
image_pull_policy="Always",
# Annotations are non-identifying metadata you can attach to the Pod.
# Can be a large range of data, and can include characters that are not
# permitted by labels.
annotations={"key1": "value1"},
# Optional resource specifications for Pod, this will allow you to
# set both cpu and memory limits and requirements.
# Prior to Airflow 2.3 and the cncf providers package 5.0.0
# resources were passed as a dictionary. This change was made in
# https://github.com/apache/airflow/pull/27197
# Additionally, "memory" and "cpu" were previously named
# "limit_memory" and "limit_cpu"
# resources={'limit_memory': "250M", 'limit_cpu': "100m"},
container_resources=k8s_models.V1ResourceRequirements(
limits={"memory": "250M", "cpu": "100m"},
),
# Specifies path to kubernetes config. If no config is specified will
# default to '~/.kube/config'. The config_file is templated.
config_file="/home/airflow/composer_kube_config",
# If true, the content of /airflow/xcom/return.json from container will
# also be pushed to an XCom when the container ends.
do_xcom_push=False,
# List of Volume objects to pass to the Pod.
volumes=[],
# List of VolumeMount objects to pass to the Pod.
volume_mounts=[],
# Affinity determines which nodes the Pod can run on based on the
# config. For more information see:
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
# Pod affinity with the KubernetesPodOperator
# is not supported with Composer 2
# instead, create a cluster and use the GKEStartPodOperator
# https://cloud.google.com/composer/docs/using-gke-operator
affinity={},
)
Airflow 1
kubernetes_full_pod = kubernetes_pod_operator.KubernetesPodOperator(
task_id="ex-all-configs",
name="pi",
namespace="default",
image="perl:5.34.0",
# Entrypoint of the container, if not specified the Docker container's
# entrypoint is used. The cmds parameter is templated.
cmds=["perl"],
# Arguments to the entrypoint. The docker image's CMD is used if this
# is not provided. The arguments parameter is templated.
arguments=["-Mbignum=bpi", "-wle", "print bpi(2000)"],
# The secrets to pass to Pod, the Pod will fail to create if the
# secrets you specify in a Secret object do not exist in Kubernetes.
secrets=[],
# Labels to apply to the Pod.
labels={"pod-label": "label-name"},
# Timeout to start up the Pod, default is 120.
startup_timeout_seconds=120,
# The environment variables to be initialized in the container
# env_vars are templated.
env_vars={"EXAMPLE_VAR": "/example/value"},
# If true, logs stdout output of container. Defaults to True.
get_logs=True,
# Determines when to pull a fresh image, if 'IfNotPresent' will cause
# the Kubelet to skip pulling an image if it already exists. If you
# want to always pull a new image, set it to 'Always'.
image_pull_policy="Always",
# Annotations are non-identifying metadata you can attach to the Pod.
# Can be a large range of data, and can include characters that are not
# permitted by labels.
annotations={"key1": "value1"},
# Optional resource specifications for Pod, this will allow you to
# set both cpu and memory limits and requirements.
# Prior to Airflow 1.10.4, resource specifications were
# passed as a Pod Resources Class object,
# If using this example on a version of Airflow prior to 1.10.4,
# import the "pod" package from airflow.contrib.kubernetes and use
# resources = pod.Resources() instead passing a dict
# For more info see:
# https://github.com/apache/airflow/pull/4551
resources={"limit_memory": "250M", "limit_cpu": "100m"},
# Specifies path to kubernetes config. If no config is specified will
# default to '~/.kube/config'. The config_file is templated.
config_file="/home/airflow/composer_kube_config",
# If true, the content of /airflow/xcom/return.json from container will
# also be pushed to an XCom when the container ends.
do_xcom_push=False,
# List of Volume objects to pass to the Pod.
volumes=[],
# List of VolumeMount objects to pass to the Pod.
volume_mounts=[],
# Affinity determines which nodes the Pod can run on based on the
# config. For more information see:
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
affinity={},
)
About the CNCF Kubernetes Provider
The GKEStartPodOperator and KubernetesPodOperator are implemented within the apache-airflow-providers-cncf-kubernetes provider.
In version 6.0.0 of the CNCF Kubernetes Provider package, the kubernetes_default connection is used in the KubernetesPodOperator by default.
If you specified a custom connection in version 5.0.0, this custom connection is still used by the operator. To switch back to using the kubernetes_default connection, you might need to adjust your DAGs accordingly.
Version 5.0.0
This version introduces a few backward-incompatible changes compared to version 4.4.0. The most important ones are related to the kubernetes_default connection, which is not used in version 5.0.0.
You need to modify the kubernetes_default connection: set the Kube config path to /home/airflow/composer_kube_config (see Figure 1),
or add config_file to the KubernetesPodOperator configuration (as shown in the following code example).
Modify your task code that uses the KubernetesPodOperator in the following way:
KubernetesPodOperator(
# config_file parameter - can be skipped if connection contains this setting
config_file="/home/airflow/composer_kube_config",
# definition of connection to be used by the operator
kubernetes_conn_id='kubernetes_default',
...
)
Detailed Pod logs are available in the Google Cloud console under GKE Workloads. These logs include the Pod definition YAML file, Pod events, and Pod details.
Non-zero return codes when using the GKEStartPodOperator
When you use the KubernetesPodOperator and the GKEStartPodOperator, the return code of the container's entrypoint determines whether the task is considered successful. Non-zero return codes indicate failure.
A common pattern when using the KubernetesPodOperator and the GKEStartPodOperator is to execute a shell script as the container entrypoint to group multiple operations within the container.
If you are writing such a script, include the set -e command at the top of the script so that failed commands in the script terminate the script and propagate the failure to the Airflow task instance.
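A minimal sketch of that pattern, assuming the container image ships bash; the grouped commands are placeholders.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

# `set -e` makes the first failing command exit the script with a non-zero
# code, which marks the Airflow task instance as failed.
grouped_commands_pod = KubernetesPodOperator(
    task_id="ex-set-e",
    name="ex-set-e",
    namespace="default",
    image="ubuntu",
    cmds=["bash", "-c"],
    arguments=["set -e; echo 'step one'; ls /does-not-exist; echo 'never runs'"],
)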
Pod timeouts
The default timeout for the KubernetesPodOperator is 120 seconds, which can result in timeouts occurring before larger images finish downloading. You can increase the timeout by changing the startup_timeout_seconds parameter when you create the KubernetesPodOperator.
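For example, a sketch that raises the startup timeout for an image that is slow to pull (the image name is a placeholder):
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

# Allows up to 10 minutes for the Pod to start instead of the 120-second
# default, so that pulling a large image does not time the task out.
slow_image_pod = KubernetesPodOperator(
    task_id="ex-startup-timeout",
    name="ex-startup-timeout",
    namespace="default",
    image="gcr.io/example-project/large-custom-image:latest",  # placeholder
    cmds=["echo"],
    startup_timeout_seconds=600,
)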
When a Pod times out, the task-specific log is available in the Airflow UI. For example:
Executing <Task(KubernetesPodOperator): ex-all-configs> on 2018-07-23 19:06:58.133811
Running: ['bash', '-c', u'airflow run kubernetes-pod-example ex-all-configs 2018-07-23T19:06:58.133811 --job_id 726 --raw -sd DAGS_FOLDER/kubernetes_pod_operator_sample.py']
Event: pod-name-9a8e9d06 had an event of type Pending
...
...
Event: pod-name-9a8e9d06 had an event of type Pending
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python2.7/site-packages/airflow/bin/cli.py", line 392, in run
pool=args.pool,
File "/usr/local/lib/python2.7/site-packages/airflow/utils/db.py", line 50, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/airflow/models.py", line 1492, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python2.7/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py", line 123, in execute
raise AirflowException('Pod Launching failed: {error}'.format(error=ex))
airflow.exceptions.AirflowException: Pod Launching failed: Pod took too long to start
Pod timeouts can also occur when the Cloud Composer service account lacks the necessary IAM permissions to perform the task at hand. To verify this, look at Pod-level errors using the GKE dashboards to see the logs for your particular workload, or use Cloud Logging.
Failed to establish a new connection
Auto-upgrade is enabled by default in GKE clusters.
If a node pool is in a cluster that is being upgraded, you might see the following error:
<Task(KubernetesPodOperator): gke-upgrade> Failed to establish a new
connection: [Errno 111] Connection refused
To check whether your cluster is being upgraded, go to the Kubernetes clusters page in the Google Cloud console and look for the loading icon next to the name of your environment's cluster.
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2024-05-31(UTC)"],[],[]]