Upgrading environments

This page describes how to upgrade the Airflow version or Cloud Composer version that your environment runs.

During upgrade, Cloud Composer:

  • Re-deploys the Airflow scheduler and worker pods in a new Kubernetes namespace. After the upgrade completes, Airflow uses a new MySQL database whose name matches the Kubernetes namespace. The DAG run history is preserved.

  • Updates the Airflow airflow_db connection to point to the new Cloud SQL database.

These changes affect how you access pods and connect to the Cloud SQL database.

  • To access pods in the GKE cluster after the upgrade, use namespace-aware kubectl commands. For example, to list pods in the cluster, use kubectl get pods -A. To execute a command on a pod, use kubectl exec -n <NAMESPACE> ....
  • If you use Airflow connections or workloads that reference the SQL proxy directly, use the default namespace as part of the hostname: airflow-sqlproxy-service.default, not airflow-sqlproxy-service. Both patterns are shown in the examples after this list.
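
For example, the following commands illustrate both patterns, where <NAMESPACE> and <WORKER_POD_NAME> are placeholders for your own values:

# List pods in all namespaces to locate the Airflow scheduler and workers.
kubectl get pods --all-namespaces

# Run a shell on a specific pod in its namespace.
kubectl exec -it -n <NAMESPACE> <WORKER_POD_NAME> -- /bin/bash

# Example Airflow connection URI that reaches the SQL proxy through the
# default namespace (3306 is the MySQL default port; the database name
# varies per environment):
# mysql://root@airflow-sqlproxy-service.default:3306/<DB_NAME>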

Upgrading Cloud Composer does not change how you connect to the resources in your environment, such as Google Kubernetes Engine node VM IP addresses, the Cloud SQL instance IP address, the Cloud Storage bucket, or the Airflow web server domain name.

Before you begin

  • Upgrading environments is currently in Preview. Use this feature with caution in production environments.
  • To upgrade, you need the roles/editor or roles/composer.admin role.
  • Pause all DAGs and wait for in-progress tasks to finish before upgrading (see the example command after this list).
  • You can upgrade the Cloud Composer version, the Airflow version, or both at the same time.
  • The Cloud Composer-Airflow combination that you are upgrading to must be a released version.
    • For available upgrades, see Viewing available upgrades. To get the latest features and fixes, consider upgrading to the most recent Cloud Composer release.
    • For the list of PyPI packages and customizations in a supported version, see the Cloud Composer version list.

      Before you upgrade, ensure that you know the differences between the current versions of Airflow and Cloud Composer and the versions that you're upgrading to. Incompatible changes can cause DAGs to break.
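
For example, one way to pause a DAG from the command line before you upgrade is to run the Airflow pause subcommand through gcloud, where DAG_ID is a placeholder for the ID of your DAG:

gcloud composer environments run ENVIRONMENT_NAME \
    --location LOCATION pause -- DAG_ID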

Airflow Database Maintenance

The Airflow database accumulates more data over time.

You can use the following maintenance DAG to prune your database. The DAG removes old entries from the DagRun, TaskInstance, Log, XCom, Job, and SlaMiss tables.

Database maintenance DAG

"""
A maintenance workflow that you can deploy into Airflow to periodically clean
out the DagRun, TaskInstance, Log, XCom, Job DB and SlaMiss entries to avoid
having too much data in your Airflow MetaStore.

## Authors

The DAG is a fork of [teamclairvoyant repository.](https://github.com/teamclairvoyant/airflow-maintenance-dags/tree/master/db-cleanup)

## Usage

1. Update the global variables (SCHEDULE_INTERVAL, DAG_OWNER_NAME,
  ALERT_EMAIL_ADDRESSES and ENABLE_DELETE) in the DAG with the desired values.

2. Modify the DATABASE_OBJECTS list to add/remove objects as needed. Each
   dictionary in the list features the following parameters:
    - airflow_db_model: Model imported from airflow.models corresponding to
      a table in the airflow metadata database
    - age_check_column: Column in the model/table to use for calculating max
      date of data deletion
    - keep_last: Boolean to specify whether to preserve last run instance
    - keep_last_filters: List of filters to preserve data from deleting
      during clean-up, such as DAG runs where the external trigger is set to 0.
    - keep_last_group_by: Option to specify column by which to group the
      database entries and perform aggregate functions.

3. Create and Set the following Variables in the Airflow Web Server
  (Admin -> Variables)
    - airflow_db_cleanup__max_db_entry_age_in_days - integer - Number of days
      to retain database entries if not already provided in the DAG run conf.
      If this is set to 30, the job removes entries that are 30 days old or
      older. You can also set this Variable with gcloud; see the example after
      the DAG code.

4. Upload the DAG to the Cloud Storage bucket of your environment.
"""
from datetime import datetime, timedelta
import logging
import os

import airflow
from airflow import settings
from airflow.configuration import conf
from airflow.jobs import BaseJob
from airflow.models import DAG, DagModel, DagRun, Log, SlaMiss, \
    TaskInstance, Variable, XCom
from airflow.operators.python_operator import PythonOperator
import dateutil.parser
from sqlalchemy import and_, func
from sqlalchemy.exc import ProgrammingError
from sqlalchemy.orm import load_only

try:
    # airflow.utils.timezone is available from v1.10 onwards
    from airflow.utils import timezone
    now = timezone.utcnow
except ImportError:
    now = datetime.utcnow

# airflow-db-cleanup
DAG_ID = os.path.basename(__file__).replace(".pyc", "").replace(".py", "")
START_DATE = airflow.utils.dates.days_ago(1)
# How often to Run. @daily - Once a day at Midnight (UTC)
SCHEDULE_INTERVAL = "@daily"
# Who is listed as the owner of this DAG in the Airflow Web Server
DAG_OWNER_NAME = "operations"
# List of email address to send email alerts to if this job fails
ALERT_EMAIL_ADDRESSES = []
# Number of days to retain the database entries if not already provided in the
# conf. If this is set to 30, the job will remove entries that are 30 days old
# or older.

DEFAULT_MAX_DB_ENTRY_AGE_IN_DAYS = int(
    Variable.get("airflow_db_cleanup__max_db_entry_age_in_days", 30))
# Whether to print the database entries that will be deleted; set to False
# to avoid printing large lists and slowing down the process.
PRINT_DELETES = False
# Whether the job should delete the db entries or not. Included if you want to
# temporarily avoid deleting the db entries.
ENABLE_DELETE = True
# List of all the objects that will be deleted. Comment out the DB objects you
# want to skip.
DATABASE_OBJECTS = [{
    "airflow_db_model": BaseJob,
    "age_check_column": BaseJob.latest_heartbeat,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None
}, {
    "airflow_db_model": DagRun,
    "age_check_column": DagRun.execution_date,
    "keep_last": True,
    "keep_last_filters": [DagRun.external_trigger.is_(False)],
    "keep_last_group_by": DagRun.dag_id
}, {
    "airflow_db_model": TaskInstance,
    "age_check_column": TaskInstance.execution_date,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None
}, {
    "airflow_db_model": Log,
    "age_check_column": Log.dttm,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None
}, {
    "airflow_db_model": XCom,
    "age_check_column": XCom.execution_date,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None
}, {
    "airflow_db_model": SlaMiss,
    "age_check_column": SlaMiss.execution_date,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None
}, {
    "airflow_db_model": DagModel,
    "age_check_column": DagModel.last_scheduler_run,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None
}]

# Check for TaskReschedule model
try:
    from airflow.models import TaskReschedule
    DATABASE_OBJECTS.append({
        "airflow_db_model": TaskReschedule,
        "age_check_column": TaskReschedule.execution_date,
        "keep_last": False,
        "keep_last_filters": None,
        "keep_last_group_by": None
    })

except Exception as e:
    logging.error(e)

# Check for TaskFail model
try:
    from airflow.models import TaskFail
    DATABASE_OBJECTS.append({
        "airflow_db_model": TaskFail,
        "age_check_column": TaskFail.execution_date,
        "keep_last": False,
        "keep_last_filters": None,
        "keep_last_group_by": None
    })

except Exception as e:
    logging.error(e)

# Check for RenderedTaskInstanceFields model
try:
    from airflow.models import RenderedTaskInstanceFields
    DATABASE_OBJECTS.append({
        "airflow_db_model": RenderedTaskInstanceFields,
        "age_check_column": RenderedTaskInstanceFields.execution_date,
        "keep_last": False,
        "keep_last_filters": None,
        "keep_last_group_by": None
    })

except Exception as e:
    logging.error(e)

# Check for ImportError model
try:
    from airflow.models import ImportError
    DATABASE_OBJECTS.append({
        "airflow_db_model": ImportError,
        "age_check_column": ImportError.timestamp,
        "keep_last": False,
        "keep_last_filters": None,
        "keep_last_group_by": None
    })

except Exception as e:
    logging.error(e)

# Check for celery executor
airflow_executor = str(conf.get("core", "executor"))
logging.info("Airflow Executor: " + str(airflow_executor))
if airflow_executor == "CeleryExecutor":
    logging.info("Including Celery Modules")
    try:
        from celery.backends.database.models import Task, TaskSet
        DATABASE_OBJECTS.extend(({
            "airflow_db_model": Task,
            "age_check_column": Task.date_done,
            "keep_last": False,
            "keep_last_filters": None,
            "keep_last_group_by": None
        }, {
            "airflow_db_model": TaskSet,
            "age_check_column": TaskSet.date_done,
            "keep_last": False,
            "keep_last_filters": None,
            "keep_last_group_by": None
        }))

    except Exception as e:
        logging.error(e)

session = settings.Session()

default_args = {
    "owner": DAG_OWNER_NAME,
    "depends_on_past": False,
    "email": ALERT_EMAIL_ADDRESSES,
    "email_on_failure": True,
    "email_on_retry": False,
    "start_date": START_DATE,
    "retries": 1,
    "retry_delay": timedelta(minutes=1)
}

dag = DAG(
    DAG_ID,
    default_args=default_args,
    schedule_interval=SCHEDULE_INTERVAL,
    start_date=START_DATE)
if hasattr(dag, "doc_md"):
    dag.doc_md = __doc__
if hasattr(dag, "catchup"):
    dag.catchup = False


def print_configuration_function(**context):
    logging.info("Loading Configurations...")
    dag_run_conf = context.get("dag_run").conf
    logging.info("dag_run.conf: " + str(dag_run_conf))
    max_db_entry_age_in_days = None
    if dag_run_conf:
        max_db_entry_age_in_days = dag_run_conf.get(
            "maxDBEntryAgeInDays", None)
    logging.info("maxDBEntryAgeInDays from dag_run.conf: " + str(dag_run_conf))
    if max_db_entry_age_in_days is None or max_db_entry_age_in_days < 1:
        logging.info(
            "maxDBEntryAgeInDays conf variable isn't included or Variable " +
            "value is less than 1. Using Default '" +
            str(DEFAULT_MAX_DB_ENTRY_AGE_IN_DAYS) + "'")
        max_db_entry_age_in_days = DEFAULT_MAX_DB_ENTRY_AGE_IN_DAYS
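    # Compute the cutoff date: entries older than max_date are deleted.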
    max_date = now() + timedelta(-max_db_entry_age_in_days)
    logging.info("Finished Loading Configurations")
    logging.info("")

    logging.info("Configurations:")
    logging.info("max_db_entry_age_in_days: " + str(max_db_entry_age_in_days))
    logging.info("max_date:                 " + str(max_date))
    logging.info("enable_delete:            " + str(ENABLE_DELETE))
    logging.info("session:                  " + str(session))
    logging.info("")

    logging.info("Setting max_execution_date to XCom for Downstream Processes")
    context["ti"].xcom_push(key="max_date", value=max_date.isoformat())


print_configuration = PythonOperator(
    task_id="print_configuration",
    python_callable=print_configuration_function,
    provide_context=True,
    dag=dag)


def cleanup_function(**context):

    logging.info("Retrieving max_execution_date from XCom")
    max_date = context["ti"].xcom_pull(
        task_ids=print_configuration.task_id, key="max_date")
    max_date = dateutil.parser.parse(max_date)  # stored as iso8601 str in xcom

    airflow_db_model = context["params"].get("airflow_db_model")
    state = context["params"].get("state")
    age_check_column = context["params"].get("age_check_column")
    keep_last = context["params"].get("keep_last")
    keep_last_filters = context["params"].get("keep_last_filters")
    keep_last_group_by = context["params"].get("keep_last_group_by")

    logging.info("Configurations:")
    logging.info("max_date:                 " + str(max_date))
    logging.info("enable_delete:            " + str(ENABLE_DELETE))
    logging.info("session:                  " + str(session))
    logging.info("airflow_db_model:         " + str(airflow_db_model))
    logging.info("state:                    " + str(state))
    logging.info("age_check_column:         " + str(age_check_column))
    logging.info("keep_last:                " + str(keep_last))
    logging.info("keep_last_filters:        " + str(keep_last_filters))
    logging.info("keep_last_group_by:       " + str(keep_last_group_by))

    logging.info("")

    logging.info("Running Cleanup Process...")

    try:
        query = session.query(airflow_db_model).options(
            load_only(age_check_column))

        logging.info("INITIAL QUERY : " + str(query))

        if keep_last:

            subquery = session.query(func.max(DagRun.execution_date))
            # workaround for MySQL "table specified twice" issue
            # https://github.com/teamclairvoyant/airflow-maintenance-dags/issues/41
            if keep_last_filters is not None:
                for entry in keep_last_filters:
                    subquery = subquery.filter(entry)

                logging.info("SUB QUERY [keep_last_filters]: " + str(subquery))

            if keep_last_group_by is not None:
                subquery = subquery.group_by(keep_last_group_by)
                logging.info(
                    "SUB QUERY [keep_last_group_by]: " +
                    str(subquery))

            subquery = subquery.from_self()

            query = query.filter(
                and_(age_check_column.notin_(subquery)),
                and_(age_check_column <= max_date))

        else:
            query = query.filter(age_check_column <= max_date,)

        if PRINT_DELETES:
            entries_to_delete = query.all()

            logging.info("Query: " + str(query))
            logging.info("Process will be Deleting the following " +
                         str(airflow_db_model.__name__) + "(s):")
            for entry in entries_to_delete:
                date = str(entry.__dict__[str(age_check_column).split(".")[1]])
                logging.info("\tEntry: " + str(entry) + ", Date: " + date)

            logging.info("Process will be Deleting "
                         + str(len(entries_to_delete)) + " "
                         + str(airflow_db_model.__name__) + "(s)")
        else:
            logging.warning(
                "You've opted to skip printing the db entries to be deleted. "
                "Set PRINT_DELETES to True to show entries!!!")

        if ENABLE_DELETE:
            logging.info("Performing Delete...")
            # using bulk delete
            query.delete(synchronize_session=False)
            session.commit()
            logging.info("Finished Performing Delete")
        else:
            logging.warn("You've opted to skip deleting the db entries. "
                         "Set ENABLE_DELETE to True to delete entries!!!")

        logging.info("Finished Running Cleanup Process")

    except ProgrammingError as e:
        logging.error(e)
        logging.error(
            str(airflow_db_model) + " is not present in the metadata."
            "Skipping...")


for db_object in DATABASE_OBJECTS:

    cleanup_op = PythonOperator(
        task_id="cleanup_" + str(db_object["airflow_db_model"].__name__),
        python_callable=cleanup_function,
        params=db_object,
        provide_context=True,
        dag=dag)

    print_configuration.set_downstream(cleanup_op)
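
To create the airflow_db_cleanup__max_db_entry_age_in_days Variable from the command line instead of the Airflow web UI, you can run the Airflow variables subcommand through gcloud. For example, to retain 30 days of entries:

gcloud composer environments run ENVIRONMENT_NAME \
    --location LOCATION variables -- \
    --set airflow_db_cleanup__max_db_entry_age_in_days 30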

You can also remove entries related to DAGs that are no longer needed, as described in the "Deleting a DAG" section.
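
For example, the Airflow delete_dag subcommand (available in Airflow 1.10 and later) removes the database entries for a single DAG, where DAG_ID is a placeholder for the ID of the DAG to delete:

gcloud composer environments run ENVIRONMENT_NAME \
    --location LOCATION delete_dag -- DAG_ID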

Limitations

  • You cannot downgrade to an earlier version of Cloud Composer or Airflow.
  • You can only upgrade to the latest Cloud Composer version within the same major version, such as composer-1.12.4-airflow-1.10.10 to composer-1.13.0-airflow-1.10.10. Upgrading from composer-1.4.0-airflow-1.10.0 to composer-2.0.0-airflow-1.10.0 is not permitted because the Cloud Composer major version changes from 1 to 2.
  • The image version that you upgrade to must support your environment's current Python version.
  • Upgrades cannot be performed if the Airflow database contains more than 16 GB of data. If it does, a warning is displayed during the upgrade, and you need to perform database maintenance as described in the Airflow Database Maintenance section. A way to check the current database size is sketched after this list.
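
To check whether the database is approaching this limit, you can query its size directly. The following is a minimal sketch, assuming a MySQL-based environment and a GKE cluster where the SQL proxy is reachable; the pod name mysql-cli is arbitrary:

kubectl run mysql-cli --rm -it --image=mysql:5.7 --restart=Never -- \
    mysql --host=airflow-sqlproxy-service.default --user=root \
    --execute="SELECT table_schema, ROUND(SUM(data_length + index_length) / POW(1024, 3), 2) AS size_gb FROM information_schema.tables GROUP BY table_schema;"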

Deprecation messages

Cloud Composer displays warnings when your environment image approaches its end of support date. These warnings help you keep your environments within the full support period.

Cloud Composer keeps track of the Cloud Composer image version that your environment is based on. When the image approaches its end of support date, you can see a warning in the list of environments and on the Environment details page.

To check if your environment image is up to date:

  1. Open the Environments page in the Google Cloud Console.

    Open the Environments page

  2. Click the name of the environment to view its details.

  3. Under Environment configuration, find the Image version field.

  4. In the Image version field, one of the following messages is displayed:

    • Newest available version. Your environment image is fully supported.

    • New version available. Your environment image is fully supported and you can upgrade it to a later version.

    • Support for this image version ends in... Your environment image is approaching the end of the full support period.

    • This version is not supported as of... Your environment is past the full support period.
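
You can also check the image version from the command line, for example:

gcloud composer environments describe ENVIRONMENT_NAME \
    --location LOCATION --format="value(config.softwareConfig.imageVersion)"

The --format flag extracts only the image version field from the environment description.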

Viewing available upgrades

To view Cloud Composer versions that you can upgrade to:

Console

  1. Open the Environments page in the Google Cloud Console.

    Open the Environments page

  2. Click the name of the environment.

  3. In the Environment Configuration tab, click Upgrade Image Version.

  4. To see the available versions, click the Cloud Composer Image version drop-down menu.

gcloud

gcloud beta composer environments list-upgrades ENVIRONMENT_NAME \
    --location LOCATION 

where:

  • ENVIRONMENT_NAME is the name of the environment.
  • LOCATION is the Compute Engine region where the environment is located.

For example:

gcloud beta composer environments list-upgrades test-environment \
    --location us-central1
┌─────────────────────────────────────────────────────────────────────────────┐
│                              AVAILABLE UPGRADES                             │
├──────────────────────────────┬──────────────────┬───────────────────────────┤
│        IMAGE VERSION         │ COMPOSER DEFAULT │ SUPPORTED PYTHON VERSIONS │
├──────────────────────────────┼──────────────────┼───────────────────────────┤
│ composer-1.4.0-airflow-1.9.0 │ True             │ 2,3                       │
└──────────────────────────────┴──────────────────┴───────────────────────────┘

API

To view available versions using the Cloud Composer REST API, construct an imageVersions.list API request and provide the project and location in the form projects/{projectId}/locations/{locationId}.

For example:

GET https://composer.googleapis.com/v1/projects/test-project-id/locations/us-central1/imageVersions

{
  "imageVersions": [
    {
      "imageVersionId": "composer-1.4.2-airflow-1.10.0",
      "supportedPythonVersions": [
        "2",
        "3"
      ]
    },
    {
      "imageVersionId": "composer-1.4.2-airflow-1.9.0",
      "isDefault": true,
      "supportedPythonVersions": [
        "2",
        "3"
      ]
    }
  ]
} 

Upgrading the Cloud Composer version

To upgrade the version of Cloud Composer that your environment runs:

Console

  1. Open the Environments page in the Google Cloud Console.

    Open the Environments page

  2. Click the name of the environment to modify.

  3. In the Environment Configuration tab, click Upgrade Image Version.

  4. Click the Cloud Composer Image version drop-down menu and select a version.

  5. Click Submit.

gcloud

gcloud beta composer environments update ENVIRONMENT_NAME \
    --location LOCATION --image-version VERSION

where:

  • ENVIRONMENT_NAME is the name of the environment.
  • LOCATION is the Compute Engine region where the environment is located.
  • VERSION is the Cloud Composer version and Airflow version to use for your environment in the form composer-a.b.c-airflow-x.y.z or composer-a.b.c-airflow-x.y. If you do not specify the Airflow patch, the highest available patch version for the given major and minor version is used.

For example:

gcloud beta composer environments update test-environment \
    --location us-central1 --image-version composer-latest-airflow-1.10.1 

API

To upgrade using the Cloud Composer REST API, construct an environments.patch API request. Provide the version in the form composer-a.b.c-airflow-x.y.z.

For example:

PATCH https://composer.googleapis.com/v1beta1/projects/test-project/locations/us-central1/environments/test-environment?updateMask=config.software_config.image_version

The request body includes the imageVersion:

{
  "config": {
    "softwareConfig": {
      "imageVersion": "composer-1.6.0-airflow-1.10.1"
    }
  }
}

Upgrading the Airflow version

When your environment is running the latest Cloud Composer version, you can use the Cloud SDK to upgrade only the Airflow version, for example to upgrade from composer-1.6.1-airflow-1.9.0 to composer-1.6.1-airflow-1.10.0.

gcloud beta composer environments update ENVIRONMENT_NAME \
    --location LOCATION --airflow-version VERSION

where:

  • ENVIRONMENT_NAME is the name of the environment.
  • LOCATION is the Compute Engine region where the environment is located.
  • VERSION is the Airflow version to use for your environment in the form x.y.z or x.y.

For example:

gcloud beta composer environments update test-environment \
    --location us-central1 --airflow-version=1.10.1