This procedure covers upgrading from Apigee hybrid version 1.13.x to Apigee hybrid version 1.14.0.
Changes from Apigee hybrid v1.13
Please note the following changes:
- Starting in version 1.14, data plane components write data directly to the control plane by default. This provides increased reliability and compliance for analytics and debug data. See Analytics and debug data collection with data residency.
- Anthos (on bare metal or VMware) is now Google Distributed Cloud (for bare metal or VMware): For more information see the product overviews for Google Distributed Cloud for bare metal and Google Distributed Cloud for VMware.
For additional information about features in Hybrid version 1.14, see the Apigee hybrid v1.14.0 release notes.
Prerequisites
Before upgrading to hybrid version 1.14, make sure your installation meets the following requirements:
- If your hybrid installation is running a version older than v1.13, you must upgrade to version 1.13 before upgrading to v1.14. See Upgrading Apigee hybrid to version 1.13.
- Helm version v3.14.2 or newer.
- kubectl: a supported version of kubectl appropriate for your Kubernetes platform version. See Supported platforms and versions: kubectl.
- cert-manager: a supported version of cert-manager. See Supported platforms and versions: cert-manager. If needed, you will upgrade cert-manager in the Prepare to upgrade to version 1.14 section below.
Before you upgrade to 1.14.0 - limitations and important notes
Apigee hybrid 1.14.0 introduces a new enhanced per-environment proxy limit that lets you deploy more proxies and shared flows in a single environment. See Limits: API Proxies to understand the limits on the number of proxies and shared flows you may deploy per environment. This feature is available only on newly created hybrid organizations, and cannot be applied to upgraded orgs. To use this feature, perform a fresh installation of hybrid 1.14.0, and create a new organization.
This feature is available exclusively as part of the 2024 subscription plan, and is subject to the entitlements granted under that subscription. See Enhanced per-environment proxy limits to learn more about this feature.
Upgrading to Apigee hybrid version 1.14 may require downtime.
When upgrading the Apigee controller to version 1.14.0, all Apigee deployments undergo a rolling restart. To minimize downtime in production hybrid environments during a rolling restart, make sure you are running at least two clusters (in the same or different regions or data centers). Divert all production traffic away from the cluster you are about to upgrade, take that cluster offline, and then proceed with the upgrade. Repeat the process for each cluster.
Apigee recommends that once you begin the upgrade, you upgrade all clusters as soon as possible to reduce the chance of production impact. There is no time limit on when the remaining clusters must be upgraded after the first one. However, until all clusters are upgraded, Cassandra backup and restore cannot be used across mixed versions; for example, a backup from hybrid 1.13 cannot be used to restore a hybrid 1.14 instance.
Management plane changes do not need to be fully suspended during an upgrade. Any required temporary suspensions to management plane changes are noted in the upgrade instructions below.
Upgrading to version 1.14.0 overview
The procedures for upgrading Apigee hybrid are organized in the following sections:
- Prepare to upgrade to version 1.14
- Install the hybrid 1.14.0 runtime
- Rolling back to a previous version

Prepare to upgrade to version 1.14

Back up your hybrid installation
- These instructions use the environment variable APIGEE_HELM_CHARTS_HOME for the directory in your file system where you have installed the Helm charts. If needed, change directory into this directory and define the variable with the following command:

  Linux / macOS:

  export APIGEE_HELM_CHARTS_HOME=$PWD
  echo $APIGEE_HELM_CHARTS_HOME

  Windows:

  set APIGEE_HELM_CHARTS_HOME=%CD%
  echo %APIGEE_HELM_CHARTS_HOME%
- Make a backup copy of your version 1.13 $APIGEE_HELM_CHARTS_HOME/ directory. You can use any backup process. For example, you can create a tar file of your entire directory with:

  tar -czvf $APIGEE_HELM_CHARTS_HOME/../apigee-helm-charts-v1.13-backup.tar.gz $APIGEE_HELM_CHARTS_HOME
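To confirm the archive was written correctly before you proceed, you can count its entries. A minimal sketch building on the tar example above; it falls back to the current directory if APIGEE_HELM_CHARTS_HOME is unset:

```shell
# Sketch: create the 1.13 chart backup and sanity-check the archive before
# upgrading. Falls back to the current directory if APIGEE_HELM_CHARTS_HOME
# is unset.
APIGEE_HELM_CHARTS_HOME=${APIGEE_HELM_CHARTS_HOME:-$PWD}
BACKUP="$APIGEE_HELM_CHARTS_HOME/../apigee-helm-charts-v1.13-backup.tar.gz"

tar -czf "$BACKUP" "$APIGEE_HELM_CHARTS_HOME"

# A truncated or corrupt archive makes tar -t fail; counting the entries it
# lists is a cheap integrity check.
ENTRIES=$(tar -tzf "$BACKUP" | wc -l)
echo "backup contains $ENTRIES entries"
```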
- Back up your Cassandra database following the instructions in Cassandra backup and recovery.
- If you are using service account cert files (.json) in your overrides to authenticate service accounts, make sure your service account cert files reside in the correct Helm chart directory. Helm charts cannot read files outside of each chart directory.

  This step is not required if you are using Kubernetes secrets or Workload Identity to authenticate service accounts.

  The following table shows the destination for each service account file, depending on your type of installation:
  Production:

  | Service account | Default filename | Helm chart directory |
  | --- | --- | --- |
  | apigee-cassandra | PROJECT_ID-apigee-cassandra.json | $APIGEE_HELM_CHARTS_HOME/apigee-datastore/ |
  | apigee-logger | PROJECT_ID-apigee-logger.json | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
  | apigee-mart | PROJECT_ID-apigee-mart.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
  | apigee-metrics | PROJECT_ID-apigee-metrics.json | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
  | apigee-runtime | PROJECT_ID-apigee-runtime.json | $APIGEE_HELM_CHARTS_HOME/apigee-env/ |
  | apigee-synchronizer | PROJECT_ID-apigee-synchronizer.json | $APIGEE_HELM_CHARTS_HOME/apigee-env/ |
  | apigee-udca | PROJECT_ID-apigee-udca.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
  | apigee-watcher | PROJECT_ID-apigee-watcher.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
  Non-production: make a copy of the apigee-non-prod service account file in each of the following directories:

  | Service account | Default filename | Helm chart directories |
  | --- | --- | --- |
  | apigee-non-prod | PROJECT_ID-apigee-non-prod.json | $APIGEE_HELM_CHARTS_HOME/apigee-datastore/<br>$APIGEE_HELM_CHARTS_HOME/apigee-telemetry/<br>$APIGEE_HELM_CHARTS_HOME/apigee-org/<br>$APIGEE_HELM_CHARTS_HOME/apigee-env/ |
- Make sure that your TLS certificate and key files (.crt, .key, and/or .pem) reside in the $APIGEE_HELM_CHARTS_HOME/apigee-virtualhost/ directory.
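The table above can be turned into a quick pre-flight check. A sketch, assuming the default filenames and a hypothetical PROJECT_ID; it reports each expected key file as OK or MISSING:

```shell
# Sketch: verify service account key files sit inside the chart directories
# Helm can read. The filenames follow the table above; PROJECT_ID here is a
# placeholder, so adjust both for your installation.
APIGEE_HELM_CHARTS_HOME=${APIGEE_HELM_CHARTS_HOME:-$PWD}
PROJECT_ID=${PROJECT_ID:-my-project}   # hypothetical project ID

check() {  # usage: check CHART_DIR FILENAME
  if [ -f "$APIGEE_HELM_CHARTS_HOME/$1/$2" ]; then
    echo "OK      $1/$2"
  else
    echo "MISSING $1/$2"
  fi
}

check apigee-datastore "$PROJECT_ID-apigee-cassandra.json"
check apigee-telemetry "$PROJECT_ID-apigee-logger.json"
check apigee-telemetry "$PROJECT_ID-apigee-metrics.json"
check apigee-org       "$PROJECT_ID-apigee-mart.json"
check apigee-org       "$PROJECT_ID-apigee-udca.json"
check apigee-org       "$PROJECT_ID-apigee-watcher.json"
check apigee-env       "$PROJECT_ID-apigee-runtime.json"
check apigee-env       "$PROJECT_ID-apigee-synchronizer.json"
```

If you use the apigee-non-prod account instead, point the same check at that one file copied into each chart directory.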
Upgrade your Kubernetes version
Check your Kubernetes platform version and if needed, upgrade your Kubernetes platform to a version that is supported by both hybrid 1.13 and hybrid 1.14. Follow your platform's documentation if you need help.
The following tables list the supported platforms and component versions for each Apigee hybrid version:

| Platform | 1.10 | 1.11 | 1.12 | 1.13 | 1.14 |
| --- | --- | --- | --- | --- | --- |
| GKE on Google Cloud | 1.24.x<br>1.25.x<br>1.26.x<br>1.27.x (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | 1.25.x<br>1.26.x<br>1.27.x<br>1.28.x (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | 1.26.x<br>1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8) | 1.27.x<br>1.28.x<br>1.29.x<br>1.30.x | 1.28.x<br>1.29.x<br>1.30.x<br>1.31.x |
| GKE on AWS | 1.13.x (K8s v1.24.x)<br>1.14.x (K8s v1.25.x)<br>1.26.x(4)<br>1.27.x (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | 1.14.x (K8s v1.25.x)<br>1.26.x(4)<br>1.27.x<br>1.28.x (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | 1.26.x(4)<br>1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8) | 1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8)<br>1.30.x | 1.28.x<br>1.29.x (≥ 1.12.1)(8)<br>1.30.x<br>1.31.x |
| GKE on Azure | 1.13.x<br>1.14.x<br>1.26.x(4)<br>1.27.x (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | 1.14.x<br>1.26.x(4)<br>1.27.x<br>1.28.x (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | 1.26.x(4)<br>1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8) | 1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8)<br>1.30.x | 1.28.x<br>1.29.x (≥ 1.12.1)(8)<br>1.30.x<br>1.31.x |
| Google Distributed Cloud (software only) on VMware (5) | 1.13.x (1)<br>1.14.x<br>1.15.x (K8s v1.26.x)<br>1.16.x (K8s v1.27.x)<br>1.27.x (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | 1.14.x<br>1.15.x (K8s v1.26.x)<br>1.16.x (K8s v1.27.x)<br>1.28.x (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | 1.15.x (K8s v1.26.x)<br>1.16.x (K8s v1.27.x)<br>1.28.x(4)<br>1.29.x (≥ 1.12.1)(8) | 1.16.x (K8s v1.27.x)<br>1.28.x(4)<br>1.29.x<br>1.30.x | 1.28.x(4)<br>1.29.x<br>1.30.x<br>1.31.x |
| Google Distributed Cloud (software only) on bare metal | 1.13.x (1)<br>1.14.x (K8s v1.25.x)<br>1.15.x (K8s v1.26.x)<br>1.16.x (K8s v1.27.x)<br>1.27.x(4) (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | 1.14.x<br>1.15.x (K8s v1.26.x)<br>1.16.x (K8s v1.27.x)<br>1.28.x(4) (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | 1.15.x (K8s v1.26.x)<br>1.16.x (K8s v1.27.x)<br>1.28.x(4)<br>1.29.x (≥ 1.12.1)(8) | 1.16.x (K8s v1.27.x)<br>1.28.x(4)<br>1.29.x<br>1.30.x | 1.28.x(4)<br>1.29.x<br>1.30.x<br>1.31.x |
| EKS | 1.24.x<br>1.25.x<br>1.26.x<br>1.27.x (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | 1.25.x<br>1.26.x<br>1.27.x<br>1.28.x (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | 1.26.x<br>1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8) | 1.27.x<br>1.28.x<br>1.29.x<br>1.30.x | 1.28.x<br>1.29.x<br>1.30.x<br>1.31.x |
| AKS | 1.24.x<br>1.25.x<br>1.26.x<br>1.27.x (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | 1.25.x<br>1.26.x<br>1.27.x<br>1.28.x (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | 1.26.x<br>1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8) | 1.27.x<br>1.28.x<br>1.29.x<br>1.30.x | 1.28.x<br>1.29.x<br>1.30.x<br>1.31.x |
| OpenShift | 4.11<br>4.12<br>4.14 (≥ 1.10.5)(6)<br>4.15 (≥ 1.10.5)(6) | 4.12<br>4.13<br>4.14<br>4.15 (≥ 1.11.2)(7)<br>4.16 (≥ 1.11.2)(7) | 4.12<br>4.13<br>4.14<br>4.15<br>4.16 (≥ 1.12.1)(8) | 4.12<br>4.13<br>4.14<br>4.15<br>4.16 | 4.13<br>4.14<br>4.15<br>4.16<br>4.17 |
| Rancher Kubernetes Engine (RKE) | v1.26.2+rke2r1<br>1.27.x (≥ 1.10.5)(6)<br>1.28.x (≥ 1.10.5)(6) | v1.26.2+rke2r1<br>v1.27.x<br>1.28.x (≥ 1.11.2)(7)<br>1.29.x (≥ 1.11.2)(7) | v1.26.2+rke2r1<br>v1.27.x<br>1.28.x<br>1.29.x (≥ 1.12.1)(8) | v1.26.2+rke2r1<br>v1.27.x<br>1.28.x<br>1.29.x<br>1.30.x | v1.27.x<br>1.28.x<br>1.29.x<br>1.30.x<br>1.31.x |
| VMware Tanzu | N/A | N/A | v1.26.x | v1.26.x | v1.26.x |

| Components | 1.10 | 1.11 | 1.12 | 1.13 | 1.14 |
| --- | --- | --- | --- | --- | --- |
| Cloud Service Mesh | 1.17.x(3) | 1.17.x (v1.11.0 - v1.11.1)(3)<br>1.18.x (≥ 1.11.2)(7)(3) | 1.18.x(3) | 1.19.x(3) | 1.19.x(3) |
| JDK | JDK 11 | JDK 11 | JDK 11 | JDK 11 | JDK 11 |
| cert-manager | 1.10.x<br>1.11.x<br>1.12.x | 1.11.x<br>1.12.x<br>1.13.x | 1.11.x<br>1.12.x<br>1.13.x | 1.13.x<br>1.14.x<br>1.15.x | 1.14.x<br>1.15.x<br>1.16.x |
| Cassandra | 3.11 | 3.11 | 4.0 | 4.0 | 4.0 |
| Kubernetes | 1.24.x<br>1.25.x<br>1.26.x | 1.25.x<br>1.26.x<br>1.27.x | 1.26.x<br>1.27.x<br>1.28.x<br>1.29.x | 1.27.x<br>1.28.x<br>1.29.x<br>1.30.x | 1.28.x<br>1.29.x<br>1.30.x<br>1.31.x |
| kubectl | 1.24.x<br>1.25.x<br>1.26.x | 1.25.x<br>1.26.x<br>1.27.x | 1.26.x<br>1.27.x<br>1.28.x<br>1.29.x | 1.27.x<br>1.28.x<br>1.29.x<br>1.30.x | 1.28.x<br>1.29.x<br>1.30.x<br>1.31.x |
| Helm | 3.10+ | 3.10+ | 3.14.2+ | 3.14.2+ | 3.14.2+ |
| Secret Store CSI driver | 1.3.4 | 1.4.1 | 1.4.1 | 1.4.6 | |
| Vault | 1.13.x | 1.15.2 | 1.15.2 | 1.17.2 | |

(1) On Anthos on-premises (Google Distributed Cloud) version 1.13, follow these instructions to avoid conflict with
(2) The official EOL dates for Apigee hybrid versions 1.10 and older have been reached. Regular monthly patches are no longer available. These releases are no longer officially supported except for customers with explicit and official exceptions for continued support. Other customers must upgrade.
(3) Cloud Service Mesh is automatically installed with Apigee hybrid 1.9 and newer.
(4) GKE on AWS version numbers now reflect the Kubernetes versions. See GKE Enterprise version and upgrade support for version details and recommended patches.
(5) Vault is not certified on Google Distributed Cloud for VMware.
(6) Support available with Apigee hybrid version 1.10.5 and newer.
(7) Support available with Apigee hybrid version 1.11.2 and newer.
(8) Support available with Apigee hybrid version 1.12.1 and newer.
Install the hybrid 1.14.0 runtime
Configure the data collection pipeline.
- For hybrid v1.14, you must add the following newDataPipeline stanza to your overrides.yaml file to allow data plane components to write directly to Apigee's control plane:

  # Required
  newDataPipeline:
    debugSession: true
    analytics: true
- Follow the steps in Analytics and debug data collection with data residency: Configuration to configure the authorization flow.
Prepare for the Helm charts upgrade
- Pull the Apigee Helm charts.
Apigee hybrid charts are hosted in Google Artifact Registry:
oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
Using the pull command, copy all of the Apigee hybrid Helm charts to your local storage with the following commands:

export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
export CHART_VERSION=1.14.0
helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar
- Upgrade cert-manager if needed.
If you need to upgrade your cert-manager version, install the new version with the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.1/cert-manager.yaml

See Supported platforms and versions: cert-manager for a list of supported versions.
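Before applying the manifest, you may want to confirm whether your installed cert-manager is already in a supported range. A minimal sketch of a semver comparison using sort -V; the INSTALLED value is a placeholder for what you would read from your cluster:

```shell
# Sketch: compare the installed cert-manager version against the minimum
# supported line before upgrading. sort -V orders version strings; the
# INSTALLED default below is only an example.
REQUIRED="1.14.0"
INSTALLED=${INSTALLED:-1.12.3}

if [ "$(printf '%s\n%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -n1)" = "$REQUIRED" ]; then
  echo "cert-manager $INSTALLED is at or above $REQUIRED"
else
  echo "cert-manager $INSTALLED is below $REQUIRED; upgrade it"
fi
```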
- If your Apigee namespace is not apigee, edit the apigee-operator/etc/crds/default/kustomization.yaml file and replace the namespace value with your Apigee namespace.

  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  namespace: APIGEE_NAMESPACE

  If you are using apigee as your namespace you do not need to edit the file.
- Install the updated Apigee CRDs:
- Use the kubectl dry-run feature by running the following command:

  kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false --dry-run=server
- After validating with the dry-run command, run the following command:

  kubectl apply -k apigee-operator/etc/crds/default/ \
    --server-side \
    --force-conflicts \
    --validate=false
- Validate the installation with the kubectl get crds command:

  kubectl get crds | grep apigee

  Your output should look something like the following:

  apigeedatastores.apigee.cloud.google.com            2024-08-21T14:48:30Z
  apigeedeployments.apigee.cloud.google.com           2024-08-21T14:48:30Z
  apigeeenvironments.apigee.cloud.google.com          2024-08-21T14:48:31Z
  apigeeissues.apigee.cloud.google.com                2024-08-21T14:48:31Z
  apigeeorganizations.apigee.cloud.google.com         2024-08-21T14:48:32Z
  apigeeredis.apigee.cloud.google.com                 2024-08-21T14:48:33Z
  apigeerouteconfigs.apigee.cloud.google.com          2024-08-21T14:48:33Z
  apigeeroutes.apigee.cloud.google.com                2024-08-21T14:48:33Z
  apigeetelemetries.apigee.cloud.google.com           2024-08-21T14:48:34Z
  cassandradatareplications.apigee.cloud.google.com   2024-08-21T14:48:35Z
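One way to confirm the expected CRDs are all present is to check the kubectl output programmatically. A sketch that uses a captured sample of the listing above in place of a live cluster; on a real cluster you would feed it `kubectl get crds -o name` output instead:

```shell
# Sketch: check that every expected Apigee CRD appears in the listing. The
# CRD_LIST input normally comes from `kubectl get crds`; a captured sample
# is used here so the check can be exercised offline.
CRD_LIST=${CRD_LIST:-$(cat <<'EOF'
apigeedatastores.apigee.cloud.google.com
apigeedeployments.apigee.cloud.google.com
apigeeenvironments.apigee.cloud.google.com
apigeeissues.apigee.cloud.google.com
apigeeorganizations.apigee.cloud.google.com
apigeeredis.apigee.cloud.google.com
apigeerouteconfigs.apigee.cloud.google.com
apigeeroutes.apigee.cloud.google.com
apigeetelemetries.apigee.cloud.google.com
cassandradatareplications.apigee.cloud.google.com
EOF
)}

MISSING=0
for crd in apigeedatastores apigeeenvironments apigeeorganizations \
           apigeeredis apigeeroutes apigeetelemetries cassandradatareplications; do
  if ! printf '%s\n' "$CRD_LIST" | grep -q "^$crd\.apigee\.cloud\.google\.com"; then
    echo "missing CRD: $crd"
    MISSING=1
  fi
done
[ "$MISSING" -eq 0 ] && echo "all expected Apigee CRDs present"
```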
- Update the replicas of your existing Apigee Operator deployment in the apigee-system namespace to 0 (zero) to avoid the two controllers reconciling.

  kubectl scale deployment apigee-controller-manager -n apigee-system --replicas=0
- Delete the existing mutating and validating webhook configurations so they can be recreated by the new operator:

  kubectl delete mutatingwebhookconfiguration apigee-mutating-webhook-configuration

  kubectl delete validatingwebhookconfiguration apigee-validating-webhook-configuration
- Check the labels on the cluster nodes. By default, Apigee schedules data pods on nodes with the label cloud.google.com/gke-nodepool=apigee-data and runtime pods on nodes with the label cloud.google.com/gke-nodepool=apigee-runtime. You can customize your node pool labels in the overrides.yaml file.

  For more information, see Configuring dedicated node pools.
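For reference, the default labels described above correspond to an overrides.yaml stanza along the following lines. This is a sketch of the stock defaults from memory of the hybrid configuration reference, not an authoritative schema; check the Configuration property reference and adjust the key/value pairs if you use custom labels:

```yaml
# Sketch: default node pool scheduling labels in overrides.yaml.
nodeSelector:
  # When true, pods fail to schedule rather than land on unlabeled nodes.
  requiredForScheduling: true
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"
```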
Install the Apigee hybrid Helm charts
- If you have not already done so, navigate to your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
- Upgrade the Apigee Operator/Controller:

  Dry run:

  helm upgrade operator apigee-operator/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE \
    --dry-run=server

  Upgrade the chart:

  helm upgrade operator apigee-operator/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE

  Verify Apigee Operator installation:

  helm ls -n APIGEE_NAMESPACE

  NAME       NAMESPACE   REVISION   UPDATED                                STATUS     CHART                    APP VERSION
  operator   apigee      3          2024-08-21 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.14.0   1.14.0

  Verify it is up and running by checking its availability:

  kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager

  NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
  apigee-controller-manager   1/1     1            1           7d20h
- Upgrade the Apigee datastore:

  Dry run:

  helm upgrade datastore apigee-datastore/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE \
    --dry-run=server

  Upgrade the chart:

  helm upgrade datastore apigee-datastore/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE

  Verify apigeedatastore is up and running by checking its state:

  kubectl -n APIGEE_NAMESPACE get apigeedatastore default

  NAME      STATE     AGE
  default   running   2d
- Upgrade Apigee telemetry:

  Dry run:

  helm upgrade telemetry apigee-telemetry/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE \
    --dry-run=server

  Upgrade the chart:

  helm upgrade telemetry apigee-telemetry/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE

  Verify it is up and running by checking its state:

  kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry

  NAME               STATE     AGE
  apigee-telemetry   running   2d
- Upgrade Apigee Redis:

  Dry run:

  helm upgrade redis apigee-redis/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE \
    --dry-run=server

  Upgrade the chart:

  helm upgrade redis apigee-redis/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE

  Verify it is up and running by checking its state:

  kubectl -n APIGEE_NAMESPACE get apigeeredis default

  NAME      STATE     AGE
  default   running   2d
- Upgrade Apigee ingress manager:

  Dry run:

  helm upgrade ingress-manager apigee-ingress-manager/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE \
    --dry-run=server

  Upgrade the chart:

  helm upgrade ingress-manager apigee-ingress-manager/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE

  Verify it is up and running by checking its availability:

  kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager

  NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
  apigee-ingressgateway-manager   2/2     2            2           2d
- Upgrade the Apigee organization:

  Dry run:

  helm upgrade ORG_NAME apigee-org/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE \
    --dry-run=server

  Upgrade the chart:

  helm upgrade ORG_NAME apigee-org/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    -f OVERRIDES_FILE

  Verify it is up and running by checking the state of the respective org:

  kubectl -n APIGEE_NAMESPACE get apigeeorg

  NAME                STATE     AGE
  apigee-org1-xxxxx   running   2d
- Upgrade the environment.

  You must install one environment at a time. Specify the environment with --set env=ENV_NAME.

  Dry run:

  helm upgrade ENV_RELEASE_NAME apigee-env/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    --set env=ENV_NAME \
    -f OVERRIDES_FILE \
    --dry-run=server

  - ENV_RELEASE_NAME is the name with which you previously installed the apigee-env chart. In hybrid v1.10 it is usually apigee-env-ENV_NAME. In hybrid v1.11 and newer it is usually ENV_NAME.
  - ENV_NAME is the name of the environment you are upgrading.
  - OVERRIDES_FILE is your new overrides file for v1.14.0.

  Upgrade the chart:

  helm upgrade ENV_RELEASE_NAME apigee-env/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    --set env=ENV_NAME \
    -f OVERRIDES_FILE

  Verify it is up and running by checking the state of the respective env:

  kubectl -n APIGEE_NAMESPACE get apigeeenv

  NAME                  STATE     AGE   GATEWAYTYPE
  apigee-org1-dev-xxx   running   2d
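If you have several environments, the per-environment step above can be scripted. A minimal sketch, assuming the v1.11+ release-name convention and placeholder environment names (dev, test); HELM=echo previews each command instead of running it, so the sequence can be reviewed before switching to HELM=helm:

```shell
# Sketch: drive the per-environment upgrade in a loop. HELM=echo prints each
# command instead of executing it; set HELM=helm to actually run them. The
# environment names and overrides filename below are placeholders.
HELM=${HELM:-echo}
APIGEE_NAMESPACE=${APIGEE_NAMESPACE:-apigee}
OVERRIDES_FILE=${OVERRIDES_FILE:-overrides.yaml}

for ENV_NAME in dev test; do   # replace with your environment names
  # Release name convention: ENV_NAME for installs made with hybrid v1.11+.
  $HELM upgrade "$ENV_NAME" apigee-env/ \
    --install \
    --namespace "$APIGEE_NAMESPACE" \
    --set env="$ENV_NAME" \
    -f "$OVERRIDES_FILE"
done
```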
- Upgrade the environment groups (virtualhosts).

  You must upgrade one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP_NAME. Repeat the following commands for each environment group mentioned in the overrides.yaml file:

  Dry run:

  helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    --set envgroup=ENV_GROUP_NAME \
    -f OVERRIDES_FILE \
    --dry-run=server

  ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. In hybrid v1.10 it is usually apigee-virtualhost-ENV_GROUP_NAME. In hybrid v1.11 and newer it is usually ENV_GROUP_NAME.

  Upgrade the chart:

  helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    --set envgroup=ENV_GROUP_NAME \
    -f OVERRIDES_FILE

- Check the state of the ApigeeRoute (AR).

  Installing the virtualhosts creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls environment group-related details from the control plane. Therefore, check that the corresponding AR's state is running:

  kubectl -n APIGEE_NAMESPACE get arc

  NAME                     STATE   AGE
  apigee-org1-dev-egroup           2d

  kubectl -n APIGEE_NAMESPACE get ar

  NAME                            STATE     AGE
  apigee-org1-dev-egroup-xxxxxx   running   2d
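The same preview-loop pattern works for environment groups. A minimal sketch with a placeholder group name; as before, HELM=echo only prints the commands:

```shell
# Sketch: drive the per-environment-group upgrade in a loop. HELM=echo
# previews each command; set HELM=helm to execute. The group name and
# overrides filename below are placeholders for those in your overrides.
HELM=${HELM:-echo}
APIGEE_NAMESPACE=${APIGEE_NAMESPACE:-apigee}
OVERRIDES_FILE=${OVERRIDES_FILE:-overrides.yaml}

for ENV_GROUP_NAME in default-group; do   # replace with your env group names
  $HELM upgrade "$ENV_GROUP_NAME" apigee-virtualhost/ \
    --install \
    --namespace "$APIGEE_NAMESPACE" \
    --set envgroup="$ENV_GROUP_NAME" \
    -f "$OVERRIDES_FILE"
done
```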
- After you have verified that all the installations are upgraded successfully, delete the older apigee-operator release from the apigee-system namespace.

  - Uninstall the old operator release:

    helm delete operator -n apigee-system

  - Delete the apigee-system namespace:

    kubectl delete namespace apigee-system

- Upgrade operator again in your Apigee namespace to re-install the deleted cluster-scoped resources:

  helm upgrade operator apigee-operator/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    --atomic \
    -f overrides.yaml
Rolling back to a previous version
To roll back to the previous version, use the older chart version to roll back the upgrade process in the reverse order. Start with apigee-virtualhost and work your way back to apigee-operator, and then revert the CRDs.

Because of the change in the namespace for apigee-operator, you need to perform extra steps to delete the Validating and Mutating admission webhooks. That way, when you install apigee-operator back in the apigee-system namespace, they will be recreated to point to the correct Apigee Operator endpoint.
- Scale the existing Apigee Operator deployment in your Apigee namespace to 0 (zero) replicas so that the two controllers do not conflict by reconciling the same Custom Resources while you roll back to the apigee-system namespace. Then delete the webhook configurations.

  kubectl scale deployment apigee-controller-manager -n APIGEE_NAMESPACE --replicas=0

  kubectl delete mutatingwebhookconfiguration \
    apigee-mutating-webhook-configuration-APIGEE_NAMESPACE

  kubectl delete validatingwebhookconfiguration \
    apigee-validating-webhook-configuration-APIGEE_NAMESPACE

- Revert all the charts from apigee-virtualhost to apigee-datastore. The following commands assume you are using the charts from the previous version (v1.13.x).

  Run the following command for each environment group:
  helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
    --install \
    --namespace apigee \
    --atomic \
    --set envgroup=ENV_GROUP_NAME \
    -f 1.13_OVERRIDES_FILE

  Run the following command for each environment:

  helm upgrade ENV_RELEASE_NAME apigee-env/ \
    --install \
    --namespace apigee \
    --atomic \
    --set env=ENV_NAME \
    -f 1.13_OVERRIDES_FILE

  Revert the remaining charts except for apigee-operator:

  helm upgrade ORG_NAME apigee-org/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.13_OVERRIDES_FILE

  helm upgrade ingress-manager apigee-ingress-manager/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.13_OVERRIDES_FILE

  helm upgrade redis apigee-redis/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.13_OVERRIDES_FILE

  helm upgrade telemetry apigee-telemetry/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.13_OVERRIDES_FILE

  helm upgrade datastore apigee-datastore/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.13_OVERRIDES_FILE
1.13_OVERRIDES_FILE - Create the
apigee-system
namespace.kubectl create namespace apigee-system
- Patch the resource annotation back to the
apigee-system
namespace.kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-namespace='apigee-system'
- If you have changed the release name as well, update the annotation with the
operator
release name.kubectl annotate --overwrite cluseterIssuer apigee-ca-issuer meta.helm.sh/release-name='operator'
- Install apigee-operator back into the apigee-system namespace.

  helm upgrade operator apigee-operator/ \
    --install \
    --namespace apigee-system \
    --atomic \
    -f 1.13_OVERRIDES_FILE

- Revert the CRDs by reinstalling the older CRDs.

  kubectl apply -k apigee-operator/etc/crds/default/ \
    --server-side \
    --force-conflicts \
    --validate=false

- Clean up the apigee-operator release from the APIGEE_NAMESPACE namespace to complete the rollback process.

  helm uninstall operator -n APIGEE_NAMESPACE

- Some cluster-scoped resources, such as clusterissuer, are deleted when operator is uninstalled. Reinstall them with the following command:

  helm upgrade operator apigee-operator/ \
    --install \
    --namespace apigee-system \
    --atomic \
    -f 1.13_OVERRIDES_FILE
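The reverse order described above can be condensed into a checklist. A sketch that only prints the sequence (chart names taken from this guide), useful as a template for your own rollback runbook:

```shell
# Sketch: the rollback order above as a printable checklist. The loop only
# prints the sequence; it does not run helm.
for chart in apigee-virtualhost apigee-env apigee-org apigee-ingress-manager \
             apigee-redis apigee-telemetry apigee-datastore; do
  echo "revert: $chart"
done
echo "then: recreate apigee-system, restore the clusterissuer annotations,"
echo "reinstall apigee-operator in apigee-system, and reapply the older CRDs"
```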