This procedure covers upgrading from Apigee hybrid version 1.12.x to Apigee hybrid version 1.13.2 and from previous releases of hybrid 1.13.x to version 1.13.2.
Use the same procedures for minor version upgrades (for example version 1.12 to 1.13) and for patch release upgrades (for example 1.13.0 to 1.13.2).
If you are upgrading from Apigee hybrid version 1.11 or older, you must first upgrade to hybrid version 1.12 before upgrading to version 1.13.2. See the instructions for Upgrading Apigee hybrid to version 1.12.
Changes from Apigee hybrid v1.12
Please note the following changes:
- apigee-operator in the Apigee namespace: Starting in version 1.13, apigee-operator runs in the same Kubernetes namespace as the other Apigee hybrid components, apigee by default. You can supply any name for the namespace. In previous versions, apigee-operator was required to run in its own namespace, apigee-system.
- Anthos (on bare metal or VMware) is now Google Distributed Cloud (for bare metal or VMware): For more information, see the product overviews for Google Distributed Cloud for bare metal and Google Distributed Cloud for VMware.
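If you use a custom namespace, it appears both in your overrides file and in the --namespace flag of the Helm commands later in this guide. A minimal sketch of the relevant overrides.yaml line, assuming the top-level namespace property from the configuration property reference (check that reference for your exact version):

# Hypothetical overrides.yaml excerpt: the namespace where all Apigee hybrid
# components, including apigee-operator, run in version 1.13.
namespace: apigee   # replace with your custom Apigee namespace, if any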
Prerequisites
Before upgrading to hybrid version 1.13, make sure your installation meets the following requirements:
- If your hybrid installation is running a version older than v1.12, you must upgrade to version 1.12 before upgrading to v1.13. See Upgrading Apigee hybrid to version 1.12.
- Helm version v3.14.2+.
- kubectl: A supported version of kubectl appropriate for your Kubernetes platform version. See Supported platforms and versions: kubectl.
- cert-manager: A supported version of cert-manager. See Supported platforms and versions: cert-manager. If needed, you will upgrade cert-manager in the Prepare to upgrade to version 1.13 section below.
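Before you continue, you can confirm the tool versions on your workstation and in your cluster. A quick check, assuming cert-manager is installed with the default namespace and deployment name:

# Check the local Helm and kubectl client versions.
helm version --short
kubectl version --client

# Check the cert-manager controller image version (assumes the default
# cert-manager namespace and deployment name).
kubectl get deployment cert-manager -n cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'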
Upgrading to version 1.13.2 overview
The procedures for upgrading Apigee hybrid are organized in the following sections:
Prepare to upgrade to version 1.13
Back up your hybrid installation
- These instructions use the environment variable APIGEE_HELM_CHARTS_HOME for the directory in your file system where you have installed the Helm charts. If needed, change directory into this directory and define the variable with the following commands:

  Linux / MacOS:
  export APIGEE_HELM_CHARTS_HOME=$PWD
  echo $APIGEE_HELM_CHARTS_HOME

  Windows:
  set APIGEE_HELM_CHARTS_HOME=%CD%
  echo %APIGEE_HELM_CHARTS_HOME%
- Make a backup copy of your version 1.12 $APIGEE_HELM_CHARTS_HOME/ directory. You can use any backup process. For example, you can create a tar file of your entire directory with:
  tar -czvf $APIGEE_HELM_CHARTS_HOME/../apigee-helm-charts-v1.12-backup.tar.gz $APIGEE_HELM_CHARTS_HOME
- Back up your Cassandra database following the instructions in Cassandra backup and recovery.
- If you are using service account key files (.json) in your overrides to authenticate service accounts, make sure the files reside in the correct Helm chart directory. Helm charts cannot read files outside of each chart's directory. This step is not required if you are using Kubernetes secrets or Workload Identity to authenticate service accounts. A quick way to verify file placement is shown after this list.
The following table shows the destination for each service account file, depending on your type of installation:
| Service account | Default filename | Helm chart directory |
|---|---|---|
| apigee-cassandra | PROJECT_ID-apigee-cassandra.json | $APIGEE_HELM_CHARTS_HOME/apigee-datastore/ |
| apigee-logger | PROJECT_ID-apigee-logger.json | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
| apigee-mart | PROJECT_ID-apigee-mart.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
| apigee-metrics | PROJECT_ID-apigee-metrics.json | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
| apigee-runtime | PROJECT_ID-apigee-runtime.json | $APIGEE_HELM_CHARTS_HOME/apigee-env/ |
| apigee-synchronizer | PROJECT_ID-apigee-synchronizer.json | $APIGEE_HELM_CHARTS_HOME/apigee-env/ |
| apigee-udca | PROJECT_ID-apigee-udca.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
| apigee-watcher | PROJECT_ID-apigee-watcher.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |

For the apigee-non-prod service account (default filename PROJECT_ID-apigee-non-prod.json), make a copy of the file in each of the following Helm chart directories:
- $APIGEE_HELM_CHARTS_HOME/apigee-datastore/
- $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/
- $APIGEE_HELM_CHARTS_HOME/apigee-org/
- $APIGEE_HELM_CHARTS_HOME/apigee-env/
- Make sure that your TLS certificate and key files (.crt, .key, and/or .pem) reside in the $APIGEE_HELM_CHARTS_HOME/apigee-virtualhost/ directory.
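To confirm the files are where the charts expect them, you can list each chart directory before you upgrade. A minimal check, assuming the default filenames shown above:

# Verify service account key files are present in each chart directory
# (not needed if you authenticate with Kubernetes secrets or Workload Identity).
ls $APIGEE_HELM_CHARTS_HOME/apigee-datastore/*.json
ls $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/*.json
ls $APIGEE_HELM_CHARTS_HOME/apigee-org/*.json
ls $APIGEE_HELM_CHARTS_HOME/apigee-env/*.json

# Verify the TLS certificate and key files are present in the virtualhost chart.
ls $APIGEE_HELM_CHARTS_HOME/apigee-virtualhost/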
Upgrade your Kubernetes version
Check your Kubernetes platform version and, if needed, upgrade your Kubernetes platform to a version that is supported by both hybrid 1.12 and hybrid 1.13. Follow your platform's documentation if you need help.
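To see which Kubernetes version your cluster is currently running, you can query the cluster before consulting the table below, for example:

# Report the kubectl client and cluster (server) versions.
kubectl version

# Show the kubelet version running on each node.
kubectl get nodes -o wide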
Supported platforms and component versions for Apigee hybrid versions 1.10 through 1.13:

| Platform | 1.10 | 1.11 | 1.12 | 1.13 |
|---|---|---|---|---|
| GKE on Google Cloud | 1.24.x, 1.25.x, 1.26.x, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.25.x, 1.26.x, 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) | 1.27.x, 1.28.x, 1.29.x, 1.30.x |
| GKE on AWS | 1.13.x (K8s v1.24.x), 1.14.x (K8s v1.25.x), 1.26.x (4), 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x (K8s v1.25.x), 1.26.x (4), 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x (4), 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) | 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1), 1.30.x |
| GKE on Azure | 1.13.x, 1.14.x, 1.26.x (4), 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x, 1.26.x (4), 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x (4), 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) | 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1), 1.30.x |
| Google Distributed Cloud (software only) on VMware (5) | 1.13.x (1), 1.14.x, 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x, 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x (4), 1.29.x (≥ 1.12.1) | 1.16.x (K8s v1.27.x), 1.28.x (4), 1.29.x, 1.30.x |
| Google Distributed Cloud (software only) on bare metal | 1.13.x (1), 1.14.x (K8s v1.25.x), 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.27.x (4) (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.14.x, 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x (4) (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.15.x (K8s v1.26.x), 1.16.x (K8s v1.27.x), 1.28.x (4), 1.29.x (≥ 1.12.1) | 1.16.x (K8s v1.27.x), 1.28.x (4), 1.29.x, 1.30.x |
| EKS | 1.24.x, 1.25.x, 1.26.x, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.25.x, 1.26.x, 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) | 1.26.x, 1.27.x, 1.28.x, 1.29.x |
| AKS | 1.24.x, 1.25.x, 1.26.x, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | 1.25.x, 1.26.x, 1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | 1.26.x, 1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) | 1.26.x, 1.27.x, 1.28.x, 1.29.x |
| OpenShift | 4.11, 4.12, 4.14 (≥ 1.10.5), 4.15 (≥ 1.10.5) | 4.12, 4.13, 4.14, 4.15 (≥ 1.11.2), 4.16 (≥ 1.11.2) | 4.12, 4.13, 4.14, 4.15, 4.16 (≥ 1.12.1) | 4.12, 4.13, 4.14, 4.15, 4.16 |
| Rancher Kubernetes Engine (RKE) | v1.26.2+rke2r1, 1.27.x (≥ 1.10.5), 1.28.x (≥ 1.10.5) | v1.26.2+rke2r1, v1.27.x, 1.28.x (≥ 1.11.2), 1.29.x (≥ 1.11.2) | v1.26.2+rke2r1, v1.27.x, 1.28.x, 1.29.x (≥ 1.12.1) | v1.26.2+rke2r1, v1.27.x, 1.28.x, 1.29.x, 1.30.x |
| VMware Tanzu | N/A | N/A | v1.26.x | v1.26.x |

| Components | 1.10 | 1.11 | 1.12 | 1.13 |
|---|---|---|---|---|
| Cloud Service Mesh | 1.17.x (3) | 1.17.x (v1.11.0 - v1.11.1) (3), 1.18.x (≥ 1.11.2) (3) | 1.18.x (3) | 1.19.x (3) |
| JDK | JDK 11 | JDK 11 | JDK 11 | JDK 11 |
| cert-manager | 1.10.x, 1.11.x, 1.12.x | 1.11.x, 1.12.x, 1.13.x | 1.11.x, 1.12.x, 1.13.x | 1.13.x, 1.14.x, 1.15.x |
| Cassandra | 3.11 | 3.11 | 4.0 | 4.0 |
| Kubernetes | 1.24.x, 1.25.x, 1.26.x | 1.25.x, 1.26.x, 1.27.x | 1.26.x, 1.27.x, 1.28.x, 1.29.x | 1.27.x, 1.28.x, 1.29.x, 1.30.x |
| kubectl | 1.24.x, 1.25.x, 1.26.x | 1.25.x, 1.26.x, 1.27.x | 1.26.x, 1.27.x, 1.28.x, 1.29.x | 1.27.x, 1.28.x, 1.29.x, 1.30.x |
| Helm | 3.10+ | 3.10+ | 3.14.2+ | 3.14.2+ |
| Secret Store CSI driver | | 1.3.4 | 1.4.1 | 1.4.1 |
| Vault | | 1.13.x | 1.15.2 | 1.15.2 |

(1) On Anthos on-premises (Google Distributed Cloud) version 1.13, follow these instructions to avoid conflict with
(2) The official EOL dates for Apigee hybrid versions 1.10 and older have been reached. Regular monthly patches are no longer available. These releases are no longer officially supported except for customers with explicit and official exceptions for continued support. Other customers must upgrade.
(3) Cloud Service Mesh is automatically installed with Apigee hybrid 1.9 and newer.
(4) GKE on AWS version numbers now reflect the Kubernetes versions. See GKE Enterprise version and upgrade support for version details and recommended patches.
(5) Vault is not certified on Google Distributed Cloud for VMware.
(6) Support available with Apigee hybrid version 1.10.5 and newer.
(7) Support available with Apigee hybrid version 1.11.2 and newer.
(8) Support available with Apigee hybrid version 1.12.1 and newer.
Install the hybrid 1.13.2 runtime
Prepare for the Helm charts upgrade
- Pull the Apigee Helm charts.
Apigee hybrid charts are hosted in Google Artifact Registry:
oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
Using the pull command, copy all of the Apigee hybrid Helm charts to your local storage with the following commands:
export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
export CHART_VERSION=1.13.2
helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar
- Upgrade cert-manager if needed.
If you need to upgrade your cert-manager version, install the new version with the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.1/cert-manager.yaml

See Supported platforms and versions: cert-manager for a list of supported versions.
- If your Apigee namespace is not apigee, edit the apigee-operator/etc/crds/default/kustomization.yaml file and replace the namespace value with your Apigee namespace:
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  namespace: APIGEE_NAMESPACE

  If you are using apigee as your namespace, you do not need to edit the file.
- Install the updated Apigee CRDs:
- Use the kubectl dry-run feature by running the following command:
  kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false --dry-run=server
- After validating with the dry-run command, run the following command:
  kubectl apply -k apigee-operator/etc/crds/default/ \
    --server-side \
    --force-conflicts \
    --validate=false
- Validate the installation with the kubectl get crds command:
  kubectl get crds | grep apigee

  Your output should look something like the following:
  apigeedatastores.apigee.cloud.google.com              2024-08-21T14:48:30Z
  apigeedeployments.apigee.cloud.google.com             2024-08-21T14:48:30Z
  apigeeenvironments.apigee.cloud.google.com            2024-08-21T14:48:31Z
  apigeeissues.apigee.cloud.google.com                  2024-08-21T14:48:31Z
  apigeeorganizations.apigee.cloud.google.com           2024-08-21T14:48:32Z
  apigeeredis.apigee.cloud.google.com                   2024-08-21T14:48:33Z
  apigeerouteconfigs.apigee.cloud.google.com            2024-08-21T14:48:33Z
  apigeeroutes.apigee.cloud.google.com                  2024-08-21T14:48:33Z
  apigeetelemetries.apigee.cloud.google.com             2024-08-21T14:48:34Z
  cassandradatareplications.apigee.cloud.google.com     2024-08-21T14:48:35Z
- Migrate apigee-operator from the apigee-system namespace to APIGEE_NAMESPACE:
  - Annotate the clusterIssuer with the new namespace:
    kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-namespace='APIGEE_NAMESPACE'
  - If you are changing the release name for apigee-operator, annotate the clusterIssuer with the new release name:
    kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-name='APIGEE_OPERATOR_RELEASE_NAME'
  - Update the replicas of your existing Apigee Operator deployment in the apigee-system namespace to 0 (zero) to avoid the two controllers reconciling:
    kubectl scale deployment apigee-controller-manager -n apigee-system --replicas=0
  - Delete the apigee-mutating-webhook-configuration and apigee-validating-webhook-configuration:
    kubectl delete mutatingwebhookconfiguration apigee-mutating-webhook-configuration
    kubectl delete validatingwebhookconfiguration apigee-validating-webhook-configuration
- Check the labels on the cluster nodes. By default, Apigee schedules data pods on nodes with the label cloud.google.com/gke-nodepool=apigee-data and runtime pods on nodes with the label cloud.google.com/gke-nodepool=apigee-runtime. You can customize your node pool labels in the overrides.yaml file. For more information, see Configuring dedicated node pools. A label check and an example override follow this list.
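To confirm how your nodes are labeled, you can list them with the label column Apigee uses by default:

# List nodes along with the node pool label Apigee uses for scheduling by default.
kubectl get nodes -L cloud.google.com/gke-nodepool

If you use custom labels, they are declared in overrides.yaml. A sketch of the nodeSelector block, based on the dedicated node pools configuration (check Configuring dedicated node pools for the exact properties in your version):

# Hypothetical overrides.yaml excerpt overriding the default node pool labels.
nodeSelector:
  requiredForScheduling: true
  apigeeData:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-data"
  apigeeRuntime:
    key: "cloud.google.com/gke-nodepool"
    value: "apigee-runtime"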
Install the Apigee hybrid Helm charts
- If you have not already done so, navigate into your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
- Upgrade the Apigee Operator/Controller:
Dry run:
helm upgrade operator apigee-operator/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:
helm upgrade operator apigee-operator/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify Apigee Operator installation:
helm ls -n APIGEE_NAMESPACE

NAME       NAMESPACE   REVISION   UPDATED                                STATUS     CHART                    APP VERSION
operator   apigee      3          2024-08-21 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.13.2   1.13.2

Verify it is up and running by checking its availability:
kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
apigee-controller-manager   1/1     1            1           7d20h
- Upgrade the Apigee datastore:
Dry run:
helm upgrade datastore apigee-datastore/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:
helm upgrade datastore apigee-datastore/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify apigeedatastore is up and running by checking its state:
kubectl -n APIGEE_NAMESPACE get apigeedatastore default

NAME      STATE     AGE
default   running   2d
- Upgrade Apigee telemetry:
Dry run:
helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:
helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking its state:
kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry

NAME               STATE     AGE
apigee-telemetry   running   2d
- Upgrade Apigee Redis:
Dry run:
helm upgrade redis apigee-redis/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:
helm upgrade redis apigee-redis/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking its state:
kubectl -n APIGEE_NAMESPACE get apigeeredis default

NAME      STATE     AGE
default   running   2d
- Upgrade Apigee ingress manager:
Dry run:
helm upgrade ingress-manager apigee-ingress-manager/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:
helm upgrade ingress-manager apigee-ingress-manager/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking its availability:
kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
apigee-ingressgateway-manager   2/2     2            2           2d
- Upgrade the Apigee organization:
Dry run:
helm upgrade ORG_NAME apigee-org/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:
helm upgrade ORG_NAME apigee-org/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking the state of the respective org:
kubectl -n APIGEE_NAMESPACE get apigeeorg

NAME                STATE     AGE
apigee-org1-xxxxx   running   2d
- Upgrade the environment.
You must install one environment at a time. Specify the environment with --set env=ENV_NAME.

Dry run:
helm upgrade ENV_RELEASE_NAME apigee-env/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --set env=ENV_NAME \
  -f OVERRIDES_FILE \
  --dry-run=server

- ENV_RELEASE_NAME is the name with which you previously installed the apigee-env chart. In hybrid v1.10, it is usually apigee-env-ENV_NAME. In hybrid v1.11 and newer, it is usually ENV_NAME.
- ENV_NAME is the name of the environment you are upgrading.
- OVERRIDES_FILE is your new overrides file for version 1.13.2.

Upgrade the chart:
helm upgrade ENV_RELEASE_NAME apigee-env/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --set env=ENV_NAME \
  -f OVERRIDES_FILE

Verify it is up and running by checking the state of the respective environment:
kubectl -n APIGEE_NAMESPACE get apigeeenv

NAME                  STATE     AGE   GATEWAYTYPE
apigee-org1-dev-xxx   running   2d
- Upgrade the environment groups (virtualhosts).
  - You must upgrade one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP_NAME. Repeat the following commands for each environment group mentioned in the overrides.yaml file:

    Dry run:
    helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --set envgroup=ENV_GROUP_NAME \
      -f OVERRIDES_FILE \
      --dry-run=server

    ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. In hybrid v1.10, it is usually apigee-virtualhost-ENV_GROUP_NAME. In hybrid v1.11 and newer, it is usually ENV_GROUP_NAME.

    Upgrade the chart:
    helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
      --install \
      --namespace APIGEE_NAMESPACE \
      --set envgroup=ENV_GROUP_NAME \
      -f OVERRIDES_FILE
  - Check the state of the ApigeeRoute (AR).
    Installing the virtualhosts creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls environment group-related details from the control plane. Therefore, check that the corresponding AR's state is running:
    kubectl -n APIGEE_NAMESPACE get arc

    NAME                     STATE   AGE
    apigee-org1-dev-egroup           2d

    kubectl -n APIGEE_NAMESPACE get ar

    NAME                            STATE     AGE
    apigee-org1-dev-egroup-xxxxxx   running   2d
- After you have verified that all the installations upgraded successfully, delete the older apigee-operator release from the apigee-system namespace.
  - Uninstall the old operator release:
    helm delete operator -n apigee-system
  - Delete the apigee-system namespace:
    kubectl delete namespace apigee-system
- Upgrade operator again in your Apigee namespace to re-install the deleted cluster-scoped resources:
  helm upgrade operator apigee-operator/ \
    --install \
    --namespace APIGEE_NAMESPACE \
    --atomic \
    -f overrides.yaml
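As a final check after all the charts are upgraded, you can confirm that the Helm releases report the 1.13.2 chart version and that the Apigee pods are healthy, for example:

# List the Apigee Helm releases and their chart versions.
helm ls -n APIGEE_NAMESPACE

# Confirm that all Apigee pods in the namespace are Running or Completed.
kubectl get pods -n APIGEE_NAMESPACE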
Rolling back to a previous version
To roll back to the previous version, use the older chart versions and reverse the upgrade process: start with apigee-virtualhost, work your way back to apigee-operator, and then revert the CRDs.

Because of the change in the namespace for apigee-operator, you need to perform extra steps to delete the validating and mutating admission webhooks. That way, when you install apigee-operator back in the apigee-system namespace, they are recreated to point to the correct Apigee Operator endpoint.
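The rollback steps below assume the version 1.12 charts are still available locally, for example in the backup copy of your charts directory. If they are not, you can pull them again from the same repository; a sketch, with CHART_VERSION set to the 1.12 patch release you previously had installed:

export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
export CHART_VERSION=1.12.x   # replace with your exact 1.12 patch version

helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar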
- Update the replicas of the existing Apigee Operator deployment in the Apigee namespace to 0 (zero) so that the two controllers do not both reconcile the custom resources, which would cause conflicts when you roll back to the operator in the apigee-system namespace.
  kubectl scale deployment apigee-controller-manager -n APIGEE_NAMESPACE --replicas=0

  kubectl delete mutatingwebhookconfiguration \
    apigee-mutating-webhook-configuration-APIGEE_NAMESPACE

  kubectl delete validatingwebhookconfiguration \
    apigee-validating-webhook-configuration-APIGEE_NAMESPACE
- Revert all the charts from apigee-virtualhost to apigee-datastore. The following commands assume you are using the charts from the previous version (v1.12.x).

  Run the following command for each environment group:
  helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
    --install \
    --namespace apigee \
    --atomic \
    --set envgroup=ENV_GROUP_NAME \
    -f 1.12_OVERRIDES_FILE

  Run the following command for each environment:
  helm upgrade ENV_RELEASE_NAME apigee-env/ \
    --install \
    --namespace apigee \
    --atomic \
    --set env=ENV_NAME \
    -f 1.12_OVERRIDES_FILE

  Revert the remaining charts except for apigee-operator:
  helm upgrade ORG_NAME apigee-org/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.12_OVERRIDES_FILE

  helm upgrade ingress-manager apigee-ingress-manager/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.12_OVERRIDES_FILE

  helm upgrade redis apigee-redis/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.12_OVERRIDES_FILE

  helm upgrade telemetry apigee-telemetry/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.12_OVERRIDES_FILE

  helm upgrade datastore apigee-datastore/ \
    --install \
    --namespace apigee \
    --atomic \
    -f 1.12_OVERRIDES_FILE
- Create the apigee-system namespace.
  kubectl create namespace apigee-system
- Patch the resource annotation back to the apigee-system namespace.
  kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-namespace='apigee-system'
- If you have changed the release name as well, update the annotation with the operator release name.
  kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-name='operator'
- Install apigee-operator back into the apigee-system namespace.
  helm upgrade operator apigee-operator/ \
    --install \
    --namespace apigee-system \
    --atomic \
    -f 1.12_OVERRIDES_FILE
- Revert the CRDs by reinstalling the older CRDs.
  kubectl apply -k apigee-operator/etc/crds/default/ \
    --server-side \
    --force-conflicts \
    --validate=false
- Clean up the apigee-operator release from the APIGEE_NAMESPACE namespace to complete the rollback process.
  helm uninstall operator -n APIGEE_NAMESPACE
- Some cluster-scoped resources, such as clusterIssuer, are deleted when operator is uninstalled. Reinstall them with the following command:
  helm upgrade operator apigee-operator/ \
    --install \
    --namespace apigee-system \
    --atomic \
    -f 1.12_OVERRIDES_FILE
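After the rollback completes, you can verify that the operator release is back in apigee-system and that the runtime components report a running state, for example:

# Confirm the operator release is installed in apigee-system again.
helm ls -n apigee-system

# Confirm the Apigee custom resources are back to a running state.
kubectl get apigeedatastore,apigeeredis,apigeetelemetry,apigeeorg,apigeeenv -n apigee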