Upgrading to version 1.2.0
Follow these steps to upgrade Apigee hybrid to version 1.2.0:
Step 1: Upgrade Kubernetes and download the release package
- Upgrade your Kubernetes platform as follows. Follow your platform's documentation if you need help:

  | Platform | Upgrade to version |
  |----------|--------------------|
  | GKE      | 1.14.x             |
  | Anthos   | 1.2                |
  | AKS      | 1.14.x             |

- Download the release package for your operating system:
  Mac 64 bit:

  ```
  curl -LO \
    https://storage.googleapis.com/apigee-release/hybrid/apigee-hybrid-setup/1.2.0/apigeectl_mac_64.tar.gz
  ```

  Linux 64 bit:

  ```
  curl -LO \
    https://storage.googleapis.com/apigee-release/hybrid/apigee-hybrid-setup/1.2.0/apigeectl_linux_64.tar.gz
  ```

  Mac 32 bit:

  ```
  curl -LO \
    https://storage.googleapis.com/apigee-release/hybrid/apigee-hybrid-setup/1.2.0/apigeectl_mac_32.tar.gz
  ```

  Linux 32 bit:

  ```
  curl -LO \
    https://storage.googleapis.com/apigee-release/hybrid/apigee-hybrid-setup/1.2.0/apigeectl_linux_32.tar.gz
  ```
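Before moving on, you can optionally sanity-check the download. The following is a minimal sketch, assuming the Linux 64-bit package was downloaded to the current directory; adjust the file name for your platform:

```bash
# List the archive contents without extracting, to confirm the file is an
# intact gzip tarball (file name assumes the Linux 64-bit package).
tar -tzf apigeectl_linux_64.tar.gz | head

# Record a SHA-256 checksum for your own records (use sha256sum on Linux
# if shasum is not available).
shasum -a 256 apigeectl_linux_64.tar.gz
```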
Step 2: Reconfigure your installation directory
- Identify the base installation directory that was created when Apigee hybrid was originally installed. The base directory is the directory in which the `$APIGEECTL_HOME` directory resides. In the following example, the base directory is `/Users/myhome/hybrid`:

  ```
  echo $APIGEECTL_HOME
  /Users/myhome/hybrid/apigeectl
  ```

- Extract the downloaded gzip file contents into the Apigee hybrid base directory:

  ```
  tar xvzf filename.tar.gz -C path-to-base-directory
  ```

- `cd` to the base directory.
- The tar contents are, by default, expanded into a directory with the version and platform in its name. For example: `./apigeectl_1.2.0-f7b96a8_linux_64`.
- Rename the current `apigeectl` directory. For example, if the current version is 1.1.1, rename the `apigeectl` directory to `apigeectl_1.1.1`.
- Rename the newly extracted installation directory to `apigeectl`. This is now the directory that the `$APIGEECTL_HOME` environment variable points to. A consolidated example of these steps appears after this list.
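For reference, here is a consolidated sketch of the sequence above. It assumes the base directory is `/Users/myhome/hybrid`, the current version is 1.1.1, and the tarball expanded to `./apigeectl_1.2.0-f7b96a8_linux_64`; all three names are illustrative, so substitute your own:

```bash
# All paths below are hypothetical; substitute your own.
cd /Users/myhome/hybrid

# Extract the 1.2.0 release package into the base directory.
tar xvzf apigeectl_linux_64.tar.gz -C .

# Set the old version aside under a versioned name, in case you need to roll back.
mv apigeectl apigeectl_1.1.1

# Give the new version the name that $APIGEECTL_HOME resolves to.
mv apigeectl_1.2.0-f7b96a8_linux_64 apigeectl

# $APIGEECTL_HOME is unchanged but now points at the 1.2.0 binaries.
echo $APIGEECTL_HOME
```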
Step 3: Update your overrides file
- Make a copy of your overrides file, and be careful to save the old file in case you ever need to roll back. In the following steps, you will make required changes to the overrides file before applying it to the cluster.
Update your overrides file with the changes described below:
Following is a summary of the configuration changes you must make to your overrides file. A complete before-and-after example follows the summary. As you will see, the `envs[]` property has changed significantly from previous versions:

- The property `envs[].hostAlias` has been removed and replaced by the new property `virtualhosts.hostAliases[]`.
- You must add the new required configuration property `virtualhosts`.
- You must move the `envs[].sslCertPath` and `envs[].sslKeyPath` properties from `envs` to `virtualhosts`.
- You must add the `virtualhosts.routingRules` configuration stanza. The `virtualhosts.routingRules` property replaces the previous `envs[].paths` property. If you have `envs[].paths` in your overrides file, you must remove it. For more information on the virtual host configuration, see Configure virtual hosts.
The examples below illustrate the differences between a 1.1.1 overrides file and a version 1.2.0 file. They are intended to highlight the kinds of changes you need to make for version 1.2.0.

v1.1.x Configuration:

```yaml
envs:
  - name: test1
    hostAlias: "api.example.com"
    sslCertPath: ./certs/fullchain.pem
    sslKeyPath: ./certs/privkey.pem
    serviceAccountPaths:
      synchronizer: ./sa/sync.json
      udca: ./sa/udca.json
    paths:
      uri:
        prefixes:
          - /orders
          - /items
  - name: test2
    hostAlias: "api.example.com"
    sslCertPath: ./certs/fullchain.pem
    sslKeyPath: ./certs/privkey.pem
    serviceAccountPaths:
      synchronizer: ./sa/sync.json
      udca: ./sa/udca.json
    paths:
      uri:
        prefixes:
          - /v0/hello
          - /httpbin
```

v1.2.0 Configuration:

```yaml
virtualhosts:
  - name: default
    hostAliases: ["api.example.com"]
    sslCertPath: ./certs/fullchain.pem
    sslKeyPath: ./certs/privkey.pem
    routingRules:
      - paths:
          - /orders
          - /items
        env: test1
      - paths:
          - /v0/hello
          - /httpbin
        env: test2

envs:
  - name: test1
    serviceAccountPaths:
      synchronizer: ./sa/synchronizer.json
      udca: ./sa/udca.json
  - name: test2
    serviceAccountPaths:
      synchronizer: ./sa/synchronizer.json
      udca: ./sa/udca.json
```
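After editing, it can be worth confirming that only the expected properties changed relative to the copy you saved. A minimal sketch, assuming the saved copy is named `overrides-1.1.1.yaml` and the edited file is `overrides.yaml` (both names chosen here for illustration):

```bash
# Compare the saved 1.1.1 overrides with the edited 1.2.0 file.
diff -u overrides/overrides-1.1.1.yaml overrides/overrides.yaml

# The removed per-environment property should no longer appear anywhere:
grep -n 'hostAlias:' overrides/overrides.yaml \
  && echo "leftover envs[].hostAlias found; remove it" \
  || echo "no leftover envs[].hostAlias"
```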
Step 4: Apply the upgrade to the cluster
- If you enabled Apigee Connect in your version 1.1.1 installation, you must remove the deployment:

  - First, list the Apigee Deployments:

    ```
    kubectl -n namespace get ad
    ```

  - Delete the Apigee Connect deployment:

    ```
    kubectl -n namespace delete ad apigee-connect-name
    ```
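For example, if your runtime namespace is `apigee` and the listing shows an Apigee Connect deployment named `apigee-connect-example-org` (both values are placeholders for illustration), the removal would look like this:

```bash
# List the Apigee Deployments in the runtime namespace (assumed here to be "apigee").
kubectl -n apigee get ad

# Delete the Apigee Connect deployment using the name from your own listing;
# "apigee-connect-example-org" is a placeholder.
kubectl -n apigee delete ad apigee-connect-example-org
```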
- List the pods:

  ```
  kubectl get pods -n namespace
  ```

- Delete the `apigee-cps-setup` pod from the cluster. Use the pod's full name, which includes your organization name, as returned by the previous command. For example:

  ```
  kubectl -n namespace delete pod apigee-cps-setup-org
  ```

- Delete the `apigee-cps-create-user` pod in the same namespace:

  ```
  kubectl -n namespace delete pod apigee-cps-create-user
  ```
- Clean up completed jobs for the hybrid runtime namespace, where namespace is the namespace specified in your overrides file, if you specified one. If not, the default namespace is `apigee`:

  ```
  kubectl delete job -n namespace \
    $(kubectl get job -n namespace -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
  ```

- Clean up completed jobs for the `apigee-system` namespace:

  ```
  kubectl delete job -n apigee-system \
    $(kubectl get job -n apigee-system -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
  ```

- Clean up completed jobs for the `istio-system` namespace (a loop covering all three namespaces is sketched after this list):

  ```
  kubectl delete job -n istio-system \
    $(kubectl get job -n istio-system -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
  ```
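Since the three cleanup commands differ only in the namespace, you can run them as one loop. The following sketch assumes the runtime namespace is the default `apigee`; replace it with your own namespace if you overrode the default:

```bash
# Delete completed jobs in each namespace touched by the hybrid installation.
for ns in apigee apigee-system istio-system; do
  jobs=$(kubectl get job -n "$ns" \
    -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
  # Skip namespaces that have no completed jobs so kubectl does not error out.
  [ -n "$jobs" ] && kubectl delete job -n "$ns" $jobs
done
```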
- `cd` to the `./hybrid-files` directory.
- Initialize `apigeectl` for the new version:

  ```
  $APIGEECTL_HOME/apigeectl init -f overrides/overrides-file.yaml
  ```

- Check to determine when the initialization is complete:

  ```
  $APIGEECTL_HOME/apigeectl check-ready -f overrides/overrides-file.yaml
  ```

- When `check-ready` replies with "All containers are ready", you can try a "dry run" install. Execute the `apply` command with the `--dry-run=true` flag. Doing a dry run lets you check for any errors before any changes are made to the cluster:

  ```
  $APIGEECTL_HOME/apigeectl apply -f overrides/overrides-file.yaml --dry-run=true
  ```

- If there are no errors, you can apply the Apigee-specific runtime components to the cluster:

  ```
  $APIGEECTL_HOME/apigeectl apply -f overrides/overrides-file.yaml
  ```

- Re-run `check-ready` to determine when the upgrade is complete (see the polling sketch after this list).
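`check-ready` reports status at the moment you run it, so you may have to repeat it until the upgrade settles. A small polling sketch, assuming your overrides file is `overrides/overrides-file.yaml`:

```bash
# Re-run check-ready every 30 seconds until all containers report ready.
# The grep pattern matches the "All containers are ready" message quoted above.
until $APIGEECTL_HOME/apigeectl check-ready -f overrides/overrides-file.yaml \
    | grep -q "All containers are ready"; do
  echo "Not ready yet; retrying in 30 seconds..."
  sleep 30
done
echo "Upgrade complete."
```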
Rolling back an upgrade
Follow these steps to roll back a previous upgrade:
- Clean up completed jobs for the hybrid runtime namespace, where namespace is the namespace specified in your overrides file, if you specified one. If not, the default namespace is `apigee`:

  ```
  kubectl delete job -n namespace \
    $(kubectl get job -n namespace -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
  ```

- Clean up completed jobs for the `apigee-system` namespace:

  ```
  kubectl delete job -n apigee-system \
    $(kubectl get job -n apigee-system -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
  ```

- Clean up completed jobs for the `istio-system` namespace:

  ```
  kubectl delete job -n istio-system \
    $(kubectl get job -n istio-system -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
  ```
- Delete the Apigee Operators deployment. This operation will not have any effect on your runtime traffic:

  ```
  kubectl -n apigee-system delete deployment apigee-controller-manager
  ```
- Change the `$APIGEECTL_HOME` variable to point to the directory that contains the original version of `apigeectl`. For example:

  ```
  export APIGEECTL_HOME=path-to-original-apigeectl-directory
  ```

- In the root directory of the installation you want to roll back to, run `apigeectl init` and then run `apigeectl apply`. Be sure to use the original overrides file for the version you wish to roll back to (a consolidated example follows these steps):

  ```
  $APIGEECTL_HOME/apigeectl init -f overrides/original-overrides.yaml
  $APIGEECTL_HOME/apigeectl apply -f overrides/original-overrides.yaml
  ```
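Putting the last two steps together, a rollback session might look like the following sketch. It assumes the pre-upgrade release was saved as `apigeectl_1.1.1` under a base directory of `/Users/myhome/hybrid`, with the original overrides kept as `overrides/original-overrides.yaml`; all paths are illustrative:

```bash
# Point $APIGEECTL_HOME back at the saved 1.1.1 release (hypothetical path).
export APIGEECTL_HOME=/Users/myhome/hybrid/apigeectl_1.1.1

# From the root of the installation being rolled back to, re-initialize and
# re-apply using the original overrides file.
cd /Users/myhome/hybrid/hybrid-files
$APIGEECTL_HOME/apigeectl init -f overrides/original-overrides.yaml
$APIGEECTL_HOME/apigeectl apply -f overrides/original-overrides.yaml
```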