Prepare to upgrade your previous installation of Knative serving and migrate your workloads by setting up your command-line environment, creating environment variables, and downloading the migration script.
Before you begin
You must review the following requirements and ensure that you meet them before upgrading.
By default, Cloud Shell includes the latest versions of the `gcloud` and `kubectl` commands. If you choose to use the command-line environment of your local machine, you must ensure that you meet the following minimum requirements:

- `gcloud` version 346.0.0 or newer
- `kubectl` client version 1.21 or newer
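To confirm that your local tools meet these requirements, you can check the installed versions with the standard version commands:

```
# Check the installed gcloud version (must be 346.0.0 or newer).
gcloud version

# Check the kubectl client version (must be 1.21 or newer).
kubectl version --client
```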
The preparation steps on this page are required throughout the upgrade and migration process.
Important: All the commands that are used during the process rely on the environment variables that you set below. For example, if you close Cloud Shell or your session times out, you must ensure that the required environment variables are reset.
Set up your environment
To use Cloud Shell, open Cloud Shell in the Google Cloud console:
Important: Cloud Shell has usage limits and can time out. If your session times out, you must ensure that the required environment variables are reset.
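If your session does time out, one optional convenience is to keep the exports in a small file that you re-source in the new session. The file name `migration-env.sh` below is only an example, and the values are the placeholders that you replace in the following steps:

```
# Optional: store the variables in a file so they can be restored
# after a Cloud Shell timeout. The file name is arbitrary.
cat > ~/migration-env.sh <<'EOF'
export PROJECT_ID=PROJECT_ID
export CLUSTER_NAME=CLUSTER_NAME
export CLUSTER_LOCATION=CLUSTER_LOCATION
export INGRESS_NAME=INGRESS_NAME
export INGRESS_NAMESPACE=INGRESS_NAMESPACE
EOF

# Restore the variables in a new session:
source ~/migration-env.sh
```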
Create the following required environment variables:
Set variables for your Google Cloud project and cluster details:
```
export PROJECT_ID=PROJECT_ID
export CLUSTER_NAME=CLUSTER_NAME
export CLUSTER_LOCATION=CLUSTER_LOCATION
```
Replace the following:
- `PROJECT_ID` with the ID of your Google Cloud project.
- `CLUSTER_NAME` with the name of your cluster or the fully qualified identifier for the cluster.
- `CLUSTER_LOCATION` with the region or zone in which your cluster is located.
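For example, with hypothetical values (replace them with your own):

```
# Hypothetical values for illustration only.
export PROJECT_ID=my-project
export CLUSTER_NAME=my-cluster
export CLUSTER_LOCATION=us-central1
```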
Depending on your configuration, you must identify which ingress gateway is handling traffic in your cluster, that is, which version of Istio is actually configured and serving traffic.
If you use the bundled version of Istio, verify that the ingress service name is `istio-ingress` in the `gke-system` namespace:

```
kubectl get svc istio-ingress -n gke-system
```
Result: The details of your configuration are returned.
If you installed the Istio add-on, you must determine which ingress gateway is configured and actively handling traffic by obtaining the IP addresses of the services and identifying which one your domain is configured to use.
Obtain the `EXTERNAL-IP` address for each ingress service. Run the following commands to obtain the configuration details of both the bundled version of Istio (`istio-ingress`) and Istio add-on (`istio-ingressgateway`) ingress services:

```
kubectl get svc istio-ingress -n gke-system
kubectl get svc istio-ingressgateway -n istio-system
```
Example output:

```
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                      AGE
istio-ingress   LoadBalancer   11.11.1.111   12.345.678.910   15020:31265/TCP,80:30059/TCP,443:32004/TCP   8d

NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                                                       AGE
istio-ingressgateway   LoadBalancer   22.22.2.222   10.987.654.321   15021:32747/TCP,80:30695/TCP,443:32695/TCP,15012:32369/TCP,15443:30909/TCP   88d
```

Note the value of `EXTERNAL-IP` for each service.
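As an optional alternative to scanning the table output, you can read just the external IP of each service with a standard `jsonpath` query:

```
# Print only the external IP of each ingress service.
kubectl get svc istio-ingress -n gke-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
kubectl get svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```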
Identify which external IP address is configured to handle traffic through the DNS record of your custom domain:
Go to the Knative serving domain mappings page:
Click the vertical three-dot (ellipsis) icon to the right of your service, then click DNS RECORDS to display all the DNS records:
Using the example above, if the DNS record of your custom domain is configured with the `10.987.654.321` IP address of the `istio-ingressgateway` service, then the Istio add-on ingress gateway is in use and serving traffic.
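You can also look up your custom domain's A record from the command line and compare it with the external IPs above; the domain `example.com` here is a placeholder:

```
# Resolve the custom domain's A record. Replace example.com
# with your mapped custom domain.
dig +short example.com
```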
Set variables for the name and namespace of the ingress service that is serving traffic for your cluster:
```
export INGRESS_NAME=INGRESS_NAME
export INGRESS_NAMESPACE=INGRESS_NAMESPACE
```
Replace the following:
- `INGRESS_NAME` with the name of the ingress service that you identified in the previous step.
- `INGRESS_NAMESPACE` with the namespace of the ingress service that you identified in the previous step.
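For example, if the Istio add-on gateway from the earlier output is the one serving traffic:

```
# Example values for the Istio add-on ingress gateway.
export INGRESS_NAME=istio-ingressgateway
export INGRESS_NAMESPACE=istio-system
```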
Configure the Google Cloud CLI:
```
gcloud config set project ${PROJECT_ID}
gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${CLUSTER_LOCATION}
```
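To confirm that `kubectl` now points at the intended cluster, you can check the active context and reach the API server with the standard commands:

```
# Verify that kubectl targets the intended cluster.
kubectl config current-context
kubectl get nodes
```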
For private clusters:
If you already have access to your private cluster from the client where you will run the migration script, you can skip to the next step.
If your private cluster has master authorized networks enabled, you can enable access from the client where you will run the migration script by adding the client's IP address to the `master-authorized-networks` allowlist:

```
gcloud container clusters update ${CLUSTER_NAME} \
    --region=${CLUSTER_LOCATION} \
    --enable-master-authorized-networks \
    --master-authorized-networks $(curl ifconfig.me)/32
```
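If you want to confirm that the allowlist was updated, you can describe the cluster and inspect its master authorized networks configuration; the `--format` filter shown is one way to narrow the output:

```
# Inspect the cluster's master authorized networks configuration.
gcloud container clusters describe ${CLUSTER_NAME} \
    --region=${CLUSTER_LOCATION} \
    --format="value(masterAuthorizedNetworksConfig)"
```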
Download the Knative serving migration script:
```
TMP_DIR=$(mktemp -d)
gcloud storage cp gs://crfa-to-hub-upgrade/migration-addon.sh $TMP_DIR
cd $TMP_DIR
chmod +x ./migration-addon.sh
```
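Optionally, you can confirm that the script downloaded and is executable before continuing:

```
# Confirm the script is present and has the executable bit set.
ls -l ./migration-addon.sh
```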
Run the following command to disable "scaling to zero"; otherwise, scaling fails and causes errors when the master node is updated:
```
kubectl patch cm config-autoscaler -n knative-serving -p '{"data":{"enable-scale-to-zero": "false"}}'
```
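To verify that the setting took effect, you can read the value back from the ConfigMap:

```
# Confirm that scaling to zero is now disabled ("false").
kubectl get cm config-autoscaler -n knative-serving \
    -o jsonpath='{.data.enable-scale-to-zero}'
```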
Note that the final step in this upgrade and migration process is to re-enable "scaling to zero".
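When you reach that final step, re-enabling it mirrors the command above with the value set back to `true`:

```
# Re-enable scaling to zero at the end of the migration.
kubectl patch cm config-autoscaler -n knative-serving \
    -p '{"data":{"enable-scale-to-zero": "true"}}'
```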