[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-01 (世界標準時間)。"],[],[],null,["# Environment setup and preparation for upgrading\n\nPrepare to upgrade your previous installation of Knative serving and migrate\nyour workloads by setting up your command-line environment, creating environment\nvariables, and downloading the migration script.\n\nBefore you begin\n----------------\n\n- You must first review and ensure that you meet the\n [requirements](/kubernetes-engine/enterprise/knative-serving/docs/install/on-gcp/upgrade) before upgrading.\n\n- By default, Cloud Shell includes the latest versions of the `gcloud` and\n `kubectl` commands. If you choose to use the command-line environment of your\n local machine, you must ensure that you meet the following minimum\n requirements:\n\n - `gcloud` version 346.0.0 or newer\n - `kubectl` client version 1.21 or newer\n\n [Learn more about setting up your command-line tools](/kubernetes-engine/enterprise/knative-serving/docs/install/on-gcp/command-line-tools)\n- The preparation steps on this page are required throughout the\n [upgrade and migration process](/kubernetes-engine/enterprise/knative-serving/docs/install/on-gcp/upgrade).\n\n **Important**: All the commands that are used during the process rely on the\n environment variables that you set below. For example, if you close\n Cloud Shell or your session timesout, you must ensure that the required\n environment variables are reset.\n\nSetup your environment\n----------------------\n\n1. To use Cloud Shell, open Cloud Shell in the Google Cloud console:\n\n [Activate Cloud Shell](https://console.cloud.google.com/kubernetes/run?cloudshell=true%22)\n\n **Important** : Cloud Shell has\n [usage limits](/shell/docs/limitations#usage_limits) and can timeout. If your\n session timesout, you must ensure that the required environment variables\n are reset.\n2. Create the following required environment variables:\n\n 1. Set variables for your Google Cloud project and cluster details:\n\n export PROJECT_ID=\u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e\n export CLUSTER_NAME=\u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e\n export CLUSTER_LOCATION=\u003cvar translate=\"no\"\u003eCLUSTER_LOCATION\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e with the ID of your Google Cloud project.\n - \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e with the ID of your cluster or the fully qualified identifier for the cluster.\n - \u003cvar translate=\"no\"\u003eCLUSTER_LOCATION\u003c/var\u003e with the [region or zone](/compute/docs/regions-zones#available) in which your cluster is located.\n 2. Depending on your configuration, you must identify the ingress gateway\n that is handling traffic in your cluster. It is important to identify\n which version of Istio is actually configured and serving traffic.\n\n - If you use the bundled version of Istio, verify that the ingress\n service name is `istio-ingress` in the `gke-system` namespace:\n\n kubectl get svc istio-ingress -n gke-system\n\n Result: The details of your configuration are returned.\n - If you installed the\n \"[*Istio add-on*](/istio/docs/istio-on-gke/overview)\", you must\n determine which ingress gateway is configured and actively handling\n traffic by obtaining the IP addresses of the services and identifying\n which one is configured to your domain.\n\n 1. Obtain the `EXTERNAL-IP` address for each ingress service:\n\n Run the following commands to obtain the configuration details of\n both the \"bundled version of Istio\" (`istio-ingress`) and\n \"Istio add-on\" (`istio-ingressgateway`) ingress services: \n\n kubectl get svc istio-ingress -n gke-system\n kubectl get svc istio-ingressgateway -n istio-system\n\n **Example output**:\n\n Note the value of `EXTERNAL-IP` for each service. \n\n NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n istio-ingress LoadBalancer 11.11.1.111 12.345.678.910 15020:31265/TCP,80:30059/TCP,443:32004/TCP 8d\n\n NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n istio-ingressgateway LoadBalancer 22.22.2.222 10.987.654.321 15021:32747/TCP,80:30695/TCP,443:32695/TCP,15012:32369/TCP,15443:30909/TCP 88d\n\n 2. Identify which external IP address is configured to handle\n traffic through the DNS record of your custom domain:\n\n 1. Go to the Knative serving domain mappings page:\n\n [Go to Domain mappings](https://console.cloud.google.com/kubernetes/run/domains)\n 2. Click the 3-dot vertical ellipse icon to the right of your\n service, then click **DNS RECORDS** to display all the DNS\n records:\n\n Using the example above, the *Istio add-on* ingress gateway is\n in use and serving traffic if the DNS record configuration is\n set to the `10.987.654.321` IP address of the\n `istio-ingressgateway` service.\n 3. Set variables for the name and namespace of the ingress service that is\n serving traffic for your cluster:\n\n export INGRESS_NAME=\u003cvar translate=\"no\"\u003eINGRESS_NAME\u003c/var\u003e\n export INGRESS_NAMESPACE=\u003cvar translate=\"no\"\u003eINGRESS_NAMESPACE\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eINGRESS_NAME\u003c/var\u003e with the name of the ingress service that\n you identified in the previous step.\n\n - \u003cvar translate=\"no\"\u003eINGRESS_NAMESPACE\u003c/var\u003e with the namespace of the ingress\n service that you identified in the previous step.\n\n3. Configure the Google Cloud CLI:\n\n gcloud config set project ${PROJECT_ID}\n gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${CLUSTER_LOCATION}\n\n4. For private clusters:\n\n - If you already have access to your private cluster from the client where\n you will run the migration script, you can skip to the\n next step.\n\n - If your private cluster has `master-authorized-network` enabled, you can\n enable access from the client where you will run the migration script by\n adding the client's IP address to the `master-authorized-networks`\n allowlist:\n\n gcloud container clusters update ${CLUSTER_NAME} \\\n --region=${CLUSTER_LOCATION} \\\n --enable-master-authorized-networks \\\n --master-authorized-networks $(curl ifconfig.me)/32\n\n5. Download the Knative serving migration script:\n\n TMP_DIR=$(mktemp -d)\n gcloud storage cp gs://crfa-to-hub-upgrade/migration-addon.sh $TMP_DIR\n cd $TMP_DIR\n chmod +x ./migration-addon.sh\n\n6. Run the following command to disable \"scaling to zero\", otherwise\n scaling will fail and cause errors when the master node gets updated:\n\n kubectl patch cm config-autoscaler -n knative-serving -p '{\"data\":{\"enable-scale-to-zero\": \"false\"}}'\n\n Note that the final step in this upgrade and migration process is to\n re-enable \"scaling to zero\".\n\nWhat's next\n-----------\n\n[Uninstall the GKE addon](/kubernetes-engine/enterprise/knative-serving/docs/install/on-gcp/upgrade/uninstall-addon)."]]