Cloud Service Mesh control plane revisions

Last updated: 2025-09-04 (UTC).

This page describes how control plane *revisions* work and the value of using them for safe service mesh upgrades (and rollbacks).

Service mesh installation fundamentals

At a high level, Cloud Service Mesh installation consists of two major phases:

1. First, you use the `asmcli` tool to install an in-cluster control plane. The control plane consists of a set of system services that are responsible for managing mesh configuration.

2. Next, you deploy a special *sidecar proxy* throughout your environment that intercepts network communication to and from each workload. The proxies communicate with the control plane to get their configuration, which lets you direct and control traffic (data plane traffic) around your mesh without making any changes to your workloads.

   To deploy the proxies, you use a process called [*automatic sidecar injection* (auto-injection)](/service-mesh/legacy/in-cluster/operate-and-maintain/proxy-injection) to run a proxy as an additional sidecar container in each of your workload Pods. You don't need to modify the Kubernetes manifests that you use to deploy your workloads, but you do need to add a label to your namespaces and restart the Pods.

Use revisions to upgrade your mesh safely

The ability to control traffic is one of the principal benefits of using a service mesh. For example, you can gradually shift traffic to a new version of an application when you first deploy it to production.
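This kind of gradual shift is typically expressed as a weighted route. As a sketch (the service and version names, and the weights, are illustrative and not taken from this page), an Istio `VirtualService` splitting traffic between two versions of a workload might look like this:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-canary               # illustrative name
spec:
  hosts:
  - app                          # hypothetical workload Service
  http:
  - route:
    - destination:
        host: app-v1             # current version keeps most traffic
      weight: 90
    - destination:
        host: app-v2             # new version receives a small share
      weight: 10
```

Adjusting the weights over time moves traffic to the new version; setting the first weight back to 100 reverses the shift.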
If you detect problems during the upgrade, you can shift traffic back to the original version, providing a low-risk means of rolling back. This procedure is known as a *canary release*, and it greatly reduces the risk associated with new deployments.

**Note:** An alternative type of upgrade is an *in-place upgrade*, in which you upgrade by installing a new version of the control plane. The new control plane version immediately replaces the old version. In-place upgrades are risky because if there are failures, rolling back can be difficult. To re-inject the proxies and have them communicate with the new control plane version, you must restart all workloads in all of your namespaces. Depending on the number of workloads and namespaces in your mesh, the entire upgrade process could take an hour or more. In-place upgrades can lead to downtime and should be scheduled in maintenance windows.

Using control plane revisions in a canary upgrade, you install a new and separate control plane and configuration alongside the existing control plane. The installer assigns a string called a revision to identify the new control plane. At first, the sidecar proxies continue to receive configuration from the previous version of the control plane. You gradually associate workloads with the new control plane by labeling their namespaces or Pods with the new control plane revision. Once you have labeled a namespace or Pods with the new revision, you restart the workload Pods so that new sidecars are auto-injected, and they receive their configuration from the new control plane. If there are problems, you can roll back by associating the workloads with the original control plane.

How does auto-injection work?

Auto-injection uses a Kubernetes feature called [admission control](https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/). A mutating admission webhook is registered to watch for newly created Pods.
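A minimal sketch of such a webhook registration is shown below. The object and webhook names are illustrative, the selector is abbreviated to a `matchLabels` form, and `asm-1264-1` is used only as an example revision string:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector-demo    # illustrative name
webhooks:
- name: sidecar-injector.istio.io      # illustrative name
  clientConfig:
    service:                           # the control plane's injection service
      name: istiod-asm-1264-1
      namespace: istio-system
      path: /inject
      port: 443
  namespaceSelector:                   # only namespaces with this label match
    matchLabels:
      istio.io/rev: asm-1264-1
  rules:                               # fire only on Pod creation
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

With a registration like this, the API server calls the control plane's injection service for every Pod created in a matching namespace.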
The webhook is configured with a namespace selector so that it only matches Pods that are being deployed to namespaces that have a particular label. When a Pod matches, the webhook consults an injection service provided by the control plane to obtain a new, mutated configuration for the Pod, which contains the containers and volumes needed to run the sidecar.

1. A webhook configuration is created during installation. The webhook is registered with the Kubernetes API server.
2. The Kubernetes API server watches for Pod deployments in namespaces that match the webhook `namespaceSelector`.
3. A namespace is labeled so that it will be matched by the `namespaceSelector`.
4. Pods deployed to the namespace trigger the webhook.
5. The `inject` service provided by the control plane mutates the Pod specifications to auto-inject the sidecar.

What is a revision?

The label used for auto-injection is like any other user-defined Kubernetes label: essentially a key-value pair. Labels of this kind are widely used for tagging and versioning; for example, Git tags, Docker tags, and [Knative revisions](https://knative.dev/docs/getting-started/first-traffic-split/#creating-a-new-revision).

The current Cloud Service Mesh installation process lets you label the installed control plane with a revision string. The installer labels every control plane object with the revision. The key in the key-value pair is `istio.io/rev`. For in-cluster control planes, the `istiod` Service and Deployment typically have a revision label similar to `istio.io/rev=asm-1264-1`, where `asm-1264-1` identifies the Cloud Service Mesh version. The revision becomes part of the service name, for example: `istiod-asm-1264-1.istio-system`.

To enable auto-injection, you add a revision label to your namespaces that matches the revision label on the control plane.
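Declaratively, opting a namespace in looks like the following sketch (the namespace name `demo` is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                   # hypothetical namespace
  labels:
    istio.io/rev: asm-1264-1   # must match the control plane's revision label
```

After applying the label, restart the workload Pods (for example, with `kubectl rollout restart deployment -n demo`) so that the webhook injects the new sidecars.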
For example, a control plane with revision `istio.io/rev=asm-1264-1` selects Pods in namespaces with the label `istio.io/rev=asm-1264-1` and injects sidecars.

The canary upgrade process

Revision labels make it possible to perform canary upgrades and rollbacks of the in-cluster control plane.

The following steps describe how the process works:

1. Start with an existing Cloud Service Mesh or open source Istio installation. It doesn't matter whether the namespaces are using a revision label or the `istio-injection=enabled` label.
2. Use a revision string when you install the new version of the control plane. Because of the revision string, the new control plane is installed alongside the existing version. The new installation includes a new webhook configuration with a `namespaceSelector` configured to watch for namespaces with that specific revision label.
3. Migrate sidecar proxies to the new control plane by removing the old label from the namespace, adding the new revision label, and then restarting the Pods. If you use revisions with Cloud Service Mesh, you must stop using the `istio-injection=enabled` label. A control plane with a revision does not select Pods in namespaces with an `istio-injection` label, even if there is also a revision label. The webhook for the new control plane injects sidecars into the Pods.
4. Carefully test the workloads associated with the upgraded control plane and either continue to roll out the upgrade or roll back to the original control plane.

After associating Pods with the new control plane, the existing control plane and webhook are still installed. The old webhook has no effect on Pods in namespaces that have been migrated to the new control plane. You can roll back the Pods in a namespace to the original control plane by removing the new revision label, adding back the original label, and restarting the Pods.
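The rollback described above can likewise be expressed declaratively. A sketch, assuming a hypothetical namespace `demo` that originally used the `istio-injection=enabled` label:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo                   # hypothetical namespace
  labels:
    istio-injection: enabled   # original label restored; the istio.io/rev label is removed
```

After relabeling, restart the workload Pods so that the original control plane's webhook re-injects its sidecars.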
When you are certain that the upgrade is complete, you can remove the old control plane.

A closer look at a mutating webhook configuration

To better understand the mutating webhook for automatic sidecar injection, inspect the configuration yourself. Use the following command:

    kubectl -n istio-system get mutatingwebhookconfiguration -l app=sidecar-injector -o yaml

You should see a separate configuration for each control plane that you have installed. A namespace selector for a revision-based control plane looks like this:

    namespaceSelector:
      matchExpressions:
      - key: istio-injection
        operator: DoesNotExist
      - key: istio.io/rev
        operator: In
        values:
        - asm-1264-1

The selector may vary depending on the version of Cloud Service Mesh or Istio that you are running. This selector matches namespaces with a specific revision label as long as they don't also have an `istio-injection` label.

When a Pod is deployed to a namespace matching the selector, its Pod specification is submitted to the injector service for mutation. The injector service to be called is specified as follows:

    service:
      name: istiod-asm-1264-1
      namespace: istio-system
      path: /inject
      port: 443

The service is exposed by the control plane on port 443 at the `inject` URL path.

The `rules` section specifies that the webhook should apply to Pod creation:

    rules:
    - apiGroups:
      - ""
      apiVersions:
      - v1
      operations:
      - CREATE
      resources:
      - pods
      scope: '*'