# Use auto IP address management

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

| **Preview**
|
| This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the [Service Specific Terms](/terms/service-terms#1). Pre-GA features are available "as is" and might have limited support. For more information, see the [launch stage descriptions](/products#product-launch-stages).

This page explains how to enable automatic IP address management (auto IPAM) on a Google Kubernetes Engine (GKE) cluster. When you enable auto IPAM, GKE automatically creates subnets in the cluster and manages IP addresses for nodes and Pods. For Services, GKE assigns IP addresses from a [GKE-managed range](/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing) by default.

This page is for Operators, Cloud architects, Developers, and Network engineers who provision and configure cloud resources, deploy apps and services, and manage networking for their cloud deployments. To learn more about common roles and example tasks referenced in Google Cloud content, see [Common GKE Enterprise user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).
Overview
--------
Traditionally, when you create a GKE cluster, you manually configure a subnet with a primary range for node IP addresses and two secondary ranges for Pod and Service IP addresses. When you configure the Pod IP address range manually, it can be difficult to know what size of IP address range to set. If you don't allocate enough IP addresses, you might restrict cluster scaling and the creation of new Pods. Conversely, if you over-allocate IP addresses, you risk wasting valuable IP address space that other resources could use.
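For comparison, the following is a minimal sketch of that traditional manual setup. The subnet name, network, region, and CIDR sizes are illustrative placeholders, not recommendations; your ranges depend on your own capacity planning:

    # Manually create a subnet with a primary range for nodes and two
    # secondary ranges for Pods and Services (names and CIDRs are illustrative).
    gcloud compute networks subnets create my-subnet \
        --network=my-vpc \
        --region=us-central1 \
        --range=10.0.0.0/22 \
        --secondary-range=pods=10.4.0.0/14,services=10.0.32.0/20

    # Create a VPC-native cluster that uses those manually sized ranges.
    gcloud container clusters create my-cluster \
        --region=us-central1 \
        --enable-ip-alias \
        --subnetwork=my-subnet \
        --cluster-secondary-range-name=pods \
        --services-secondary-range-name=services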
GKE auto IPAM resolves this issue and has the following advantages:

**Reduced complexity**: auto IPAM reduces the complexity of IP address allocation by automatically creating a subnet and assigning an appropriate IP address range to that subnet.

**Automatic adjustment of IP address ranges**: when you enable auto IPAM, GKE begins with a smaller IP address range for nodes and Pods. As the cluster scales up or down, GKE dynamically adds or removes additional IP address ranges by using multiple non-overlapping IP address ranges that are defined at the cluster level (a way to inspect these ranges is sketched after this list). This automated approach optimizes IP address health and efficiency throughout the GKE cluster lifecycle.

**Simplified IP address management**: auto IPAM reduces the need for you to meticulously plan and manage IP address allocation for your GKE clusters.
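As a rough way to see the cluster-level IP allocation settings mentioned above, you can inspect the cluster's IP allocation policy. This is a sketch; the exact fields in the output depend on your cluster's configuration and GKE version, and CLUSTER_NAME and LOCATION are placeholders:

    # Show the cluster's IP allocation configuration, including the ranges
    # that GKE manages for nodes, Pods, and Services.
    gcloud container clusters describe CLUSTER_NAME \
        --location=LOCATION \
        --format="yaml(ipAllocationPolicy)"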
Auto IPAM doesn't add or remove IP address ranges that are already assigned to *existing* node pools when those node pools scale up or down. When you create a *new* node pool and the cluster doesn't have sufficient IP address space, auto IPAM creates additional subnets and IP address ranges.
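For example, here is a minimal sketch of creating a new node pool and then listing any subnets that auto IPAM provisioned for it; auto IPAM subnets use the `gke-auto` prefix, as noted in the caution later on this page, and the pool, cluster, and location names are placeholders:

    # Create a new node pool; if the cluster lacks sufficient IP address
    # space, auto IPAM provisions an additional subnet and ranges for it.
    gcloud container node-pools create POOL_NAME \
        --cluster=CLUSTER_NAME \
        --location=LOCATION

    # List the subnets that auto IPAM created (they use the gke-auto prefix).
    gcloud compute networks subnets list \
        --filter="name~^gke-auto"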
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-01 (世界標準時間)。"],[],[],null,["# Use auto IP address management\n\n[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\n|\n| **Preview**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nThis page explains how to enable automatic IP address management (auto IPAM) on\na Google Kubernetes Engine (GKE) cluster. When you enable auto IPAM,\nGKE automatically creates subnets in the cluster and manages IP\naddresses for nodes and Pods. For Services, GKE assigns IP\naddresses from a [GKE-managed\nrange](/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing) by default.\n\nThis page is for Operators, Cloud architects, Developers,\nand Network engineers who provision and configure cloud resources, deploy\napps and services, and manage networking for their cloud deployments. To learn\nmore about common roles and example tasks referenced in Google Cloud\ncontent, see\n[Common GKE Enterprise user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).\n\nOverview\n--------\n\nTraditionally, when you create a GKE cluster, you manually\nconfigure a subnet with a primary range for node IP addresses and two secondary\nranges for Pod and Service IP addresses. When you manually configure the Pod IP\naddress range, it can be difficult to know the exact size of the IP address\nrange to set. If you don't allocate enough IP addresses, you might restrict\ncluster scaling and the creation of new Pods. Conversely, if you over-allocate\nIP addresses, you risk wasting valuable IP address space that other resources\ncould utilize.\n\nGKE auto IPAM resolves this issue and has the following\nadvantages:\n\n**Reduced complexity**: auto IPAM reduces the complexity of IP address\nallocation by automatically creating a subnet and assigning an appropriate IP\naddress range to that subnet.\n\n**Automatic adjustment of IP address ranges**: when you enable auto IPAM,\nGKE begins with a smaller IP address range for nodes and Pods. As\nthe cluster scales up or down, GKE dynamically adds or removes\nadditional IP address ranges by using multiple IP address ranges that don't\noverlap and are defined at the cluster level. This automated approach optimizes\nIP address health and efficiency throughout the entire GKE\ncluster lifecycle.\n\n**Simplified IP address management**: auto IPAM reduces the need for you to\nmeticulously plan and manage IP address allocation for your GKE\nclusters.\n\nAuto IPAM doesn't add or remove IP address ranges that are already assigned to\n*existing* node pools when these node pools are scaled up or down. When you\ncreate *new* node pools with insufficient IP address space in the cluster, auto\nIPAM creates additional subnets and IP address ranges.\n\nYou can enable auto IPAM when you create a new cluster. 
You can also enable or\ndisable auto IPAM for existing clusters.\n\nBefore you begin\n----------------\n\nBefore you start, make sure that you have performed the following tasks:\n\n- Enable the Google Kubernetes Engine API.\n[Enable Google Kubernetes Engine API](https://console.cloud.google.com/flows/enableapi?apiid=container.googleapis.com)\n- If you want to use the Google Cloud CLI for this task, [install](/sdk/docs/install) and then [initialize](/sdk/docs/initializing) the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running `gcloud components update`. **Note:** For existing gcloud CLI installations, make sure to set the `compute/region` [property](/sdk/docs/properties#setting_properties). If you use primarily zonal clusters, set the `compute/zone` instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: `One of [--zone, --region] must be supplied: Please specify location`. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.\n\n### Restrictions and limitations\n\nWhen you use auto IPAM in your GKE cluster, understand the\nfollowing restrictions and limitations:\n\n- Your cluster must be a VPC-native cluster. Routes-based clusters don't support auto IPAM.\n- You can't use Auto IPAM in a cluster with [Shared VPC](/kubernetes-engine/docs/how-to/cluster-shared-vpc).\n- You can enable auto IPAM in a cluster that has [multi-network\n capabilities](/kubernetes-engine/docs/concepts/about-multinetwork-support-for-pods). However, auto IPAM won't work when you create a new node pool with multiple network interfaces. To use auto IPAM, you must disable multi-networking capabilities in your node pool.\n- If your cluster has [IPv4/IPv6 dual-stack networking](/kubernetes-engine/docs/concepts/alias-ips#dual_stack_network), auto IPAM will allocate and manage only the IPv4 addresses in your cluster.\n- When you enable auto IPAM, the default maximum node size for any node pool, including the default node pool, is 252 nodes with a CIDR block of /24.\n- By default, GKE allows up to 48 Pods per node in a cluster with auto IPAM.\n- You can't overprovision a Pod CIDR range in clusters that use auto IPAM.\n\n| **Caution:** Don't use subnets that are created with auto IPAM (identified with the prefix `gke-auto`) for resources that are *not* managed by GKE. GKE automatically deletes and recycles these subnets when they are no longer in use by a GKE cluster, which can lead to unexpected service disruptions. You also won't be able to delete your GKE cluster or node pool if the `gke-auto` subnet is used by other resources.\n\nCreate a cluster with auto IPAM\n-------------------------------\n\nWhen you create a new cluster and enable auto IPAM, you can either have\nGKE create a new subnet, or you can specify an existing subnet to\nuse. If you specify an existing subnet, make sure that there are enough\nsecondary IP address ranges available for the cluster. You don't have to specify\nany IP address ranges when you create a cluster and enable auto IPAM.\n\n1. 
To create a cluster with auto IPAM and have GKE create a new\n subnet, run the following command:\n\n gcloud container clusters create \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e \\\n --enable-auto-ipam \\\\\n\n Replace \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e with the name of your cluster.\n\n GKE does the following:\n - Creates a new subnet for the cluster.\n - Sets up an initial IP address allocation for the cluster and automatically allocates new node and Pod IP addresses to the new node pool.\n - Monitors the use of subnets and secondary IP address ranges.\n2. To create a cluster with auto IPAM and specify your own subnet, follow the\n instructions in the [Create a cluster in an existing\n subnet](/kubernetes-engine/docs/how-to/alias-ips#creating_cluster) section and use\n the `--enable-auto-ipam` flag in the command. For example:\n\n gcloud container clusters create \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e \\\n --enable-auto-ipam \\\\\n --subnetwork=\u003cvar translate=\"no\"\u003eSUBNET_NAME\u003c/var\u003e \\\n\n Replace the following values:\n - \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e: the name of your cluster.\n - \u003cvar translate=\"no\"\u003eSUBNET_NAME\u003c/var\u003e: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster.\n\nUpdate an existing cluster\n--------------------------\n\nYou can enable or disable auto IPAM on an existing cluster.\n\n### Enable auto IPAM\n\nTo enable auto IPAM on an existing cluster, run the following command: \n\n gcloud container clusters update \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e \\\n --enable-auto-ipam\n\nAfter you run this command, when you create a new node pool without sufficient\nIP address space, GKE creates and manages a new IP\naddress range in your cluster.\n\n### Disable auto IPAM\n\nTo disable auto IPAM on an existing cluster, run the following command: \n\n gcloud container clusters update \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e \\\n --disable-auto-ipam\n\nReplace \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e with the name of your cluster.\n\nAfter you disable auto IPAM in your cluster:\n\n- GKE will retain ownership of any subnets and secondary IP address ranges that were created with auto IPAM. These resources are deleted when you delete the GKE cluster.\n- When you create new node pools, GKE automatically assigns the default subnet and the associated secondary IP address range.\n\nWhat's next\n-----------\n\n- Learn about [IP address allocation in GKE](/kubernetes-engine/docs/concepts/network-overview#ip-allocation).\n- Learn how to [create VPC-native clusters](/kubernetes-engine/docs/how-to/alias-ips#create_a_cluster)."]]