[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-03。"],[],[],null,["# About multi-network support for Pods\n\n[Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\nThis page describes multi-network support for Pods, including use cases, relevant\nconcepts, terminology, and benefits.\n\nOverview\n--------\n\nGoogle Cloud supports multiple network interfaces at the virtual\nmachine (VM) instance level. You can connect a VM to up to eight networks with\nmultiple network interfaces including the default network plus seven additional\nnetworks.\n\nGoogle Kubernetes Engine (GKE) networking extends multi-network capabilities to Pods\nthat run on the nodes. With multi-network support for Pods, you can enable\nmultiple interfaces on nodes and Pods in a GKE cluster.\nMulti-network support for Pods removes the single interface limitation for node\npools, which limited the nodes to a single VPC for networking.\n\n[Network Function Optimizer (NFO)](https://cloud.google.com/blog/topics/telecommunications/network-function-optimizer-for-gke-and-gdc-edge) is a network service available to GKE that provides\nmulti-network support, [persistent IP addresses](/kubernetes-engine/docs/concepts/about-persistent-ip-addresses-for-gke-pods) and a high-performance Kubernetes-native dataplane. NFO enables containerized network\nfunctions on GKE. Multi-network is one of the fundamental pillars\nof NFO.\n\nTo use multi-network support for your Pods and nodes, see\n[Set up multi-network support for Pods](/kubernetes-engine/docs/how-to/setup-multinetwork-support-for-pods).\n\nTerminology and concepts\n------------------------\n\nThis page uses the following concepts:\n\n**Primary VPC** : The primary\n[VPC](/vpc/docs/overview) is a pre-configured VPC\nthat comes with a set of default settings and resources. The GKE\ncluster is created in this VPC. If you delete the pre-configured\nVPC, then the GKE cluster is created in the primary\nVPC.\n\n**Subnet** : In Google Cloud, a [subnet](/vpc/docs/subnets) is the way to create\nClassless Inter-Domain Routing (CIDR) with netmasks in a VPC. A\nsubnet has a single primary IP address range which is assigned to the nodes and\ncan have multiple secondary ranges that can belong to Pods and Services.\n\n**Node-network**: The node-network refers to a dedicated combination of a\nVPC and subnet pair. Within this node-network, the nodes\nbelonging to the node pool are allocated IP addresses from the primary IP\naddress range.\n\n**Secondary range**: A Google Cloud secondary range is a CIDR and netmask\nbelonging to a subnet. GKE uses this as a Layer 3 Pod-network. A\nPod can connect to multiple Pod-networks.\n\n**Pod-network** : A Network object that serves as a connection point for Pods.\nThe connection can either be of type `Layer 3` or of type `Device`. You can\nconfigure `Device` type Networks in `netdevice` or Data Plane Development Kit\n(DPDK) mode.\n\n`Layer 3` networks correspond to a secondary range on a subnet. `Device` network\ncorrespond to a subnet on a VPC. 
**Default Pod-network**: Google Cloud creates a default Pod-network during cluster creation. The default Pod-network uses the primary VPC as the node-network. The default Pod-network is available on all cluster nodes and Pods, by default.

**Pods with multiple interfaces**: A Pod can have multiple network interfaces, but each interface must connect to a different Pod-network; multiple interfaces on the same Pod cannot connect to the same Pod-network.
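As an illustration of this constraint, the following sketch builds a Pod that attaches to the default Pod-network on `eth0` and to one additional `Layer 3` Pod-network on `eth1`, each interface referencing a different Network. The annotation keys and the `interfaces` payload follow the setup guide linked earlier, but treat them as assumptions here; the network name `blue-network` and the container image are placeholders.

```python
# Minimal sketch of a Pod with two interfaces: eth0 on the default Pod-network
# and eth1 on an additional Layer 3 Pod-network. The annotation keys and
# payload are assumptions taken from the setup guide; "blue-network" is a
# placeholder for a Network object created separately.
import json

from kubernetes import client, config

config.load_kube_config()

# Each interface must reference a different Pod-network.
interfaces = [
    {"interfaceName": "eth0", "network": "default"},       # default Pod-network
    {"interfaceName": "eth1", "network": "blue-network"},   # additional Pod-network
]

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="multi-net-pod",
        annotations={
            "networking.gke.io/default-interface": "eth0",
            "networking.gke.io/interfaces": json.dumps(interfaces),
        },
    ),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="app", image="nginx")],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```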
The following diagram shows a typical GKE cluster architecture with `Layer 3` Networks:

For `Device` typed Networks, which can be configured in `netdevice` or `DPDK` mode, the VM vNIC is managed as a resource and passed to the Pod. The Pod-network maps directly to the node-network in this case. Secondary ranges are not required for `Device` typed Networks.

Use cases
---------

Multi-network support for Pods addresses the following use cases:

- **Deploy containerized network functions:** Network functions that run in containers often have separate data and management planes. Multi-network for Pods isolates the networks for different user planes, provides high performance or low latency on specific interfaces, and enables network-level multi-tenancy. These capabilities are required for compliance, QoS, and security.
- **Connect VPCs within the same organization and project:** You want to create GKE clusters in a VPC and need to connect to services in another VPC. You can use the multi-NIC nodes option for direct connectivity. This might be due to a hub-and-spoke model, in which a centralized service (logging, authentication) operates within a hub VPC, and the spokes require private connectivity to access it. You can use multi-network support for Pods to connect the Pods running in the GKE cluster to the hub VPC directly.
- **Run DPDK applications with VFIO:** You want to run DPDK applications that require access to the NIC on the node through the VFIO driver. You can achieve the optimal packet rate by bypassing the kernel, Kubernetes, and GKE Dataplane V2 completely.
- **Enable direct access to the vNIC, bypassing Kubernetes and GKE Dataplane V2:** You run network functions in containers that require direct access to the network interface card (NIC) on the node. For example, High-Performance Computing (HPC) applications bypass Kubernetes and GKE Dataplane V2 to achieve the lowest latency. Some applications also need access to the PCIe topology information of the NIC so that they can be collocated with other devices, such as GPUs.

Benefits
--------

Multi-network support for Pods provides the following benefits:

- **Traffic isolation**: Multi-network support for Pods lets you isolate traffic in a GKE cluster. You can create Pods with multiple network interfaces to separate traffic based on capability, such as management and dataplane traffic, within Pods running specific Cloud Native Functions (CNFs).
- **Dual homing**: Dual homing lets a Pod have multiple interfaces and route traffic to different VPCs, allowing the Pod to establish connections with both a primary and a secondary VPC. If one VPC experiences issues, the application can fall back to the secondary VPC.
- **Network segmentation**: Pods can connect to internal or external networks based on workload needs. Depending on the specific requirements of your workloads, you can choose which Pods or groups of Pods connect to each network. For example, you can use an internal network for east-west communication and an external network for internet access. This lets you tailor the network connectivity of your workloads to their specific needs.
- **Optimal performance with DPDK**: Multi-network support for Pods in GKE lets DPDK applications run in GKE Pods, which provides optimal packet processing performance.
- **Host NIC directly available in the Pod**: The `netdevice` mode NIC support with multi-network passes the VM NIC directly to the Pod, bypassing Kubernetes and GKE Dataplane V2. This achieves the lowest latency for communication between devices.
- **Performance**: To improve your applications' performance, you can connect the applications to the network that is best suited to their needs.

What's next
-----------

- [Set up multi-network support for Pods](/kubernetes-engine/docs/how-to/setup-multinetwork-support-for-pods)
- [Read the GKE network overview](/kubernetes-engine/docs/concepts/network-overview)