Set up multi-network support for Pods


This page shows you how to enable multiple interfaces on nodes and Pods in a Google Kubernetes Engine (GKE) cluster using multi-network support for Pods. Multi-network support is only available for projects enabled with GKE Enterprise.

You should already be familiar with general networking concepts, terminology and concepts specific to this feature, and requirements and limitations for multi-network support for Pods.

For more information, see About multi-network support for Pods.

Requirements and limitations

Multi-network support for Pods has the following requirements and limitations:

Requirements

  • GKE version 1.28 or later.
  • Enable Google Kubernetes Engine (GKE) Enterprise edition.
  • Multi-network support for Pods uses the same VM-level specifications as multi-NIC for Compute Engine.
  • Multi-network support for Pods requires GKE Dataplane V2.
  • Multi-network support for Pods is only available for Container-Optimized OS nodes that are running version m101 or later.

General limitations

  • Multi-network support for Pods doesn't work for clusters that are enabled for dual-stack networking.
  • Multi-Pod CIDR is not available for clusters with multi-network enabled.
  • Pod-networks in a single GKE cluster can't have overlapping CIDR ranges.
  • When you enable multi-network support for Pods, you can't add or remove node-network interfaces or Pod-networks after creating a node pool. To change these settings, you must recreate the node pool.
  • By default, internet access is not available on additional interfaces of Pod-networks inside the Pod. However, you can enable it manually using Cloud NAT.
  • You cannot change the default Gateway inside a Pod with multiple interfaces through the API. The default Gateway must be connected to the default Pod-network.
  • The default Pod-network must always be included in Pods, even if you create additional Pod-networks or interfaces.
  • You cannot configure the multi-network feature when Managed Hubble has been configured.

Device and Data Plane Development Kit (DPDK) limitations

  • A VM NIC passed into a Pod as a Device type NIC is not available to other Pods on the same node.
  • Pods that use DPDK mode must be run in privileged mode to access VFIO devices.
  • In DPDK mode, a device is treated as a node resource and is only attached to the first container (non-init) in the Pod. If you want to split multiple DPDK devices among containers in the same Pod, you need to run those containers in separate Pods.

Scaling limitations

GKE provides a flexible network architecture that lets you scale your cluster. You can add additional node-networks and Pod-networks to your cluster. You can scale your cluster as follows:

  • You can add up to 7 additional node-networks to each GKE node pool. This is the same scale limit as for Compute Engine VMs.
  • Each Pod must have fewer than 7 additional networks attached.
  • You can configure up to 35 Pod-networks across the 8 node-networks within a single node pool. You can break it down into different combinations, such as:
    • 7 node-networks with 5 Pod-networks each
    • 5 node-networks with 7 Pod-networks each
    • 1 node-network with 30 Pod-networks. The limit for secondary ranges per subnet is 30.
  • You can configure up to 50 Pod-networks per cluster.
  • You can configure a maximum of 32 multi-network Pods per node.
  • You can have up to 3,000 nodes with multiple interfaces.
  • You can have up to 100,000 additional interfaces across all Pods.
  • You can configure a maximum of 1,000 nodes with Device type networks.
  • You can configure a maximum of 4 Device type networks per node.

Pricing

Network Function Optimizer (NFO) features, including multi-network and high-performance support for Pods, are supported only on clusters in projects enabled with GKE Enterprise. To understand the charges that apply for enabling Google Kubernetes Engine (GKE) Enterprise edition, see GKE Enterprise Pricing.

Deploy multi-network Pods

To deploy multi-network Pods, do the following:

  1. Prepare an additional VPC, subnet (node-network), and secondary ranges (Pod-network).
  2. Create a multi-network enabled GKE cluster using the Google Cloud CLI command.
  3. Create a new GKE node pool that is connected to the additional node-network and Pod-network using the Google Cloud CLI command.
  4. Create Pod network and reference the correct VPC, subnet, and secondary ranges in multi-network objects using the Kubernetes API.
  5. In your workload configuration, reference the prepared Network Kubernetes object using the Kubernetes API.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
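For example, if you manage your project from the gcloud CLI, you can enable the API and update your components with the following commands. This is a minimal sketch; replace PROJECT_ID with your project ID:

gcloud services enable container.googleapis.com \
    --project=PROJECT_ID
gcloud components update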

Prepare an additional VPC

During cluster creation, Google Cloud creates a default Pod-network associated with the GKE node pool used in the initial creation of the GKE cluster. The default Pod-network is available on all cluster nodes and Pods. To enable multi-network capabilities within the node pool, you must prepare existing or new VPCs that support Layer 3 and Device type networks.

To prepare an additional VPC, consider the following requirements:

  • Layer 3 and Netdevice type networks:

    • Create a secondary range if you are using Layer 3 type networks.
    • Ensure that the CIDR size for the secondary range is large enough to satisfy the number of nodes in the node pool and the number of Pods per node you want to have.
    • Similar to the default Pod-network, the other Pod-networks use IP address overprovisioning. The secondary IP address range must have twice as many IP addresses per node as the number of Pods per node.
  • Device type network requirements: Create a regular subnet on a VPC. You don't need a secondary range.

To enable multi-network capabilities in the node pool, you must prepare the VPCs to which you want to establish additional connections. You can use an existing VPC or create a new VPC specifically for the node pool.
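If you don't already have a VPC to use, you can create one in custom subnet mode before you add subnets to it. The following sketch assumes the l3-vpc name used in the console example later in this section:

gcloud compute networks create l3-vpc \
    --project=PROJECT_ID \
    --subnet-mode=custom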

Create a VPC network that supports Layer 3 type networks

To create a VPC network that supports Layer 3 type networks, do the following:

  • Ensure that the CIDR size for the secondary range is large enough to satisfy the number of nodes in the node pool and the number of Pods per node you want to have.
  • Similar to the default Pod-network, the other Pod-networks use IP address overprovisioning. The secondary IP address range must have twice as many IP addresses per node as the number of Pods per node.

gcloud

gcloud compute networks subnets create SUBNET_NAME \
    --project=PROJECT_ID \
    --range=SUBNET_RANGE \
    --network=NETWORK_NAME \
    --region=REGION \
    --secondary-range=SECONDARY_RANGE_NAME=SECONDARY_IP_RANGE

Replace the following:

  • SUBNET_NAME: the name of the subnet.
  • PROJECT_ID: the ID of the project that contains the VPC network where the subnet is created.
  • SUBNET_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation.
  • NETWORK_NAME: the name of the VPC network that contains the new subnet.
  • REGION: the Google Cloud region in which the new subnet is created.
  • SECONDARY_RANGE_NAME: the name for the secondary range.
  • SECONDARY_IP_RANGE: the secondary IPv4 address range, in CIDR notation.
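For example, a filled-in version of the preceding command that uses the dataplane VPC, subnet-dp subnet, and sec-range-blue secondary range names from the node pool example later on this page might look like the following. The CIDR ranges are illustrative only:

gcloud compute networks subnets create subnet-dp \
    --project=my-project \
    --range=10.10.0.0/24 \
    --network=dataplane \
    --region=us-central1 \
    --secondary-range=sec-range-blue=172.16.1.0/24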

Console

  1. In the Google Cloud console, go to the VPC networks page.

  2. Click Create VPC network.

  3. In the Name field, enter the name of the network. For example, l3-vpc.

  4. From the Maximum transmission unit (MTU) drop-down, choose an appropriate MTU value.

  5. In the Subnet creation mode section, choose Custom.

  6. Click ADD SUBNET.

  7. In the New subnet section, specify the following configuration parameters for a subnet:

    1. Provide a Name. For example, l3-subnet.

    2. Select a Region.

    3. Enter an IP address range. This is the primary IPv4 range for the subnet.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    4. To define a secondary range for the subnet, click Create secondary IP address range.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    5. Private Google access: You can enable Private Google Access for the subnet when you create it or later by editing it.

    6. Flow logs: You can enable VPC flow logs for the subnet when you create it or later by editing it.

    7. Click Done.

  8. In the Firewall rules section, under IPv4 firewall rules, select zero or more predefined firewall rules.

    The rules address common use cases for connectivity to instances. You can create your own firewall rules after you create the network. Each predefined rule name starts with the name of the VPC network that you are creating.

  9. Under IPv4 firewall rules, to edit the predefined ingress firewall rule named allow-custom, click EDIT.

    You can edit subnets, add additional IPv4 ranges, and specify protocols and ports.

    The allow-custom firewall rule is not automatically updated if you add additional subnets later. If you need firewall rules for the new subnets, to add the rules, you must update the firewall configuration.

  10. In the Dynamic routing mode section, choose the dynamic routing mode for the VPC network. For more information, see dynamic routing mode. You can change the dynamic routing mode later.

  11. Click Create.

Create a VPC network that supports Netdevice or DPDK type devices

gcloud

gcloud compute networks subnets create SUBNET_NAME \
    --project=PROJECT_ID \
    --range=SUBNET_RANGE \
    --network=NETWORK_NAME \
    --region=REGION

Replace the following:

  • SUBNET_NAME: the name of the subnet.
  • PROJECT_ID: the ID of the project that contains the VPC network where the subnet is created.
  • SUBNET_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation.
  • NETWORK_NAME: the name of the VPC network that contains the new subnet.
  • REGION: the Google Cloud region in which the new subnet is created.
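For example, a filled-in version of the preceding command that uses the highperformance VPC and subnet-highperf subnet names from the node pool example later on this page might look like the following. The CIDR range is illustrative only, and no secondary range is needed for a Device type network:

gcloud compute networks subnets create subnet-highperf \
    --project=my-project \
    --range=10.20.0.0/24 \
    --network=highperformance \
    --region=us-central1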

Console

  1. In the Google Cloud console, go to the VPC networks page.

  2. Click Create VPC network.

  3. In the Name field, enter the name of the network. For example, netdevice-vpc or dpdk-vpc.

  4. From the Maximum transmission unit (MTU) drop-down, choose an appropriate MTU value.

  5. In the Subnet creation mode section, choose Custom.

  6. In the New subnet section, specify the following configuration parameters for a subnet:

    1. Provide a Name. For example, netdevice-subnet or dpdk-subnet.

    2. Select a Region.

    3. Enter an IP address range. This is the primary IPv4 range for the subnet.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    4. Private Google Access: Choose whether to enable Private Google Access for the subnet when you create it or later by editing it.

    5. Flow logs: You can enable VPC flow logs for the subnet when you create it or later by editing it.

    6. Click Done.

  7. In the Firewall rules section, under IPv4 firewall rules, select zero or more predefined firewall rules.

    The rules address common use cases for connectivity to instances. You can create your own firewall rules after you create the network. Each predefined rule name starts with the name of the VPC network that you are creating.

  8. Under IPv4 firewall rules, to edit the predefined ingress firewall rule named allow-custom, click EDIT.

    You can edit subnets, add additional IPv4 ranges, and specify protocols and ports.

    The allow-custom firewall rule is not automatically updated if you add additional subnets later. If you need firewall rules for the new subnets, to add the rules, you must update the firewall configuration.

  9. In the Dynamic routing mode section, choose the dynamic routing mode for the VPC network. For more information, see dynamic routing mode. You can change the dynamic routing mode later.

  10. Click Create.

Create a GKE cluster with multi-network capabilities

gcloud

To create a GKE cluster with multi-network capabilities:

gcloud container clusters create CLUSTER_NAME \
    --cluster-version=CLUSTER_VERSION \
    --enable-dataplane-v2 \
    --enable-ip-alias \
    --enable-multi-networking

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • CLUSTER_VERSION: the version of the cluster, which must be 1.28 or later.

This command includes the following flags:

  • --enable-multi-networking: enables multi-networking Custom Resource Definitions (CRDs) in the API server for this cluster, and deploys a network-controller-manager, which handles reconciliation and lifecycle management for multi-network objects.
  • --enable-dataplane-v2: enables GKE Dataplane V2. This flag is required to enable multi-network.
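For example, a filled-in version of the command that uses the cluster-1 name and us-central1-c zone from the node pool example later on this page, assuming version 1.28 is available to your project, might look like the following:

gcloud container clusters create cluster-1 \
    --cluster-version=1.28 \
    --enable-dataplane-v2 \
    --enable-ip-alias \
    --enable-multi-networking \
    --zone=us-central1-c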

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. Configure your Standard cluster. For more information, see Create a zonal cluster or Create a regional cluster. While creating the cluster, select the appropriate Network and Node subnet.

  4. From the navigation pane, under Cluster, click Networking.

  5. Select Enable Dataplane V2 checkbox.

  6. Select Enable Multi-Network.

  7. Click Create.

Enabling multi-networking for a cluster adds the necessary CustomResourceDefinitions (CRDs) to the API server for that cluster. It also deploys a network-controller-manager, which is responsible for reconciling and managing multi-network objects. You can't disable multi-networking after the cluster is created.
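To confirm that the multi-networking CRDs exist in your cluster, you can list them. This is a quick check that assumes the CRD names corresponding to the networking.gke.io objects used on this page; the exact set of CRDs can vary by GKE version:

kubectl get crds networks.networking.gke.io gkenetworkparamsets.networking.gke.io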

Create a GKE node pool connected to additional VPCs

Create a node pool that includes nodes connected to the node-network (VPC and subnet) and Pod-network (secondary range) that you created in Prepare an additional VPC.

To create the new node pool and associate it with the additional networks in the GKE cluster:

gcloud

gcloud container node-pools create POOL_NAME \
  --cluster=CLUSTER_NAME \
  --additional-node-network network=NETWORK_NAME,subnetwork=SUBNET_NAME \
  --additional-pod-network subnetwork=SUBNET_NAME,pod-ipv4-range=POD_IP_RANGE,max-pods-per-node=NUMBER_OF_PODS \
  --additional-node-network network=highperformance,subnetwork=subnet-highperf

Replace the following:

  • POOL_NAME: the name of the new node pool.
  • CLUSTER_NAME: the name of the existing cluster to which you are adding the node pool.
  • NETWORK_NAME: the name of the network to attach the node pool's nodes to.
  • SUBNET_NAME: the name of the subnet within the network to use for the nodes.
  • POD_IP_RANGE: the Pod IP address range within the subnet.
  • NUMBER_OF_PODS: the maximum number of Pods per node.

This command contains the following flags:

  • --additional-node-network: Defines details of the additional network interface, network, and subnetwork. This is used to specify the node-networks for connecting to the node pool nodes. Specify this parameter when you want to connect to another VPC. If you don't specify this parameter, the default VPC associated with the cluster is used. For Layer 3 type networks, specify the additional-pod-network flag that defines the Pod-network, which is exposed inside the GKE cluster as the Network object. When using the --additional-node-network flag, you must provide a network and subnetwork as mandatory parameters. Make sure to separate the network and subnetwork values with a comma and avoid using spaces.
  • --additional-pod-network: Specifies the details of the secondary range to be used for the Pod-network. This parameter is not required if you use a Device type network. This argument specifies the following key values: subnetwork, pod-ipv4-range, and max-pods-per-node. When using the --additional-pod-network flag, you must provide the pod-ipv4-range and max-pods-per-node values, separated by commas and without spaces.
    • subnetwork: links the node-network with the Pod-network. The subnetwork is optional. If you don't specify it, the additional Pod-network is associated with the default subnetwork provided during cluster creation.
    • max-pods-per-node: must be specified and must be a power of 2. The minimum value is 4. The value must not exceed the node pool's max-pods-per-node value.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Clusters.

  3. In the Kubernetes clusters section, click the cluster you created.

  4. At the top of the page, to create your node pool, click Add Node Pool.

  5. In the Node pool details section, complete the following:

    1. Enter a Name for the node pool.
    2. Enter the Number of nodes to create in the node pool.
  6. From the navigation pane, under Node Pools, click Nodes.

    1. From the Image type drop-down list, select the Container-Optimized OS with containerd (cos_containerd) node image.

  7. When you create a VM, you select a machine type from a machine family that determines the resources available to that VM. For example, a machine type like e2-standard-4 has 4 vCPUs, so it can support up to 4 VPCs in total. There are several machine families you can choose from, and each machine family is further organized into machine series and predefined or custom machine types within each series. Each machine type is billed differently. For more information, refer to the machine type price sheet.

  8. From the navigation pane, select Networking.

  9. In the Node Networking section, specify the maximum number of Pods per node. The Node Networks section displays the VPC network used to create the cluster. You must designate additional Node Networks that correspond to the previously established VPC networks and device types.

  10. Create node pool association:

    1. For Layer 3 type device:
      1. In the Node Networks section, click ADD A NODE NETWORK.
      2. From the network drop-down list select the VPC that supports Layer 3 type device.
      3. Select the subnet created for Layer 3 VPC.
      4. In the section Alias Pod IP address ranges, click Add Pod IP address range.
      5. Select the Secondary subnet and indicate the Max number of Pods per node.
      6. Select Done.
    2. For Netdevice and DPDK type device:
      1. In the Node Networks section, click ADD A NODE NETWORK.
      2. From the network drop-down list select the VPC that supports Netdevice or DPDK type devices.
      3. Select the subnet created for Netdevice or DPDK VPC.
      4. Select Done.
  11. Click Create.

Notes:

  • If multiple additional Pod-networks are specified within the same node-network, they must be in the same subnet.
  • You can't reference the same secondary range of a subnet multiple times.

Example: The following command creates a node pool named pool-multi-net that attaches two additional networks to the nodes, dataplane (a Layer 3 type network) and highperformance (a netdevice type network). This example assumes that you already created a GKE cluster named cluster-1:

gcloud container node-pools create pool-multi-net \
  --project my-project \
  --cluster cluster-1 \
  --zone us-central1-c \
  --additional-node-network network=dataplane,subnetwork=subnet-dp \
  --additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8 \
  --additional-node-network network=highperformance,subnetwork=subnet-highperf

To specify additional node-network and Pod-network interfaces, define the --additional-node-network and --additional-pod-network parameters multiple times as shown in the following example:

--additional-node-network network=dataplane,subnetwork=subnet-dp \
--additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8 \
--additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-green,max-pods-per-node=8 \
--additional-node-network network=managementdataplane,subnetwork=subnet-mp \
--additional-pod-network subnetwork=subnet-mp,pod-ipv4-range=sec-range-red,max-pods-per-node=4

You can also specify additional Pod-networks directly on the primary VPC interface of the node pool, as shown in the following example:

--additional-pod-network subnetwork=subnet-def,pod-ipv4-range=sec-range-multinet,max-pods-per-node=8
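To review the networks attached to a node pool after you create it, you can describe the node pool. The following sketch assumes the example pool-multi-net, cluster-1, my-project, and us-central1-c names used above:

gcloud container node-pools describe pool-multi-net \
    --cluster=cluster-1 \
    --zone=us-central1-c \
    --project=my-project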

Create Pod network

Define the Pod networks that the Pods will access by defining Kubernetes objects and linking them to the corresponding Compute Engine resources, such as VPCs, subnets, and secondary ranges.

To create a Pod-network, you must define the Network CRD objects in the cluster.

Configure Layer 3 VPC network

YAML

For the Layer 3 VPC, create Network and GKENetworkParamSet objects:

  1. Save the following sample manifest as blue-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: blue-network
    spec:
      type: "L3"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "l3-vpc"
    

    The manifest defines a Network resource named blue-network of the type Layer 3. The Network object references the GKENetworkParamSet object called l3-vpc, which associates a network with Compute Engine resources.

  2. Apply the manifest to the cluster:

    kubectl apply -f blue-network.yaml
    
  3. Save the following manifest as dataplane.yaml:

    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: "l3-vpc"
    spec:
      vpc: "l3-vpc"
      vpcSubnet: "subnet-dp"
      podIPv4Ranges:
        rangeNames:
        - "sec-range-blue"
    

    This manifest defines a GKENetworkParamSet object named l3-vpc, sets the VPC name as l3-vpc, the subnet name as subnet-dp, and the secondary IPv4 address range for Pods as sec-range-blue.

  4. Apply the manifest to the cluster:

    kubectl apply -f dataplane.yaml
    

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Network Function Optimizer.

  3. Click Enable GKE Enterprise.

  4. At the top of the page, click Create to create your Pod network.

  5. In the Before you begin section, verify the details.

  6. Click NEXT: POD NETWORK LOCATION.

  7. In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.

  8. Click NEXT: VPC NETWORK REFERENCE.

  9. In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for Layer 3 multinic Pods.

  10. Click NEXT: POD NETWORK TYPE.

  11. In the Pod network type section, select L3 and enter the Pod network name.

  12. Click NEXT: POD NETWORK SECONDARY RANGE.

  13. In the Pod network secondary range section, enter the Secondary range.

  14. Click NEXT: POD NETWORK ROUTES.

  15. In the Pod network routes section, to define Custom routes, select ADD ROUTE.

  16. Click CREATE POD NETWORK.
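Whichever method you use, you can confirm that the objects were created and report a Ready condition. The following sketch assumes the blue-network and l3-vpc names from the YAML example:

kubectl get networks blue-network
kubectl get gkenetworkparamsets l3-vpc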

Configure DPDK network

YAML

For DPDK VPC, create Network and GKENetworkParamSet objects.

  1. Save the following sample manifest as dpdk-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: dpdk-network
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "dpdk"
    

    This manifest defines a Network resource named dpdk-network with a type of Device. The Network resource references a GKENetworkParamSet object called dpdk for its configuration.

  2. Apply the manifest to the cluster:

    kubectl apply -f dpdk-network.yaml
    
  3. For the GKENetworkParamSet object, save the following manifest as dpdk.yaml:

    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: "dpdk"
    spec:
      vpc: "dpdk"
      vpcSubnet: "subnet-dpdk"
      deviceMode: "DPDK-VFIO"
    

    This manifest defines the GKENetworkParamSet object named dpdk, sets the VPC name as dpdk, the subnet name as subnet-dpdk, and the deviceMode as DPDK-VFIO.

  4. Apply the manifest to the cluster:

    kubectl apply -f dpdk.yaml
    

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Network Function Optimizer.

  3. At the top of the page, click Create to create your Pod network.

  4. In the Before you begin section, verify the details.

  5. Click NEXT: POD NETWORK LOCATION.

  6. In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.

  7. Click NEXT: VPC NETWORK REFERENCE.

  8. In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for dpdk multinic Pods.

  9. Click NEXT: POD NETWORK TYPE.

  10. In the Pod network type section, select DPDK-VFIO (Device) and enter the Pod network name.

  11. Click NEXT: POD NETWORK SECONDARY RANGE. The Pod network secondary range section is unavailable for Device type networks.

  12. Click NEXT: POD NETWORK ROUTES. In the Pod network routes section, select ADD ROUTE to define custom routes.

  13. Click CREATE POD NETWORK.

Configure netdevice network

For the netdevice VPC, create Network and GKENetworkParamSet objects.

YAML

  1. Save the following sample manifest as netdevice-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: netdevice-network
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "netdevice"
    

    This manifest defines a Network resource named netdevice-network with a type of Device. It references the GKENetworkParamSet object named netdevice.

  2. Apply the manifest to the cluster:

    kubectl apply -f netdevice-network.yaml
    
  3. Save the following manifest as netdevice.yaml :

    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: netdevice
    spec:
      vpc: netdevice
      vpcSubnet: subnet-netdevice
      deviceMode: NetDevice
    

    This manifest defines a GKENetworkParamSet resource named netdevice, sets the VPC name as netdevice, the subnet name as subnet-netdevice, and specifies the device mode as NetDevice.

  4. Apply the manifest to the cluster:

    kubectl apply -f netdevice.yaml
    

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Network Function Optimizer.

  3. At the top of the page, click Create to create your Pod network.

  4. In the Before you begin section, verify the details.

  5. Click NEXT: POD NETWORK LOCATION.

  6. In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.

  7. Click NEXT: VPC NETWORK REFERENCE.

  8. In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for netdevice multinic Pods.

  9. Click NEXT: POD NETWORK TYPE.

  10. In the Pod network type section, select NetDevice (Device) and enter the Pod network name.

  11. Click NEXT: POD NETWORK SECONDARY RANGE. The Pod network secondary range section is unavailable for Device type networks.

  12. Click NEXT: POD NETWORK ROUTES. In the Pod network routes section, to define custom routes, select ADD ROUTE.

  13. Click CREATE POD NETWORK.

Configuring network routes

Configuring network routes lets you define custom routes for a specific network. These routes are set up on the Pods to direct traffic to the corresponding interface within the Pod.

YAML

  1. Save the following manifest as red-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: red-network
    spec:
      type: "L3"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "management"
      routes:
      - to: "10.0.2.0/28"
    

    This manifest defines a Network resource named red-network with a type of Layer 3, and a custom route to 10.0.2.0/28 through that network's interface.

  2. Apply the manifest to the cluster:

    kubectl apply -f red-network.yaml
    

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. From the navigation pane, click Network Function Optimizer.

  4. In the Kubernetes clusters section, click the cluster you created.

  5. At the top of the page, click Create to create your Pod network.

  6. In the Pod network routes section, define Custom routes.

  7. Click CREATE POD NETWORK.
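After a Pod attaches to the network, the custom route appears in the Pod's routing table on the corresponding interface. The following sketch assumes a Pod named samplepod whose annotations reference red-network and whose image includes the ip tool:

kubectl exec samplepod -- ip route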

Reference the prepared Network

In your workload configuration, reference the prepared Network Kubernetes object using the Kubernetes API.

Connect Pod to specific networks

To connect Pods to the specified networks, you must include the names of the Network objects as annotations inside the Pod configuration. Make sure to include both the default Network and the selected additional networks in the annotations to establish the connections.

  1. Save the following sample manifest as sample-l3-pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sample-l3-pod
      annotations:
        networking.gke.io/default-interface: 'eth0'
        networking.gke.io/interfaces: |
          [
            {"interfaceName":"eth0","network":"default"},
            {"interfaceName":"eth1","network":"l3-network"}
          ]
    spec:
      containers:
      - name: sample-l3-pod
        image: busybox
        command: ["sleep", "10m"]
        ports:
        - containerPort: 80
      restartPolicy: Always
    

    This manifest creates a Pod named sample-l3-pod with two network interfaces, eth0 and eth1, associated with the default and l3-network networks, respectively. The l3-network name must match an existing Layer 3 type Network object in your cluster, such as the blue-network object created earlier.

  2. Apply the manifest to the cluster:

    kubectl apply -f sample-l3-pod.yaml
    

Connect Pod with multiple networks

  1. Save the following sample manifest as sample-l3-netdevice-pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sample-l3-netdevice-pod
      annotations:
        networking.gke.io/default-interface: 'eth0'
        networking.gke.io/interfaces: |
          [
            {"interfaceName":"eth0","network":"default"},
            {"interfaceName":"eth1","network":"l3-network"},
            {"interfaceName":"eth2","network":"netdevice-network"}
          ]
    spec:
      containers:
      - name: sample-l3-netdevice-pod
        image: busybox
        command: ["sleep", "10m"]
        ports:
        - containerPort: 80
      restartPolicy: Always
    

    This manifest creates a Pod named sample-l3-netdevice-pod with three network interfaces, eth0, eth1, and eth2, associated with the default, l3-network, and netdevice-network networks, respectively.

  2. Apply the manifest to the cluster:

    kubectl apply -f sample-l3-netdevice-pod.yaml
    

You can use the same annotations in any workload controller, such as a Deployment, ReplicaSet, or DaemonSet, in the Pod template's annotations section.

When you create a Pod with a multi-network specification, the dataplane components automatically generate the Pod's interface configuration and save it to NetworkInterface CRs. One NetworkInterface CR is created for each non-default Network specified in the Pod specification.
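You can list the NetworkInterface CRs for a Pod by filtering on the podName label, the same query used in the Troubleshooting section later on this page. This sketch assumes a Pod named samplepod:

kubectl get networkinterfaces.networking.gke.io -l podName=samplepod -o yaml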

For example, the following manifest shows details from a NetworkInterface manifest:

apiVersion: v1
items:
- apiVersion: networking.gke.io/v1
  kind: NetworkInterface
  metadata:
    labels:
      podName: samplepod
    name: "samplepod-c0b60cbe"
    namespace: default
  spec:
    networkName: "blue-network"
  status:
    gateway4: 172.16.1.1
    ipAddresses:
    - 172.16.1.2/32
    macAddress: 82:89:96:0b:92:54
    podName: samplepod

This manifest includes the network name, gateway address, assigned IP addresses, MAC address, and the Pod name.

Sample configuration of a Pod with multiple interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default
    link/ether 2a:92:4a:e5:da:35 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.60.45.4/24 brd 10.60.45.255 scope global eth0
      valid_lft forever preferred_lft forever
10: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether ba:f0:4d:eb:e8:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.1.2/32 scope global eth1
      valid_lft forever preferred_lft forever
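You can produce similar output for your own Pod by running the ip tool inside it. The following sketch assumes the sample-l3-pod Pod from the earlier example, whose busybox image includes the ip applet:

kubectl exec sample-l3-pod -- ip addr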

Verification

  • Ensure that you create clusters with --enable-multi-networking only if --enable-dataplane-v2 is enabled.
  • Verify that all node pools in the cluster are running Container-Optimized OS images at the time of cluster and node pool creation.
  • Verify that node pools are created with --additional-node-network or --additional-pod-network only if multi-networking is enabled on the cluster.
  • Ensure that the same subnet is not specified twice as --additional-node-network argument to a node pool.
  • Verify that the same secondary range is not specified as the --additional-pod-network argument to a node pool.
  • Follow the scale limits specified for network objects, considering the maximum number of nodes, Pods, and IP addresses allowed.
  • Verify that there is only one GKENetworkParamSet object which refers to a particular subnet and secondary range.
  • Verify that each network object refers to a different GKENetworkParamSet object.
  • Verify that a Network object created with a Device type network on a specific subnet is not used on the same node as another Network that uses a secondary range of the same subnet. You can only validate this at runtime.
  • Verify that the various secondary ranges assigned to the node pools don't have overlapping IP addresses.

Troubleshoot multi-networking parameters in GKE

When you create a cluster and node pool, Google Cloud implements certain checks to ensure that only valid multi-networking parameters are allowed. This ensures that the network is set up correctly for the cluster.

If you fail to create multi-network workloads, you can check the Pod status and events for more information:

kubectl describe pods samplepod

The output is similar to the following:

Name:         samplepod
Namespace:    default
Status:       Running
IP:           192.168.6.130
IPs:
  IP:  192.168.6.130
...
Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               26m   default-scheduler  Successfully assigned default/samplepod to node-n1-04
  Warning  FailedCreatePodSandBox  26m   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e16c58a443ab70d671159509e36894b57493300ea21b6c24c14bdc412b0fdbe6": Unable to create endpoint: [PUT /endpoint/{id}][400] putEndpointIdInvalid  failed getting interface and network CR for pod "default/samplepod": failed creating interface CR default/samplepod-c0b60cbe: admission webhook "vnetworkinterface.networking.gke.io" denied the request: NetworkInterface.networking.gke.io "samplepod-c0b60cbe" is invalid: Spec.NetworkName: Internal error: failed to get the referenced network "sample-network": Network.networking.gke.io "sample-network" not found
...

The following are general reasons for Pod creation failure:

  • Failed to schedule Pod due to multi-networking resource requirements not met
  • Failed to identify specified networks
  • Failed to configure and create Network interface object for Pod

To inspect whether Google Cloud has created the NetworkInterface objects in the API server, run the following command:

kubectl get networkinterfaces.networking.gke.io -l podName=samplepod

Troubleshoot creation of Kubernetes networks

After you successfully create a network, nodes that should have access to the configured network are annotated with a network-status annotation.

To observe annotations, run the following command:

kubectl describe node NODE_NAME

Replace NODE_NAME with the name of the node.

The output is similar to the following:

networking.gke.io/network-status: [{"name":"default"},{"name":"dp-network"}]

The output lists each network available on the node. If the expected network status is not seen on the node, do the following:

Check if the node can access the network

If the network is not showing up in the node's network-status annotation:

  1. Verify that the node is part of a pool configured for multi-networking.
  2. Check the node's interfaces to see if it has an interface for the network you're configuring.
  3. If the node is missing the network-status annotation and has only one network interface, the node is not part of a multi-networking node pool. Create a node pool with multi-networking enabled.
  4. If your node contains the interface for the network you're configuring but it is not seen in the network status annotation, check the Network and GKENetworkParamSet (GNP) resources.

Check the Network and GKENetworkParamSet resources

The status of both Network and GKENetworkParamSet (GNP) resources includes a conditions field for reporting configuration errors. We recommend checking the GNP first, because it doesn't rely on another resource being valid.

To inspect the conditions field, run the following command:

kubectl get gkenetworkparamsets GNP_NAME -o yaml

Replace GNP_NAME with the name of the GKENetworkParamSet resource.

When the Ready condition is equal to true, the configuration is valid and the output is similar to the following:

apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
...
spec:
  podIPv4Ranges:
    rangeNames:
    - sec-range-blue
  vpc: dataplane
  vpcSubnet: subnet-dp
status:
  conditions:
  - lastTransitionTime: "2023-06-26T17:38:04Z"
    message: ""
    reason: GNPReady
    status: "True"
    type: Ready
  networkName: dp-network
  podCIDRs:
    cidrBlocks:
    - 172.16.1.0/24

When the Ready condition is equal to false, the output displays the reason and is similar to the following:

apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
...
spec:
  podIPv4Ranges:
    rangeNames:
    - sec-range-blue
  vpc: dataplane
  vpcSubnet: subnet-nonexist
status:
  conditions:
  - lastTransitionTime: "2023-06-26T17:37:57Z"
    message: 'subnet: subnet-nonexist not found in VPC: dataplane'
    reason: SubnetNotFound
    status: "False"
    type: Ready
  networkName: ""

If you encounter a similar message, check that your GNP is configured correctly. If it is, check that your Google Cloud network configuration is correct. After updating your Google Cloud network configuration, you might need to recreate the GNP resource to manually trigger a resync; recreating it avoids infinite polling of the Google Cloud API.

After the GNP is ready, check the Network resource:

kubectl get networks NETWORK_NAME -o yaml

Replace NETWORK_NAME with the name of the Network resource.

The output of a valid configuration is similar to the following:

apiVersion: networking.gke.io/v1
kind: Network
...
spec:
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: dp-gnp
  type: L3
status:
  conditions:
  - lastTransitionTime: "2023-06-07T19:31:42Z"
    message: ""
    reason: GNPParamsReady
    status: "True"
    type: ParamsReady
  - lastTransitionTime: "2023-06-07T19:31:51Z"
    message: ""
    reason: NetworkReady
    status: "True"
    type: Ready

  • reason: NetworkReady indicates that the Network resource is configured correctly. reason: NetworkReady does not imply that the Network resource is necessarily available on a specific node or actively being used.
  • If there is a misconfiguration or error, the reason field in the condition specifies the exact reason for the issue. In such cases, adjust the configuration accordingly.
  • GKE populates the ParamsReady field if the parametersRef field is set to a GKENetworkParamSet resource that exists in the cluster. If you've specified a GKENetworkParamSet type parametersRef and the condition isn't appearing, make sure the name, kind, and group match the GNP resource that exists within your cluster.

What's next