Network requirements

External network requirements

Google Distributed Cloud requires an internet connection for operational purposes. Google Distributed Cloud retrieves cluster components from Container Registry, and the cluster is registered with Connect.

You can connect to Google by using the public internet through HTTPS, a virtual private network (VPN), or a Dedicated Interconnect connection.

If the machines you are using for your admin workstation and cluster nodes use a proxy server to access the internet, your proxy server must allow some specific connections. For details, see the prerequisites section of Install behind a proxy.
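If you want a quick way to confirm HTTPS egress from the admin workstation, the following commands are a minimal sketch. The proxy address is a placeholder, and gcr.io is used only as a representative Container Registry endpoint; the authoritative list of required destinations is in the prerequisites section of Install behind a proxy.

# Optional: route the check through your proxy (placeholder address).
export HTTPS_PROXY="http://proxy.example.com:3128"

# A 401 or 200 response confirms that HTTPS egress to Container Registry works.
curl -sSI https://gcr.io/v2/ | head -n 1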

Internal network requirements

Google Distributed Cloud can work with Layer 2 or Layer 3 connectivity between cluster nodes, but requires load balancer nodes to have Layer 2 connectivity. The load balancer nodes can be the control plane nodes or a dedicated set of nodes. For more information, see Choosing and configuring load balancers.

The Layer 2 connectivity requirement applies whether you run the load balancers on the control plane node pool or on a dedicated set of nodes.

The requirements for load balancer machines are as follows:

  • All load balancer machines for a given cluster must be in the same Layer 2 domain.
  • All virtual IP addresses (VIPs) must be in the load balancer machine subnet and routable to the gateway of the subnet (see the check after this list).
  • You are responsible for allowing ingress traffic to the load balancers.
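The following commands are a minimal sketch of one way to verify these requirements from a load balancer node. The interface name eth0 and the addresses 10.200.0.4 (another load balancer node) and 10.200.0.50 (a VIP) are placeholders; substitute your own values. You might need to run arping as the root user.

# Replies indicate Layer 2 adjacency with the other load balancer node (iputils arping).
arping -c 3 -I eth0 10.200.0.4

# A directly connected route (no "via" hop) indicates that the VIP is in the node subnet.
ip route get 10.200.0.50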

Pod networking

Google Distributed Cloud versions 1.7.0 and later let you configure up to 250 pods per node. Kubernetes assigns a Classless Inter-Domain Routing (CIDR) block to each node so that each pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of pods per node. The following table lists the size of the CIDR block that Kubernetes assigns to each node, based on the configured maximum pods per node:

Maximum pods per node | CIDR block per node | Number of IP addresses
32 | /26 | 64
33 – 64 | /25 | 128
65 – 128 | /24 | 256
129 – 250 | /23 | 512

Running 250 pods per node requires Kubernetes to reserve a /23 CIDR block for each node. Assuming that your cluster uses the default value of /16 for the clusterNetwork.pods.cidrBlocks field, your cluster has a limit of 2^(23-16) = 128 nodes. If you intend to grow the cluster beyond this limit, you can either increase the value of clusterNetwork.pods.cidrBlocks or decrease the value of nodeConfig.podDensity.maxPodsPerNode.
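For example, the following configuration excerpt is a minimal sketch built only from the two field paths named above; the exact nesting follows those dotted paths, the rest of the cluster configuration is omitted, and you should confirm the schema against the configuration reference for your version. It keeps the default pod CIDR range and sets the maximum pod density:

clusterNetwork:
  pods:
    cidrBlocks:
    - 192.168.0.0/16   # default /16 range; use a larger range to support more nodes
nodeConfig:
  podDensity:
    maxPodsPerNode: 250   # requires a /23 block per node, so a /16 range caps the cluster at 128 nodes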

Single user cluster deployment with high availability

The following diagram illustrates a number of key networking concepts for Google Distributed Cloud in one possible network configuration.

[Diagram: Google Distributed Cloud typical network configuration]

Consider the following information to meet the network requirements:

  • The control plane nodes run the load balancers, so they all have Layer 2 connectivity with one another; other connections, including to the worker nodes, require only Layer 3 connectivity.
  • Configuration files define IP addresses for worker node pools. Configuration files also define VIPs for the following purposes (see the example after this list):
    • Services
    • Ingress
    • Control plane access through the Kubernetes API
  • You require a connection to Google Cloud.
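The following excerpt is a sketch of how such VIPs might appear in a cluster configuration file. The field names (loadBalancer.vips.controlPlaneVIP, loadBalancer.vips.ingressVIP, loadBalancer.addressPools) and all addresses are illustrative assumptions; check the cluster configuration reference for your version for the exact schema.

loadBalancer:
  vips:
    controlPlaneVIP: 10.200.0.49   # hypothetical VIP for control plane access through the Kubernetes API
    ingressVIP: 10.200.0.50        # hypothetical VIP for the ingress service
  addressPools:
  - name: pool-1
    addresses:
    - 10.200.0.50-10.200.0.70      # hypothetical range that Service VIPs, including the ingress VIP, are drawn from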

Port usage

This section shows how UDP and TCP ports are used on cluster and load balancer nodes.

Control plane nodes

Protocol | Direction | Port range | Purpose | Used by
UDP | Inbound | 6081 | GENEVE encapsulation | Self
TCP | Inbound | 22 | Provisioning and updates of admin cluster nodes | Admin workstation
TCP | Inbound | 6444 | Kubernetes API server | All
TCP | Inbound | 2379 - 2380 | etcd server client API | kube-apiserver and etcd
TCP | Inbound | 10250 | kubelet API | Self and control plane
TCP | Inbound | 10251 | kube-scheduler | Self
TCP | Inbound | 10252 | kube-controller-manager | Self
TCP | Inbound | 10256 | Node health check | All
TCP | Both | 4240 | CNI health check | All
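As a quick sanity check, which is not part of the product tooling, you can probe a control plane node from any machine that has Layer 3 reachability to it. CONTROL_PLANE_IP is a placeholder, and the commands assume a netcat (nc) binary that supports zero-I/O mode (-z).

# Probe the Kubernetes API server port (6444 by default).
nc -zv CONTROL_PLANE_IP 6444

# Probe the kubelet API port.
nc -zv CONTROL_PLANE_IP 10250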

Worker nodes

Protocol | Direction | Port range | Purpose | Used by
TCP | Inbound | 22 | Provisioning and updates of user cluster nodes | Admin cluster nodes
UDP | Inbound | 6081 | GENEVE encapsulation | Self
TCP | Inbound | 10250 | kubelet API | Self and control plane
TCP | Inbound | 10256 | Node health check | All
TCP | Inbound | 30000 - 32767 | NodePort services | Self
TCP | Both | 4240 | CNI health check | All

Load balancer nodes

Protocol | Direction | Port range | Purpose | Used by
TCP | Inbound | 22 | Provisioning and updates of user cluster nodes | Admin cluster nodes
UDP | Inbound | 6081 | GENEVE encapsulation | Self
TCP | Inbound | 443* | Cluster management | All
TCP | Both | 4240 | CNI health check | All
TCP | Inbound | 7946 | MetalLB health check | Load balancer nodes
UDP | Inbound | 7946 | MetalLB health check | Load balancer nodes
TCP | Inbound | 10256 | Node health check | All

* This port can be configured in the cluster config, using the controlPlaneLBPort field.

Multi-cluster port requirements

In a multi-cluster configuration, added clusters must have the following ports open so that they can communicate with the admin cluster.

Protocol | Direction | Port range | Purpose | Used by
TCP | Inbound | 22 | Provisioning and updates of cluster nodes | All nodes
TCP | Inbound | 443* | Kubernetes API server for added cluster | Control plane and load balancer nodes

* This port can be configured in the cluster config, using the controlPlaneLBPort field.

Configure firewalld ports

You are not required to disable firewalld to run Google Distributed Cloud on Red Hat Enterprise Linux (RHEL) or CentOS. To use firewalld, you must open the UDP and TCP ports used by control plane, worker, and load balancer nodes as described in Port usage on this page. The following example configurations show how you can open ports with firewall-cmd, the firewalld command-line utility. You should run the commands as the root user.
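Before you apply the rules, you can confirm that firewalld is active on the node; this quick check is optional and not required by the product.

firewall-cmd --state    # prints "running" when firewalld is active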

Control plane node example configuration

The following block of commands shows an example of how you can open the needed ports on servers running control plane nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250-10252/tcp
firewall-cmd --permanent --zone=public --add-port=10256/tcp
firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source=PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR blocks reserved for your pods, as configured in the clusterNetwork.pods.cidrBlocks field. The default CIDR block for pods is 192.168.0.0/16.

Worker node example configuration

The following block of commands shows an example of how you can open the needed ports on servers running worker nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=10256/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source=PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR blocks reserved for your pods, as configured in the clusterNetwork.pods.cidrBlocks field. The default CIDR block for pods is 192.168.0.0/16.

Load balancer node example configuration

The following block of commands shows an example of how you can open the needed ports on servers running load balancer nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=7946/tcp
firewall-cmd --permanent --zone=public --add-port=7946/udp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=10256/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source=PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR blocks reserved for your pods, as configured in the clusterNetwork.pods.cidrBlocks field. The default CIDR block for pods is 192.168.0.0/16.

Confirm your port configuration

To verify your port configuration, use the following steps on control plane, worker, and load balancer nodes:

  1. Run the following Network Mapper (nmap) command to see which ports are open:

    nmap localhost
    
  2. Run the following commands to get your firewalld configuration settings:

    firewall-cmd --zone=public --list-all-policies
    firewall-cmd --zone=public --list-ports
    firewall-cmd --zone=public --list-services
    firewall-cmd --zone=k8s-pods --list-all-policies
    firewall-cmd --zone=k8s-pods --list-ports
    firewall-cmd --zone=k8s-pods --list-services
    
  3. If necessary, rerun the commands from the preceding sections to configure your nodes properly. You may need to run the commands as the root user.