This page describes the networking features of Google Distributed Cloud connected, including subnetworks, BGP peering sessions, and load balancing.
The procedures on this page apply only to Distributed Cloud connected racks, except for load balancing, which applies to both Distributed Cloud connected racks and Distributed Cloud connected servers.
Enable the Distributed Cloud Edge Network API
Before you can configure networking on a Distributed Cloud connected deployment, you must enable the Distributed Cloud Edge Network API by completing the steps in this section. Distributed Cloud connected servers ship with the Distributed Cloud Edge Network API already enabled.
Console
In the Google Cloud console, go to the Distributed Cloud Edge Network API page.
Click Enable.
gcloud
Use the following command:
gcloud services enable edgenetwork.googleapis.com
Configure networking on Distributed Cloud connected
This section describes how to configure the networking components on your Distributed Cloud connected deployment.
The following limitations apply to Distributed Cloud connected servers:
- You can only configure subnetworks.
- Subnetworks support only VLAN IDs; CIDR-based subnetworks are not supported.
A typical network configuration for Distributed Cloud connected consists of the following steps:
Optional: Initialize the network configuration of the target zone.
Create a network.
Create one or more subnetworks within the network.
Establish northbound BGP peering sessions with your PE routers by using the corresponding interconnect attachments.
Establish southbound BGP peering sessions with the Pods that run your workloads by using the corresponding subnetworks.
Optional: Establish loopback BGP peering sessions for high availability.
Test your configuration.
Connect your Pods to the network.
Optional: Initialize the network configuration of the Distributed Cloud zone
You must initialize the network configuration of your Distributed Cloud connected zone in the following cases:
- Immediately after your Distributed Cloud connected hardware has been installed on your premises.
- You upgraded to Distributed Cloud connected version 1.3.0 or later on an existing Distributed Cloud connected deployment but did not participate in the private preview of the Distributed Cloud Edge Network API.
Initializing the network configuration of a zone creates a default router named default and a default network named default. It also configures the default router to peer with all of the interconnects that you requested when you ordered the Distributed Cloud connected hardware by creating the corresponding interconnect attachments. This configuration provides your Distributed Cloud connected deployment with basic uplink connectivity to your local network.
Initializing the network configuration of a zone is a one-time procedure. For complete instructions, see Initialize the network configuration of a zone.
Create a network
To create a new network, follow the instructions in Create a network. You must also create at least one subnetwork within the network to allow Distributed Cloud connected nodes to connect to the network.
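As a rough sketch, you can create a network with the gcloud CLI along the following lines; the network name, REGION, and ZONE are placeholder values, and the exact flags should be verified against the current gcloud edge-cloud networking reference:

```shell
# Create a network in the target Distributed Cloud connected zone.
# my-network, REGION, and ZONE are placeholders.
gcloud edge-cloud networking networks create my-network \
    --location=REGION \
    --zone=ZONE
```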
Create one or more subnetworks
To create a subnetwork, follow the instructions in Create a subnetwork. You must create at least one subnetwork in your network to allow nodes to access the network. The VLAN corresponding to each subnetwork that you create is automatically available to all nodes in the zone.
For Distributed Cloud connected servers, you can only configure subnetworks using VLAN IDs. CIDR-based subnetworks are not supported.
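For illustration, a subnetwork might be created as follows; all names and values are placeholders, and on Distributed Cloud connected servers you would omit the CIDR range and specify only a VLAN ID (verify flag names against the gcloud edge-cloud networking reference):

```shell
# Create a subnetwork with an IPv4 range and a VLAN ID (racks).
# my-subnet, my-network, the range, and the VLAN ID are placeholders.
gcloud edge-cloud networking subnets create my-subnet \
    --network=my-network \
    --ipv4-range=10.100.0.0/24 \
    --vlan-id=100 \
    --location=REGION \
    --zone=ZONE
```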
Establish northbound BGP peering sessions
When you create a network and its corresponding subnetworks, they are local to their Distributed Cloud connected zone. To enable outbound connectivity, you must establish at least one northbound BGP peering session between the network and your peering edge routers.
To establish a northbound BGP peering session, do the following:
List the interconnects available in your zone and then select the target interconnect for this peering session.
Create one or more interconnect attachments on the selected interconnect. Interconnect attachments link the router that you create in the next step with the selected interconnect.
Create a router. This router routes traffic between the interconnect and your network by using the interconnect attachments that you created in the previous step.
Add an interface to the router for each interconnect attachment that you created earlier in this procedure. For each interface, use the IP address of the corresponding top-of-rack (ToR) switch in your Distributed Cloud connected rack. For instructions, see Establish a northbound peering session.
Add a peer for each interface that you created on the router in the previous step.
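The steps above can be sketched with the gcloud CLI as follows; all names, IP addresses, and ASNs are placeholder values, and the exact flag names should be checked against the gcloud edge-cloud networking reference:

```shell
# 1. List the interconnects available in the zone.
gcloud edge-cloud networking interconnects list \
    --location=REGION --zone=ZONE

# 2. Create an interconnect attachment on the selected interconnect.
gcloud edge-cloud networking interconnects attachments create my-attachment \
    --interconnect=my-interconnect --network=my-network \
    --vlan-id=200 --location=REGION --zone=ZONE

# 3. Create a router in the network.
gcloud edge-cloud networking routers create my-router \
    --network=my-network --location=REGION --zone=ZONE

# 4. Add an interface for the attachment, using the ToR switch IP address.
gcloud edge-cloud networking routers add-interface my-router \
    --interface-name=northbound-if \
    --interconnect-attachment=my-attachment \
    --ip-address=192.0.2.1/30 --location=REGION --zone=ZONE

# 5. Add a BGP peer for the interface.
gcloud edge-cloud networking routers add-bgp-peer my-router \
    --interface=northbound-if --peer-name=pe-router-1 \
    --peer-ipv4-range=192.0.2.2/30 --peer-asn=65001 \
    --location=REGION --zone=ZONE
```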
Establish southbound BGP peering sessions
To enable inbound connectivity to your workloads from your local network, you must establish one or more southbound BGP peering sessions between your peering edge routers and the subnetwork to which your Pods belong. The gateway IP address for each subnetwork is the IP address of the corresponding ToR switch in your Distributed Cloud connected rack.
To establish a southbound BGP peering session, do the following:
Add an interface to the router in the target network for each subnetwork that you want to provision with inbound connectivity. For instructions, see Establish a southbound peering session.
Add a peer for each interface that you created on the router in the previous step.
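These two steps might look as follows; the interface name, peer values, and ASN are placeholders, and flag names should be verified against the gcloud edge-cloud networking reference:

```shell
# Add an interface bound to the subnetwork that needs inbound connectivity.
gcloud edge-cloud networking routers add-interface my-router \
    --interface-name=southbound-if --subnetwork=my-subnet \
    --location=REGION --zone=ZONE

# Add a BGP peer for the new interface.
gcloud edge-cloud networking routers add-bgp-peer my-router \
    --interface=southbound-if --peer-name=workload-peer \
    --peer-ipv4-range=10.100.0.2/24 --peer-asn=65002 \
    --location=REGION --zone=ZONE
```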
Optional: Establish loopback BGP peering sessions
To enable high-availability connectivity between your workloads and your local network, you can establish a loopback BGP peering session between the target Pod and both ToR switches in your Distributed Cloud connected rack. A loopback peering session establishes two independent peering sessions for the Pod, one with each ToR switch.
To establish a loopback BGP peering session, do the following:
Add a loopback interface to the router in the target network. For instructions, see Establish a loopback peering session.
Add a peer for the loopback interface.
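A loopback peering session might be configured along these lines; the loopback-specific flags, addresses, and ASN here are assumptions and should be checked against the gcloud edge-cloud networking reference:

```shell
# Add a loopback interface to the router (flag name assumed).
gcloud edge-cloud networking routers add-interface my-router \
    --interface-name=loopback-if \
    --loopback-ip-addresses=203.0.113.10/32 \
    --location=REGION --zone=ZONE

# Add a BGP peer for the loopback interface.
gcloud edge-cloud networking routers add-bgp-peer my-router \
    --interface=loopback-if --peer-name=pod-peer \
    --peer-ipv4-range=10.100.0.5/32 --peer-asn=65003 \
    --location=REGION --zone=ZONE
```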
Test your configuration
After completing the preceding steps, verify that the network components that you created are functioning as expected.
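One way to verify the result is to inspect the status of the router you configured, which reports the state of its BGP peering sessions; the command shape is assumed from the gcloud edge-cloud networking group and should be checked against the reference:

```shell
# Show the router status, including the state of its BGP peer sessions.
gcloud edge-cloud networking routers get-status my-router \
    --location=REGION --zone=ZONE
```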
Connect your Pods to the network
To connect your Pods to the network and configure advanced network functions, follow the instructions in Network Function operator.
Load balancing
Distributed Cloud ships with a bundled network load balancing solution based on MetalLB in Layer 2 mode. You can use this solution to expose services that run in your Distributed Cloud zone to the outside world by using virtual IP addresses (VIPs) as follows:
- Your network administrator plans the network topology and specifies the required virtual IPv4 address subnetwork when ordering Distributed Cloud. Google configures your Distributed Cloud hardware accordingly before delivery.
Keep the following in mind:
- This VIP subnetwork is shared among all Kubernetes clusters that run within your Distributed Cloud zone.
- A route for the requested VIP subnetwork is advertised through the BGP sessions between the Distributed Cloud zone and your local network.
- The first (network ID), second (default gateway), and last (broadcast address) addresses in the subnetwork are reserved for core system functionality. Do not assign those addresses to the address pools in your MetalLB configuration.
- Each cluster must use a separate VIP range that falls within the configured VIP subnetwork.
- When you create a cluster in your Distributed Cloud zone, your cluster administrator specifies the Pod and ClusterIP Service address pools by using CIDR notation. Your network administrator provides the appropriate LoadBalancer VIP subnetwork to your cluster administrator. After the cluster is created, the cluster administrator configures the corresponding VIP pools. For remote control plane clusters, you must edit the metallb-config ConfigMap in the metallb-system namespace by using the kubectl edit or kubectl replace command. Do not use the kubectl apply command, because Distributed Cloud overwrites your changes if you do. The following example illustrates such a configuration:

# metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: metallb-config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.2-192.168.1.254
For local control plane clusters, you must specify the VIP pools by using the --external-lb-ipv4-address-pools flag when you create the cluster. For more information, see Survivability mode.
- The cluster administrator creates the appropriate Kubernetes LoadBalancer Services.
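For illustration, a minimal LoadBalancer Service might look as follows; MetalLB assigns it a VIP from the configured pool, and the Service name, selector, ports, and VIP shown here are placeholder values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service-lb
spec:
  type: LoadBalancer
  selector:
    app: my-service
  ports:
  - port: 80
    targetPort: 8080
  # Optional: request a specific VIP from the MetalLB address pool
  # instead of receiving a random one.
  loadBalancerIP: 192.168.1.10
```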
Distributed Cloud nodes in a single node pool share a common Layer 2 domain and are therefore also MetalLB load balancer nodes. Distributed Cloud control plane nodes that run on Google Cloud do not function as load balancer nodes.
Distributed Cloud ingress
In addition to load balancing, Distributed Cloud connected also supports Kubernetes Ingress resources. A Kubernetes Ingress resource controls the flow of HTTP(S) traffic to Kubernetes Services that run on your Distributed Cloud connected clusters. The following example illustrates a typical Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: my-service
            port:
              number: 80
        path: /foo
        pathType: Prefix
When configured, network traffic flows through the istio-ingress Service, which by default is assigned a random IP address from the VIP pools specified in your MetalLB configuration. You can select a specific IP address or a virtual IP address from the MetalLB configuration by using the loadBalancerIP field in the istio-ingress Service definition. For example:
apiVersion: v1
kind: Service
metadata:
  labels:
    istio: ingress-gke-system
    release: istio
  name: istio-ingress
  namespace: gke-system
spec:
  loadBalancerIP: <targetLoadBalancerIPaddress>
This functionality is not available on Distributed Cloud connected servers.
Disable the default Distributed Cloud Ingress resource
By default, when you create a Distributed Cloud connected cluster, Distributed Cloud automatically configures the istio-ingress Service for the cluster. You can optionally create a Distributed Cloud connected cluster without the istio-ingress Service. To do so, complete the following steps:
gcloud
Create a YAML configuration file named SystemsAddonConfig.yaml with the following contents:

systemAddonsConfig:
  ingress:
    disabled: true

Pass the SystemsAddonConfig.yaml file by using the --system-addons-config flag in your cluster creation command. You must use the gcloud alpha version to use this feature. For example:

gcloud alpha edge-cloud container clusters create MyGDCECluster1 --location us-west1 \
  --system-addons-config=SystemsAddonConfig.yaml
For more information about creating a Distributed Cloud cluster, see Create a cluster.
API
Add the following JSON content to the JSON payload in your cluster creation request:
"systemAddonsConfig": {
  "ingress": {
    "disabled": true
  }
}
Submit the cluster creation request as described in Create a cluster.
SCTP support
Distributed Cloud connected supports the Stream Control Transmission
Protocol (SCTP) on the primary network interface for both internal and external
networking. SCTP support includes the NodePort
, LoadBalancer
, and
ClusterIP
Service types. Pods can use SCTP to communicate with other Pods and
external resources. The following example illustrates how to
configure IPERF as a ClusterIP
Service by using SCTP:
apiVersion: v1
kind: Pod
metadata:
  name: iperf3-sctp-server-client
  labels:
    app.kubernetes.io/name: iperf3-sctp-server-client
spec:
  containers:
  - name: iperf3-sctp-server
    args: ['-s', '-p 31390']
    ports:
    - containerPort: 31390
      protocol: SCTP
      name: server-sctp
  - name: iperf3-sctp-client
    ...
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-sctp-svc
spec:
  selector:
    app.kubernetes.io/name: iperf3-sctp-server-client
  ports:
  - port: 31390
    protocol: SCTP
    targetPort: server-sctp
This functionality is not available on Distributed Cloud connected servers.
SCTP kernel modules
Starting with version 1.5.0, Distributed Cloud connected configures the sctp Edge OS kernel module as loadable. This allows you to load your own SCTP protocol stacks from user space.
Additionally, Distributed Cloud connected loads the following modules into the kernel by default:
| Module name | Config name |
|---|---|
| fou | CONFIG_NET_FOU |
| nf_conntrack_proto_gre | CONFIG_NF_CT_PROTO_GRE |
| nf_conntrack_proto_sctp | CONFIG_NF_CT_PROTO_SCTP |
| inotify | CONFIG_INOTIFY_USER |
| xt_redirect | CONFIG_NETFILTER_XT_TARGET_REDIRECT |
| xt_u32 | CONFIG_NETFILTER_XT_MATCH_U32 |
| xt_multiport | CONFIG_NETFILTER_XT_MATCH_MULTIPORT |
| xt_statistic | CONFIG_NETFILTER_XT_MATCH_STATISTIC |
| xt_owner | CONFIG_NETFILTER_XT_MATCH_OWNER |
| xt_conntrack | CONFIG_NETFILTER_XT_MATCH_CONNTRACK |
| xt_mark | CONFIG_NETFILTER_XT_MARK |
| ip6table_mangle | CONFIG_IP6_NF_MANGLE |
| ip6_tables | CONFIG_IP6_NF_IPTABLES |
| ip6table_filter | CONFIG_IP6_NF_FILTER |
| ip6t_reject | CONFIG_IP6_NF_TARGET_REJECT |
| iptable_mangle | CONFIG_IP_NF_MANGLE |
| ip_tables | CONFIG_IP_NF_IPTABLES |
| iptable_filter | CONFIG_IP_NF_FILTER |
ClusterDNS resource
Distributed Cloud connected supports the Google Distributed Cloud ClusterDNS resource for configuring upstream name servers for specific domains by using the spec.domains section. For more information about configuring this resource, see spec.domains.
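To illustrate, a ClusterDNS resource that sends queries for one domain to a dedicated upstream name server might look roughly like the following; the apiVersion, the serverIP and serverPort field names, and all addresses here are assumptions to be checked against the Google Distributed Cloud ClusterDNS reference:

```yaml
# Hypothetical sketch of a ClusterDNS resource; verify the apiVersion
# and field names against the Google Distributed Cloud documentation.
apiVersion: networking.gke.io/v1
kind: ClusterDNS
metadata:
  name: cluster-dns
spec:
  domains:
  - name: example.internal
    nameservers:
    - serverIP: 203.0.113.53
      serverPort: 53
```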
This functionality is not available on Distributed Cloud connected servers.