This page describes the networking features of Google Distributed Cloud Edge.
Enable the Distributed Cloud Edge Network API
Before you can configure Distributed Cloud Edge networking, you must enable the Distributed Cloud Edge Network API by completing the steps in this section.
Console
To enable the Distributed Cloud Edge Network API in the Google Cloud console, do the following:
Go to the Distributed Cloud Edge Network API page in the Google Cloud console.
Click Enable.
gcloud
To enable the Distributed Cloud Edge Network API, use the following command:
gcloud services enable edgenetwork.googleapis.com
Configure Distributed Cloud Edge networking
This section describes how to configure the Distributed Cloud Edge networking components.
A typical network configuration for Distributed Cloud Edge consists of the following steps:
(Optional) Initialize the network configuration of the target Zone.
Create a Network.
Create one or more Subnetworks within the Network.
Establish northbound BGP peering sessions with your peering edge (PE) routers using the corresponding InterconnectAttachments.
Establish southbound BGP peering sessions with the Pods running your workloads using the corresponding Subnetworks.
(Optional) Establish loopback BGP peering sessions for high availability.
Test your configuration.
Connect your Pods to the Network.
(Optional) Initialize the network configuration of the Distributed Cloud Edge Zone
You must initialize the network configuration of your Distributed Cloud Edge Zone in the following cases:
- Your Distributed Cloud Edge hardware has just been installed on your premises.
- You have upgraded an existing Distributed Cloud Edge deployment to version 1.3.0 but did not participate in the private preview of the Distributed Cloud Edge Network API.
Initializing the network configuration of a Zone creates a default Router named default and a default Network named default, and configures the default Router to peer with all of the Interconnects you requested when you ordered the Distributed Cloud Edge hardware by creating the corresponding InterconnectAttachments. This provides your Distributed Cloud Edge deployment with basic uplink connectivity to your local network.
Initializing the network configuration of a Zone is a one-time procedure. For instructions, see Initialize the network configuration of a Zone.
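As a minimal sketch only, the initialization typically comes down to a single gcloud command. The command group, placeholder names, and flag spellings below are assumptions for illustration; follow the instructions in Initialize the network configuration of a Zone for the exact syntax.

# Hypothetical sketch: initialize the network configuration of a Zone.
# ZONE_NAME and REGION are placeholders; verify the exact command and flags
# in "Initialize the network configuration of a Zone".
gcloud edge-cloud networking zones init ZONE_NAME \
    --location=REGION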
Create a Network
To create a new Network, follow the instructions in Create a Network. You must also create at least one Subnetwork within the Network to allow Distributed Cloud Edge Nodes to connect to the Network.
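As a minimal sketch, assuming the edge-cloud networking command group and illustrative placeholder names, creating a Network might look like the following; see Create a Network for the authoritative syntax.

# Hypothetical sketch: create a Network in a Distributed Cloud Edge Zone.
# NETWORK_NAME, ZONE_NAME, REGION, and the flag names are assumptions.
gcloud edge-cloud networking networks create NETWORK_NAME \
    --zone=ZONE_NAME \
    --location=REGION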
Create one or more Subnetworks
To create a Subnetwork, follow the instructions in Create a Subnetwork. You must create at least one Subnetwork in your Network to allow Nodes to access the Network. The VLAN corresponding to each Subnetwork you create is automatically available to all Nodes in the Zone.
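As a minimal sketch, assuming illustrative placeholder names, a VLAN ID, and an IPv4 range; see Create a Subnetwork for the authoritative syntax.

# Hypothetical sketch: create a Subnetwork with a VLAN ID and an IPv4 range.
# All names, values, and flag spellings shown here are assumptions.
gcloud edge-cloud networking subnets create SUBNET_NAME \
    --network=NETWORK_NAME \
    --ipv4-range=10.100.0.0/24 \
    --vlan-id=100 \
    --zone=ZONE_NAME \
    --location=REGION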
Establish northbound BGP peering sessions
When you create a Network and its corresponding Subnetworks, they are local to their Distributed Cloud Edge Zone. To enable outbound connectivity, you must establish at least one northbound BGP peering session between the Network and your peering edge routers. Complete the following steps; a consolidated command sketch follows them.
List the Interconnects available in your Zone and select the target Interconnect for this peering session. For instructions, see List Interconnects.
Create one or more InterconnectAttachments on the selected Interconnect. InterconnectAttachments link the Router that you create in the next step with the selected Interconnect. For instructions, see Create an InterconnectAttachment.
Create a Router. This Router will route traffic between the Interconnect and your Network using the InterconnectAttachments you created in the previous step. For instructions, see Create a Router.
Add an interface to the Router for each InterconnectAttachment you have created earlier in this procedure. For each interface, use the IP address of the corresponding ToR switch in your Distributed Cloud Edge rack. For instructions, see Establish a northbound peering session.
Add a peer for each interface you have created on the Router in the previous step. For instructions, see Add a peer to a BGP peering session.
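The following is a consolidated, hedged sketch of the northbound steps above. Every resource name, ASN, IP address, and flag spelling is an assumption for illustration; use the pages linked in each step for the authoritative commands.

# 1. List the Interconnects available in the Zone (hypothetical flags).
gcloud edge-cloud networking interconnects list \
    --zone=ZONE_NAME --location=REGION

# 2. Create an InterconnectAttachment on the selected Interconnect.
gcloud edge-cloud networking interconnects attachments create ATTACHMENT_NAME \
    --interconnect=INTERCONNECT_NAME \
    --vlan-id=200 \
    --zone=ZONE_NAME --location=REGION

# 3. Create a Router in the target Network.
gcloud edge-cloud networking routers create ROUTER_NAME \
    --network=NETWORK_NAME \
    --zone=ZONE_NAME --location=REGION

# 4. Add an interface for the InterconnectAttachment, using the IP address
#    of the corresponding ToR switch (placeholder value).
gcloud edge-cloud networking routers add-interface ROUTER_NAME \
    --interconnect-attachment=ATTACHMENT_NAME \
    --ip-address=TOR_SWITCH_IP \
    --zone=ZONE_NAME --location=REGION

# 5. Add a BGP peer for the interface, pointing at your PE router.
gcloud edge-cloud networking routers add-bgp-peer ROUTER_NAME \
    --interface=INTERFACE_NAME \
    --peer-asn=PEER_ASN \
    --peer-ipv4-range=PEER_IPV4_RANGE \
    --zone=ZONE_NAME --location=REGION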
Establish southbound BGP peering sessions
To enable inbound connectivity to your workloads from your local network, you must establish one or more southbound BGP peering sessions between your peering edge routers and the Subnetwork to which your Pods belong. The gateway IP address for each Subnetwork is the IP address of the corresponding ToR switch in your Distributed Cloud Edge rack. Complete the following steps; a command sketch follows them.
Add an interface to the Router in the target Network for each Subnetwork you want to provision with inbound connectivity. For instructions, see Establish a southbound peering session.
Add a peer for each interface you have created on the Router in the previous step. For instructions, see Add a peer to a BGP peering session.
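A corresponding hedged sketch for a southbound session; the subnetwork flag and other spellings are assumptions, so rely on the linked instructions for the exact syntax.

# Hypothetical sketch: add a Subnetwork-backed interface, then peer it.
gcloud edge-cloud networking routers add-interface ROUTER_NAME \
    --subnetwork=SUBNET_NAME \
    --zone=ZONE_NAME --location=REGION

gcloud edge-cloud networking routers add-bgp-peer ROUTER_NAME \
    --interface=INTERFACE_NAME \
    --peer-asn=PEER_ASN \
    --peer-ipv4-range=PEER_IPV4_RANGE \
    --zone=ZONE_NAME --location=REGION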
(Optional) Establish loopback BGP peering sessions
To enable high-availability connectivity between your workloads and your local network, you can establish a loopback BGP peering session between the target Pod and both ToR switches in your Distributed Cloud Edge rack. A loopback peering session establishes two independent peering sessions for the Pod, one with each ToR switch. Complete the following steps; a command sketch follows them.
Add a loopback interface to the Router in the target Network. For instructions, see Establish a loopback peering session.
Add a peer for the loopback interface. For instructions, see Add a peer to a BGP peering session.
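As a sketch only: a loopback interface is added to the same Router and then peered like any other interface. The --loopback-ip-addresses flag name below is hypothetical, not confirmed; see Establish a loopback peering session for the real flag and values.

# Hypothetical sketch: the loopback-specific flag name is assumed.
gcloud edge-cloud networking routers add-interface ROUTER_NAME \
    --loopback-ip-addresses=LOOPBACK_IP \
    --zone=ZONE_NAME --location=REGION

gcloud edge-cloud networking routers add-bgp-peer ROUTER_NAME \
    --interface=INTERFACE_NAME \
    --peer-asn=PEER_ASN \
    --zone=ZONE_NAME --location=REGION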
Test your configuration
To test your configuration, check the operational status of the network components that you configured earlier in this section by completing the following checks. A consolidated command sketch follows them.
Check the operational status of the Network by following the instructions in Get information about a Network.
Check the provisioning status of each Subnetwork by following the instructions in Get information about a Subnetwork.
Check the operational status of the Interconnects by following the instructions in Get information about an Interconnect.
Check the operational status of the InterconnectAttachments by following the instructions in Get information about an InterconnectAttachment.
Check the operational status of the Router by following the instructions in Get information about a Router.
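The checks above map to describe-style commands along the following lines; the resource group names and flags are assumptions, so use the linked pages for the exact syntax.

# Hypothetical sketches of the status checks; names and flags are assumptions.
gcloud edge-cloud networking networks describe NETWORK_NAME \
    --zone=ZONE_NAME --location=REGION
gcloud edge-cloud networking subnets describe SUBNET_NAME \
    --zone=ZONE_NAME --location=REGION
gcloud edge-cloud networking interconnects describe INTERCONNECT_NAME \
    --zone=ZONE_NAME --location=REGION
gcloud edge-cloud networking interconnects attachments describe ATTACHMENT_NAME \
    --zone=ZONE_NAME --location=REGION
gcloud edge-cloud networking routers describe ROUTER_NAME \
    --zone=ZONE_NAME --location=REGION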
Connect your Pods to the Network
To connect your Pods to the network and configure advanced network functions, follow the instructions in Network Function operator.
Load balancing
Distributed Cloud Edge ships with a bundled network load balancing solution based on MetalLB in Layer 2 mode. You can use this solution to expose services running in your Distributed Cloud Edge Zone to the outside world using virtual IPs as follows:
- Your network administrator plans the network topology and specifies the required virtual IPv4 address subnetwork when ordering Distributed Cloud Edge. Google configures your Distributed Cloud Edge hardware accordingly before delivery.
Keep the following in mind:
- This virtual IP address subnetwork is shared among all Kubernetes Clusters running within your Distributed Cloud Edge Zone.
- A route for the requested virtual IP address subnetwork is advertised through the BGP sessions between the Distributed Cloud Edge Zone and your local network.
- The first (network ID), second (default gateway), and last (broadcast address) addresses in the subnetwork are reserved for core system functionality. Do not assign those addresses to your MetalLB configurations' address pools.
- Each Cluster must use a separate virtual IP address range that falls within the configured virtual IP address subnetwork.
- When creating a Cluster in your Distributed Cloud Edge Zone, your Cluster administrator specifies the Pod and ClusterIP Service address pools using CIDR notation. Your network administrator provides the appropriate LoadBalancer virtual IP address subnetwork to your Cluster administrator. After the Cluster has been created, the Cluster administrator configures the virtual IP address pools by using the kubectl tool to edit the metallb-config ConfigMap in the metallb-system namespace, and then applies the configuration to the Cluster. The following example illustrates such a configuration:
  # metallb-config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: metallb-config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 192.168.1.2-192.168.1.254
- The Cluster administrator creates the appropriate Kubernetes LoadBalancer services, as sketched below.
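The last two steps can be sketched as follows, assuming the metallb-config.yaml file shown above and a hypothetical Deployment named my-app that already runs in the Cluster.

# Apply the MetalLB address-pool configuration shown above to the Cluster.
kubectl apply -f metallb-config.yaml

# Expose a workload through a LoadBalancer Service; MetalLB assigns it a
# virtual IP address from the configured pool. "my-app" is a placeholder.
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080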
Distributed Cloud Edge Nodes in a single NodePool share a common Layer 2 domain and are therefore also MetalLB load balancer nodes. Distributed Cloud Edge control plane nodes running on Google Cloud do not function as load balancer nodes.
Distributed Cloud Edge ingress
In addition to load balancing, Distributed Cloud Edge also supports Kubernetes Ingress resources. A Kubernetes ingress controls the flow of HTTP(S) traffic to Kubernetes services running on your Distributed Cloud Edge Clusters. The following example illustrates a typical Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: my-service
            port:
              number: 80
        path: /foo
        pathType: Prefix
When configured, network traffic flows through the istio-ingress service, which by default is assigned a random IP address from the virtual IP address pools specified in your MetalLB configuration. You can select a specific IP or virtual IP address from the MetalLB configuration using the loadBalancerIP field in the istio-ingress service definition. For example:
apiVersion: v1
kind: Service
metadata:
  labels:
    istio: ingress-gke-system
    release: istio
  name: istio-ingress
  namespace: gke-system
spec:
  loadBalancerIP: <targetLoadBalancerIPaddress>
SCTP support
Distributed Cloud Edge supports the Stream Control Transmission Protocol (SCTP)
on the primary network interface for both internal and external networking. SCTP
support includes the NodePort
, LoadBalancer
, and ClusterIP
service types. Pods can use SCTP
to communicate with other Pods and external resources. The following example illustrates how to
configure IPERF as a ClusterIP
service using SCTP:
apiVersion: v1
kind: Pod
metadata:
  name: iperf3-sctp-server-client
  labels:
    app.kubernetes.io/name: iperf3-sctp-server-client
spec:
  containers:
  - name: iperf3-sctp-server
    args: ['-s', '-p 31390']
    ports:
    - containerPort: 31390
      protocol: SCTP
      name: server-sctp
  - name: iperf3-sctp-client
    ...
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-sctp-svc
spec:
  selector:
    app.kubernetes.io/name: iperf3-sctp-server-client
  ports:
  - port: 31390
    protocol: SCTP
    targetPort: server-sctp
ClusterDNS resource
Distributed Cloud Edge supports the Anthos on bare metal ClusterDNS resource for configuring upstream name servers for specific domains using the spec.domains section. For more information about configuring this resource, see spec.domains.