Distributed Cloud Edge consists of the following components:
- The Distributed Cloud Edge infrastructure. Google provides, deploys, and maintains the Distributed Cloud Edge hardware, including remote management by a dedicated Google team.
- The Distributed Cloud Edge service. This service allows you to manage your Distributed Cloud Edge Clusters and NodePools using the gcloud CLI and the Distributed Cloud Edge API. The Distributed Cloud Edge Clusters are registered in your Fleet, and you can interact with them using the Kubernetes API.
Distributed Cloud Edge infrastructure
Google provides, deploys, operates, and maintains a rack of dedicated hardware that runs your Distributed Cloud Edge Zone. This hardware consists of rack-mounted server machines and two top-of-rack (ToR) switches that interconnect the machines with your local network. The Distributed Cloud Edge Nodes that execute your workloads run exclusively on this hardware.
The hardware runs a number of Nodes grouped into NodePools, which you can assign to Clusters within your Distributed Cloud Edge Zone. You can configure your network so that workloads running on Distributed Cloud Edge Clusters are available only to your local users or are also accessible from the internet. You can also configure your network so that Distributed Cloud Edge Nodes can only use local resources, or so that they can communicate with workloads such as Compute Engine virtual machines and Kubernetes Pods running in a VPC network, over a secure Cloud VPN connection.
Distributed Cloud Edge management
Distributed Cloud Edge Nodes are not standalone resources and must remain connected to Google Cloud for control plane management and monitoring purposes. The Distributed Cloud Edge control plane Nodes are hosted in the designated Google Cloud region. The on-premises Distributed Cloud Edge Nodes require a constant network connection to Google Cloud.
Google remotely manages the physical machines and top-of-rack switches that constitute your Distributed Cloud Edge installation. This includes installing software updates and security patches, and resolving configuration issues. Your network administrator can also monitor the health and performance of Distributed Cloud Edge Clusters and Nodes, and work with Google to resolve any issues.
After Google has successfully deployed the Distributed Cloud Edge hardware in your designated location, your Cluster administrator can begin configuring the Distributed Cloud Edge Cluster in a way similar to a conventional Kubernetes Cluster. They can assign Machines to NodePools, and NodePools to Clusters, and grant application owners access as required by their roles. The Cluster administrator must, however, keep in mind the processing and storage limitations of the machines in your Distributed Cloud Edge rack and plan Cluster and workload configuration accordingly.
Distributed Cloud Edge provides an API for configuring Clusters and NodePools.
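For example, a Cluster might be created and inspected through the gcloud CLI as follows. The command group and flag names here are a sketch with placeholder values; check the gcloud edge-cloud reference for the authoritative syntax:

```shell
# Create a Distributed Cloud Edge Cluster (CLUSTER_NAME, PROJECT_ID,
# and REGION are placeholders; flags shown are illustrative).
gcloud edge-cloud container clusters create CLUSTER_NAME \
    --project=PROJECT_ID \
    --location=REGION

# Verify that the Cluster exists.
gcloud edge-cloud container clusters list \
    --project=PROJECT_ID \
    --location=REGION
```

Because the Cluster is registered in your Fleet, you can also retrieve its credentials and manage workloads with standard kubectl commands afterward.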
Accessing the Distributed Cloud Edge Zone
You can configure your network to allow the desired level of access to your Distributed Cloud Edge Zone, both from your local network and the internet.
You can also grant your Distributed Cloud Edge Zone access to Google Cloud services by connecting it to your Virtual Private Cloud network. Distributed Cloud Edge uses Cloud VPN to connect to Google service endpoints. Your network administrator must configure your network to allow this.
Distributed Cloud Edge personas
The following personas are involved in the deployment and operation of your Distributed Cloud Edge Zone:
Google field technician. Delivers, installs, and activates the Distributed Cloud Edge hardware in your designated location. Your network administrator works with the Google technicians to connect the hardware to your power source and connect it to your network.
Google site reliability engineer (SRE). Monitors and manages the Distributed Cloud Edge hardware. This includes resolving configuration issues, installing patches and updates, and maintaining security.
Network administrator. Configures and maintains network connectivity and access control between the Distributed Cloud Edge hardware and your local network. This includes configuring your routing and firewall rules to ensure that all required types of network traffic can freely flow between the Distributed Cloud Edge hardware, Google Cloud, the clients that consume your Distributed Cloud Edge workloads, internal and external data repositories, and so on. The network administrator must have access to the Google Cloud console to monitor the status of your Distributed Cloud Edge machines.
Cluster administrator. Deploys and maintains Distributed Cloud Edge Clusters within your organization. This includes configuring permissions, logging, and provisioning workloads for each Cluster. For Distributed Cloud Edge, the Cluster administrator assigns Nodes to NodePools and NodePools to Distributed Cloud Edge Clusters. The Cluster administrator must understand the operational differences between a Distributed Cloud Edge Cluster and a traditional Kubernetes Cluster, such as the processing and storage capabilities of the Distributed Cloud Edge hardware, in order to correctly configure and deploy your workloads.
Application owner. A software engineer responsible for developing, deploying, and monitoring an application running on a Distributed Cloud Edge Cluster. Application owners must understand the limitations on the size and location of the Clusters, as well as the ramifications of deploying an application at the edge, such as performance and latency.
Distributed Cloud Edge hardware
The following diagram depicts a typical Distributed Cloud Edge configuration:
The components of a Distributed Cloud Edge installation are as follows:
Google Cloud. Traffic between your Distributed Cloud Edge installation and Google Cloud includes hardware management traffic, control plane traffic, and Cloud VPN traffic to Google Cloud services and any workloads you are running there. It can also include VPC traffic, if applicable.
Internet. Encrypted management and control plane traffic between your Distributed Cloud Edge installation and Google Cloud travels over the internet.
Local network. The local network external to the Distributed Cloud Edge rack that connects the peering edge routers to the internet.
Peering edge routers. Your local network routers that interface with the Distributed Cloud Edge ToR switches. Depending on the physical location you chose for your Distributed Cloud Edge installation, the peering edge routers can be owned and maintained by your organization or your co-location facility. You must configure these routers to peer with the ToR switches using BGP and advertise a default route to your Distributed Cloud Edge hardware. You must also configure these routers, as well as any corresponding firewalls, to allow Google's device management traffic, the Distributed Cloud Edge control plane traffic, and Cloud VPN traffic, if applicable.
Depending on your business requirements, you can configure these routers as follows:
- Allow your Distributed Cloud Edge Nodes to access the internet using public network address translation (NAT), or expose them directly at public IP addresses.
- Allow a VPN connection to your VPC network and any desired Google Cloud services.
Top-of-Rack (ToR) switches. The Layer 3 switches that connect the machines within the rack and interface with your local network. These switches are Border Gateway Protocol (BGP) speakers and handle network traffic between the Distributed Cloud Edge rack and your local network equipment. They connect to peering edge routers using Link Aggregation Control Protocol (LACP) bundles.
Machines. The physical machines running Distributed Cloud Edge software and executing your workloads. Each physical machine is a Node within the Distributed Cloud Edge Cluster.
Distributed Cloud Edge service
The Distributed Cloud Edge service runs on Google Cloud and serves as a control plane for the nodes and Clusters running on your Distributed Cloud Edge hardware. Distributed Cloud Edge must be able to connect to Google Cloud at all times and cannot function without that connection.
This control plane instantiates and configures your Distributed Cloud Edge Zone. The specific Google data center to which your Distributed Cloud Edge hardware connects for management is chosen according to its proximity to your Distributed Cloud Edge installation.
A Distributed Cloud Edge Zone consists of a number of Machines equal to the number of physical machines installed in your Distributed Cloud Edge rack. You can assign these Machines, instantiated as Kubernetes Nodes, to a NodePool, and the NodePool to a Distributed Cloud Edge Cluster.
The following diagram depicts the logical organization of Distributed Cloud Edge entities:
The entities are as follows:
Google Cloud region. The Google Cloud region for your Distributed Cloud Edge Zone is determined by the location of the Google data center that is the closest to your Distributed Cloud Edge installation.
Kubernetes control plane. The Kubernetes control plane for each Distributed Cloud Edge Cluster runs remotely in a Google data center in the Google Cloud region to which your Distributed Cloud Edge has been assigned. This allows Distributed Cloud Edge to benefit from a secure and highly available control plane without taking up processing capacity on the Distributed Cloud Edge physical machines. This also makes Distributed Cloud Edge tolerant to brief periods of network connectivity loss between Distributed Cloud Edge Machines and their respective control planes, but Cluster functionality degrades if the outage lasts for over two minutes.
Distributed Cloud Edge Zone. A logical abstraction that represents the Distributed Cloud Edge hardware installed in your Distributed Cloud Edge rack. A Distributed Cloud Edge Zone covers a single rack of Distributed Cloud Edge hardware. The physical machines in the Zone are instantiated as Distributed Cloud Edge Machines in the Google Cloud console. The Machines in a Distributed Cloud Edge Zone share a single network fabric and constitute a single fault domain. Google creates your Machines before delivering your Distributed Cloud Edge hardware. You cannot create, delete, or modify Distributed Cloud Edge Machines.
Node. A Kubernetes resource that represents a Distributed Cloud Edge physical machine in the Kubernetes realm. When you create a NodePool, the machines you add to it are instantiated as Nodes and become available to run workloads once you assign the NodePool to a Distributed Cloud Edge Cluster. The Kubernetes control plane for each Node runs on Google Cloud.
NodePool. A logical grouping of Distributed Cloud Edge Nodes within a single Distributed Cloud Edge Zone that allows you to assign Distributed Cloud Edge Nodes to Distributed Cloud Edge Clusters.
Cluster. A Distributed Cloud Edge Cluster that consists of a control plane and one or more NodePools.
VPN Connection. A VPN tunnel to a VPC running in a Cloud project. This tunnel allows your Distributed Cloud Edge workloads to access Compute Engine resources connected to that VPC. You must create at least one NodePool in your Zone before you can create a VPN Connection.
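Putting these entities together, a typical flow is to group Machines into a NodePool, attach it to a Cluster, and then create the VPN Connection. The commands below sketch that flow; the command groups, flag names, and values are assumptions and placeholders, not authoritative syntax:

```shell
# Group Machines from the Zone into a NodePool on an existing Cluster
# (POOL_NAME, CLUSTER_NAME, REGION, and EDGE_ZONE are placeholders).
gcloud edge-cloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=REGION \
    --node-location=EDGE_ZONE \
    --node-count=3

# With at least one NodePool in place, connect the Cluster to a VPC
# in the same project.
gcloud edge-cloud container vpn-connections create VPN_NAME \
    --cluster=CLUSTER_NAME \
    --location=REGION \
    --vpc=VPC_NAME
```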
Local storage
Distributed Cloud Edge provides 4 TiB of storage per physical machine in the Distributed Cloud Edge rack. This storage is configured as Linux logical volumes. When you create a Cluster, Distributed Cloud Edge creates one or more PersistentVolumes and exposes them as block volumes that you can assign to a workload using PersistentVolumeClaims. Keep in mind that these PersistentVolumes do not provide data durability and are only suitable for ephemeral data. For information on working with block volumes, see PersistentVolumeClaim requesting a Raw Block Volume.
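As an illustration of how a workload can consume one of these block volumes, the following Kubernetes manifests request a raw block device through a PersistentVolumeClaim and attach it to a Pod. The names, image, device path, and size are placeholders; the PersistentVolume provisioning behavior itself is Distributed Cloud Edge-specific:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block          # request a raw block device, not a filesystem
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: scratch-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeDevices:         # raw block devices use volumeDevices, not volumeMounts
        - name: scratch
          devicePath: /dev/xvda
  volumes:
    - name: scratch
      persistentVolumeClaim:
        claimName: scratch-block-pvc
```

Because these volumes are ephemeral, the workload should treat the device as scratch space and persist durable data elsewhere.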
Distributed Cloud Edge uses LUKS to encrypt local machine storage and supports customer-managed encryption keys (CMEK). For more information, see Security best practices.
Distributed Cloud Edge networking
This section describes the network connectivity requirements and features of Distributed Cloud Edge.
Connectivity to your local network
For outbound traffic to resources on your local network, Pods in a Distributed Cloud Edge Cluster use the default routes advertised by your peering edge routers. Distributed Cloud Edge uses its built-in NAT to connect Pods to those resources.
For inbound traffic from resources on your local network, your network administrator must configure routing policies that match your business requirements to control access to Pods in each of your Distributed Cloud Edge Clusters. This means, at the minimum, completing the steps in Firewall configuration, and configuring additional policies as required by your workloads. For example, you can set up allow/deny policies for individual Node sub-networks or virtual IP addresses exposed by Distributed Cloud Edge's built-in load balancer. The Distributed Cloud Edge Pod and Distributed Cloud Edge service CIDR blocks are not directly accessible.
Connectivity to the internet
For outbound traffic to resources on the internet, Pods in a Distributed Cloud Edge Cluster use the default route advertised by your routers to the Distributed Cloud Edge ToR switches. Distributed Cloud Edge uses its built-in NAT to connect Pods to those resources. You can optionally configure your own layer of NAT on top of Distributed Cloud Edge's built-in layer.
For inbound traffic, you must configure your WAN routers according to your business requirements. These requirements dictate the level of access you need to provide from the public internet to the Pods in your Distributed Cloud Edge Clusters. Distributed Cloud Edge uses its built-in NAT for Pod CIDR blocks and service management CIDR blocks, so those CIDR blocks will not be accessible from the internet.
Connectivity to Virtual Private Cloud
Distributed Cloud Edge includes a built-in VPN solution that allows you to connect a Distributed Cloud Edge Cluster directly to a VPC network, provided that the VPC is in the same Cloud project as the Distributed Cloud Edge Cluster.
If you're using Cloud Interconnect to connect your local network to a VPC network, your Distributed Cloud Edge Clusters can reach that network using standard northbound eBGP peering. Your peering edge routers must be able to reach the appropriate VPC prefixes, and your Cloud Interconnect routers must correctly announce your Distributed Cloud Edge prefixes, such as the Distributed Cloud Edge load balancer, management, and system subnetworks.
After you have established a VPN connection between your Distributed Cloud Edge Cluster and your VPC, the following connectivity rules apply by default:
- Your VPC can access all Pods in the Distributed Cloud Edge Cluster.
- All Pods in the Distributed Cloud Edge Cluster can access all Pods in your VPC-native Clusters. For routes-based Clusters, you must manually configure custom route advertisements.
- All Pods in the Distributed Cloud Edge Cluster can access virtual machine subnetworks in your VPC.
Connectivity to Google Cloud APIs and services
After you have configured a VPN Connection to your VPC, workloads running on your Distributed Cloud Edge installation can access Google Cloud APIs and services.
You can additionally configure the following features if your business requirements call for them:
- Private Google Access to access Google Cloud APIs and services.
- Private Service Connect to access Google Cloud APIs using private service endpoints.
Your business requirements and your organization's network security policy will dictate the steps necessary to secure network traffic flowing in and out of your Distributed Cloud Edge installation. For more information, see Security best practices.
Other networking features
Distributed Cloud Edge supports the following networking features:
- Load balancing. For more information, see Load balancing.
- Ingress resources. For more information, see Distributed Cloud Edge Ingress.
High-performance networking support
Distributed Cloud Edge supports the execution of workloads that require the best possible networking performance. To this end, Distributed Cloud Edge ships with a specialized Network Function operator and a set of Kubernetes CustomResourceDefinitions (CRDs) that implement the features required for high-performance workload execution.
Distributed Cloud Edge also supports virtualizing network interfaces using SR-IOV.
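As a hypothetical sketch of what an SR-IOV attachment can look like in Kubernetes, the following uses the common Multus NetworkAttachmentDefinition pattern with the SR-IOV CNI plugin. The CRDs that the Distributed Cloud Edge Network Function operator actually provides may differ, and the resource name and subnet below are placeholders:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net
  annotations:
    # Ties this attachment to SR-IOV virtual functions advertised by
    # the device plugin under this (illustrative) resource name.
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_netdevice
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "sriov",
      "name": "sriov-net",
      "ipam": { "type": "host-local", "subnet": "10.0.0.0/24" }
    }
```

A Pod would then request the attachment with a `k8s.v1.cni.cncf.io/networks: sriov-net` annotation, receiving a virtual function as a secondary, high-performance interface.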
For more information, see Network Function operator.
- Installation requirements
- Order Distributed Cloud Edge
- Deploy workloads on Google Distributed Cloud Edge
- Security best practices
- Availability best practices
- Network Function operator