This page provides an overview of Google Distributed Cloud Edge, including when to use it, its limitations, and its known issues.
Distributed Cloud Edge enables you to run Google Kubernetes Engine (GKE) clusters on dedicated hardware provided and maintained by Google that is separate from the traditional Google Cloud data center. Google delivers and installs the Distributed Cloud Edge hardware on your premises.
Deploying workloads on a Distributed Cloud Edge installation functions in a similar way to deploying workloads on cloud-based GKE clusters. After the hardware has been deployed, your cluster administrator provisions Distributed Cloud Edge clusters by using the Google Cloud console or the Google Cloud CLI. In addition, your network administrator configures the Distributed Cloud Edge networking components so that your workloads can communicate with your local network and each other. Your application owners can then deploy workloads to those clusters. Distributed Cloud Edge supports running workloads in Kubernetes containers and on virtual machines, including GPU-based workloads, which run on NVIDIA Tesla T4 GPUs.
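For example, a cluster administrator might provision a cluster and node pool with the Google Cloud CLI and then hand the cluster to application owners, who deploy workloads with standard Kubernetes tooling. The following is a minimal sketch; the cluster, node pool, location, and node location names are placeholders, and the exact command groups and flags can vary by gcloud CLI release, so consult the gcloud reference for your version.

```
# Sketch only: names and flags below are illustrative placeholders.
# Provision a Distributed Cloud Edge cluster.
gcloud edge-cloud container clusters create my-edge-cluster \
    --location=us-central1

# Add a node pool that runs on the on-premises Distributed Cloud Edge machines.
gcloud edge-cloud container node-pools create my-node-pool \
    --cluster=my-edge-cluster \
    --location=us-central1 \
    --node-location=my-edge-zone \
    --node-count=3

# Fetch kubeconfig credentials, then deploy workloads with standard Kubernetes tooling.
gcloud edge-cloud container clusters get-credentials my-edge-cluster \
    --location=us-central1
kubectl apply -f my-workload.yaml
```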
Google remotely monitors and maintains your Distributed Cloud Edge installation, which includes installing software updates and security patches, resolving configuration issues, and diagnosing the Distributed Cloud Edge hardware. To resolve an issue that can't be fixed remotely, you must provide Google's authorized personnel with physical access to the Distributed Cloud Edge hardware.
Your Distributed Cloud Edge deployment uses a secure Cloud VPN connection to access Google Cloud services and your applications that run within Google Cloud and your Virtual Private Cloud (VPC) network.
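As a sketch of what this connectivity looks like in practice, you can inspect the Cloud VPN tunnels that carry this traffic with the Google Cloud CLI; the project, region, and tunnel names below are placeholders.

```
# List the Cloud VPN tunnels in your project (placeholder project ID).
gcloud compute vpn-tunnels list --project=my-project

# Inspect a specific tunnel's status and peer configuration.
gcloud compute vpn-tunnels describe my-edge-tunnel \
    --region=us-central1 \
    --project=my-project
```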
For a technical overview of Distributed Cloud Edge, see How Distributed Cloud Edge works.
When to use Distributed Cloud Edge
Distributed Cloud Edge is specifically designed to address the following scenarios in which conventional Google Cloud deployments might not be sufficient:
- Your applications require a very stable network connection and cannot tolerate potential traffic disruptions that commonly occur when transferring data over the internet.
- Your applications require the lowest attainable network latency and are sensitive to latency spikes or jitter. Distributed Cloud Edge also supports high-performance networking technologies such as single root input/output virtualization (SR-IOV) and the Data Plane Development Kit (DPDK) for even more advanced scenarios that use the Network function operator (see the sketch after this list).
- Your applications generate large amounts of data that would be performance-prohibitive or cost-prohibitive to transfer to and from Google Cloud.
- Your local laws or regulations dictate that your data must remain on-premises and must not be stored either outside of your business or outside of a specific geographic jurisdiction.
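For the SR-IOV and DPDK scenario above, the following is a generic Kubernetes sketch of attaching a Pod to a high-performance secondary interface through a Multus-style NetworkAttachmentDefinition annotation. The network name (sriov-net), device resource name (intel.com/sriov_netdevice), and container image are assumptions for illustration; the actual resources exposed by the Distributed Cloud Edge Network function operator may differ.

```
# Sketch only: names below are illustrative, not Distributed Cloud Edge-specific.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-workload
  annotations:
    # Attach a secondary SR-IOV interface defined by a NetworkAttachmentDefinition.
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: dpdk-app
    image: example.com/dpdk-app:latest
    resources:
      requests:
        # Request one SR-IOV virtual function (resource name is an assumption).
        intel.com/sriov_netdevice: "1"
      limits:
        intel.com/sriov_netdevice: "1"
EOF
```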
Limitations of Distributed Cloud Edge
A Distributed Cloud Edge zone has the following limitations compared to a conventional cloud-based GKE zone:
- Processing capacity. Unlike a conventional cloud-based zone, your Distributed Cloud Edge installation has limited processing capacity. Be mindful of this limitation when planning and deploying your workloads.
- Workload restrictions. Distributed Cloud Edge places several restrictions on your workloads.
- GKE Enterprise features. Distributed Cloud Edge does not support GKE Enterprise features such as Cloud Service Mesh, except for the Config Sync feature of Config Management (see the sketch after this list).
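As an illustration of the supported Config Sync feature, the following is a minimal sketch of a RootSync resource that syncs cluster configuration from a Git repository; the repository URL, branch, and directory are placeholders, and your authentication settings will likely differ.

```
# Sketch only: repository details and auth settings are placeholders.
kubectl apply -f - <<EOF
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    # Placeholder repository; point this at your own config repo.
    repo: https://github.com/example/config-repo
    branch: main
    dir: clusters/edge
    auth: none
EOF
```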
Known issues in this release of Distributed Cloud Edge
This release of Distributed Cloud Edge has the following known issues:
- A large number of webhook calls might cause the Konnectivity proxy to temporarily fail.
- The metrics agents running on Distributed Cloud Edge nodes can accumulate a backlog of events and stall, preventing the capture of further metrics.
- Garbage collection intermittently fails to clean up terminated Pods.
- BGP sessions do not recover when the corresponding network interface goes down and then comes back up.