Google Distributed Cloud Edge enables you to run Kubernetes clusters on dedicated hardware provided and maintained by Google that is separate from the traditional Google Cloud data center. Google delivers and installs the Distributed Cloud Edge hardware on your premises.
Deploying workloads on a Distributed Cloud Edge installation is similar to deploying workloads on cloud-based GKE clusters. After the hardware has been deployed, your cluster administrator provisions Distributed Cloud Edge clusters by using the Google Cloud console or the Google Cloud CLI, and your network administrator configures the Distributed Cloud Edge networking components so that your workloads can communicate with your local network and with each other. Your application owners can then deploy workloads to those clusters. Distributed Cloud Edge supports running workloads in Kubernetes containers and in virtual machines, including GPU-based workloads that run on NVIDIA Tesla T4 GPUs.
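For example, because Distributed Cloud Edge clusters accept standard Kubernetes manifests, an application owner might deploy a GPU-based workload with a Deployment that requests a Tesla T4 GPU through the standard `nvidia.com/gpu` resource. The following is a minimal sketch; the Deployment name and container image path are illustrative assumptions, not values from this document:

```yaml
# Hypothetical Deployment manifest; names and image path are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-inference            # assumed workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-inference
  template:
    metadata:
      labels:
        app: gpu-inference
    spec:
      containers:
      - name: inference
        image: us-docker.pkg.dev/my-project/my-repo/inference:latest  # assumed image
        resources:
          limits:
            nvidia.com/gpu: 1   # schedules the Pod onto a node with an NVIDIA GPU
```

You would apply a manifest like this with `kubectl apply -f deployment.yaml` after obtaining cluster credentials through the Google Cloud CLI, just as you would for a cloud-based GKE cluster.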
Google remotely monitors and maintains your Distributed Cloud Edge installation, including installing software updates and security patches, resolving configuration issues, and diagnosing hardware problems. If an issue can't be resolved remotely, you must provide Google's authorized personnel with physical access to the Distributed Cloud Edge hardware.
Your Distributed Cloud Edge deployment can access Google Cloud services, as well as your applications running in Google Cloud and in your Virtual Private Cloud network, through a secure Cloud VPN connection.
For a technical overview of Distributed Cloud Edge, see How Distributed Cloud Edge works.
When to use Distributed Cloud Edge
Distributed Cloud Edge is specifically designed to address the following scenarios in which conventional Google Cloud deployments may not be sufficient:
- Your applications require a highly stable network connection and cannot tolerate the traffic disruptions that commonly occur when transferring data over the internet.
- Your applications require the lowest attainable network latency and are sensitive to latency spikes and jitter. For even more demanding scenarios, Distributed Cloud Edge also supports high-performance networking technologies such as single root input/output virtualization (SR-IOV) and the Data Plane Development Kit (DPDK) through the Network Function operator.
- Your applications generate large amounts of data that would be prohibitively slow or expensive to transfer to and from Google Cloud.
- Your local laws or regulations require that your data remain on-premises and not be stored outside of your business or outside of a specific geographic jurisdiction.
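As an illustration of the SR-IOV scenario above: clusters that expose SR-IOV virtual functions to Pods typically do so through a secondary network attachment requested with an annotation, plus a device-plugin resource limit. The following Pod sketch assumes a network attachment named `sriov-net` and an SR-IOV device-plugin resource name already configured by your network administrator; both names, and the image, are hypothetical:

```yaml
# Hypothetical Pod spec illustrating the general SR-IOV attachment pattern.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app                          # assumed name
  annotations:
    # "sriov-net" is an assumed, pre-existing network attachment definition.
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: dpdk
    image: example.com/dpdk-app:latest    # assumed image
    resources:
      limits:
        # The resource name depends on how the SR-IOV device plugin is
        # configured in your installation; this one is illustrative.
        intel.com/sriov_netdevice: "1"
```

The exact resource and attachment names in a Distributed Cloud Edge installation depend on the Network Function operator configuration; see the Network Function operator documentation for the supported workflow.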
Limitations of Distributed Cloud Edge
A Distributed Cloud Edge zone has the following limitations compared to a conventional cloud-based GKE zone:
- Processing capacity. Unlike a conventional cloud-based zone, your Distributed Cloud Edge installation has limited processing capacity. Keep this limitation in mind when planning and deploying your workloads.
- Workload restrictions. Distributed Cloud Edge places a number of restrictions on your workloads, as described in Limitations for Distributed Cloud Edge workloads.
- Anthos features. Distributed Cloud Edge does not support Anthos features such as Anthos Service Mesh or Anthos Config Management.
Known issues in this release of Distributed Cloud Edge
This release of Distributed Cloud Edge has the following known issues:
- A large number of webhook calls might cause the Konnectivity proxy to temporarily fail.
- The metrics agents running on Distributed Cloud Edge Nodes can accumulate a backlog of events and stall, preventing the capture of further metrics.
- Garbage collection intermittently fails to clean up terminated Pods.
What's next
- How Distributed Cloud Edge works
- Installation requirements
- Order Distributed Cloud Edge
- Deploy workloads on Distributed Cloud Edge
- Security best practices
- Availability best practices
- Network Function operator