This page contains release notes for Google Distributed Cloud Edge releases, features, and updates.
You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.
To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly.
August 07, 2024
This is a patch release of Google Distributed Cloud connected (version 1.7.1).
Security mitigations for the following vulnerabilities have been implemented in this release of Google Distributed Cloud connected:
- CVE-2024-36971, CVE-2024-36901, CVE-2024-36969, CVE-2024-36902, CVE-2024-36893, CVE-2024-36897, CVE-2024-35984, CVE-2024-35997, CVE-2024-6387 (GCP-2024-040), CVE-2024-38433, CVE-2024-0172
The following Google Distributed Cloud connected components have been updated:
- GKE on Bare Metal has been updated from version 1.28.500 to version 1.28.700. (This component was formerly known as Anthos clusters on bare metal.)
- Kubernetes has been updated from version 1.28.8 to version 1.28.10.
The following issues have been resolved in this release of Google Distributed Cloud connected:
Nodes no longer get stuck in the `Ready,SchedulingDisabled` state after applying configuration changes. Applying or deleting the `NodeSystemConfigUpdate` or `SriovNetworkNodePolicy` resources no longer results in a node that's stuck in the `Ready,SchedulingDisabled` state after it reboots.
Cluster software upgrades are no longer affected by GKE Identity Service (GKE IS) Pods that get stuck in a `Failed` state after a machine reboot.
Virtual machine workloads no longer temporarily go down when upgrading Google Distributed Cloud connected software. The virtual machine workloads now remain running while a Google Distributed Cloud connected software upgrade completes.
Changes to the `VMRuntime` resource are no longer required before upgrading to Google Distributed Cloud connected version 1.7.1 or higher.
Excessive CPU load on nodes undergoing live virtual machine migration during software upgrades has been resolved. When completing a live virtual machine migration during a Google Distributed Cloud connected software upgrade, nodes no longer experience CPU spikes that might affect workloads running on them.
Virtual machines no longer lose connectivity to each other during a live migration. When completing a live migration of virtual machines from one node to another, the virtual machines now retain network connectivity to each other across the source and destination nodes.
Virtual machines are now properly scheduled after recovery from a network partition. When a loss of network connectivity across multiple nodes occurs and causes a stall in storage I/O operations, the virtual machine fencing logic now properly marks the affected virtual machines as failed. Such virtual machines are now properly scheduled back onto the affected nodes when network connectivity is restored.
This release of Google Distributed Cloud connected contains the following known issues:
Virtual machine management can fail after a node has been powered down for an extended time. If you power down your Google Distributed Cloud connected machines for an extended period of time, you might not be able to manage the virtual machines scheduled on the corresponding nodes after you power the machines back up, even though those virtual machine workloads continue to run. To remedy this issue, contact Google Support.
The Kubernetes API server might return 404 errors when attempting to access `virt-api` endpoints. To work around this issue, contact Google Support.
Storage operations hang when volume replicas are deleted from a cluster without removing the corresponding Symcloud Storage persistent volume intent. If you delete the volume replicas for a Symcloud Storage persistent volume but do not remove the corresponding intent, TCMU devices on the worker node hang, causing storage operations to stall indefinitely. This can affect both your workload data availability and core system functionality of Google Distributed Cloud connected. To prevent this, always remove a Symcloud Storage persistent volume before deleting its associated volume replicas. If you need to resolve this issue on an affected node, contact Google Support.
Cluster deletion can fail due to stale Symcloud Storage data. When attempting to delete a cluster during disaster recovery or cluster reset, the deletion might fail due to the corresponding Symcloud Storage volumes not having been cleaned up. To resolve this issue, contact Google Support.
Cluster upgrades might fail with an "ABM upgrade timed out" error. Under certain conditions, if the GKE token expires while a cluster upgrade is in progress, the upgrade fails with an "ABM upgrade timed out" error and a missing `gkehub.memberships.update` permission is recorded in the logs. If you encounter this issue, contact Google Support.
Removing the `NodeSelector` node label value in the `NodeSystemConfigUpdate` resource after reconciliation does not reset the node status to default. If you remove the node label value in the `NodeSelector` field of the `NodeSystemConfigUpdate` resource after the resource has been successfully reconciled, the node does not revert to its default configuration.
The following new functionality has been introduced in this release of Google Distributed Cloud connected:
Active backup network redundancy. On Google Distributed Cloud connected servers, you can now opt in to active backup network redundancy mode for each Google Distributed Cloud connected zone. This mode improves resilience to network interruptions when you have a redundant network link available. To enable this feature, contact Google Support.
Single-node configuration for Google Distributed Cloud connected servers. You now have the option to order and deploy Google Distributed Cloud connected servers in single-node configurations. For more information, contact your Google field sales representative.
July 08, 2024
This is a minor release of Google Distributed Cloud connected (version 1.7.0).
The following new functionality has been introduced in this release of Google Distributed Cloud connected:
Customer-sourced hardware. You now have the option to purchase the Google Distributed Cloud connected hardware from a Google-partnered System Integrator (SI) and retain full ownership instead of leasing it from Google. For more information, contact your Google field sales representative.
Refreshed machine hardware. The server machines comprising Google Distributed Cloud connected racks have been updated to a more powerful hardware configuration. For more information, see Plan the hardware configuration.
Flexible rack configuration. You can now order a Google Distributed Cloud connected rack with 3, 6, 9, or 12 server machines. For more information, contact Google Support.
IPv4/IPv6 dual-stack networking. Google Distributed Cloud connected now supports IPv6 networking in addition to IPv4 networking. For more information, see IPv4/IPv6 dual-stack networking.
VM support on GDC connected servers. Google Distributed Cloud connected servers now support running virtual machine workloads. For more information, see Manage virtual machines on Distributed Cloud connected servers.
Pod image caching. Google Distributed Cloud connected now supports local caching of Pod images. For more information, see Configure a Pod for image caching.
Kafka support. Google Distributed Cloud now supports collecting workload metrics with Apache Kafka. For more information, see Logs and metrics.
Cluster connection state indication. You can now check whether a cluster is connected, disconnected, or reconnected and synchronizing with Google Cloud Platform. For more information, see Survivability mode.
Cluster maintenance exclusion windows. You can now specify one or more maintenance exclusion windows for a cluster. This prevents Google from performing maintenance or software upgrades on the cluster during the specified times. For more information, see Understand software updates and maintenance windows. A hedged command-line example appears after this list.
GDC Hardware Management API. You can now place orders for Google Distributed Cloud connected hardware programmatically using the GDC Hardware Management API. For more information, see Google Distributed Cloud connected CLI and API reference. This is a Preview-level feature.
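For illustration, configuring the maintenance exclusion window described above might look like the following. This is a sketch only: the `--add-maintenance-exclusion-*` flag names are assumptions patterned on the GKE command-line interface, not confirmed Google Distributed Cloud connected syntax, so verify them against the CLI reference before use.

```sh
# Hedged sketch: add a maintenance exclusion window to an existing cluster.
# The flag names below are assumptions modeled on the GKE CLI; confirm the
# exact syntax with `gcloud edge-cloud container clusters update --help`.
gcloud edge-cloud container clusters update my-cluster \
    --location=us-central1 \
    --add-maintenance-exclusion-name=holiday-freeze \
    --add-maintenance-exclusion-start=2024-12-20T00:00:00Z \
    --add-maintenance-exclusion-end=2025-01-05T00:00:00Z
```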
The following changes to existing functionality have been introduced in this release of Google Distributed Cloud connected:
Bastion host GA. The bastion host feature of Google Distributed Cloud connected is now generally available. For more information, see Configure a bastion host.
Worker node software upgrades are now staggered. Google Distributed Cloud connected now upgrades worker node software in stages instead of all at once. This allows your workloads to continue running on some nodes, while others are upgrading. You have the option to specify the number of worker nodes that can go down for a software upgrade simultaneously. For more information, see Software update staggering.
GPU support is now automatically enabled. You no longer have to modify the `VMRuntime` resource to enable GPU support on Google Distributed Cloud connected. GPU support is now automatically enabled if a GPU is detected on a Google Distributed Cloud connected machine.
Google Distributed Cloud connected component updates:
- GKE on Bare Metal. This component has been updated from version 1.16.1 to version 1.28.500.
- Kubernetes control plane. This component has been updated from version 1.27.9 to version 1.28.8.
- Symcloud Storage. This component has been updated from version 5.4.6 to version 5.4.8.
Anthos branding has been replaced with Google Kubernetes Engine (GKE) branding. Anthos features and services that Google Distributed Cloud connected relies on, such as Anthos Identity Service, have been rebranded under GKE (for example, GKE Identity Service). You might still see references to the legacy branding in Google Distributed Cloud connected command output and error messages.
The following functionality has been deprecated in this release of Google Distributed Cloud connected:
Cloud control plane cluster support. As of this release, Google Distributed Cloud connected no longer supports Cloud control plane clusters. Local control plane clusters are now the only supported cluster type.
Raw block storage for virtual machine workloads. As of this release, you can no longer provision virtual machine workloads with raw block storage. Symcloud Storage is now the only supported storage type for virtual machine workloads.
The following issues have been resolved in this release of Google Distributed Cloud connected:
Symcloud Storage volume clean-up now functions correctly. Single node failures, such as power loss or network disconnection, no longer cause rescheduling failures for virtual machines that use Symcloud Storage volumes. When a node fails, virtual machines are automatically rescheduled onto another node and then scheduled back onto the original node once that node returns to operation.
Virtual machines no longer enter a stuck state when node network connections are intermittent. Virtual machines no longer get stuck in container creation state when their network connections repeatedly disconnect and reconnect. When all three nodes in a Google Distributed Cloud connected server group regain network connectivity, the affected virtual machines are automatically rescheduled back onto their original nodes.
Virtual machine restore operations now complete successfully. Problems related to taking subsequent snapshots of virtual machines after the initial ones have been resolved. These problems caused virtual machine restore operations to fail.
Virtual machine heartbeat has been tuned to increase failover resilience. Occasionally, when a node failed, virtual machines on other nodes in the cluster would fail multiple successive heartbeats to the Kubernetes control plane that ran on the failed node. The heartbeat configuration has been tuned to mitigate this and increase failover resilience.
Intermittent SR-IOV device availability on large deployments has been resolved. SR-IOV devices are no longer intermittently unavailable on large, long-uptime deployments of Google Distributed Cloud connected after creating SR-IOV network node policies.
Security mitigations for the following vulnerabilities have been implemented in this release of Google Distributed Cloud connected:
- CVE-2024-26934, CVE-2024-27013, CVE-2024-26884, CVE-2024-26902, CVE-2022-48659, CVE-2024-26901, CVE-2024-26910, CVE-2024-26883, CVE-2024-26898, CVE-2024-26882, CVE-2024-26908, CVE-2024-26585, CVE-2021-46904, CVE-2021-46905, CVE-2020-36775, CVE-2021-46909, CVE-2021-46906, CVE-2019-25162, CVE-2024-26606, CVE-2024-26602, CVE-2024-26600, CVE-2023-52469, CVE-2023-52470, CVE-2022-48626, CVE-2024-26597, CVE-2023-52464, CVE-2024-26598, CVE-2024-0340, CVE-2024-23849, CVE-2024-23850, CVE-2024-23851, CVE-2023-52439, CVE-2023-52435, CVE-2023-52443, CVE-2023-46343, CVE-2024-0607, CVE-2024-22705, CVE-2023-46838, CVE-2023-51782, CVE-2023-51781, CVE-2023-51780, CVE-2024-1086, CVE-2024-0584, CVE-2024-0562, CVE-2023-6915, CVE-2024-0646, CVE-2023-6040, CVE-2023-46862, CVE-2023-46813, CVE-2023-6932, CVE-2023-6931, CVE-2023-5178, CVE-2023-5717
This release of Google Distributed Cloud connected contains the following known issues:
Refreshed Google Distributed Cloud connected hardware requires Google Distributed Cloud connected software version 1.7.0 or later. The refreshed Google Distributed Cloud connected hardware does not support versions of Google Distributed Cloud connected prior to release 1.7.0.
Virtual machine workloads might temporarily go down when upgrading Google Distributed Cloud connected software to release 1.7.0. The virtual machine workloads will go back up and be healthy once the Google Distributed Cloud software upgrade completes.
Cluster upgrades to software release 1.7.0 might fail with an "ABM upgrade timed out" error. Under certain conditions, if the GKE token expires while a cluster upgrade is in progress, the upgrade fails with an "ABM upgrade timed out" error and a missing `gkehub.memberships.update` permission is recorded in the logs. If you encounter this issue, contact Google Support.
Storage operations hang when volume replicas are deleted from a cluster without removing the corresponding Symcloud Storage persistent volume intent. If you delete the volume replicas for a Symcloud Storage persistent volume but do not remove the corresponding intent, TCMU devices on the worker node hang, causing storage operations to stall indefinitely. This can affect both your workload data availability and core system functionality of Google Distributed Cloud connected. To prevent this, always remove a Symcloud Storage persistent volume before deleting its associated volume replicas.
Virtual machines might not get scheduled onto nodes after their network has been partitioned. When you partition a network, some virtual machines using that network might not get scheduled back onto their node after the node reconnects to the network. To work around this issue, restart the affected virtual machines or contact Google Support.
Cluster deletion can fail due to stale Symcloud Storage data. When attempting to delete a cluster during disaster recovery or cluster reset, the deletion might fail due to the corresponding Symcloud Storage volumes not having been cleaned up. To resolve this issue, contact Google Support.
Virtual machine management can fail after a node has been powered down for an extended time. If you power down your Google Distributed Cloud connected machines for an extended period of time, you might not be able to manage the virtual machines scheduled on the corresponding nodes after you power the machines back up, even though those virtual machine workloads continue to run. To resolve this issue, contact Google Support.
Nodes can get stuck in the `Ready,SchedulingDisabled` state after applying configuration changes. Applying or deleting the `NodeSystemConfigUpdate` or `SriovNetworkNodePolicy` resources can result in a node that's stuck in the `Ready,SchedulingDisabled` state after it reboots. To resolve this issue, see Troubleshoot Google Distributed Cloud connected.
The Kubernetes API server might return 404 errors when attempting to access `virt-api` endpoints. To work around this issue, contact Google Support.
Changes are required to the `VMRuntime` resource before upgrading to Google Distributed Cloud connected version 1.7.0. To ensure that your existing virtual machine workloads successfully upgrade to Google Distributed Cloud connected version 1.7.0, you must modify the `VMRuntime` resource before upgrading the cluster as described in Upgrade existing virtual machines to Google Distributed Cloud connected version 1.7.0.
The `containerd` daemon state might not be reset after deleting a cluster. In very rare situations, cluster deletion does not reset the state of the `containerd` daemon. To resolve this issue, contact Google Support.
GKE Identity Service (GKE IS) Pods can get stuck in a `Failed` state after a machine reboot. Rebooting a machine might spawn one or more GKE IS (formerly branded as Anthos IS) Pods stuck in a `Failed` state, even though the GKE IS deployment is healthy and running. This does not impact the cluster or GKE IS functionality. Because GKE IS Pods are deployed into a protected namespace, contact Google Support to resolve this issue.
Cluster software upgrades might fail. If there are GKE IS Pods stuck in a `Failed` state after a machine reboot, you might experience the following behavior on the affected cluster:
- Automatic software upgrades never start.
- Manually initiated software upgrades stall and enter a `Paused` state.
Workloads on the cluster continue to run and the cluster remains healthy. To resolve this issue, contact Google Support.
March 14, 2024
This is a patch release of Google Distributed Cloud connected (version 1.6.1).
The following new features have been introduced in this release of Google Distributed Cloud connected:
Multi-rack deployments. Distributed Cloud Edge now supports aggregating the resources of multiple Distributed Cloud Edge Racks into a single zone. You can now create clusters that span nodes across multiple Distributed Cloud Edge Racks. A single multi-rack deployment supports one Distributed Cloud Edge Base Rack and up to 10 Distributed Cloud Edge Standalone Racks. For more information, see How Distributed Cloud Edge works.
Distributed Cloud Edge Base Rack. We are now shipping a new form factor of Distributed Cloud Edge Rack hardware, the Distributed Cloud Edge Base Rack. This form factor is a pair of existing Distributed Cloud Edge Standalone Rack hardware with the addition of four network switches that aggregate network traffic from up to 10 Distributed Cloud Edge Standalone Racks.
Prometheus integration. You can now use the Prometheus metrics solution to collect Distributed Cloud Edge metrics and workload metrics on local control plane clusters running in survivability mode. For more information, see Collect metrics with Prometheus.
Node labels. You can now assign unique labels to individual nodes when creating a node pool. For more information, see Create a node pool. A short `kubectl` example follows this list.
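Once applied, node labels behave like standard Kubernetes node labels, so you can inspect and select them with ordinary `kubectl` commands. The pool-creation syntax itself is documented in Create a node pool; the label key/value below is a hypothetical example.

```sh
# List nodes with their labels to verify the labels assigned at
# node pool creation time.
kubectl get nodes --show-labels

# Select nodes by label for inspection or scheduling decisions;
# workload-tier=realtime is a hypothetical key/value pair.
kubectl get nodes -l workload-tier=realtime
```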
The following changes have been introduced in this release of Google Distributed Cloud connected:
Cloud control plane clusters can no longer be created in subsequent releases of Distributed Cloud Edge. Distributed Cloud Edge version 1.6.1 is the last release of Distributed Cloud Edge in which you can create Cloud control plane clusters. Creation of cloud control plane clusters will be disabled in the next minor release of Distributed Cloud Edge. Existing cloud control plane clusters will continue to run workloads.
Release channel requirement for specifying cluster software versions. If you want to specify a Distributed Cloud Edge software version when creating a cluster, you must now set the cluster's release channel to `NONE`. If you do not specify a release channel or explicitly set it to `REGULAR`, the cluster automatically upgrades to the latest version of Distributed Cloud Edge software and specifying a software version is not possible.
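As a hedged illustration, pinning a cluster to a specific software version at creation time might look like the following; the `--release-channel` and `--version` flag names are assumptions to verify against the `gcloud edge-cloud` reference.

```sh
# Hedged sketch: create a cluster on release channel NONE so that a
# specific Distributed Cloud Edge software version can be requested.
# Flag names are assumptions; check the gcloud edge-cloud reference.
gcloud edge-cloud container clusters create my-cluster \
    --location=us-central1 \
    --release-channel=NONE \
    --version=1.6.1
```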
Security mitigations for the following vulnerabilities have been implemented in this release of Google Distributed Cloud connected:
- CVE-2023-51043, CVE-2023-51042, CVE-2024-0775, CVE-2023-6531, CVE-2023-42752, CVE-2023-44466, CVE-2023-42756, CVE-2023-5197, CVE-2023-4569, CVE-2023-4128, CVE-2023-4147, CVE-2023-4194, CVE-2023-39194, CVE-2023-39189, CVE-2023-42753, CVE-2023-4921, CVE-2023-2163, CVE-2023-3611, CVE-2023-3610, CVE-2023-4004.
This release of Google Distributed Cloud connected contains the following known issues:
Nodes can get stuck in the `Ready,SchedulingDisabled` state after applying configuration changes. Applying or deleting the `NodeSystemConfigUpdate` or `SriovNetworkNodePolicy` resources can result in a node that's stuck in the `Ready,SchedulingDisabled` state after it reboots. To resolve this issue, see Troubleshoot Distributed Cloud Edge.
Deleting clusters and node pools fails when a node is not ready. If a node in a cluster or node pool that you want to delete is in the `NotReady` state, the deletion can fail. Contact Google Support to remedy this condition.
Nodes using Symcloud Storage report the file system as read-only after reboot. When multiple nodes that use Symcloud Storage reboot at once in a cluster, they can incorrectly mark the file system as read-only. Contact Google Support to remedy this condition.
December 19, 2023
This is a minor release of Google Distributed Cloud Edge (version 1.6.0).
The following new features have been introduced in this release of Distributed Cloud Edge:
Distributed Cloud Edge Servers. You can now order Distributed Cloud Edge in sets of three clustered server machines without ToR switches in addition to the existing fully configured rack offering. These three-machine clusters connect directly to your local network.
Configuration 8 standalone server machines. Distributed Cloud Edge now offers a new hardware option, the Configuration 8 standalone server machine. You can order this hardware option in sets of three. Each Configuration 8 machine offers 16 processor cores (32 vCPUs), 64GB of RAM, and 1.6TB of SSD storage. The machine is housed in a half-depth 1U rackmount chassis.
Symcloud Storage support for virtual machine workloads. Distributed Cloud Edge now supports configuring virtual machine workloads with Symcloud Storage.
Locally cached system images. Distributed Cloud Edge clusters can now access system images while disconnected from Google Cloud. This allows Pods to transfer onto another node using locally cached system images.
Manage clusters using the `kubectl` tool while disconnected from Google Cloud. You can now contact Google Cloud support to request emergency credentials to authenticate to a Distributed Cloud Edge cluster and manage it using the `kubectl` command-line tool from your local network while disconnected from Google Cloud.
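As a sketch of that workflow, once Google Cloud support has issued emergency credentials (assumed here to arrive as a kubeconfig file; the delivery format may differ), you point `kubectl` at that file and manage the cluster over the local network:

```sh
# Use the emergency credentials issued by Google Cloud support.
# The file path is a placeholder; the delivery format is an assumption.
export KUBECONFIG=/path/to/emergency-kubeconfig.yaml

# Standard kubectl commands then work against the cluster locally,
# even while the deployment is disconnected from Google Cloud.
kubectl get nodes
kubectl get pods --all-namespaces
```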
The following changes have been introduced in this release of Distributed Cloud Edge:
Reduced outbound network traffic. Distributed Cloud Edge has reduced the amount of outbound data it sends to Google Cloud. For a typical 3-machine deployment, outbound traffic bandwidth is now below 4Mbps.
Nodes can now rejoin clusters after rebooting while disconnected from Google Cloud. When creating a local control plane cluster, you can now configure it so that when a node reboots while your Distributed Cloud Edge deployment is disconnected from Google Cloud, the node rejoins its cluster after the reboot is complete and resumes running its designated workloads. For more information, see Create a cluster. This is a preview-level feature.
November 03, 2023
This is a patch release of Google Distributed Cloud Edge (version 1.5.1).
The following changes have been introduced in this release of Distributed Cloud Edge:
Cluster software version upgrades for local control plane clusters. You can now trigger a software version upgrade on a local control plane cluster to a specific version of Distributed Cloud Edge software, starting with version 1.5.1. This feature is not available for Cloud control plane clusters. For instructions, see Upgrade the software version on a local control plane cluster.
Cluster software version pinning for local control plane clusters. You can now pin a local control plane cluster to a specific version of Distributed Cloud Edge software, starting with version 1.5.0. A cluster pinned to a specific version does not automatically upgrade when new Distributed Cloud Software becomes available. This feature is not available for Cloud control plane clusters. For instructions, see Create a cluster.
Cluster status. The `gcloud edge-cloud container clusters describe` command now returns the operational status of a Distributed Cloud Edge cluster.
The following issues have been resolved in this release of Distributed Cloud Edge:
- CVE-2022-40982 "Downfall" remediation. The CVE-2022-40982 vulnerability, also known as "Downfall," has been patched.
This release of Distributed Cloud Edge contains the following known issues:
Cloud SDK version 450.0.0 or later is required. You must upgrade your Cloud SDK to version 450.0.0 or later to create local control plane clusters with Distributed Cloud Edge software version 1.5.0. Otherwise, creating such clusters will fail.
Node and machine labels are not applied when upgrading to Distributed Cloud Edge version 1.5.1. When upgrading to Distributed Cloud Edge version 1.5.1, system-required labels might not be applied to nodes and machines within existing node pools. To work around this issue, either modify the affected node pool to update its corresponding resource definition, or delete and re-add the affected nodes. For instructions, see Create and manage node pools.
September 07, 2023
This is a minor release of Google Distributed Cloud Edge (version 1.5.0).
The following features have been introduced in this release of Distributed Cloud Edge:
Bastion host support. Distributed Cloud Edge now allows you to set up one or more bastion host virtual machines. The bastion host feature allows Google support engineers to connect to your Distributed Cloud Edge deployment and work with you to diagnose and resolve issues. For more information, see Configure a bastion host. This is a preview-level feature.
Selectable cluster software versions. You now have the option to create a cluster running a specific version of Distributed Cloud Edge software, starting with version 1.5.0. For more information, see Create and manage clusters. This is a preview-level feature.
Container image registry access over secondary networks. Distributed Cloud Edge now allows you to specify the network interface in the `spec.containerRuntimeDNSConfig` field of the `NodeSystemConfigUpdate` resource. This allows you to specify a container image registry IP address and domain pair for a network interface other than the primary. For more information, see the `NodeSystemConfigUpdate` resource. This is a preview-level feature. A hedged configuration sketch appears after this list.
CMEK support for local control plane nodes. You can now configure Cloud KMS integration for storage on nodes running local control planes for Distributed Cloud Edge clusters. For more information, see Enable support for customer-managed encryption keys (CMEK) for local storage.
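To make the container image registry feature above concrete, a hypothetical `NodeSystemConfigUpdate` manifest might look like the following. Only the `spec.containerRuntimeDNSConfig` field path comes from this release note; the `apiVersion` and every field beneath it are invented for illustration, so consult the `NodeSystemConfigUpdate` reference for the real schema.

```sh
# Hypothetical sketch only: the field names under containerRuntimeDNSConfig
# are invented; check the NodeSystemConfigUpdate reference for the schema.
kubectl apply -f - <<EOF
apiVersion: networking.gke.io/v1   # assumption; check the installed CRD
kind: NodeSystemConfigUpdate
metadata:
  name: secondary-registry-dns
spec:
  containerRuntimeDNSConfig:
    interface: enp4s0f1                   # hypothetical: a secondary interface
    registryDomain: registry.example.com  # hypothetical field name
    registryIP: 192.0.2.10                # hypothetical field name
EOF
```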
The following changes have been introduced in this release of Distributed Cloud Edge:
Survivability mode is now generally available. For more information, see Distributed Cloud Edge survivability mode. After your Distributed Cloud Edge deployment has been upgraded from version 1.4.0 to version 1.5.0, you must manually delete and recreate all local control plane clusters you have created with Distributed Cloud Edge version 1.4.0 or 1.4.1. Otherwise, unexpected behavior and data loss can occur. Clusters configured to use a cloud control plane continue to run normally after upgrading Distributed Cloud Edge to version 1.5.0.
Symcloud Storage integration is now generally available. For more information, see Configure Distributed Cloud Edge for Symcloud Storage.
Local control plane clusters now support virtual machines and GPU workloads. For more information, see Manage virtual machines and Manage GPU workloads.
Loadable SCTP kernel modules. Distributed Cloud Edge now configures the `sctp` kernel module as loadable. This allows you to load custom networking stacks into the kernel from user space. For more information, see SCTP kernel modules. This is a preview-level feature.
The following issues have been resolved in this release of Distributed Cloud Edge:
When creating a Cloud control plane cluster, creating a node pool that includes nodes that were previously part of a local control plane cluster no longer fails.
BGP sessions now properly recover when the associated network interface goes down and then comes back up.
This release of Distributed Cloud Edge contains the following known issues:
When creating a local control plane cluster, Distributed Cloud Edge instantiates dummy `BGPPeer` and `BGPLoadBalancer` resources. You can ignore these resources.
Distributed Cloud Edge does not support BGP peering to multiple VLANs within the same virtual router. You must set up a separate virtual router with a unique loopback IP address for each affected VLAN to allow concurrent BGP peering sessions.
June 30, 2023
This is a patch release of Google Distributed Cloud Edge (version 1.4.1).
The following changes have been introduced in this release of Distributed Cloud Edge:
- The IP addresses of local control plane endpoints are now accessible on your local network. You must ensure that your local network's security configuration prevents external access to those IP addresses.
The following issues have been resolved in this release of Distributed Cloud Edge:
- Resource utilization metrics that were previously not exported to Cloud Monitoring are now exported as expected.
- The status of the `kube-apiserver` mirrored Pods is no longer erroneously reported as "Pending."
May 19, 2023
This is a minor release of Google Distributed Cloud Edge (version 1.4.0).
The following features have been introduced in this release of Distributed Cloud Edge:
Survivability mode. Distributed Cloud Edge now allows you to create clusters with the Kubernetes control plane running locally on your Distributed Cloud Edge hardware. This improves the reliability of Distributed Cloud Edge when your connection to Google Cloud is intermittent. This is a Public Preview feature. For more information, see Distributed Cloud Edge survivability mode.
Symcloud Storage integration. You can now integrate Distributed Cloud Edge with Rakuten Symcloud Storage, a third-party storage abstraction solution that allows Pods to access local storage on different Distributed Cloud Edge nodes. This is a Public Preview feature. For more information, see Configure Distributed Cloud Edge for Symcloud Storage.
Enhanced rNDC security. Distributed Cloud Edge has replaced the `bond0` interface with the `gdcenet0` interface, which allows you to use the physical management network interface card for your application workloads while maintaining complete separation from Distributed Cloud Edge control and management traffic. You must manually reconfigure any existing network resources that reference the `bond0` interface to use the `gdcenet0` interface. For more information, see Upgrade CustomNetworkInterfaceConfig resources from Distributed Cloud Edge 1.3.0 to 1.4.0 and Upgrade NetworkAttachmentDefinition resources to Distributed Cloud Edge 1.4.0.
Cloud Router reuse for VPN connections. When creating a VPN connection, Distributed Cloud Edge now automatically reuses any Cloud Router resource it has automatically created for a VPN connection. You can also specify a custom Cloud Router resource when creating a VPN connection. Existing VPN connections are not affected. For more information, see Manage VPN connections.
The following changes have been introduced in this release of Distributed Cloud Edge:
The cross-project VPN connection functionality is now generally available. For more information, see Manage cross-project VPN connections.
The default behavior of the `gcloud edge-cloud container clusters get-credentials` command has changed. The command now requires the `gke-gcloud-auth-plugin` plugin, which replaces the legacy `in-tree-auth-plugin` plugin. For more information about the `gke-gcloud-auth-plugin` plugin, see Important changes to Kubectl authentication are coming in GKE v1.26. You have the option to revert to the legacy `in-tree-auth-plugin` plugin by setting the `USE_GKE_GCLOUD_AUTH_PLUGIN` environment variable to `false`. A hedged shell example appears after this list.
The Kubernetes control plane has been updated to version 1.25.5-gke.1001 for all clusters.
The Kubernetes container daemon (`containerd`) has been updated to version 1.6.6-gke.1 for remote control plane clusters and to version 1.6.12-gke.0 for survivability mode clusters.
The Kubernetes worker node agent (`kubelet`) has been updated to version 1.24.7-gke.1700 for remote control plane clusters and to version 1.25.5-gke.1001 for local control plane clusters.
Distributed Cloud Edge now supports the ConfigSync feature of Anthos Config Management. Distributed Cloud Edge does not support any other Anthos features.
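The shell sketch below illustrates the new authentication flow. The `gcloud components install gke-gcloud-auth-plugin` command and the `USE_GKE_GCLOUD_AUTH_PLUGIN` environment variable are standard GKE tooling; the cluster name and location are placeholders.

```sh
# Install the new auth plugin once per workstation.
gcloud components install gke-gcloud-auth-plugin

# Fetch credentials; kubectl then authenticates through the plugin.
gcloud edge-cloud container clusters get-credentials my-cluster \
    --location=us-central1

# Documented escape hatch: revert to the legacy in-tree auth plugin.
export USE_GKE_GCLOUD_AUTH_PLUGIN=False
```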
The following issues have been resolved in this release of Distributed Cloud Edge:
Distributed Cloud Edge now supports dynamic IPAM for multi-networking configurations.
Disabling the Anthos VM Runtime virtual machine subsystem no longer removes the `network-controller-manager` container. You can now disable the subsystem without affecting Distributed Cloud Edge networking features.
This release of Distributed Cloud Edge contains the following known issues:
BGP sessions do not recover when the associated network interface goes down and then comes back up.
In the `CustomNetworkInterfaceConfig` resource, setting the `ifname` field to `gdcenet0` while the `masterInterface` field is also set to `gdcenet0` causes the resource to not apply to the cluster.
When configuring a `CustomNetworkInterfaceConfig` resource, you must explicitly set the MTU size to be no greater than the MTU size of its parent network interface. Otherwise, unpredictable behavior might result. A hedged example appears after this list.
If you reboot a node running a local control plane workload for a local control plane cluster, the cluster loses its GKE Connect connection to GKE Hub until the node fully starts up again. The workloads deployed on the cluster continue to run.
If you are creating a remote control plane cluster, creating a node pool using nodes that were previously part of a local control plane cluster might fail. If you encounter this issue, contact Google Support for assistance.
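A hedged sketch of the MTU constraint described above follows. The `ifname` and `masterInterface` field names appear in these release notes, while the `apiVersion` and the `mtu` field name are assumptions to check against the installed CRD.

```sh
# Hedged sketch of a CustomNetworkInterfaceConfig honoring the MTU rule.
kubectl apply -f - <<EOF
apiVersion: networking.gke.io/v1   # assumption; check the installed CRD
kind: CustomNetworkInterfaceConfig
metadata:
  name: vlan200
spec:
  ifname: vlan200            # must differ from masterInterface (see above)
  masterInterface: gdcenet0  # parent interface
  mtu: 1500                  # field name is an assumption; the value must
                             # not exceed the parent interface's MTU
EOF
```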
March 27, 2023
This is a patch release of Google Distributed Cloud Edge (version 1.3.1).
The following changes have been introduced in this release of Distributed Cloud Edge:
- The Kubernetes control plane has been updated to version 1.24.9-gke.2500.
- The Kubernetes container daemon (`containerd`) has been updated to version 1.6.6-gke.1.
- The Kubernetes worker node agent (`kubelet`) has been updated to version 1.24.7-gke.5.
The following issues have been resolved in this release of Distributed Cloud Edge:
- Errors in the `NodeSystemConfigUpdate` custom resource definition that shipped with Distributed Cloud Edge 1.3.0 have been corrected. The outputs of the affected status fields are now accurate.
This release of Distributed Cloud Edge contains the following known issues:
- If you have enabled the Anthos VM Runtime virtual machine subsystem, disabling it removes the `network-controller-manager` service and its container. This renders Distributed Cloud Edge networking inoperable. To prevent this, keep the Anthos VM Runtime virtual machine subsystem enabled on your Distributed Cloud Edge deployment. If the subsystem has been disabled, re-enable it by following the steps in Enable the Anthos VM Runtime support on Distributed Cloud Edge to restore Distributed Cloud Edge networking to an operable state.
February 21, 2023
This is a minor release of Google Distributed Cloud Edge (version 1.3.0).
The following new features have been introduced in this release of Google Distributed Cloud Edge:
- Distributed Cloud Edge now exposes the Edge Network API, which allows you to configure the networking components of Distributed Cloud Edge. For more information, see How it works and Distributed Cloud Edge networking features.
The following changes have been introduced in this release of Distributed Cloud Edge:
- Getting information about a Machine resource now returns the version of the Distributed Cloud Edge cluster stack.
- You can now connect Distributed Cloud Edge clusters to a Virtual Private Cloud network in a Cloud project other than your Distributed Cloud Edge cluster project.
- When creating a cross-project VPN connection, you can no longer specify a VPC project service account. Distributed Cloud Edge now uses your cluster project service account.
January 06, 2023
This is a patch release of Google Distributed Cloud Edge (version 1.2.2).
The following changes have been introduced in this release of Distributed Cloud Edge:
- The NVIDIA Tesla T4 GPU driver has been updated to version 515.65.01.
- The NVIDIA Tesla T4 GPU resource name has been changed from `nvidia.com/gpu` to `nvidia.com/gpu-pod-TESLA_T4`. If you have existing GPU-based container workloads, you must manually update their configuration to use the new resource name, as shown in the sketch after this list. For more information, see Configure a container to use GPU resources.
- The Kubernetes worker node agent (`kubelet`) has been updated to version 1.23.5-gke.1505.
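The sketch below shows the kind of change required: the resource name comes from this release note, while the surrounding Pod spec is a generic Kubernetes example with placeholder names.

```sh
# Generic Pod spec requesting the renamed T4 GPU resource.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload          # placeholder name
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda:11.4.2-base-ubuntu20.04   # placeholder image
    resources:
      limits:
        nvidia.com/gpu-pod-TESLA_T4: 1   # formerly nvidia.com/gpu
EOF
```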
November 07, 2022
This is a minor release of Google Distributed Cloud Edge (version 1.2.0).
The following new features have been introduced in this release of Google Distributed Cloud Edge:
- Anthos VM Runtime replaces KubeVirt in Google Distributed Cloud Edge starting with this release. To continue using your existing virtual machines, you must shut them down and back them up before your Distributed Cloud Edge deployment is upgraded to release 1.2.0, and then re-create them as described in Manage virtual machines.
- A new Google Distributed Cloud Edge hardware configuration is available. This new configuration supports GPU-based workloads that run on NVIDIA Tesla T4 GPUs in both containers and virtual machines. To order a GPU-enabled configuration, see Order Google Distributed Cloud Edge. To learn more about running workloads on GPUs, see Manage GPU workloads.
- Google Distributed Cloud Edge now supports the following networking features:
- (Preview) Cross-project VPN Connections. To learn more, see Manage cross-project VPN Connections.
- (Preview) MacVLAN driver support for creating secondary network interfaces for Pods running containerized workloads. The MacVLAN driver is not supported on Pods running virtual machines. To learn more, see Configure a secondary network interface on a Pod using the MacVLAN driver.
- (Preview) Multi-network support for creating secondary network interfaces for Pods. To learn more, see Configure a secondary network interface on a Pod using Distributed Cloud Edge multi-networking.
- (Preview) `ClusterDNS` resource. To learn more, see ClusterDNS resource.
The following changes have been introduced in this release of Google Distributed Cloud Edge:
- Google Distributed Cloud Edge now ships with the NVIDIA Tesla T4 GPU driver version 470.63.01.
- The Network Function operator feature of Google Distributed Cloud Edge has been updated as follows. To learn more, see Network Function operator.
  - The `NodeSystemConfigUpdate` resource now supports additional `sysctls` fields.
  - The `NodeSystemConfigUpdate` resource now supports fields for specifying the IP address lists and domain lists of private image registries.
  - The `CustomNetworkInterfaceConfig` resource no longer supports certain previously supported fields.
  - You can now scope both safe and unsafe `sysctls` parameters to a specific Pod or namespace using the `tuning` Container Networking Interface (CNI) plug-in. A hedged example appears after this list.
  - Webhook-level enforcement of valid field values is now in effect.
- The Kubernetes control plane has been updated to version 1.23.5-gke.1505.
- The `coredns` service has been updated to version 1.8.6-gke.0.
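To illustrate the `tuning` plug-in item above, here is a hedged sketch of a Multus-style `NetworkAttachmentDefinition` that chains the `tuning` plug-in after the interface plug-in to set a sysctl on Pods attached to a secondary network. The `sysctl` key is part of the standard CNI `tuning` plug-in; the interface name and subnet are placeholders, and the exact resource layout for Distributed Cloud Edge multi-networking should be checked against its documentation.

```sh
# Hedged sketch: scope a sysctl to Pods on a secondary network by
# chaining the standard CNI "tuning" plug-in after the interface plug-in.
kubectl apply -f - <<EOF
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tuned-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "tuned-net",
      "plugins": [
        { "type": "macvlan",
          "master": "gdcenet0",
          "ipam": { "type": "host-local", "subnet": "10.10.0.0/24" } },
        { "type": "tuning",
          "sysctl": { "net.core.somaxconn": "512" } }
      ]
    }
EOF
```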
The following issues have been resolved in this release of Google Distributed Cloud Edge:
- Google Distributed Cloud Edge nodes no longer become temporarily unresponsive due to excessive memory utilization.
September 23, 2022
This is a patch release of Google Distributed Cloud Edge (version 1.1.2).
The following changes have been introduced in this release of Google Distributed Cloud Edge:
- `cgroups` has been reverted to `v1` to retain compatibility with legacy workloads.
- The Kubernetes control plane has been updated to version 1.22.8-gke.204.
- The Kubernetes container daemon (`containerd`) has been updated to version 1.5.13-gke.0.
- The Kubernetes worker node agent (`kubelet`) has been updated to version 1.22.8-gke.200.
August 26, 2022
This is a patch release of Google Distributed Cloud Edge (version 1.1.1).
The following changes have been introduced in this release of Google Distributed Cloud Edge:
- Google Distributed Cloud Edge worker nodes have been updated to Kubernetes 1.22.
The following issues have been resolved in this release of Google Distributed Cloud Edge:
- The SR-IOV interface no longer fails to start after a Google Distributed Cloud Edge worker node has been rebooted.
July 14, 2022
This is a minor release of Google Distributed Cloud Edge (version 1.1.0).
The following changes have been introduced in this release of Google Distributed Cloud Edge:
- The Kubernetes control plane has been updated to version 1.22.
The following issues have been resolved in this release of Distributed Cloud Edge:
- The Kubernetes control plane no longer becomes intermittently unavailable during Google Distributed Cloud Edge software updates.
- VPN connectivity between non-Anthos gateway nodes and Google Cloud Platform now works reliably.
This release of Distributed Cloud Edge contains the following known issues:
- Garbage collection intermittently fails to clean up terminated Pods.
May 25, 2022
This is a patch release of Google Distributed Cloud Edge (version 1.0.2).
The following changes have been introduced in this release of Distributed Cloud Edge:
Configuring a maintenance window now controls the scheduling of software updates for the Kubernetes control plane and Kubernetes nodes.
You can now deploy KubeVirt virtual machines on Distributed Cloud Edge in unmanaged namespaces with support for the Containerized Data Importer (CDI) plug-in.
The following issues have been resolved in this release of Distributed Cloud Edge:
Intermittent VPN connection persistence after deletion has been resolved. You no longer need to manually check whether the VPN connection and its associated resources have been successfully deleted.
The `localpv-shared` Persistent Volume has been eliminated. You will no longer see this Persistent Volume on the filesystem of your Distributed Cloud Edge nodes.
This release of Distributed Cloud Edge contains the following known issues:
The NodePort Service is not supported. This release of Distributed Cloud Edge only supports the LoadBalancer and ClusterIP Kubernetes Services.
The Kubernetes control planes associated with Distributed Cloud Edge clusters can briefly go down during Distributed Cloud Edge software updates.
A large number of webhook calls might cause the Konnectivity proxy to temporarily fail.
The metrics agents running on Distributed Cloud Edge nodes can accumulate a backlog of events and stall, preventing the capture of further metrics.
March 30, 2022
This is the General Availability release of Google Distributed Cloud Edge (version 1.0.0).
For information about the latest known issues, see Known issues in this release of Distributed Cloud Edge.