Google Distributed Cloud Edge release notes

This page contains release notes for features and updates to Google Distributed Cloud Edge.

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.

To get the latest product updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/distributed-cloud-edge-release-notes.xml

June 28, 2024

Distributed Cloud connected 1.7.0

This is a minor release of Google Distributed Cloud connected (version 1.7.0).

The following new functionality has been introduced in this release of Google Distributed Cloud connected:

  • Customer-sourced hardware. You now have the option to purchase the Google Distributed Cloud connected hardware from a Google-partnered System Integrator (SI) and retain full ownership instead of leasing it from Google. For more information, contact Google Support.

  • Refreshed machine hardware. The server machines that make up Google Distributed Cloud connected racks have been updated to a more powerful hardware configuration. For more information, contact Google Support.

  • Flexible rack configuration. You can now order a Google Distributed Cloud connected rack with 3, 6, 9, or 12 server machines. For more information, contact Google Support.

  • IPv4/IPv6 dual-stack networking. Google Distributed Cloud connected now supports IPv6 networking in addition to IPv4 networking. For more information, see IPv4/IPv6 dual-stack networking.

  • Pod image caching. Google Distributed Cloud connected now supports local caching of Pod images. For more information, see Configure a Pod for image caching.

  • Kafka support. Google Distributed Cloud now supports collecting workload metrics with Apache Kafka. For more information, see Logs and metrics.

  • Cluster connection state indication. You can now check whether a cluster is connected to, disconnected from, or reconnected to and synchronizing with Google Cloud. For more information, see Survivability mode.

  • Cluster maintenance exclusion windows. You can now specify one or more maintenance exclusion windows for a cluster. This prevents Google from performing maintenance or software upgrades on the cluster during the specified times. For more information, see Understand software updates and maintenance windows.

  • GDC Hardware Management API. You can now place orders for Google Distributed Cloud connected hardware programmatically using the GDC Hardware Management API. For more information, see Google Distributed Cloud connected CLI and API reference. This is a Preview-level feature.
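Of the features above, IPv4/IPv6 dual-stack can be exercised through standard Kubernetes dual-stack fields. A minimal sketch, assuming the cluster was provisioned with dual-stack networking and that Google Distributed Cloud connected follows the upstream ipFamilyPolicy convention (the workload name and ports are hypothetical); see the linked IPv4/IPv6 dual-stack networking page for the supported configuration:

```shell
# Sketch: request a dual-stack Service on a 1.7.0 dual-stack cluster.
# "my-app" and its ports are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  ipFamilyPolicy: PreferDualStack   # upstream Kubernetes dual-stack field
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF
```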

The following changes to existing functionality have been introduced in this release of Google Distributed Cloud connected:

  • Worker node software upgrades are now staggered. Google Distributed Cloud connected now upgrades worker node software in stages instead of all at once, so your workloads continue running on some nodes while others are upgraded. You can specify the number of worker nodes that can be taken down simultaneously for a software upgrade. For more information, see Software update staggering.

  • GPU support is now automatically enabled. You no longer have to modify the VMRuntime resource to enable GPU support on Google Distributed Cloud connected. GPU support is now automatically enabled if a GPU is detected on a Google Distributed Cloud connected machine.

  • Google Distributed Cloud connected component updates:

    • GKE on Bare Metal. This component has been updated from version 1.16.1 to version 1.28.500.
    • Kubernetes control plane. This component has been updated from version 1.27.9 to version 1.28.8.
    • Symcloud Storage. This component has been updated from version 5.4.6 to version 5.4.8.

The following functionality has been deprecated in this release of Google Distributed Cloud connected:

  • Cloud control plane cluster support. As of this release, Google Distributed Cloud connected no longer supports Cloud control plane clusters. Local control plane clusters are now the only supported cluster type.

  • Raw block storage for virtual machine workloads. As of this release, you can no longer provision virtual machine workloads with raw block storage. Symcloud Storage is now the only supported storage type for virtual machine workloads.

The following issues have been resolved in this release of Google Distributed Cloud connected:

  • Symcloud Storage volume clean-up now functions correctly. Single node failures, such as power loss or network disconnection, no longer cause rescheduling failures for virtual machines that use Symcloud Storage volumes. When a node fails, virtual machines are automatically rescheduled onto another node and then scheduled back onto the original node once that node returns to operation.

  • Virtual machines no longer enter a stuck state when node network connections are intermittent. Virtual machines no longer get stuck in container creation state when their network connections repeatedly disconnect and reconnect. When all three nodes in a Google Distributed Cloud connected server group regain network connectivity, the affected virtual machines are automatically rescheduled back onto their original nodes.

  • Virtual machine restore operations now complete successfully. Problems with taking subsequent snapshots of a virtual machine after the initial one, which caused virtual machine restore operations to fail, have been resolved.

  • Virtual machine heartbeat has been tuned to increase failover resilience. Occasionally, when a node failed, virtual machines on other nodes in the cluster would fail multiple successive heartbeats to the Kubernetes control plane that ran on the failed node. The heartbeat configuration has been tuned to mitigate this and increase failover resilience.

  • Intermittent SR-IOV device availability on large deployments has been resolved. SR-IOV devices are no longer intermittently unavailable on large, long-uptime deployments of Google Distributed Cloud connected after creating SR-IOV network node policies.

This release of Google Distributed Cloud connected contains the following known issues:

  • Refreshed Google Distributed Cloud connected hardware requires Google Distributed Cloud connected software version 1.7.0 or later. The refreshed Google Distributed Cloud connected hardware does not support versions of Google Distributed Cloud connected prior to release 1.7.0.

  • Virtual machine workloads might temporarily go down when upgrading Google Distributed Cloud connected software to release 1.7.0. Affected virtual machine workloads come back up and return to a healthy state once the Google Distributed Cloud connected software upgrade completes.

  • Cluster upgrades to software release 1.7.0 might fail with an ABM upgrade timed out error. Under certain conditions, if the GKE token expires while a cluster upgrade is in progress, the upgrade fails with an ABM upgrade timed out error, and a missing gkehub.memberships.update permission is recorded in the logs. If you encounter this issue, contact Google Support.

  • Storage operations hang when volume replicas are deleted from a cluster without removing the corresponding Symcloud Storage persistent volume intent. If you delete the volume replicas for a Symcloud Storage persistent volume but do not remove the corresponding intent, TCMU devices on the worker node hang, causing storage operations to stall indefinitely. This can affect both your workload data availability and core system functionality of Google Distributed Cloud connected. To prevent this, always remove a Symcloud Storage persistent volume before deleting its associated volume replicas.

  • Virtual machines might not get scheduled onto nodes after their network has been partitioned. When you partition a network, some virtual machines using that network might not get scheduled back onto their node after the node reconnects to the network. To work around this issue, restart the affected virtual machines or contact Google Support.

  • Cluster deletion can fail due to stale Symcloud Storage data. When attempting to delete a cluster during disaster recovery or cluster reset, the deletion might fail due to the corresponding Symcloud Storage volumes not having been cleaned up. To remedy this issue, contact Google Support.

  • Virtual machine management can fail after a node has been powered down for an extended time. If you power down your Google Distributed Cloud connected machines for an extended period of time, you might not be able to manage the virtual machines scheduled on the corresponding nodes after you power the machines back up, even though those virtual machine workloads continue to run. To remedy this issue, contact Google Support.

  • Nodes can get stuck in Ready,SchedulingDisabled state after applying configuration changes. Applying or deleting the NodeSystemConfigUpdate or SriovNetworkNodePolicy resources can result in a node that's stuck in the Ready,SchedulingDisabled state after it reboots. To resolve this issue, see Troubleshoot Google Distributed Cloud connected.

  • The Kubernetes API server might return 404 errors when attempting to access virt-api endpoints. To work around this issue, contact Google Support.

  • Changes required to VMRuntime resource before upgrading to Google Distributed Cloud connected version 1.7.0. To ensure your existing virtual machine workloads successfully upgrade to Google Distributed Cloud connected version 1.7.0, you must modify the VMRuntime resource before upgrading the cluster as described in Upgrade existing virtual machines to Google Distributed Cloud connected version 1.7.0.
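For the Ready,SchedulingDisabled known issue above, the generic Kubernetes mitigation is to uncordon the affected node. This is only a sketch; the linked troubleshooting page remains the supported procedure:

```shell
# List nodes; a stuck node reports STATUS "Ready,SchedulingDisabled".
kubectl get nodes

# Generic Kubernetes step to make the node schedulable again
# (NODE_NAME is a placeholder; verify against the official
# troubleshooting guide before using in production).
kubectl uncordon NODE_NAME
```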

March 14, 2024

1.6.1

This is a patch release of Google Distributed Cloud connected (version 1.6.1).

The following new features have been introduced in this release of Google Distributed Cloud connected:

  • Multi-rack deployments. Distributed Cloud Edge now supports aggregating the resources of multiple Distributed Cloud Edge Racks into a single zone. You can now create clusters that span nodes across multiple Distributed Cloud Edge Racks. A single multi-rack deployment supports one Distributed Cloud Edge Base Rack and up to 10 Distributed Cloud Edge Standalone Racks. For more information, see How Distributed Cloud Edge works.

  • Distributed Cloud Edge Base Rack. We are now shipping a new form factor of Distributed Cloud Edge Rack hardware, the Distributed Cloud Edge Base Rack. This form factor pairs the existing Distributed Cloud Edge Standalone Rack hardware with four network switches that aggregate network traffic from up to 10 Distributed Cloud Edge Standalone Racks.

  • Prometheus integration. You can now use the Prometheus metrics solution to collect Distributed Cloud Edge metrics and workload metrics on local control plane clusters running in survivability mode. For more information, see Collect metrics with Prometheus.

  • Node labels. You can now assign unique labels to individual nodes when creating a node pool. For more information, see Create a node pool.
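Node labels are assigned when the node pool is created. A hedged sketch with gcloud, in which the --node-labels flag and all placeholder values are assumptions; see Create a node pool for the authoritative syntax:

```shell
# Sketch: create a node pool whose nodes carry custom labels.
# CLUSTER_NAME, POOL_NAME, REGION, NODE_COUNT, and the labels
# are placeholders.
gcloud edge-cloud container node-pools create POOL_NAME \
  --cluster=CLUSTER_NAME \
  --location=REGION \
  --node-count=NODE_COUNT \
  --node-labels=env=edge-prod,tier=metrics
```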

The following changes have been introduced in this release of Google Distributed Cloud connected:

  • Cloud control plane clusters can no longer be created in subsequent releases of Distributed Cloud Edge. Distributed Cloud Edge version 1.6.1 is the last release of Distributed Cloud Edge in which you can create Cloud control plane clusters. Creation of cloud control plane clusters will be disabled in the next minor release of Distributed Cloud Edge. Existing cloud control plane clusters will continue to run workloads.

  • Release channel requirement for specifying cluster software versions. If you want to specify a Distributed Cloud Edge software version when creating a cluster, you must now set the cluster's release channel to NONE. If you do not specify a release channel or explicitly set it to REGULAR, the cluster automatically upgrades to the latest version of Distributed Cloud Edge software and specifying a software version is not possible.
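The release-channel requirement above means that pinning a software version at cluster creation looks roughly like the following. The flag names are assumptions modeled on the gcloud edge-cloud container surface; see the cluster creation documentation for exact syntax:

```shell
# Sketch: create a cluster pinned to a specific software version.
# The --version flag is honored only when the release channel is
# explicitly set to NONE; all values are placeholders.
gcloud edge-cloud container clusters create CLUSTER_NAME \
  --location=REGION \
  --release-channel=NONE \
  --version=1.6.1
```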

This release of Google Distributed Cloud connected contains the following known issues:

  • Nodes can get stuck in Ready,SchedulingDisabled state after applying configuration changes. Applying or deleting the NodeSystemConfigUpdate or SriovNetworkNodePolicy resources can result in a node that's stuck in the Ready,SchedulingDisabled state after it reboots. To resolve this issue, see Troubleshoot Distributed Cloud Edge.

  • Deleting clusters and node pools fails when a node is not ready. If a node in a cluster or node pool that you want to delete is in the NotReady state, the deletion can fail. Contact Google Support to remedy this condition.

  • Nodes using Symcloud Storage report the file system as read-only after reboot. When multiple nodes that use Symcloud Storage reboot at once in a cluster, they can incorrectly mark the file system as read-only. Contact Google Support to remedy this condition.

Distributed Cloud Edge management software

Google Distributed Cloud Edge management software has been updated.

December 19, 2023

1.6.0

This is a minor release of Google Distributed Cloud Edge (version 1.6.0).

The following new features have been introduced in this release of Distributed Cloud Edge:

  • Distributed Cloud Edge Servers. You can now order Distributed Cloud Edge in sets of three clustered server machines without ToR switches in addition to the existing fully configured rack offering. These three-machine clusters connect directly to your local network.

  • Configuration 8 standalone server machines. Distributed Cloud Edge now offers a new hardware option, the Configuration 8 standalone server machine. You can order this hardware option in sets of three. Each Configuration 8 machine offers 16 processor cores (32 vCPUs), 64GB of RAM, and 1.6TB of SSD storage. The machine is housed in a half-depth 1U rackmount chassis.

  • Symcloud Storage support for virtual machine workloads. Distributed Cloud Edge now supports configuring virtual machine workloads with Symcloud Storage.

  • Locally cached system images. Distributed Cloud Edge clusters can now access system images while disconnected from Google Cloud. This allows Pods to be rescheduled onto another node using locally cached system images.

  • Manage clusters using the kubectl tool while disconnected from Google Cloud. You can now contact Google Cloud support to request emergency credentials to authenticate to a Distributed Cloud Edge cluster and manage it using the kubectl command-line tool from your local network while disconnected from Google Cloud.

The following changes have been introduced in this release of Distributed Cloud Edge:

  • Reduced outbound network traffic. Distributed Cloud Edge has reduced the amount of outbound data it sends to Google Cloud. For a typical 3-machine deployment, outbound traffic bandwidth is now below 4Mbps.

  • Nodes can now rejoin clusters after rebooting while disconnected from Google Cloud. When creating a local control plane cluster, you can now configure it so that when a node reboots while your Distributed Cloud Edge deployment is disconnected from Google Cloud, the node rejoins its cluster after the reboot is complete and resumes running its designated workloads. For more information, see Create a cluster. This is a preview-level feature.

November 03, 2023

1.5.1

This is a patch release of Google Distributed Cloud Edge (version 1.5.1).

The following changes have been introduced in this release of Distributed Cloud Edge:

  • Cluster software version upgrades for local control plane clusters. You can now trigger a software version upgrade on a local control plane cluster to a specific version of Distributed Cloud Edge software, starting with version 1.5.1. This feature is not available for Cloud control plane clusters. For instructions, see Upgrade the software version on a local control plane cluster.

  • Cluster software version pinning for local control plane clusters. You can now pin a local control plane cluster to a specific version of Distributed Cloud Edge software, starting with version 1.5.0. A cluster pinned to a specific version does not automatically upgrade when new Distributed Cloud Edge software becomes available. This feature is not available for Cloud control plane clusters. For instructions, see Create a cluster.

  • Cluster status. The gcloud edge-cloud container clusters describe command now returns the operational status of a Distributed Cloud Edge cluster.
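The operational status can be read directly from the describe output. A sketch, in which the status field path is an assumption:

```shell
# Sketch: read a cluster's operational status from the describe payload.
gcloud edge-cloud container clusters describe CLUSTER_NAME \
  --location=REGION \
  --format="value(status)"
```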

The following issues have been resolved in this release of Distributed Cloud Edge:

  • CVE-2022-40982 "Downfall" remediation. The CVE-2022-40982 vulnerability, also known as "Downfall," has been patched.

This release of Distributed Cloud Edge contains the following known issues:

  • Cloud SDK version 450.0.0 or later is required. You must upgrade your Cloud SDK to version 450.0.0 or later to create local control plane clusters with Distributed Cloud Edge software version 1.5.0. Otherwise, creating such clusters will fail.

  • Node and machine labels are not applied when upgrading to Distributed Cloud Edge version 1.5.1. When upgrading to Distributed Cloud Edge version 1.5.1, system-required labels might not be applied to nodes and machines within existing node pools. To work around this issue, either modify the affected node pool to update its corresponding resource definition, or delete and re-add the affected nodes. For instructions, see Create and manage node pools.

September 07, 2023

1.5.0

This is a minor release of Google Distributed Cloud Edge (version 1.5.0).

The following features have been introduced in this release of Distributed Cloud Edge:

  • Bastion host support. Distributed Cloud Edge now allows you to set up one or more bastion host virtual machines. The bastion host feature allows Google support engineers to connect to your Distributed Cloud Edge deployment and work with you to diagnose and resolve issues. For more information, see Configure a bastion host. This is a preview-level feature.

  • Selectable cluster software versions. You now have the option to create a cluster running a specific version of Distributed Cloud Edge software, starting with version 1.5.0. For more information, see Create and manage clusters. This is a preview-level feature.

  • Container image registry access over secondary networks. Distributed Cloud Edge now allows you to specify the network interface in the spec.containerRuntimeDNSConfig field of the NodeSystemConfigUpdate resource. This allows you to specify a container image registry IP/domain pair for a network interface other than the primary. For more information, see NodeSystemConfigUpdate resource. This is a preview-level feature.

  • CMEK support for local control plane nodes. You can now configure Cloud KMS integration for storage on nodes running local control planes for Distributed Cloud Edge clusters. For more information, see Enable support for customer-managed encryption keys (CMEK) for local storage.

The following changes have been introduced in this release of Distributed Cloud Edge:

  • Survivability mode is now generally available. For more information, see Distributed Cloud Edge survivability mode. After your Distributed Cloud Edge deployment has been upgraded from version 1.4.0 to version 1.5.0, you must manually delete and recreate all local control plane clusters you have created with Distributed Cloud Edge version 1.4.0 or 1.4.1. Otherwise, unexpected behavior and data loss can occur. Clusters configured to use a cloud control plane continue to run normally after upgrading Distributed Cloud Edge to version 1.5.0.

  • Symcloud Storage integration is now generally available. For more information, see Configure Distributed Cloud Edge for Symcloud Storage.

  • Local control plane clusters now support virtual machines and GPU workloads. For more information, see Manage virtual machines and Manage GPU workloads.

  • Loadable SCTP kernel modules. Distributed Cloud Edge now configures the sctp kernel module as loadable. This allows you to load custom networking stacks into the kernel's user space. For more information, see SCTP kernel modules. This is a preview-level feature.
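Because the sctp module is now loadable, the standard Linux module workflow applies on a node. A sketch using generic Linux tooling rather than any Distributed Cloud Edge-specific interface (requires root on the node):

```shell
# Load the SCTP kernel module, then confirm it is present.
sudo modprobe sctp
lsmod | grep sctp
```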

The following issues have been resolved in this release of Distributed Cloud Edge:

  • When creating a Cloud control plane cluster, creating a node pool that includes nodes that were previously part of a local control plane cluster no longer fails.

  • BGP sessions now properly recover when the associated network interface goes down and then comes back up.

This release of Distributed Cloud Edge contains the following known issues:

  • When creating a local control plane cluster, Distributed Cloud Edge instantiates dummy BGPPeer and BGPLoadBalancer resources. You can ignore these resources.

  • Distributed Cloud Edge does not support BGP peering to multiple VLANs within the same virtual router. You must set up a separate virtual router with a unique loopback IP address for each affected VLAN to allow concurrent BGP peering sessions.

June 30, 2023

1.4.1

This is a patch release of Google Distributed Cloud Edge (version 1.4.1).

The following changes have been introduced in this release of Distributed Cloud Edge:

  • The IP addresses of local control plane endpoints are now accessible on your local network. You must ensure that your local network's security configuration prevents external access to those IP addresses.

The following issues have been resolved in this release of Distributed Cloud Edge:

  • Resource utilization metrics that were previously not exported to Cloud Monitoring are now exported as expected.
  • The status of the kube-apiserver mirrored Pods is no longer erroneously reported as "Pending."

May 19, 2023

1.4.0

This is a minor release of Google Distributed Cloud Edge (version 1.4.0).

The following features have been introduced in this release of Distributed Cloud Edge:

  • Survivability mode. Distributed Cloud Edge now allows you to create clusters with the Kubernetes control plane running locally on your Distributed Cloud Edge hardware. This improves the reliability of Distributed Cloud Edge when your connection to Google Cloud is intermittent. This is a Public Preview feature. For more information, see Distributed Cloud Edge survivability mode.

  • Symcloud Storage integration. You can now integrate Distributed Cloud Edge with Rakuten Symcloud Storage, a third-party storage abstraction solution that allows Pods to access local storage on different Distributed Cloud Edge nodes. This is a Public Preview feature. For more information, see Configure Distributed Cloud Edge for Symcloud Storage.

  • Enhanced rNDC security. Distributed Cloud Edge has replaced the bond0 interface with the gdcenet0 interface, which lets you use the physical management network interface card for your application workloads while maintaining complete separation from Distributed Cloud Edge control and management traffic. You must manually reconfigure any existing network resources that reference the bond0 interface to use the gdcenet0 interface. For more information, see Upgrade CustomNetworkInterfaceConfig resources from Distributed Cloud Edge 1.3.0 to 1.4.0 and Upgrade NetworkAttachmentDefinition resources to Distributed Cloud Edge 1.4.0.

  • Cloud Router reuse for VPN connections. When creating a VPN connection, Distributed Cloud Edge now automatically reuses any Cloud Router resource it has automatically created for a VPN connection. You can also specify a custom Cloud Router resource when creating a VPN connection. Existing VPN connections are not affected. For more information, see Manage VPN connections.

The following changes have been introduced in this release of Distributed Cloud Edge:

  • The cross-project VPN connection functionality is now generally available. For more information, see Manage cross-project VPN connections.

  • The default behavior of the gcloud edge-cloud container clusters get-credentials command has changed. The command now requires the gke-gcloud-auth-plugin plugin, which replaces the legacy in-tree authentication plugin. For more information about the gke-gcloud-auth-plugin plugin, see Important changes to Kubectl authentication are coming in GKE v1.26. You can revert to the legacy in-tree plugin by setting the USE_GKE_GCLOUD_AUTH_PLUGIN environment variable to False.

  • The Kubernetes control plane has been updated to version 1.25.5-gke.1001 for all clusters.

  • The Kubernetes container daemon (containerd) has been updated to version 1.6.6-gke.1 for remote control plane clusters and to 1.6.12-gke.0 for survivability mode clusters.

  • The Kubernetes worker node agent (kubelet) has been updated to version 1.24.7-gke.1700 for remote control plane clusters and 1.25.5-gke.1001 for local control plane clusters.

  • Distributed Cloud Edge now supports the ConfigSync feature of Anthos Config Management. Distributed Cloud Edge does not support any other Anthos features.
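The authentication change above follows the broader GKE migration to gke-gcloud-auth-plugin. A sketch of both options; the get-credentials invocation is modeled on the gcloud edge-cloud container surface, and the environment variable is the one used by GKE's kubectl authentication migration:

```shell
# Install the new auth plugin and fetch cluster credentials.
gcloud components install gke-gcloud-auth-plugin
gcloud edge-cloud container clusters get-credentials CLUSTER_NAME \
  --location=REGION

# Temporary opt-out: revert kubectl to the legacy in-tree plugin.
export USE_GKE_GCLOUD_AUTH_PLUGIN=False
```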

The following issues have been resolved in this release of Distributed Cloud Edge:

  • Distributed Cloud Edge now supports dynamic IPAM for multi-networking configurations.

  • Disabling the Anthos VM Runtime virtual machine subsystem no longer removes the network-controller-manager container. You can now disable the subsystem without affecting Distributed Cloud Edge networking features.

This release of Distributed Cloud Edge contains the following known issues:

  • BGP sessions do not recover when the associated network interface goes down and then comes back up.

  • In the CustomNetworkInterfaceConfig resource, setting the ifname field to gdcenet0 while the masterInterface field is also set to gdcenet0 causes the resource to fail to apply to the cluster.

  • When configuring a CustomNetworkInterfaceConfig resource, you must explicitly set the MTU size to be no greater than the MTU size of its parent network interface. Otherwise, unpredictable behavior might result.

  • If you reboot a node running a local control plane workload for a local control plane cluster, the cluster loses its GKEConnect connection to GKEHub until the node fully starts up again. The workloads deployed on the cluster continue to run.

  • If you are creating a remote control plane cluster, creating a node pool using nodes that were previously part of a local control plane cluster might fail. If you encounter this issue, contact Google Support for assistance.
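The MTU constraint in the known issues above applies when defining a CustomNetworkInterfaceConfig. An illustrative sketch: only the ifname and masterInterface fields appear in these notes, so the API group, the mtu field, and all values are assumptions:

```shell
# Sketch: a child interface whose MTU does not exceed its parent's.
kubectl apply -f - <<'EOF'
apiVersion: networking.gke.io/v1      # assumed API group
kind: CustomNetworkInterfaceConfig
metadata:
  name: workload-vlan
spec:
  ifname: workload0          # child interface; must not be gdcenet0
  masterInterface: gdcenet0  # parent interface
  mtu: 1500                  # assumed field; keep at or below the parent MTU
EOF
```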

March 27, 2023

1.3.1

This is a patch release of Google Distributed Cloud Edge (version 1.3.1).

The following changes have been introduced in this release of Distributed Cloud Edge:

  • The Kubernetes control plane has been updated to version 1.24.9-gke.2500.
  • The Kubernetes container daemon (containerd) has been updated to version 1.6.6-gke.1.
  • The Kubernetes worker node agent (kubelet) has been updated to version 1.24.7-gke.5.

The following issues have been resolved in this release of Distributed Cloud Edge:

  • Errors in the NodeSystemConfigUpdate custom resource definition that shipped with Distributed Cloud Edge 1.3.0 have been corrected. The outputs of the affected status fields are now accurate.

This release of Distributed Cloud Edge contains the following known issues:

  • If you have enabled the Anthos VM Runtime virtual machine subsystem, disabling it removes the network-controller-manager service and its container. This renders Distributed Cloud Edge networking inoperable. To prevent this, keep the Anthos VM Runtime virtual machine subsystem enabled on your Distributed Cloud Edge deployment. If the subsystem has been disabled, re-enable it by following the steps in Enable the Anthos VM Runtime support on Distributed Cloud Edge to restore Distributed Cloud Edge networking to an operable state.

February 21, 2023

1.3.0

This is a minor release of Google Distributed Cloud Edge (version 1.3.0).

The following changes have been introduced in this release of Distributed Cloud Edge:

  • Getting information about a Machine resource now returns the version of the Distributed Cloud Edge cluster stack.
  • You can now connect Distributed Cloud Edge clusters to a Virtual Private Cloud network in a Cloud project other than your Distributed Cloud Edge cluster project.
  • When creating a cross-project VPN connection, you can no longer specify a VPC project service account. Distributed Cloud Edge now uses your cluster project service account.

January 06, 2023

1.2.2

This is a patch release of Google Distributed Cloud Edge (version 1.2.2).

The following changes have been introduced in this release of Distributed Cloud Edge:

  • The NVIDIA Tesla T4 GPU driver has been updated to version 515.65.01.
  • The NVIDIA Tesla T4 GPU resource name has been changed from nvidia.com/gpu to nvidia.com/gpu-pod-TESLA_T4. If you have existing GPU-based container workloads, you must manually update their configuration to use the new resource name. For more information, see Configure a container to use GPU resources.
  • The Kubernetes worker node agent (kubelet) has been updated to version 1.23.5-gke.1505.
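The resource-name change above means existing GPU workload specs must be updated in place. A sketch using standard Kubernetes resource limits (the Pod name and image are placeholders):

```shell
# Sketch: request the T4 GPU under its new extended-resource name.
# Old name: nvidia.com/gpu  ->  New name: nvidia.com/gpu-pod-TESLA_T4
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
    - name: cuda-app
      image: nvidia/cuda:11.4.3-base-ubuntu20.04
      resources:
        limits:
          nvidia.com/gpu-pod-TESLA_T4: 1
EOF
```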

November 07, 2022

1.2.0

This is a minor release of Google Distributed Cloud Edge (version 1.2.0).

The following changes have been introduced in this release of Google Distributed Cloud Edge:

  • Google Distributed Cloud Edge now ships with the NVIDIA Tesla T4 GPU driver version 470.63.01.
  • The Network Function operator feature of Google Distributed Cloud Edge has been updated as follows. To learn more, see Network Function operator.
    • The NodeSystemConfigUpdate resource now supports additional sysctls fields.
    • The NodeSystemConfigUpdate resource now supports fields for specifying the IP address lists and domain lists of private image registries.
    • The CustomNetworkInterfaceConfig resource no longer supports certain previously supported fields.
    • You can now scope both safe and unsafe sysctls parameters to a specific Pod or namespace using the tuning Container Networking Interface (CNI) plug-in.
    • Webhook-level enforcement of valid field values is now in effect.
  • The Kubernetes control plane has been updated to version 1.23.5-gke.1505.
  • The coredns service has been updated to version 1.8.6-gke.0.
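The sysctls-scoping item above matches the upstream "tuning" chained CNI plug-in, which applies sysctls to Pods attached to a given network. A sketch wrapped in a NetworkAttachmentDefinition; the parent interface, namespace, and chosen sysctl are assumptions:

```shell
# Sketch: scope a sysctl to Pods that attach to this secondary network.
kubectl apply -f - <<'EOF'
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tuned-net
  namespace: my-namespace        # placeholder namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "macvlan", "master": "PARENT_IFACE",
          "ipam": { "type": "dhcp" } },
        { "type": "tuning",
          "sysctl": { "net.core.somaxconn": "4096" } }
      ]
    }
EOF
```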

The following issues have been resolved in this release of Google Distributed Cloud Edge:

  • Google Distributed Cloud Edge nodes no longer become temporarily unresponsive due to excessive memory utilization.

September 23, 2022

1.1.2

This is a patch release of Google Distributed Cloud Edge (version 1.1.2).

The following changes have been introduced in this release of Google Distributed Cloud Edge:

  • cgroups has been reverted to v1 to retain compatibility with legacy workloads.
  • The Kubernetes control plane has been updated to version 1.22.8-gke.204.
  • The Kubernetes container daemon (containerd) has been updated to version 1.5.13-gke.0.
  • The Kubernetes worker node agent (kubelet) has been updated to version 1.22.8-gke.200.

August 26, 2022

1.1.1

This is a patch release of Google Distributed Cloud Edge (version 1.1.1).

The following changes have been introduced in this release of Google Distributed Cloud Edge:

  • Google Distributed Cloud Edge worker nodes have been updated to Kubernetes 1.22.

The following issues have been resolved in this release of Google Distributed Cloud Edge:

  • The SR-IOV interface no longer fails to start after a Google Distributed Cloud Edge worker node has been rebooted.

July 14, 2022

1.1.0

This is a minor release of Google Distributed Cloud Edge (version 1.1.0).

The following changes have been introduced in this release of Google Distributed Cloud Edge:

  • The Kubernetes control plane has been updated to version 1.22.

The following issues have been resolved in this release of Distributed Cloud Edge:

  • The Kubernetes control plane no longer becomes intermittently unavailable during Google Distributed Cloud Edge software updates.
  • VPN connectivity between non-Anthos gateway nodes and Google Cloud Platform now works reliably.

This release of Distributed Cloud Edge contains the following known issues:

  • Garbage collection intermittently fails to clean up terminated Pods.

May 25, 2022

1.0.2

This is a patch release of Google Distributed Cloud Edge (version 1.0.2).

The following changes have been introduced in this release of Distributed Cloud Edge:

  • Configuring a maintenance window now controls the scheduling of software updates for the Kubernetes control plane and Kubernetes nodes.

  • You can now deploy KubeVirt virtual machines on Distributed Cloud Edge in unmanaged namespaces with support for the Containerized Data Importer (CDI) plug-in.

The following issues have been resolved in this release of Distributed Cloud Edge:

  • Intermittent VPN connection persistence after deletion has been resolved. You no longer need to manually check whether the VPN connection and its associated resources have been successfully deleted.

  • The localpv-shared Persistent Volume has been eliminated. You will no longer see this Persistent Volume on the filesystem of your Distributed Cloud Edge nodes.

This release of Distributed Cloud Edge contains the following known issues:

  • The NodePort Service is not supported. This release of Distributed Cloud Edge supports only the LoadBalancer and ClusterIP Kubernetes Service types.

  • The Kubernetes control planes associated with Distributed Cloud Edge clusters can briefly go down during Distributed Cloud Edge software updates.

  • A large number of webhook calls might cause the Konnectivity proxy to temporarily fail.

  • The metrics agents running on Distributed Cloud Edge nodes can accumulate a backlog of events and stall, preventing the capture of further metrics.

March 30, 2022

1.0

This is the General Availability release of Google Distributed Cloud Edge (version 1.0.0).

For information about the latest known issues, see Known issues in this release of Distributed Cloud Edge.