Google Distributed Cloud air-gapped 1.9.2 release notes

March 31, 2023 [GDCH 1.9.2]


Google Distributed Cloud air-gapped 1.9.2 is now released.

See the product overview to learn about the features of Google Distributed Cloud air-gapped.


Updated GKE on Bare Metal version to 1.14.3-gke.8 to apply the latest security patches and important updates.

See GKE on Bare Metal 1.14.3 release notes for details.


Between versions 1070 and 1072, the NVIDIA driver was not signed to match the kernel, so the driver could not load when Secure Boot was enabled. Updated the Canonical Ubuntu OS image to version 20230309, which includes a matching signed NVIDIA driver, to address the problem.


In the Firewall operable component, GDC 1.9.2 adds a default authentication profile for all non-emergency access administrators. This profile enforces a limit of three (3) consecutive failed login attempts before the account is locked. Only an emergency access admin account can restore access.
This change is a requirement from the Security Technical Implementation Guide (STIG) V-228639.
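The lockout behavior described above can be sketched as follows. This is a conceptual illustration only, assuming a simple consecutive-failure counter; the class and method names are hypothetical and are not part of any GDC API.

```python
# Conceptual sketch of the lockout policy: three consecutive failed
# login attempts lock the account, and only an emergency access
# administrator can restore access. Names here are illustrative.

MAX_CONSECUTIVE_FAILURES = 3

class AuthProfile:
    def __init__(self):
        self.failed_attempts = 0
        self.locked = False

    def record_login(self, success: bool) -> bool:
        """Record a login attempt; return True if the account remains usable."""
        if self.locked:
            return False
        if success:
            self.failed_attempts = 0  # a successful login resets the counter
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_CONSECUTIVE_FAILURES:
            self.locked = True  # only an emergency access admin can restore
        return not self.locked

    def emergency_unlock(self):
        """Restore access; reserved for the emergency access admin account."""
        self.locked = False
        self.failed_attempts = 0
```

Note that the counter tracks consecutive failures: any successful login before the third failure resets it.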


In the Firewall operable component, GDC 1.9.2 implements a default deny rule for traffic flows, and only system-required flows are explicitly allowed. You can use the Firewall API to create a firewall policy to allow additional flows.

For information about using the Firewall API to create a firewall policy, see:



In Google Distributed Cloud air-gapped 1.9.2, the Node and Operating System component uses the VM's auto-restart-on-configuration feature to resolve a potential failure to use a new VM disk for a VM after KVM is stopped and restarted during a cluster upgrade.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where role-based access control (RBAC) and schema settings in the VM manager prevent users from starting VM backup and restore processes.


Google Distributed Cloud air-gapped 1.9.0 has a known issue where remote server management software is occasionally unable to retrieve the key from the HSM.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where using the standard-block storage class might prevent virtual machines (VMs) from starting or restarting.


During an upgrade from Google Distributed Cloud air-gapped 1.9.1 to 1.9.2, operations against Artifact Registry might fail with Unauthorized errors.


Logs cannot be retrieved for a pod due to a missing image.


A server is stuck in the available state, and its encryption configuration job keeps failing due to an SSH key error.


Provisioning a user cluster through the GUI gets stuck.


At bootstrap, Google Distributed Cloud air-gapped 1.9.2 fails to return metrics from Cortex.


Google Distributed Cloud air-gapped 1.9.2 has a known issue during the Node OS upgrade where the server is stuck in deprovisioning because the boot.ipxe URL is invalid.


Google Distributed Cloud air-gapped 1.9.2 has a known issue during the Node OS upgrade where a node fails the machine-init job.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where the upgrade from 1.9.0 to 1.9.1 is blocked because the ods-fleet add-on fails to install.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where the vm-runtime add-on is stuck during the upgrade of the gpu-org-system-cluster from 1.9.1 to 1.9.2 because the kubevm-gpu-driver-daemonset pods are in the CrashLoopBackOff state.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where a user cluster does not become ready in time.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where an OrganizationUpgrade status does not get updated.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where the UI lets you select an incompatible combination of GPU and VM type.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where VMs with memory greater than 32 GB require a memory override due to an incorrect QEMU overhead calculation.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where the kube-state-metrics deployment crash loops.


Google Distributed Cloud air-gapped 1.9.2 has a known issue where alerts in organization system clusters don't reach the ticketing system.