Google Distributed Cloud air-gapped 1.12.2 release notes

April 5, 2024


Google Distributed Cloud (GDC) air-gapped 1.12.2 is available.
See the product overview to learn about the features of Distributed Cloud.


Updated the Rocky Linux OS image version to 20240306 to apply the latest security patches and important updates, fixing multiple security vulnerabilities.


Fixed multiple container image security vulnerabilities.


Fixed a vulnerability with Microsoft Visual Studio Code in Operations Suite Infrastructure (OI) by updating Microsoft Visual Studio Code to version 1.86.2.


Fixed multiple vulnerabilities with Google Chrome in Operations Suite Infrastructure (OI) by updating to version 122.0.6261.69.


Updated the gcr.io/distroless/libc base image to digest sha256:4f834e207f2721977094aeec4c9daee7032c5daec2083c0be97760f4306e4f88 to apply the latest security patches and important updates.


Fixed the vulnerabilities related to the prebuilt Ubuntu OS.


The following known issues apply to this release:

Cluster management:

  • User clusters with Kubernetes version 1.27.x might have node pools that fail to initialize.
  • The required IPv4 PodCIDR is not available.

File and block storage:

  • When upgrading from 1.11.1 to 1.12.2, the file-netapp-trident subcomponent rollout might fail.

Hardware security module (HSM):

  • A rotatable secret for hardware security modules is in an unknown state.

Lower networking:

  • Network switches preloaded with a version lower than 9.3.10 might fail to bootstrap.
  • Some connections to the org-admin node time out.

Object storage:

  • Object storage buckets might not be ready after root org upgrade.

Monitoring:

  • Configuring the ServiceNow webhook results in Lifecycle Management (LCM) re-reconciling and reverting the changes made to the ConfigMap object mon-alertmanager-servicenow-webhook-backend and the Secret object mon-alertmanager-servicenow-webhook-backend in the mon-system namespace.
  • The mon-common subcomponent doesn't deploy the Istio Telemetry object in the mon-system namespace.
  • The metrics storage class is incorrectly defined in the configuration.

Networking:

  • GDC fails to create switch ACLs from traffic policies during the initial bootstrapping process.

NTP server:

  • The Node OS has unsynchronized time.

Physical servers:

  • A NodePool has a server in an unknown state during creation.
  • When upgrading from 1.11.x to 1.12.2, NodeUpgrade contains multiple versions for the same hardware model, blocking firmware upgrade verification.
  • The node firmware upgrade fails on the tenant org.

System artifact registry:

  • The job service pod is not ready.

Ticketing system:

  • The ticketing system knowledge base sync fails.
  • The ticketing system has no healthy upstream.

Virtual machines:

  • The importer pods are failing or stuck.
  • Virtual machine disks might take a long time to provision.
  • VMRuntime might not be ready due to a network-controller-manager installation failure.

Upgrade:

  • The unet-nodenetworkpolicy-infra subcomponent fails during upgrade.
  • System cluster fails during upgrade from 1.11.x to 1.12.2.
  • The file-observability subcomponent fails on the org-1-system-cluster when upgrading from 1.11.x to 1.12.2.
  • The HSM upgrade fails when upgrading from 1.11.x to 1.12.2.
  • Loki pods are stuck in a terminating state for more than 1.5 hours when upgrading from 1.11.x to the latest version.
  • SSH for a VM with a management IP and the cilium logs fail when upgrading from 1.11.x to 1.12.2.
  • The object storage upgrade shows an error during the post flight check.
  • The mz-etcd subcomponent updates spec.deployTarget and spec.Namespace causing the upgrade from 1.11.x to 1.12.x to fail.

The following issues are fixed in this release:

Cluster management:

  • Fixed the issue with the namespace deletion operation getting stuck in the Terminating state when deleting a user cluster.

Logging:

  • Fixed the issue with Loki instances not collecting audit logs and operational logs.
  • Fixed the issue with ValidatingWebhookConfiguration, MutatingWebhookConfiguration, and MonitoringRule resources deployed by the Log component failing to upgrade from 1.11.x to 1.12.x.
  • Fixed the issue with Kubernetes API server logs not being forwarded to an external SIEM destination when enabling logs export.

Monitoring:

  • Fixed the issue with the Cortex bucket deletion failure when upgrading from 1.11.x to 1.12.2.

NTP server:

  • Fixed an issue with the NTP relay server pod crash looping.

Physical servers:

  • Fixed the issue where servers were stuck in the Inspecting phase during bootstrap.

Upgrade:

  • Fixed the issue where an OS in-place node upgrade might stop responding.
  • Fixed the issue with user cluster upgrades being blocked due to a reconciling error.

Volume backup and restore:

  • Fixed the issue that prevented volume backups from resolving org buckets.

Add-on Manager: