Version 1.9. This version is supported as outlined in the Anthos version support policy, offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware (GKE on-prem). Refer to the release notes for more details. This is the most recent version.

Known issues

This document describes known issues for version 1.9 of Anthos clusters on VMware (GKE on-prem).

ClientConfig custom resource

gkectl update reverts any manual changes that you have made to the ClientConfig custom resource. We strongly recommend that you back up the ClientConfig resource after every manual change.
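For example, a minimal backup sketch, assuming the default ClientConfig object in the kube-public namespace (the same resource that is patched in the OIDC workaround below):

# Save a copy of the ClientConfig resource after every manual change.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-public get clientconfig default -o yaml > clientconfig-backup.yaml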

kubectl describe CSINode and gkectl diagnose snapshot

kubectl describe CSINode and gkectl diagnose snapshot sometimes fail due to an OSS Kubernetes issue with dereferencing nil pointer fields.

OIDC and the CA certificate

The OIDC provider doesn't use the common CA by default. You must explicitly supply the CA certificate.
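For example, you supply the certificate through the authentication.oidc.capath field of the user cluster configuration file (the field referenced in the issue below); a minimal sketch with a placeholder path:

# User cluster configuration file (snippet)
authentication:
  oidc:
    capath: /path/to/oidc-provider-ca.crt    # CA certificate for your OIDC provider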

Upgrading the admin cluster from 1.5 to 1.6.0 breaks 1.5 user clusters that use an OIDC provider and have no value for authentication.oidc.capath in the user cluster configuration file.

To work around this issue, run the following script:

USER_CLUSTER_KUBECONFIG=YOUR_USER_CLUSTER_KUBECONFIG

IDENTITY_PROVIDER=YOUR_OIDC_PROVIDER_ADDRESS

openssl s_client -showcerts -verify 5 -connect $IDENTITY_PROVIDER:443 < /dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/{ if(/BEGIN CERTIFICATE/){i++}; out="tmpcert"i".pem"; print >out}'

ROOT_CA_ISSUED_CERT=$(ls tmpcert*.pem | tail -1)

ROOT_CA_CERT="/etc/ssl/certs/$(openssl x509 -in $ROOT_CA_ISSUED_CERT -noout -issuer_hash).0"

cat tmpcert*.pem $ROOT_CA_CERT > certchain.pem

CERT=$(echo $(base64 certchain.pem) | sed 's\ \\g')

rm tmpcert1.pem tmpcert2.pem

kubectl --kubeconfig $USER_CLUSTER_KUBECONFIG patch clientconfig default -n kube-public --type json -p "[{ \"op\": \"replace\", \"path\": \"/spec/authentication/0/oidc/certificateAuthorityData\", \"value\":\"${CERT}\"}]"

Replace the following:

  • YOUR_OIDC_PROVIDER_ADDRESS: The address of your OIDC provider.

  • YOUR_USER_CLUSTER_KUBECONFIG: The path of your user cluster kubeconfig file.
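Optionally, verify that the patch took effect by reading back the same field path used in the patch above; a minimal check:

kubectl --kubeconfig $USER_CLUSTER_KUBECONFIG -n kube-public get clientconfig default \
  -o jsonpath='{.spec.authentication[0].oidc.certificateAuthorityData}' | head -c 80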

gkectl check-config validation fails: can't find F5 BIG-IP partitions

Symptoms

Validation fails because F5 BIG-IP partitions can't be found, even though they exist.

Potential causes

An issue with the F5 BIG-IP API can cause validation to fail.

Resolution

Try running gkectl check-config again.

Disruption for workloads with PodDisruptionBudgets

Upgrading clusters can cause disruption or downtime for workloads that use PodDisruptionBudgets (PDBs).

Nodes fail to complete their upgrade process

If you have PodDisruptionBudget objects configured that cannot allow any additional disruptions, nodes might fail to upgrade to the control plane version after repeated attempts. To prevent this failure, we recommend that you scale up the Deployment or HorizontalPodAutoscaler to allow the node to drain while still respecting the PodDisruptionBudget configuration.

To see all PodDisruptionBudget objects that do not allow any disruptions:

kubectl get poddisruptionbudget --all-namespaces -o jsonpath='{range .items[?(@.status.disruptionsAllowed==0)]}{.metadata.name}/{.metadata.namespace}{"\n"}{end}'
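For a Deployment whose PDB reports disruptionsAllowed of 0, a hedged sketch of the recommended scale-up (NAMESPACE and DEPLOYMENT_NAME are placeholders for your workload):

# Scale up so at least one replica can be evicted while the PDB's
# constraints are still satisfied during the node drain.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n NAMESPACE scale deployment DEPLOYMENT_NAME --replicas=2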

Log Forwarder makes an excessive number of OAuth 2.0 requests

With Anthos clusters on VMware version 1.7.1, you might experience issues with the Log Forwarder consuming memory by making an excessive number of OAuth 2.0 requests. Here is a workaround, in which you downgrade the stackdriver-operator version, clean up the disk buffer, and restart the Log Forwarder.

Step 0: Download images to your private registry if appropriate

If you use a private registry, follow these steps to download these images to your private registry before proceeding. Omit this step if you do not use a private registry.

Replace PRIVATE_REGISTRY_HOST with the hostname or IP address of your private Docker registry.

stackdriver-operator

docker pull gcr.io/gke-on-prem-release/stackdriver-operator:v0.0.440

docker tag gcr.io/gke-on-prem-release/stackdriver-operator:v0.0.440 \
    PRIVATE_REGISTRY_HOST/stackdriver-operator:v0.0.440

docker push PRIVATE_REGISTRY_HOST/stackdriver-operator:v0.0.440

fluent-bit

docker pull gcr.io/gke-on-prem-release/fluent-bit:v1.6.10-gke.3

docker tag gcr.io/gke-on-prem-release/fluent-bit:v1.6.10-gke.3 \
    PRIVATE_REGISTRY_HOST/fluent-bit:v1.6.10-gke.3

docker push PRIVATE_REGISTRY_HOST/fluent-bit:v1.6.10-gke.3

prometheus

docker pull gcr.io/gke-on-prem-release/prometheus:2.18.1-gke.0

docker tag gcr.io/gke-on-prem-release/prometheus:2.18.1-gke.0 \
    PRIVATE_REGISTRY_HOST/prometheus:2.18.1-gke.0

docker push PRIVATE_REGISTRY_HOST/prometheus:2.18.1-gke.0

Step 1: Downgrade the stackdriver-operator version

Run the following command to downgrade your version of stackdriver-operator. Replace CLUSTER_KUBECONFIG with the path of the kubeconfig file for the affected cluster.

kubectl --kubeconfig CLUSTER_KUBECONFIG -n kube-system patch deployment stackdriver-operator -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"stackdriver-operator","image":"gcr.io/gke-on-prem-release/stackdriver-operator:v0.0.440"}]}}}}'

Step 2: Clean up the disk buffer for Log Forwarder

  • Deploy the DaemonSet in the cluster to clean up the buffer.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit-cleanup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluent-bit-cleanup
  template:
    metadata:
      labels:
        app: fluent-bit-cleanup
    spec:
      containers:
      - name: fluent-bit-cleanup
        image: debian:10-slim
        command: ["bash", "-c"]
        args:
        - |
          rm -rf /var/log/fluent-bit-buffers/
          echo "Fluent Bit local buffer is cleaned up."
          sleep 3600
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        securityContext:
          privileged: true
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node-role.gke.io/observability
        effect: NoSchedule
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
  • Verify the disk buffer is cleaned up.
kubectl --kubeconfig CLUSTER_KUBECONFIG logs -n kube-system -l app=fluent-bit-cleanup | grep "cleaned up" | wc -l

The output shows the number of nodes that have cleaned up their buffer.

kubectl --kubeconfig CLUSTER_KUBECONFIG -n kube-system get pods -l app=fluent-bit-cleanup --no-headers | wc -l

The output shows the total number of nodes in the cluster. The two numbers should match.

  • Delete the cleanup DaemonSet.
kubectl --kubeconfig CLUSTER_KUBECONFIG -n kube-system delete ds fluent-bit-cleanup

Step 3: Restart Log Forwarder

kubectl --kubeconfig CLUSTER_KUBECONFIG -n kube-system rollout restart ds/stackdriver-log-forwarder

User cluster installation failed because of cert-manager/ca-injector's leader election issue

You might see an installation failure due to cert-manager-cainjector being in a crash loop when the apiserver or etcd is slow:

# These are logs from `cert-manager-cainjector`, from the command
# `kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system logs cert-manager-cainjector-xxx`

I0923 16:19:27.911174       1 leaderelection.go:278] failed to renew lease kube-system/cert-manager-cainjector-leader-election: timed out waiting for the condition
E0923 16:19:27.911110       1 leaderelection.go:321] error retrieving resource lock kube-system/cert-manager-cainjector-leader-election-core: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/cert-manager-cainjector-leader-election-core": context deadline exceeded
I0923 16:19:27.911593       1 leaderelection.go:278] failed to renew lease kube-system/cert-manager-cainjector-leader-election-core: timed out waiting for the condition
E0923 16:19:27.911629       1 start.go:163] cert-manager/ca-injector "msg"="error running core-only manager" "error"="leader election lost"

Run the following commands to mitigate the problem.

First, scale down the monitoring-operator so that it does not revert the changes to the cert-manager Deployment.

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system scale deployment monitoring-operator --replicas=0

Second, edit the cert-manager-cainjector Deployment to disable leader election. Because only one replica is running, leader election is not required.

# Add a command line flag for cainjector: `--leader-elect=false`
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG edit -n kube-system deployment cert-manager-cainjector

The relevant YAML snippet for the cert-manager-cainjector Deployment should look like this:

...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager-cainjector
  namespace: kube-system
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
      - name: cert-manager
        image: "gcr.io/gke-on-prem-staging/cert-manager-cainjector:v1.0.3-gke.0"
        args:
        ...
        - --leader-elect=false
...

Keep the monitoring-operator replicas at 0 as a mitigation until the installation is finished. Otherwise, it will revert the change.

After the installation is finished and the cluster is up and running, turn on the monitoring-operator for day-2 operations:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system scale deployment monitoring-operator --replicas=1

After each upgrade, the changes will be reverted. Perform the same steps again to mitigate the issue until this is fixed in a future release.

Logs and metrics are not sent to project specified by stackdriver.projectID

In Anthos clusters on VMware 1.7, logs are sent to the parent project of the service account specified in the stackdriver.serviceAccountKeyPath field of your cluster configuration file. The value of stackdriver.projectID is ignored. This issue will be fixed in an upcoming release.

As a workaround, view logs in the parent project of your logging-monitoring service account.
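To confirm which project will receive the logs, you can inspect the project_id field in the key file referenced by stackdriver.serviceAccountKeyPath; a minimal sketch:

# The parent project of the service account is recorded in its JSON key.
grep project_id SERVICE_ACCOUNT_KEY_FILE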

Renewal of certificates might be required before an admin cluster upgrade

Before you begin the admin cluster upgrade process, you should make sure that your admin cluster certificates are currently valid, and renew these certificates if they are not.

Admin cluster certificate renewal process

  1. Make sure that OpenSSL is installed on the admin workstation before you begin.

  2. Set the KUBECONFIG variable:

    KUBECONFIG=ABSOLUTE_PATH_ADMIN_CLUSTER_KUBECONFIG
    

    Replace ABSOLUTE_PATH_ADMIN_CLUSTER_KUBECONFIG with the absolute path to the admin cluster kubeconfig file.

  3. Get the IP address and SSH keys for the admin master node:

    kubectl --kubeconfig "${KUBECONFIG}" get secrets -n kube-system sshkeys \
    -o jsonpath='{.data.vsphere_tmp}' | base64 -d > \
    ~/.ssh/admin-cluster.key && chmod 600 ~/.ssh/admin-cluster.key
    
    export MASTER_NODE_IP=$(kubectl --kubeconfig "${KUBECONFIG}" get nodes -o \
    jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' \
    --selector='node-role.kubernetes.io/master')
    
  4. Check if the certificates are expired:

    ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
    "sudo kubeadm alpha certs check-expiration"
    

    If the certificates are expired, you must renew them before upgrading the admin cluster.

  5. Back up old certificates:

    This is an optional, but recommended, step.

    # ssh into admin master
    ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}"
    
    # on admin master
    sudo tar -czvf backup.tar.gz /etc/kubernetes
    logout
    
    # back on the admin workstation
    sudo scp -i ~/.ssh/admin-cluster.key \
    ubuntu@"${MASTER_NODE_IP}":/home/ubuntu/backup.tar.gz .
    
  6. Renew the certificates with kubeadm:

     # ssh into admin master
     ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}"
     # on admin master
     sudo kubeadm alpha certs renew all
     

  7. Restart the admin master node:

      # on admin master
      cd /etc/kubernetes
      sudo mkdir tempdir
      sudo mv manifests/*.yaml tempdir/
      sleep 5
      echo "remove pods"
      # ensure kubelet detects those changes and removes those pods
      # wait until the result of this command is empty
      sudo docker ps | grep kube-apiserver
    
      # ensure kubelet starts those pods again
      echo "start pods again"
      sudo mv tempdir/*.yaml manifests/
      sleep 30
      # ensure kubelet started those pods again
      # the output should show some results
      sudo docker ps | grep -e kube-apiserver -e kube-controller-manager -e kube-scheduler -e etcd
    
      # clean up
      sudo rm -rf tempdir
    
      logout
     
  8. Because the admin cluster kubeconfig file also expires if the admin certificates expire, you should back up this file before expiration.

    • Back up the admin cluster kubeconfig file:

      ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
      "sudo cat /etc/kubernetes/admin.conf" > new_admin.conf
      vi "${KUBECONFIG}"

    • Replace the client-certificate-data and client-key-data values in your kubeconfig file with the corresponding values from the new_admin.conf file that you created, as sketched below.
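      A minimal sketch for locating the new values in new_admin.conf (paste the printed values over the old ones in your kubeconfig file):

      # Print the fields whose values must be copied into "${KUBECONFIG}"
      grep client-certificate-data new_admin.conf
      grep client-key-data new_admin.conf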

  9. Validate the renewed certificates and the certificate of kube-apiserver.

    • Check certificate expiration:

      ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
      "sudo kubeadm alpha certs check-expiration"

    • Check the certificate of kube-apiserver:

      # Get the IP address and port of kube-apiserver
      cat $KUBECONFIG | grep server
      # Get the current kube-apiserver certificate
      # (replace APISERVER_IP:APISERVER_PORT with the address from the previous command)
      openssl s_client -showcerts -connect APISERVER_IP:APISERVER_PORT \
      | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' \
      > current-kube-apiserver.crt
      # Check the expiration date of this cert
      openssl x509 -in current-kube-apiserver.crt -noout -enddate

Restarting or upgrading vCenter for versions lower than 7.0U2

For vCenter versions lower than 7.0U2, if vCenter is restarted, after an upgrade or otherwise, the network name in the VM information from vCenter is incorrect, and the machine goes into an Unavailable state. This eventually leads to the nodes being auto-repaired and new ones being created.

Related govmomi bug: https://github.com/vmware/govmomi/issues/2552

This workaround is provided by VMware support:

1. The issue is fixed in vCenter versions 7.0U2 and above.

2. For lower versions: Right-click the host, and then select Connection > Disconnect. Next, reconnect, which forces an update of the VM's portgroup.

SSH connection closed by remote host

For Anthos clusters on VMware version 1.7.2 and above, the Ubuntu OS images are hardened with CIS L1 Server Benchmark. To meet the CIS rule "5.2.16 Ensure SSH Idle Timeout Interval is configured", /etc/ssh/sshd_config has the following settings:

ClientAliveInterval 300
ClientAliveCountMax 0

The purpose of these settings is to terminate a client session after 5 minutes of idle time. However, the ClientAliveCountMax 0 value causes unexpected behavior. When you use an SSH session on the admin workstation or a cluster node, the connection might be dropped even if your SSH client is not idle, such as when running a time-consuming command, and your command could get terminated with the following message:

Connection to [IP] closed by remote host.
Connection to [IP] closed.

As a workaround, you can either:

  • Use nohup to prevent your command from being terminated on SSH disconnection:

    nohup gkectl upgrade admin --config admin-cluster.yaml --kubeconfig kubeconfig
    
  • Update the sshd_config to use a non-zero ClientAliveCountMax value. The CIS rule recommends using a value less than 3:

    sudo sed -i 's/ClientAliveCountMax 0/ClientAliveCountMax 1/g' /etc/ssh/sshd_config
    sudo systemctl restart sshd
    

    Make sure you reconnect your ssh session.

Conflict with cert-manager when upgrading to a version higher than 1.8.2

If you have your own cert-manager installation with Anthos clusters on VMware, you might experience a failure when you attempt to upgrade to version 1.8.2 or above. This is a result of a conflict between your version of cert-manager, which is likely installed in the cert-manager namespace, and the version managed by monitoring-operator.

If you try to install another copy of cert-manager after upgrading to Anthos clusters on VMware version 1.8.2 or above, the installation might fail due to a conflict with the existing one managed by monitoring-operator.

The metrics-ca cluster issuer, which control-plane and observability components rely on for creation and rotation of cert secrets, requires a metrics-ca cert secret to be stored in the cluster resource namespace. This namespace is kube-system for the monitoring-operator installation, and likely to be cert-manager for your installation.
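To check where the metrics-ca cert secret currently lives, a minimal sketch (the kube-system namespace is assumed for the monitoring-operator installation):

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system get secret metrics-ca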

If you have experienced an installation failure, follow these steps to upgrade successfully to version 1.8.2 or later:

  1. Uninstall your version of cert-manager.

  2. Perform the upgrade.

  3. If you want to restore your own installation of cert-manager, follow these steps:

    • Scale the monitoring-operator deployment to 0. For the admin cluster, run this command:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n kube-system scale deployment monitoring-operator --replicas=0

      For a user cluster, run this command:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n USER_CLUSTER_NAME scale deployment monitoring-operator --replicas=0

    • Scale the cert-manager deployments managed by monitoring-operator to 0.

      For the admin cluster, run this command:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n kube-system scale deployment cert-manager --replicas=0
      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n kube-system scale deployment cert-manager-cainjector --replicas=0
      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n kube-system scale deployment cert-manager-webhook --replicas=0
      

      For a user cluster, run this command:

      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system scale deployment cert-manager --replicas=0
      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system scale deployment cert-manager-cainjector --replicas=0
      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system scale deployment cert-manager-webhook --replicas=0
      
    • Reinstall your cert-manager.

    • Copy the metrics-ca cert-manager.io/v1 Certificate and the metrics-pki.cluster.local Issuer resources from kube-system to the cluster resource namespace of your installed cert-manager, as sketched below. This namespace might be cert-manager if you are using the upstream cert-manager release, such as v1.0.3, but that depends on your installation.
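      A hedged sketch of copying these resources, assuming the target namespace is cert-manager:

      # Export the resources; strip server-side metadata such as resourceVersion
      # and uid, and change metadata.namespace to cert-manager, before re-applying.
      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system get certificate metrics-ca -o yaml > metrics-ca.yaml
      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n kube-system get issuer metrics-pki.cluster.local -o yaml > metrics-pki.yaml
      # After editing both files:
      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG apply -f metrics-ca.yaml -f metrics-pki.yaml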

Managing a third-party cert-manager when upgrading to version 1.9.2 or higher

If you have experienced an installation failure, follow these steps to upgrade successfully to version 1.9.2 or later:

  1. Uninstall your version of cert-manager.

  2. Perform the upgrade.

  3. If you want to restore your own installation of cert-manager, follow these steps:

    • Scale the monitoring-operator deployment to 0. For the admin cluster, run this command:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n kube-system scale deployment monitoring-operator --replicas=0

      For a user cluster, run this command:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n USER_CLUSTER_NAME scale deployment monitoring-operator --replicas=0

    • Scale the cert-manager deployments managed by monitoring-operator to 0.

      For the admin cluster, run this command:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n cert-manager scale deployment cert-manager --replicas=0
      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n cert-manager scale deployment cert-manager-cainjector --replicas=0
      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n cert-manager scale deployment cert-manager-webhook --replicas=0
      

      For a user cluster, run this command:

      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n cert-manager scale deployment cert-manager --replicas=0
      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n cert-manager scale deployment cert-manager-cainjector --replicas=0
      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG -n cert-manager scale deployment cert-manager-webhook --replicas=0
      
    • Reinstall your cert-manager.

    • Copy the metrics-ca cert-manager.io/v1 Certificate and the metrics-pki.cluster.local Issuer resources from cert-manager to the cluster resource namespace of your installed cert-manager. This namespace might be cert-manager if you are using the upstream cert-manager release, such as v1.5.4, but that depends on your installation.

False positives in docker, containerd, and runc vulnerability scanning

The docker, containerd, and runc binaries in the Ubuntu OS images shipped with Anthos clusters on VMware are pinned to special versions using an Ubuntu PPA. This ensures that any container runtime changes are qualified by Anthos clusters on VMware before each release.

However, the special versions are unknown to the Ubuntu CVE Tracker, which various CVE scanning tools use as their vulnerability feed. Therefore, you will see false positives in docker, containerd, and runc vulnerability scanning results.

For example, your CVE scanning results might report CVEs that are already fixed in the latest patch versions of Anthos clusters on VMware.

Refer to the release notes for any CVE fixes.

Canonical is aware of this issue, and the fix is tracked at https://github.com/canonical/sec-cvescan/issues/73.

Unhealthy konnectivity server Pods when using the Seesaw or manual mode load balancer

If you are using the Seesaw or manual mode load balancer, you might notice that the konnectivity server Pods are unhealthy. This happens because Seesaw does not support reusing an IP address across services. For manual mode, creating a load balancer service does not automatically provision the service on your load balancer.

SSH tunneling is enabled in version 1.9 clusters, so even if the konnectivity server is unhealthy, you can still use the SSH tunnel, and connectivity to and within the cluster is not affected. Therefore, you do not need to be concerned about these unhealthy Pods.

If you plan to upgrade from version 1.9.0 to 1.9.x, we recommend that you delete the unhealthy konnectivity server Deployments before upgrading. Run this command:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG -n USER_CLUSTER_NAME delete Deployment konnectivity-server

/etc/cron.daily/aide CPU and memory spike issue

Starting from Anthos clusters on VMware version 1.7.2, the Ubuntu OS images are hardened with CIS L1 Server Benchmark.

As a result, the cron script /etc/cron.daily/aide is installed to schedule an aide check, which ensures that the CIS L1 Server rule "1.4.2 Ensure filesystem integrity is regularly checked" is followed.

The cron job runs daily at 6:00 AM UTC. Depending on the number of files on the filesystem, you may experience CPU and memory usage spikes around that time that are caused by this aide process.

If the spikes are affecting your workload, you can disable the daily cron job:

sudo chmod -x /etc/cron.daily/aide
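To re-enable the check later, restore the execute bit:

sudo chmod +x /etc/cron.daily/aide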