This document describes known issues for version 1.7 of Google Distributed Cloud.
## User cluster upgrade fails due to 'failed to register to GCP'
**Category:** Upgrade

**Identified Versions:** 1.7.0+, 1.8.0+
**Symptoms**

When upgrading user clusters to 1.7 versions, the `gkectl upgrade cluster` command
fails with error messages similar to the following:

```
$ gkectl upgrade cluster --kubeconfig kubeconfig --config user-cluster.yaml
…
Upgrading to bundle version: "1.7.1-gke.4"
…
Exit with error:
failed to register to GCP, gcloud output: , error: error running command 'gcloud alpha container hub memberships register foo-cluster --kubeconfig kubeconfig --context cluster --version 20210129-01-00 --enable-workload-identity --has-private-issuer --verbosity=error --quiet': error: exit status 1, stderr: 'Waiting for membership to be created...
```
The errors indicate that the user cluster upgrade is mostly complete; only the Connect Agent has not been upgraded. However, GKE Connect functionality should not be affected.
**Cause**

The Connect Agent version 20210129-01-00 used in 1.7 versions is out of support.
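To check which Connect Agent version a cluster is currently running, one option (a sketch; the Connect Agent runs in the `gke-connect` namespace) is to list the agent Pod images, whose tags carry the agent version:

```
# List the Connect Agent images running in the gke-connect namespace.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get pods -n gke-connect \
    -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}'
```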
**Workaround**

Contact Google Support to mitigate the issue.
## systemd-timesyncd not running after reboot on Ubuntu node
**Category:** OS

**Identified Versions:** 1.7.1-1.7.5, 1.8.0-1.8.4, 1.9.0+
**Symptoms**

Running `systemctl status systemd-timesyncd` shows that the service is dead:

```
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
```

This can cause time-out-of-sync issues.
**Cause**

`chrony` was incorrectly installed on the Ubuntu OS image, and there is a conflict
between `chrony` and `systemd-timesyncd`: `systemd-timesyncd` becomes inactive and
`chrony` becomes active every time the Ubuntu VM is rebooted. However,
`systemd-timesyncd` should be the default NTP client for the VM.
**Workaround**

Option 1: Manually run `sudo systemctl restart systemd-timesyncd` every time the VM
is rebooted.

Option 2: Deploy the following DaemonSet so that `systemd-timesyncd` is always
restarted if it is dead:
```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ensure-systemd-timesyncd
spec:
  selector:
    matchLabels:
      name: ensure-systemd-timesyncd
  template:
    metadata:
      labels:
        name: ensure-systemd-timesyncd
    spec:
      hostIPC: true
      hostPID: true
      containers:
      - name: ensure-systemd-timesyncd
        # Use your preferred image.
        image: ubuntu
        command:
        - /bin/bash
        - -c
        - |
          while true; do
            echo $(date -u)
            echo "Checking systemd-timesyncd status..."
            chroot /host systemctl status systemd-timesyncd
            if (( $? != 0 )) ; then
              echo "Restarting systemd-timesyncd..."
              chroot /host systemctl start systemd-timesyncd
            else
              echo "systemd-timesyncd is running."
            fi;
            sleep 60
          done
        volumeMounts:
        - name: host
          mountPath: /host
        resources:
          requests:
            memory: "10Mi"
            cpu: "10m"
        securityContext:
          privileged: true
      volumes:
      - name: host
        hostPath:
          path: /
```
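To deploy the DaemonSet, save the manifest to a file and apply it with `kubectl`; the file name below is arbitrary:

```
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG apply -f ensure-systemd-timesyncd.yaml
```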
## ClientConfig custom resource
`gkectl update` reverts any manual changes that you have made to the ClientConfig
custom resource. We strongly recommend that you back up the ClientConfig
resource after every manual change.
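One minimal way to take that backup (a sketch; the default ClientConfig lives in the `kube-public` namespace of the user cluster, as used elsewhere on this page) is to export the resource to a file:

```
# Export the ClientConfig resource so manual changes can be restored later.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get clientconfig default -n kube-public \
    -o yaml > clientconfig-backup.yaml
```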
## `kubectl describe CSINode` and `gkectl diagnose snapshot`
`kubectl describe CSINode` and `gkectl diagnose snapshot` sometimes fail due to
an [OSS Kubernetes issue](https://github.com/kubernetes/kubectl/issues/848)
with dereferencing nil pointer fields.
## OIDC and the CA certificate
The OIDC provider doesn't use the common CA by default. You must explicitly
supply the CA certificate.
Upgrading the admin cluster from 1.5 to 1.6.0 breaks 1.5 user clusters that use
an OIDC provider and have no value for `authentication.oidc.capath` in the
[user cluster configuration file](/anthos/clusters/docs/on-prem/1.7/how-to/user-cluster-configuration-file).
To work around this issue, run the following script:
```
USER_CLUSTER_KUBECONFIG=YOUR_USER_CLUSTER_KUBECONFIG
IDENTITY_PROVIDER=YOUR_OIDC_PROVIDER_ADDRESS

openssl s_client -showcerts -verify 5 -connect $IDENTITY_PROVIDER:443 < /dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/{ if(/BEGIN CERTIFICATE/){i++}; out="tmpcert"i".pem"; print >out}'
ROOT_CA_ISSUED_CERT=$(ls tmpcert*.pem | tail -1)
ROOT_CA_CERT="/etc/ssl/certs/$(openssl x509 -in $ROOT_CA_ISSUED_CERT -noout -issuer_hash).0"
cat tmpcert*.pem $ROOT_CA_CERT > certchain.pem
CERT=$(echo $(base64 certchain.pem) | sed 's\ \\g')
rm tmpcert1.pem tmpcert2.pem

kubectl --kubeconfig $USER_CLUSTER_KUBECONFIG patch clientconfig default -n kube-public --type json -p "[{ \"op\": \"replace\", \"path\": \"/spec/authentication/0/oidc/certificateAuthorityData\", \"value\":\"${CERT}\"}]"
```
Replace the following:

* `YOUR_OIDC_PROVIDER_ADDRESS`: The address of your OIDC provider.
* `YOUR_USER_CLUSTER_KUBECONFIG`: The path of your user cluster kubeconfig file.
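To confirm that the patch took effect, one hedged check is to read the field back and inspect the first certificate in the chain (the JSONPath below mirrors the patch path used in the script):

```
# Decode the stored CA data and print the subject and expiry of the first certificate.
kubectl --kubeconfig $USER_CLUSTER_KUBECONFIG get clientconfig default -n kube-public \
    -o jsonpath='{.spec.authentication[0].oidc.certificateAuthorityData}' \
    | base64 -d | openssl x509 -noout -subject -enddate
```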
## `gkectl check-config` validation fails: can't find F5 BIG-IP partitions

**Symptoms**

Validation fails because F5 BIG-IP partitions can't be found, even though they exist.

**Potential causes**

An issue with the F5 BIG-IP API can cause validation to fail.

**Resolution**

Try running `gkectl check-config` again.
## Disruption for workloads with PodDisruptionBudgets

Upgrading clusters can cause disruption or downtime for workloads that use
[PodDisruptionBudgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/)
(PDBs).
## Nodes fail to complete their upgrade process
If you have `PodDisruptionBudget` objects configured that cannot allow any
additional disruptions, nodes might repeatedly fail to upgrade to the control plane
version. To prevent this failure, we recommend that you scale up the `Deployment` or
`HorizontalPodAutoscaler` to allow the node to drain while still respecting the
`PodDisruptionBudget` configuration; see the example after the following command.
To see all `PodDisruptionBudget` objects that do not allow any disruptions:

```
kubectl get poddisruptionbudget --all-namespaces -o jsonpath='{range .items[?(@.status.disruptionsAllowed==0)]}{.metadata.name}/{.metadata.namespace}{"\n"}{end}'
```
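For example, to give a one-replica Deployment enough headroom for its PDB during a node drain, you can scale it up (the Deployment name and namespace here are hypothetical):

```
# Add a replica so the PDB still allows one disruption while the node drains.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG scale deployment my-app \
    -n my-namespace --replicas=2
```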
## Log Forwarder makes an excessive number of OAuth 2.0 requests
With Google Distributed Cloud version 1.7.1, you might experience issues with Log
Forwarder consuming memory by making an excessive number of OAuth 2.0 requests. Here
is a workaround, in which you downgrade the `stackdriver-operator` version, clean up
the disk buffer, and restart Log Forwarder.
### Step 0: Download images to your private registry if appropriate

If you use a private registry, follow these steps to download these images to your
private registry before proceeding. Omit this step if you do not use a private
registry.

Replace `PRIVATE_REGISTRY_HOST` with the hostname or IP address of your private
Docker registry.
`stackdriver-operator`:

```
docker pull gcr.io/gke-on-prem-release/stackdriver-operator:v0.0.440
docker tag gcr.io/gke-on-prem-release/stackdriver-operator:v0.0.440 \
    PRIVATE_REGISTRY_HOST/stackdriver-operator:v0.0.440
docker push PRIVATE_REGISTRY_HOST/stackdriver-operator:v0.0.440
```
`fluent-bit`:

```
docker pull gcr.io/gke-on-prem-release/fluent-bit:v1.6.10-gke.3
docker tag gcr.io/gke-on-prem-release/fluent-bit:v1.6.10-gke.3 \
    PRIVATE_REGISTRY_HOST/fluent-bit:v1.6.10-gke.3
docker push PRIVATE_REGISTRY_HOST/fluent-bit:v1.6.10-gke.3
```
`prometheus`:

```
docker pull gcr.io/gke-on-prem-release/prometheus:2.18.1-gke.0
docker tag gcr.io/gke-on-prem-release/prometheus:2.18.1-gke.0 \
    PRIVATE_REGISTRY_HOST/prometheus:2.18.1-gke.0
docker push PRIVATE_REGISTRY_HOST/prometheus:2.18.1-gke.0
```
### Step 1: Downgrade the stackdriver-operator version

Run the following command to downgrade your version of `stackdriver-operator`:

```
kubectl --kubeconfig [CLUSTER_KUBECONFIG] -n kube-system patch deployment stackdriver-operator -p \
    '{"spec":{"template":{"spec":{"containers":[{"name":"stackdriver-operator","image":"gcr.io/gke-on-prem-release/stackdriver-operator:v0.0.440"}]}}}}'
```
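To confirm that the patch was applied, you can check the image now set on the Deployment:

```
# Print the stackdriver-operator image; it should show tag v0.0.440.
kubectl --kubeconfig [CLUSTER_KUBECONFIG] -n kube-system get deployment stackdriver-operator \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
```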
### Step 2: Clean up the disk buffer for Log Forwarder

1. Deploy the following DaemonSet in the cluster to clean up the buffer:

    ```
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluent-bit-cleanup
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: fluent-bit-cleanup
      template:
        metadata:
          labels:
            app: fluent-bit-cleanup
        spec:
          containers:
          - name: fluent-bit-cleanup
            image: debian:10-slim
            command: ["bash", "-c"]
            args:
            - |
              rm -rf /var/log/fluent-bit-buffers/
              echo "Fluent Bit local buffer is cleaned up."
              sleep 3600
            volumeMounts:
            - name: varlog
              mountPath: /var/log
            securityContext:
              privileged: true
          tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          - key: node-role.gke.io/observability
            effect: NoSchedule
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
    ```
2. Verify that the disk buffer is cleaned up:

    ```
    kubectl --kubeconfig [CLUSTER_KUBECONFIG] logs -n kube-system -l app=fluent-bit-cleanup | grep "cleaned up" | wc -l
    ```

    The output shows the number of nodes on which the cleanup has finished.

    ```
    kubectl --kubeconfig [CLUSTER_KUBECONFIG] -n kube-system get pods -l app=fluent-bit-cleanup --no-headers | wc -l
    ```

    The output shows the total number of cleanup Pods, which equals the number of
    nodes in the cluster. When the two numbers match, the cleanup is complete on
    every node.
3. Delete the cleanup DaemonSet:

    ```
    kubectl --kubeconfig [CLUSTER_KUBECONFIG] -n kube-system delete ds fluent-bit-cleanup
    ```
### Step 3: Restart Log Forwarder

```
kubectl --kubeconfig [CLUSTER_KUBECONFIG] -n kube-system rollout restart ds/stackdriver-log-forwarder
```
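To watch the restart complete, one option is:

```
# Block until the DaemonSet rollout has finished on all nodes.
kubectl --kubeconfig [CLUSTER_KUBECONFIG] -n kube-system rollout status ds/stackdriver-log-forwarder
```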
## Logs and metrics are not sent to the project specified by stackdriver.projectID

In Google Distributed Cloud 1.7, logs are sent to the parent project of the service
account specified in the `stackdriver.serviceAccountKeyPath` field of your cluster
configuration file. The value of `stackdriver.projectID` is ignored. This issue will
be fixed in an upcoming release.

As a workaround, view logs in the parent project of your logging-monitoring service
account.
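To identify that parent project, you can read the `project_id` field from the service account key file (a standard JSON service account key; replace the placeholder with your `stackdriver.serviceAccountKeyPath` value):

```
# The key file is JSON; project_id names the parent project that receives the logs.
grep '"project_id"' SERVICE_ACCOUNT_KEY_FILE
```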
## Renewal of certificates might be required before an admin cluster upgrade
Before you begin the admin cluster upgrade process, you should make sure that your admin cluster certificates are currently valid, and renew these certificates if they are not.
### Admin cluster certificate renewal process
Make sure that OpenSSL is installed on the admin workstation before you begin.
Get the IP address and SSH keys for the admin master node:

```
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get secrets -n kube-system sshkeys \
    -o jsonpath='{.data.vsphere_tmp}' | base64 -d > \
    ~/.ssh/admin-cluster.key && chmod 600 ~/.ssh/admin-cluster.key

export MASTER_NODE_IP=$(kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get nodes -o \
    jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' \
    --selector='node-role.kubernetes.io/master')
```
Check if the certificates are expired:

```
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
    "sudo kubeadm alpha certs check-expiration"
```
If the certificates are expired, you must renew them before upgrading the admin cluster.
Because the admin cluster kubeconfig file also expires if the admin certificates expire, you should back up this file before expiration.
Back up the admin cluster kubeconfig file:

```
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
    "sudo cat /etc/kubernetes/admin.conf" > new_admin.conf

vi [ADMIN_CLUSTER_KUBECONFIG]
```

Replace `client-certificate-data` and `client-key-data` in the kubeconfig with the
`client-certificate-data` and `client-key-data` values from the `new_admin.conf`
file that you created.
Back up old certificates. This is an optional, but recommended, step.

```
# ssh into admin master if you didn't in the previous step
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}"

# on admin master
sudo tar -czvf backup.tar.gz /etc/kubernetes
logout

# on worker node
sudo scp -i ~/.ssh/admin-cluster.key \
    ubuntu@"${MASTER_NODE_IP}":/home/ubuntu/backup.tar.gz .
```
Renew the certificates with kubeadm:

```
# ssh into admin master
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}"

# on admin master
sudo kubeadm alpha certs renew all
```
Restart the static Pods running on the admin master node:

```
# on admin master
cd /etc/kubernetes
sudo mkdir tempdir
sudo mv manifests/*.yaml tempdir/
sleep 5
echo "remove pods"

# ensure kubelet detects the change and removes those pods
# wait until the result of this command is empty
sudo docker ps | grep kube-apiserver

# ensure kubelet starts those pods again
echo "start pods again"
sudo mv tempdir/*.yaml manifests/
sleep 30

# ensure kubelet has started those pods again
# should show some results
sudo docker ps | grep -e kube-apiserver -e kube-controller-manager -e kube-scheduler -e etcd

# clean up
sudo rm -rf tempdir

logout
```
### Renew the certificates of admin cluster worker nodes

Check the node certificates' expiration dates:

```
kubectl get nodes -o wide

# find the oldest node, fill NODE_IP with the internal IP of that node
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${NODE_IP}"
openssl x509 -enddate -noout -in /var/lib/kubelet/pki/kubelet-client-current.pem
logout
```
If the certificate is about to expire, renew node certificates by manual node repair.
You must validate the renewed certificates, and validate the certificate of
kube-apiserver.

Check certificate expiration:

```
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
    "sudo kubeadm alpha certs check-expiration"
```

Check the certificate of kube-apiserver:

```
# Get the IP address of kube-apiserver
cat [ADMIN_CLUSTER_KUBECONFIG] | grep server

# Get the current kube-apiserver certificate
openssl s_client -showcerts -connect [KUBE_APISERVER_IP]:[PORT] \
    | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' \
    > current-kube-apiserver.crt

# check the expiration date of this cert
openssl x509 -in current-kube-apiserver.crt -noout -enddate
```
## /etc/cron.daily/aide script uses up all space in /run, causing a crashloop in Pods
Starting from Google Distributed Cloud 1.7.2, the Ubuntu OS images are hardened with
the CIS L1 Server Benchmark. As a result, the cron script `/etc/cron.daily/aide` has
been installed so that an `aide` check is scheduled, to ensure that the CIS L1
Server rule "1.4.2 Ensure filesystem integrity is regularly checked" is followed.

The script uses `/run/aide` as a temporary directory to save its cron logs, and over
time it could use up all the space in `/run`. See the external article
"/etc/cron.daily/aide script uses all space in /run" for a workaround.
If you see one or more Pods crashlooping on a node, run `df -h /run` on the node. If
the command output shows 100% space usage, then you are likely experiencing this
issue.
We anticipate a fix in a future release. Meanwhile, you can resolve this issue with
either of the following two workarounds:

- Periodically remove the log files at `/run/aide/cron.daily.old*` (recommended; see
  the sketch after this list).
- Follow the steps mentioned in the external link above. (Note: this workaround
  could potentially affect the node compliance state.)
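A minimal sketch of the recommended cleanup, suitable for running periodically (for example, from root's crontab):

```
# Remove the accumulated aide cron logs from the tmpfs-backed /run.
sudo rm -f /run/aide/cron.daily.old*
```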
## Using Google Distributed Cloud with Anthos Service Mesh version 1.7 or later
If you use Google Distributed Cloud with Anthos Service Mesh version 1.7 or later,
and you want to upgrade to Google Distributed Cloud version 1.6.0-1.6.3 or Google
Distributed Cloud version 1.7.0-1.7.2, you must remove the
`bundle.gke.io/component-name` and `bundle.gke.io/component-version` labels from the
following Custom Resource Definitions (CRDs):

- `destinationrules.networking.istio.io`
- `envoyfilters.networking.istio.io`
- `serviceentries.networking.istio.io`
- `virtualservices.networking.istio.io`
Run this command to update the CRD `destinationrules.networking.istio.io` in your
user cluster:

```
kubectl edit crd destinationrules.networking.istio.io --kubeconfig USER_CLUSTER_KUBECONFIG
```

Remove the `bundle.gke.io/component-version` and `bundle.gke.io/component-name`
labels from the CRD. Repeat for the other three CRDs.
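Instead of editing each CRD interactively, you can remove the labels directly with `kubectl label`, which deletes a label when you append `-` to its key; a sketch for one CRD:

```
# Remove both bundle labels from the CRD in one command.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG label crd destinationrules.networking.istio.io \
    bundle.gke.io/component-name- bundle.gke.io/component-version-
```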
Alternatively, you can wait for the 1.6.4 and 1.7.3 release, and then upgrade to 1.6.4 or 1.7.3 directly.
## Cannot log in to admin workstation due to password expiry issue
You might experience this issue if you are using one of the following versions of Google Distributed Cloud.
- 1.7.2-gke.2
- 1.7.3-gke.2
- 1.8.0-gke.21
- 1.8.0-gke.24
- 1.8.0-gke.25
- 1.8.1-gke.7
- 1.8.2-gke.8
You might get the following error when you attempt to SSH into your Anthos VMs,
including the admin workstation, cluster nodes, and Seesaw nodes:

```
WARNING: Your password has expired.
```

This error occurs because the ubuntu user password on the VMs has expired. You must
manually reset the password's expiration time to a large value before you can log in
to the VMs.
### Prevention of password expiry error

If you are running one of the affected versions listed above and the user password
hasn't expired yet, you should extend the expiration time before you see the SSH
error.

Run the following command on each Anthos VM:

```
sudo chage -M 99999 ubuntu
```
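To confirm the new policy, `chage -l` lists the password aging settings for the user:

```
# Show the current expiry settings for the ubuntu user.
sudo chage -l ubuntu
```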
### Mitigation of password expiry error
If the user password has already expired and you can't log in to the VMs to extend the expiration time, perform the following mitigation steps for each component.
#### Admin workstation
Use a temporary VM to perform the following steps. You can create an admin
workstation using the 1.7.1-gke.4 version to use as the temporary VM.

1. Ensure that the temporary VM and the admin workstation are in a powered-off
   state.

2. Attach the boot disk of the admin workstation to the temporary VM. The boot disk
   is the one with the label "Hard disk 1".

3. Mount the boot disk inside the VM by running these commands. Substitute your own
   boot disk identifier for `/dev/sdc1`.

    ```
    sudo mkdir -p /mnt/boot-disk
    sudo mount /dev/sdc1 /mnt/boot-disk
    ```

4. Set the ubuntu user's expiration date to a large value such as 99999 days:

    ```
    sudo chroot /mnt/boot-disk chage -M 99999 ubuntu
    ```

5. Shut down the temporary VM.

6. Power on the admin workstation. You should now be able to SSH as usual.

7. As cleanup, delete the temporary VM.
#### Admin cluster control plane VM

Follow the instructions to recreate the admin cluster control plane VM.
#### Admin cluster addon VMs

Run the following command from the admin workstation to recreate the VMs:

```
kubectl --kubeconfig=ADMIN_CLUSTER_KUBECONFIG patch machinedeployment gke-admin-node --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/metadata/annotations", "value": {"kubectl.kubernetes.io/restartedAt": "version1"}}]'
```
After you run this command, wait for the admin cluster addon VMs to finish recreation and to be ready before you continue with the next steps.
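One way to check readiness (a sketch) is to list the admin cluster nodes and wait until they all report `Ready`:

```
# All admin cluster nodes should report a Ready status before you continue.
kubectl --kubeconfig=ADMIN_CLUSTER_KUBECONFIG get nodes
```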
#### User cluster control plane VMs

Run the following command from the admin workstation to recreate the VMs:

```
usermaster=`kubectl --kubeconfig=ADMIN_CLUSTER_KUBECONFIG get machinedeployments -l set=user-master -o name` && \
    kubectl --kubeconfig=ADMIN_CLUSTER_KUBECONFIG patch $usermaster --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/metadata/annotations", "value": {"kubectl.kubernetes.io/restartedAt": "version1"}}]'
```
After you run this command, wait for the user cluster control plane VMs to finish recreation and to be ready before you continue with the next steps.
#### User cluster worker VMs

Run the following command from the admin workstation to recreate the VMs:

```
for md in `kubectl --kubeconfig=USER_CLUSTER_KUBECONFIG get machinedeployments -l set=node -o name`; do
    kubectl patch --kubeconfig=USER_CLUSTER_KUBECONFIG $md --type=json \
        -p='[{"op": "add", "path": "/spec/template/spec/metadata/annotations", "value": {"kubectl.kubernetes.io/restartedAt": "version1"}}]'
done
```
#### Seesaw VMs

Run the following commands from the admin workstation to recreate the Seesaw VMs.
There will be some downtime. If HA is enabled for the load balancer, the maximum
downtime is two seconds.

```
gkectl upgrade loadbalancer --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config ADMIN_CLUSTER_CONFIG --admin-cluster --no-diff
gkectl upgrade loadbalancer --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG --no-diff
```
## Restarting or upgrading vCenter for versions lower than 7.0U2
If vCenter, for versions lower than 7.0U2, is restarted after an upgrade or
otherwise, the network name in the VM information from vCenter is incorrect, which
results in the machine being in an `Unavailable` state. This eventually leads to the
nodes being auto-repaired to create new ones.

Related govmomi bug:
[github.com/vmware/govmomi/issues/2552](https://github.com/vmware/govmomi/issues/2552)

This workaround is provided by VMware support:

1. The issue is fixed in vCenter versions 7.0U2 and above.
2. For lower versions: Right-click the host, and then select
   **Connection > Disconnect**. Next, reconnect, which forces an update of the VM's
   port group.
## SSH connection closed by remote host
For Google Distributed Cloud version 1.7.2 and above, the Ubuntu OS images are
hardened with the CIS L1 Server Benchmark.

To meet the CIS rule "5.2.16 Ensure SSH Idle Timeout Interval is configured",
`/etc/ssh/sshd_config` has the following settings:

```
ClientAliveInterval 300
ClientAliveCountMax 0
```

The purpose of these settings is to terminate a client session after 5 minutes of
idle time. However, the `ClientAliveCountMax 0` value causes unexpected behavior.
When you use an SSH session on the admin workstation or a cluster node, the SSH
connection might be disconnected even when your SSH client is not idle, such as when
running a time-consuming command, and your command could get terminated with the
following message:

```
Connection to [IP] closed by remote host.
Connection to [IP] closed.
```
As a workaround, you can either:

- Use `nohup` to prevent your command from being terminated on SSH disconnection:

    ```
    nohup gkectl upgrade admin --config admin-cluster.yaml --kubeconfig kubeconfig
    ```

- Update `sshd_config` to use a non-zero `ClientAliveCountMax` value. The CIS rule
  recommends using a value less than 3:

    ```
    sudo sed -i 's/ClientAliveCountMax 0/ClientAliveCountMax 1/g' /etc/ssh/sshd_config
    sudo systemctl restart sshd
    ```

Make sure you reconnect your SSH session after updating the configuration.
## False positives in docker, containerd, and runc vulnerability scanning
The docker, containerd, and runc binaries in the Ubuntu OS images shipped with
Google Distributed Cloud are pinned to special versions using Ubuntu PPA. This
ensures that any container runtime changes are qualified by Google Distributed Cloud
before each release.

However, the special versions are unknown to the Ubuntu CVE Tracker, which various
CVE scanning tools use as their vulnerability feed. Therefore, you will see false
positives in docker, containerd, and runc vulnerability scanning results. For
example, your CVE scanning results might include CVEs that are already fixed in the
latest patch versions of Google Distributed Cloud. Refer to the release notes for
any CVE fixes.

Canonical is aware of this issue, and the fix is tracked at
[github.com/canonical/sec-cvescan/issues/73](https://github.com/canonical/sec-cvescan/issues/73).
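If you want to see the exact pinned versions on a node so that you can compare them against scanner findings, one hedged option (package names can vary by image version) is to list the installed runtime packages:

```
# On a cluster node: list installed packages related to the container runtime.
dpkg -l | grep -Ei 'docker|containerd|runc'
```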
## /etc/cron.daily/aide CPU and memory spike issue
Starting from Google Distributed Cloud version 1.7.2, the Ubuntu OS images are
hardened with the CIS L1 Server Benchmark. As a result, the cron script
`/etc/cron.daily/aide` has been installed so that an `aide` check is scheduled, to
ensure that the CIS L1 Server rule "1.4.2 Ensure filesystem integrity is regularly
checked" is followed.

The cron job runs daily at 6:25 AM UTC. Depending on the number of files on the
filesystem, you may experience CPU and memory usage spikes around that time that are
caused by this `aide` process.
If the spikes are affecting your workload, you can disable the daily cron job:

```
sudo chmod -x /etc/cron.daily/aide
```
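To re-enable the check later, restore the execute bit:

```
# Restore the execute permission so cron runs the aide check again.
sudo chmod +x /etc/cron.daily/aide
```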