This page describes known issues that you might run into while using Compute Engine. For issues that specifically affect Confidential VMs, see Confidential VM Known issues.
Quotas in the Google Cloud console might be incorrect for the asia-south1 region
The Google Cloud console might display higher Compute Engine quotas than your project is actually authorized for in the asia-south1 region. Logs and errors related to quota for the asia-south1 region are still accurate. If you encounter quota errors in the asia-south1 region during this time and need more quota, use the Google Cloud console to request a quota increase.
The CPU utilization observability metric is incorrect for VMs that use one thread per core
If your VM's CPU uses one thread per core, the CPU utilization Cloud Monitoring observability metric in the Compute Engine > VM instances > Observability tab only scales to 50%. Two threads per core is the default for all machine types, except Tau T2D. For more information, see Set number of threads per core.
To view your VM's CPU utilization normalized to 100%, view CPU utilization in Metrics Explorer instead. For more information, see Create charts with Metrics Explorer.
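The thread count mentioned above is configured when you create the VM. As a minimal sketch, assuming placeholder VM, zone, and machine-type names, the command below is composed and printed rather than executed, using the `--threads-per-core` gcloud flag:

```shell
# Compose the gcloud command that creates a VM with one thread per core.
# VM name, zone, and machine type are placeholders; the command is
# printed here, not executed.
VM=my-vm
ZONE=us-central1-a
echo "gcloud compute instances create ${VM} --zone=${ZONE}" \
     "--machine-type=n2-standard-8 --threads-per-core=1"
```

A VM created this way exposes one vCPU per physical core, which is when the Observability tab's chart tops out at 50%.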
Google Cloud console SSH-in-browser connections might fail if you use custom firewall rules
If you use custom firewall rules to control SSH access to your VM instances, you might not be able to use the SSH-in-browser feature.
To work around this issue, do one of the following:
Enable Identity-Aware Proxy for TCP to continue connecting to VMs using the SSH-in-browser Google Cloud console feature.
Connect to VMs using the Google Cloud CLI instead of SSH-in-browser.
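Enabling Identity-Aware Proxy for TCP typically also requires a firewall rule that admits IAP's TCP-forwarding range on the SSH port. A minimal sketch, with placeholder rule and network names (35.235.240.0/20 is the range Google documents for IAP TCP forwarding; the command is composed and printed, not executed):

```shell
# Compose a firewall rule that allows IAP's TCP-forwarding range to
# reach port 22, which SSH-in-browser uses when IAP is enabled.
# Rule and network names are placeholders; printed, not executed.
RULE=allow-ssh-from-iap
NETWORK=default
echo "gcloud compute firewall-rules create ${RULE} --network=${NETWORK}" \
     "--direction=INGRESS --action=ALLOW --rules=tcp:22" \
     "--source-ranges=35.235.240.0/20"
```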
Downsizing or deleting specific reservations stops VMs from consuming other reservations
If you downsize or delete a specific reservation that was consumed by one or more VMs, the orphaned VMs cannot consume any reservations.
Optionally, to prevent this issue, delete VMs or update the reservationAffinity property of VMs until the number of VMs targeting the specific reservation matches the number of VMs planned for it. Then, you can downsize or delete the specific reservation.
To fix this issue:
Make the number of VMs in the specific reservation equal to the number of VMs that are targeting it by doing one or more of the following: deleting VMs, updating the reservationAffinity property of VMs, upsizing the downsized reservation, or recreating the deleted specific reservation.
Stop and start any remaining VMs.
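One way to update the reservationAffinity property is with the `--reservation-affinity` flag on `gcloud compute instances update`; this is a sketch under the assumption that the flag is available in your gcloud version and that the VM is stopped first, with placeholder names (the commands are composed and printed, not executed):

```shell
# Compose the stop / update / start cycle that points a VM at any
# matching reservation instead of a specific one.
# VM name and zone are placeholders; printed, not executed.
VM=my-vm
ZONE=us-central1-a
echo "gcloud compute instances stop ${VM} --zone=${ZONE}"
echo "gcloud compute instances update ${VM} --zone=${ZONE}" \
     "--reservation-affinity=any"
echo "gcloud compute instances start ${VM} --zone=${ZONE}"
```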
Moving VMs or disks using the moveInstance API or the gcloud CLI causes unexpected behavior
Moving virtual machine (VM) instances using the gcloud compute instances move command or the moveInstance API method might cause data loss, VM deletion, or other unexpected behavior. When you move VMs, we recommend that you follow the instructions in Move a VM instance between zones or regions.
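One common manual approach along the lines of that guide is to capture a machine image and recreate the VM in the target zone instead of calling `instances move`. A sketch with placeholder names and zones (the commands are composed and printed, not executed):

```shell
# Compose a manual move: capture a machine image, then recreate the VM
# in the destination zone. All names and zones are placeholders;
# printed, not executed.
VM=my-vm
SRC_ZONE=us-central1-a
DST_ZONE=europe-west1-b
IMAGE=my-vm-image
echo "gcloud compute machine-images create ${IMAGE}" \
     "--source-instance=${VM} --source-instance-zone=${SRC_ZONE}"
echo "gcloud compute instances create ${VM}-moved --zone=${DST_ZONE}" \
     "--source-machine-image=${IMAGE}"
```

After verifying the new VM, you would delete the original; the published move procedure covers the full sequence, including disks and addresses.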
Disks attached to VMs with n2d-standard-64 machine types do not consistently reach performance limits
Persistent disks attached to VMs with n2d-standard-64 machine types do not consistently reach the maximum performance limit of 100,000 IOPS. This is the case for both read and write IOPS.
Temporary names for disks
During virtual machine (VM) instance updates initiated using the gcloud compute instances update command or the instances.update API method, Compute Engine might temporarily change the name of your VM's disks by adding one of the following suffixes to the original name:
Compute Engine removes the suffix and restores the original disk names as the update completes.
Increased latency for some persistent disks caused by disk resizing
In some cases, resizing large persistent disks (~3 TB or larger) might be disruptive to the I/O performance of the disk. If you are impacted by this issue, your persistent disks might experience increased latency during the resize operation. This issue can impact persistent disks of any type.
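The resize operation in question is the standard disk resize command; given the latency note above, it is worth scheduling outside I/O-sensitive windows for large disks. A sketch with placeholder disk name, zone, and target size (composed and printed, not executed):

```shell
# Compose the resize command for a large persistent disk.
# Disk name, zone, and target size are placeholders; printed, not executed.
DISK=my-data-disk
ZONE=us-central1-a
echo "gcloud compute disks resize ${DISK} --zone=${ZONE} --size=4TB"
```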
Known issues for Linux VM instances
repomd.xml signature could not be verified
On Red Hat Enterprise Linux (RHEL) or CentOS 7 based systems, you might see the following error when trying to install or update software using yum. This error shows that you have an expired or incorrect repository GPG key.
[root@centos7 ~]# yum update
...
google-cloud-sdk/signature | 1.4 kB 00:00:01 !!!
https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for google-cloud-sdk
Trying other mirror.
...
failure: repodata/repomd.xml from google-cloud-sdk: [Errno 256] No more mirrors to try.
https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for google-cloud-sdk
To fix this issue, disable repository GPG key checking in the yum repository configuration by setting repo_gpgcheck=0. In supported Compute Engine base images, this setting might be found in the /etc/yum.repos.d/google-cloud.repo file. However, your VM can have this set in different repository configuration files or automation tools.
Yum repositories do not usually use GPG keys for repository validation. Instead, the https endpoint is trusted.
To locate and update this setting, complete the following steps:
Look for the setting in your /etc/yum.repos.d/google-cloud.repo file:
cat /etc/yum.repos.d/google-cloud.repo

[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el7-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Change all lines that say repo_gpgcheck=1 to repo_gpgcheck=0:
sudo sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/google-cloud.repo
Check that the setting is updated.
cat /etc/yum.repos.d/google-cloud.repo

[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el7-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
GPG error: EXPKEYSIG 3746C208A7317B0F when updating packages
On Debian systems and Ubuntu systems where you manually installed the Google Cloud CLI, including your local workstation, you might encounter an error similar to the following example:
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease: The following signatures were invalid: EXPKEYSIG 3746C208A7317B0F Google Cloud Packages Automatic Signing Key <email@example.com>
This error prevents you from obtaining the latest updates for several Google Cloud tools, including the following items:
To resolve this error, get the latest valid repository signing key by doing one of the following:
If your system uses the default apt keyring, run the following command:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
If your repository entry references the /usr/share/keyrings/cloud.google.gpg keyring, run the following command:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
Alternatively, on Compute Engine VM instances running Debian or Ubuntu images, you can get the latest keys if you recreate your instances using the following image versions:
- Image project debian-cloud:
  debian-9-stretch-v20180401 or later, or image family debian-9
  debian-8-jessie-v20180401 or later, or image family debian-8
- Image project ubuntu-os-cloud:
  ubuntu-1710-artful-v20180315 or later, or image family ubuntu-1710
  ubuntu-1604-xenial-v20180323 or later, or image family ubuntu-1604-lts
  ubuntu-1404-trusty-v20180308 or later, or image family ubuntu-1404-lts
Instances using OS Login return a login message after connection
On some instances that use OS Login, you might receive the following error message after the connection is established:
/usr/bin/id: cannot find name for group ID 123456789
Ignore the error message.
Known issues for Windows VM instances
- Although Windows instances can use the NVMe interface with local SSDs, support for NVMe on Windows is in Beta, and we do not guarantee the same performance as Linux instances.
- After you create an instance, you cannot connect to it instantly. All new Windows instances use the System preparation (sysprep) tool to set up your instance, which can take 5–10 minutes to complete.
- Windows Server images cannot activate without a network connection to kms.windows.googlecloud.com, and they stop functioning if they do not initially authenticate within 30 days. Software activated by the KMS must reactivate every 180 days, but the KMS attempts to reactivate every 7 days. Make sure to configure your Windows instances so that they remain activated.
- Kernel software that accesses non-emulated model specific registers will generate general protection faults, which can cause a system crash depending on the guest operating system.
Poor networking throughput when using gVNIC
Windows Server 2022 and Windows 11 VMs that use gVNIC driver GooGet package
1.0.0@44 or earlier might experience poor networking throughput when
using Google Virtual NIC (gVNIC).
To resolve this issue, update the gVNIC driver GooGet package to version
1.0.0@45 or later by doing the following:
Check which driver version is installed on your VM by running the following command from an administrator Command Prompt or PowerShell session:
googet installed
The output looks similar to the following:
Installed packages: ... google-compute-engine-driver-gvnic.x86_64 VERSION_NUMBER ...
If the google-compute-engine-driver-gvnic.x86_64 driver version is 1.0.0@44 or earlier, update the GooGet package repository by running the following command from an administrator Command Prompt or PowerShell session:
Generic disk error on Windows Server 2016 and 2012 R2 for M3 VMs
The ability to add or resize a persistent disk for a running M3 VM doesn't work as expected on specific Windows guests at this time. Windows Server 2012 R2 and Windows Server 2016, and their corresponding non-server Windows variants, do not respond correctly to the disk attach and disk resize commands.
For example, removing a disk from a running M3 VM disconnects the disk from a Windows Server instance without the Windows operating system recognizing that the disk is gone. Subsequent writes to the disk return a generic error.
After you modify persistent disks attached to an M3 VM running Windows, you must restart the VM for these guests to recognize the disk modifications.
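The restart can be performed as a stop/start cycle from the gcloud CLI; a sketch with placeholder VM name and zone (the commands are composed and printed, not executed):

```shell
# Compose a stop/start cycle so the Windows guest re-enumerates its
# disks after a persistent disk change on an M3 VM.
# VM name and zone are placeholders; printed, not executed.
VM=my-m3-vm
ZONE=us-central1-a
echo "gcloud compute instances stop ${VM} --zone=${ZONE}"
echo "gcloud compute instances start ${VM} --zone=${ZONE}"
```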