Known issues

This page describes known issues that you might run into while using Compute Engine. For issues that specifically affect Confidential VMs, see Confidential VM Known issues.

General issues

Quotas in the Google Cloud console might be incorrect for asia-south1 region

The Google Cloud console might display Compute Engine quotas for the asia-south1 region that are higher than the quotas your project is actually authorized to use.

Logs and quota error messages for the asia-south1 region are still accurate. If you encounter quota errors for the asia-south1 region and need more quota, use the Google Cloud console to request a quota increase.
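
To confirm the authoritative limits and current usage outside of the console, you can query the region directly with the gcloud CLI. The following command is a minimal sketch; the output is trimmed to the quotas list, which should match the values referenced in quota error messages:

gcloud compute regions describe asia-south1 --format="yaml(quotas)"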

The CPU utilization observability metric is incorrect for VMs that use one thread per core

If your VM's CPU uses one thread per core, the CPU utilization Cloud Monitoring observability metric in the Compute Engine > VM instances > Observability tab only scales to 50%. Two threads per core is the default for all machine types, except Tau T2D. For more information, see Set number of threads per core.

To view your VM's CPU utilization normalized to 100%, view CPU utilization in Metrics Explorer instead. For more information, see Create charts with Metrics Explorer.
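
When you build the chart in Metrics Explorer, the metric and monitored-resource types to select correspond to the following Monitoring identifiers. This is a minimal sketch; if you enter them as a single filter string, combine them with AND:

metric.type = "compute.googleapis.com/instance/cpu/utilization"
resource.type = "gce_instance"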

Google Cloud console SSH-in-browser connections might fail if you use custom firewall rules

If you use custom firewall rules to control SSH access to your VM instances, you might not be able to use the SSH-in-browser feature.

To work around this issue, adjust your firewall rules so that they allow the connections that SSH-in-browser uses.
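
For example, if your SSH-in-browser connections are tunneled through Identity-Aware Proxy, a firewall rule similar to the following sketch allows ingress on TCP port 22 from the IAP TCP forwarding range. The rule name and network are placeholders, and 35.235.240.0/20 is assumed to be the range that your connections come from:

gcloud compute firewall-rules create allow-ssh-from-iap \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20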

Downsizing or deleting specific reservations stops VMs from consuming other reservations

If you downsize or delete a specific reservation that was being consumed by one or more VMs, the orphaned VMs cannot consume any reservations.

Learn more about deleting reservations and resizing reservations.
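
Before you downsize or delete a specific reservation, it can help to check how many VMs are currently consuming it. The following gcloud CLI command is a sketch; RESERVATION_NAME and ZONE are placeholders, and the output is trimmed to the reservation's size and in-use count:

gcloud compute reservations describe RESERVATION_NAME \
    --zone=ZONE \
    --format="yaml(specificReservation.count, specificReservation.inUseCount)"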

Moving VMs or disks using the moveInstance API or the gcloud CLI causes unexpected behavior

Moving virtual machine (VM) instances by using the gcloud compute instances move command or the projects.moveInstance method might cause data loss, VM deletion, or other unexpected behavior. When you move VMs, we recommend that you follow the instructions in Move a VM instance between zones or regions.
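
The following is a rough, snapshot-based sketch of moving a VM's disk manually; the disk, snapshot, VM, and zone names are placeholders, and the complete procedure in Move a VM instance between zones or regions covers additional steps such as recreating the VM's full configuration:

# Snapshot the source disk, recreate it in the target zone, then boot a new VM from it.
gcloud compute disks snapshot SOURCE_DISK --zone=SOURCE_ZONE --snapshot-names=MOVE_SNAPSHOT
gcloud compute disks create TARGET_DISK --zone=TARGET_ZONE --source-snapshot=MOVE_SNAPSHOT
gcloud compute instances create NEW_VM --zone=TARGET_ZONE --disk=name=TARGET_DISK,boot=yes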

Disks attached to VMs with n2d-standard-64 machine types do not consistently reach performance limits

Persistent disks attached to VMs with n2d-standard-64 machine types do not consistently reach the maximum performance limit of 100,000 IOPS. This is the case for both read and write IOPS.

Temporary names for disks

During virtual machine (VM) instance updates initiated by using the gcloud compute instances update command or the instances.update API method, Compute Engine might temporarily change the name of your VM's disks by adding one of the following suffixes to the original name:

  • -temp
  • -old
  • -new

Compute Engine removes the suffix and restores the original disk names as the update completes.

Increased latency for some persistent disks caused by disk resizing

In some cases, resizing large persistent disks (~3 TB or larger) might be disruptive to the I/O performance of the disk. If you are impacted by this issue, your persistent disks might experience increased latency during the resize operation. This issue can impact persistent disks of any type.

Known issues for Linux VM instances

repomd.xml signature could not be verified

On Red Hat Enterprise Linux (RHEL) or CentOS 7 based systems, you might see the following error when trying to install or update software using yum. This error shows that you have an expired or incorrect repository GPG key.

Sample log:

[root@centos7 ~]# yum update


...

google-cloud-sdk/signature                                                                  | 1.4 kB  00:00:01 !!!
https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for google-cloud-sdk
Trying other mirror.

...

failure: repodata/repomd.xml from google-cloud-sdk: [Errno 256] No more mirrors to try.
https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for google-cloud-sdk

Resolution:

To fix this, disable repository GPG key checking in the yum repository configuration by setting repo_gpgcheck=0. In supported Compute Engine base images, this setting might be found in the /etc/yum.repos.d/google-cloud.repo file. However, your VM might have this setting in other repository configuration files or automation tools.

Yum repositories do not usually use GPG keys for repository validation. Instead, the https endpoint is trusted.

To locate and update this setting, complete the following steps:

  1. Look for the setting in your /etc/yum.repos.d/google-cloud.repo file.

    cat /etc/yum.repos.d/google-cloud.repo
    
    
    [google-compute-engine]
    name=Google Compute Engine
    baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el7-x86_64-stable
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    [google-cloud-sdk]
    name=Google Cloud SDK
    baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    
    
  2. Change all lines that say repo_gpgcheck=1 to repo_gpgcheck=0.

    sudo sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/google-cloud.repo
  3. Check that the setting is updated.

    cat /etc/yum.repos.d/google-cloud.repo
    
    [google-compute-engine]
    name=Google Compute Engine
    baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el7-x86_64-stable
    enabled=1
    gpgcheck=1
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    [google-cloud-sdk]
    name=Google Cloud SDK
    baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
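
After you update the setting, refreshing the yum metadata cache before retrying the update can help pick up the change; for example:

sudo yum clean all
sudo yum update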
    

GPG error: EXPKEYSIG 3746C208A7317B0F when updating packages

On Debian and Ubuntu systems where you manually installed the Google Cloud CLI, including on your local workstation, you might encounter an error similar to the following example:

W: An error occurred during the signature verification.
The repository is not updated and the previous index files will be used.
GPG error: http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease:
The following signatures were invalid: EXPKEYSIG 3746C208A7317B0F
Google Cloud Packages Automatic Signing Key <gc-team@google.com>

This error prevents you from obtaining the latest updates for several Google Cloud tools, including the Google Cloud CLI.

To resolve this error, get the latest valid apt-key.gpg key file from https://packages.cloud.google.com:

Debian systems

Run the following command:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Ubuntu systems

Run the following command:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -

Alternatively, on Compute Engine VM instances running Debian or Ubuntu images, you can get the latest keys if you recreate your instances using the following image versions:

  • Image project debian-cloud:
    • debian-9-stretch-v20180401 or image family debian-9
    • debian-8-jessie-v20180401 or image family debian-8
  • Image project ubuntu-os-cloud:
    • ubuntu-1710-artful-v20180315 or image family ubuntu-1710
    • ubuntu-1604-xenial-v20180323 or image family ubuntu-1604-lts
    • ubuntu-1404-trusty-v20180308 or image family ubuntu-1404-lts
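
For example, to recreate a VM from one of these image families by using the gcloud CLI (the VM name and zone are placeholders):

gcloud compute instances create VM_NAME \
    --zone=ZONE \
    --image-family=debian-9 \
    --image-project=debian-cloud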

Instances using OS Login return a login message after connection

On some instances that use OS Login, you might receive the following error message after the connection is established:

/usr/bin/id: cannot find name for group ID 123456789

Ignore the error message.

Known issues for Windows VM instances

  • Although Windows instances can use the NVMe interface with local SSDs, support for NVMe on Windows is in Beta, and we do not guarantee the same performance as Linux instances.
  • After you create an instance, you cannot connect to it instantly. All new Windows instances use the System preparation (sysprep) tool to set up your instance, which can take 5–10 minutes to complete.
  • Windows Server images cannot activate without a network connection to kms.windows.googlecloud.com and stop functioning if they do not initially authenticate within 30 days. Software activated by the KMS must reactivate every 180 days, but the KMS attempts to reactivate every 7 days. Make sure to configure your Windows instances so that they remain activated (see the connectivity check after this list).
  • Kernel software that accesses non-emulated model-specific registers generates general protection faults, which can cause a system crash depending on the guest operating system.
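
To confirm that a Windows VM can reach the activation server and to trigger reactivation, you can run checks like the following from an elevated PowerShell session. This is a sketch; port 1688 is the standard KMS port, and the exact steps for your environment might differ:

# Check that the VM can reach the KMS activation server.
Test-NetConnection kms.windows.googlecloud.com -Port 1688

# Attempt activation against the KMS server.
cscript //nologo c:\windows\system32\slmgr.vbs /ato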

Poor networking throughput when using gVNIC

Windows Server 2022 and Windows 11 VMs that use gVNIC driver GooGet package version 1.0.0@44 or earlier might experience poor networking throughput when using Google Virtual NIC (gVNIC).

To resolve this issue, update the gVNIC driver GooGet package to version 1.0.0@45 or later by doing the following:

  1. Check which driver version is installed on your VM by running the following command from an administrator Command Prompt or PowerShell session:

    googet installed
    

    The output looks similar to the following:

    Installed packages:
      ...
      google-compute-engine-driver-gvnic.x86_64 VERSION_NUMBER
      ...
    
  2. If the google-compute-engine-driver-gvnic.x86_64 driver version is 1.0.0@44 or earlier, update the GooGet package repository by running the following command from an administrator Command Prompt or PowerShell session:

    googet update
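
If the repository update doesn't install the newer driver on its own, you can install the package explicitly and then restart the VM so that the updated driver loads (the restart requirement is an assumption):

googet install google-compute-engine-driver-gvnic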
    

Generic disk error on Windows Server 2016 and 2012 R2 for M3 VMs

Adding or resizing a persistent disk on a running M3 VM doesn't work as expected on specific Windows guests at this time. Windows Server 2012 R2 and Windows Server 2016, and their corresponding non-server Windows variants, do not respond correctly to the disk attach and disk resize commands.

For example, removing a disk from a running M3 VM disconnects the disk from a Windows Server instance without the Windows operating system recognizing that the disk is gone. Subsequent writes to the disk return a generic error.

Resolution:

After you modify persistent disks on an M3 VM running Windows, you must restart the VM for these guests to recognize the disk modifications.
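
For example, you can restart the VM from within the guest operating system, or stop and start it by using the gcloud CLI. The VM name and zone are placeholders, and a stop and start, rather than an in-guest restart, is assumed to be acceptable for your workload:

gcloud compute instances stop VM_NAME --zone=ZONE
gcloud compute instances start VM_NAME --zone=ZONE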