Install the guest environment


This page describes how to manually install the guest environment for virtual machine (VM) instances that are running custom images on Compute Engine.

In most cases, if you are using VMs that are created using Google-provided public images, you do not need to install a guest environment. For information on when to use the guest environment, see When to manually install or update the guest environment.

Before you manually install the guest environment, use the Validate the guest environment procedure to check if the guest environment is running on your VM. If the guest environment is available on your VM but is outdated, update the guest environment.
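
For example, on a Linux VM that uses systemd, you can check for the guest environment by listing the enabled Google services; this is the same check described in the validation section later on this page:

  sudo systemctl list-unit-files | grep google | grep enabled

If the command returns entries such as google-guest-agent.service, a guest environment is installed. If it returns nothing, continue with the installation steps on this page.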

Before you begin

  • If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine as follows.

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.
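
      For example, the following commands set placeholder defaults; replace the region and zone with the ones you want to use:

      gcloud config set compute/region us-central1
      gcloud config set compute/zone us-central1-a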

Installation methods

There are multiple ways that you can install the guest environment. Choose one of the following options:

  • Import tool. This is the recommended option. However, keep in mind that the import tool not only installs the guest environment but also makes other configuration updates to the image, such as configuring the network, configuring the bootloader, and installing the Google Cloud CLI. For instructions on how to use the import tool, review Making an image bootable.

    The import tool supports a wide variety of operating systems and versions. For more information, see operating system details.

  • Manual installation. Manual installation of the guest environment is available for the following operating systems:

    • Ubuntu 16.04 or later
    • CentOS 7 or later
    • SUSE Linux Enterprise Server (SLES) 12 SP4 or later and 15 SP1 or later
    • Red Hat Enterprise Linux (RHEL) 7 or later
    • Rocky Linux 8 or later
    • Debian 9 or later
    • Windows Server 2019
    • Windows Server 2016
    • Windows Server 2012 R2
    • Windows Server Semi-Annual Channel releases
    • SQL Server on Windows Server
    • Windows bring your own license:
      • Windows 8
      • Windows 10

Limitations

You can't manually install or use the import tool to install guest environments for Fedora CoreOS and Container-Optimized OS. If you need one of these operating systems, we recommend that you use public images, because a guest environment is included as a core part of all public images.
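
If you need a VM with one of these operating systems, you can create it directly from a public image. For example, the following sketch creates a Container-Optimized OS VM from the public cos-stable image family (the VM name is a placeholder):

  gcloud compute instances create my-cos-vm \
      --image-family=cos-stable \
      --image-project=cos-cloud

The guest environment is already included in the resulting VM.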

Installing the guest environment

Install the guest environment in-place

Use this method to install the guest environment if you can connect to the target instance using SSH. If you can't connect to the instance to install the guest environment, you can instead install the guest environment by cloning its boot disk and using a startup script.

This procedure is useful for imported images if you can connect using SSH password-based authentication. It might also be used to reinstall the guest environment if you have at least one user account with functional key-based SSH.
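
If the target VM already accepts SSH connections, you can connect with the gcloud CLI before you begin; for example (VM_NAME and ZONE are placeholders):

  gcloud compute ssh VM_NAME --zone=ZONE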

CentOS/RHEL/Rocky

  1. Ensure that the version of your operating system is supported.
  2. Determine the version of CentOS/RHEL/Rocky Linux, and create the source repo file, /etc/yum.repos.d/google-cloud.repo:

    eval $(grep VERSION_ID /etc/os-release)
    sudo tee /etc/yum.repos.d/google-cloud.repo << EOM
    [google-compute-engine]
    name=Google Compute Engine
    baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el${VERSION_ID/.*}-x86_64-stable
    enabled=1
    gpgcheck=1
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
          https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOM
    
  3. Update package lists:

    sudo yum makecache
    sudo yum updateinfo
    
  4. Install the guest environment packages:

    sudo yum install -y google-compute-engine google-osconfig-agent
    
  5. Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.

  6. Verify that you can connect to the instance using SSH.
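
  For steps 5 and 6, one way to restart the VM, read its console log, and reconnect is to use the gcloud CLI; for example (VM_NAME is a placeholder):

    gcloud compute instances reset VM_NAME
    gcloud compute instances get-serial-port-output VM_NAME
    gcloud compute ssh VM_NAME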

Debian

  1. Ensure that the version of your operating system is supported.
  2. Install the public repo GPG key:

    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  3. Determine the name of the Debian distro, and create the source list file, /etc/apt/sources.list.d/google-cloud.list:

    eval $(grep VERSION_CODENAME /etc/os-release)
    sudo tee /etc/apt/sources.list.d/google-cloud.list << EOM
    deb http://packages.cloud.google.com/apt google-compute-engine-${VERSION_CODENAME}-stable main
    deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-${VERSION_CODENAME} main
    EOM
    
  4. Update package lists:

    sudo apt update
  5. Install the guest environment packages:

    sudo apt install -y google-cloud-packages-archive-keyring
    sudo apt install -y google-compute-engine google-osconfig-agent
    
  6. Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.

  7. Verify that you can connect to the instance using SSH.
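
  After the reboot, you can optionally confirm that the guest environment packages are installed by listing them, as described in the validation section later on this page:

    apt list --installed | grep -i google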

Ubuntu

  1. Ensure that the version of your operating system is supported.

  2. Enable the Universe repository. Canonical publishes packages for its guest environment to the Universe repository.

    sudo apt-add-repository universe
  3. Update package lists:

    sudo apt update
  4. Install the guest environment packages:

    sudo apt install -y google-compute-engine google-osconfig-agent
    
  5. Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.

  6. Verify that you can connect to the instance using SSH.

SLES

  1. Ensure that the version of your operating system is supported.

  2. Activate the Public Cloud Module.

    product=$(sudo SUSEConnect --list-extensions | grep -o "sle-module-public-cloud.*")
    [[ -n "$product" ]] && sudo SUSEConnect -p "$product"
    
  3. Update package lists:

    sudo zypper refresh
  4. Install the guest environment packages:

    sudo zypper install -y google-guest-{agent,configs,oslogin} \
    google-osconfig-agent
    sudo systemctl enable /usr/lib/systemd/system/google-*
    
  5. Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.

  6. Verify that you can connect to the instance using SSH.
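
  After the reboot, you can optionally confirm that the services you enabled in step 4 are running; for example:

    sudo systemctl status google-guest-agent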

Windows

Before you begin, ensure that the version of your operating system is supported.

To install the Windows guest environment, run the following commands in an elevated PowerShell prompt. The Invoke-WebRequest command in these instructions requires PowerShell 3.0 or later.

  1. Download and install GooGet.

    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
    Invoke-WebRequest https://github.com/google/googet/releases/download/v2.18.3/googet.exe -OutFile $env:temp\googet.exe;
    & "$env:temp\googet.exe" -root C:\ProgramData\GooGet -noconfirm install -sources `
    https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable googet;
    Remove-Item "$env:temp\googet.exe"
    

    During installation, GooGet adds content to the system environment. After installation completes, open a new PowerShell console or provide the full path to the googet.exe file (C:\ProgramData\GooGet\googet.exe).

  2. Open a new console and add the google-compute-engine-stable repository.

    googet addrepo google-compute-engine-stable https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable
  3. Install the core Windows guest environment packages.

    googet -noconfirm install google-compute-engine-windows `
    google-compute-engine-sysprep google-compute-engine-metadata-scripts `
    google-compute-engine-vss google-osconfig-agent
    
  4. Install the optional Windows guest environment package.

    googet -noconfirm install google-compute-engine-auto-updater

    Using the googet command:

    To view available packages, run the googet available command.

    To view installed packages, run the googet installed command.

    To update to the latest package version, run the googet update command.

    To view additional commands, run googet help.

Clone boot disk and use startup script

If you cannot connect to an instance to install the guest environment manually, use this procedure to install it instead. The following steps can be completed in the Google Cloud console or Cloud Shell.

This method shows the procedure for Linux distributions only. For Windows, use one of the other two installation methods.

Use Cloud Shell to run this procedure. If you are not using Cloud Shell, install the jq command-line JSON processor, which this procedure uses to filter gcloud CLI output. Cloud Shell has jq preinstalled.
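
The steps below use jq to extract the boot disk source from the JSON output of gcloud compute instances describe. As a self-contained illustration of that filter (the JSON here is sample data, not output from a real instance):

  echo '{"disks":[{"boot":false,"source":"data-disk"},{"boot":true,"source":"boot-disk"}]}' \
      | jq -r '.disks[] | select(.boot == true) | .source'

This prints boot-disk, which is the value the procedure stores in the PROB_INSTANCE_DISK variable.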

CentOS/RHEL/Rocky

  1. Ensure that the version of your operating system is supported.

  2. Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.

  3. Stop the problematic instance and create a copy of its boot disk.

    1. Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.

      export PROB_INSTANCE_NAME=VM_NAME

      Replace VM_NAME with the name of the problematic instance.

    2. Stop the problematic instance.

      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Get the name of the boot disk for the problem instance.

      export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
      "$PROB_INSTANCE_NAME" --format='json' |  jq -r \
      '.disks[] | select(.boot == true) | .source')"
      
    4. Create a snapshot of the boot disk.

      export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
         --snapshot-names "$DISK_SNAPSHOT"
      
    5. Create a new disk from the snapshot.

      export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      
      gcloud compute disks create "$NEW_DISK" \
         --source-snapshot="$DISK_SNAPSHOT"
      
    6. Delete the snapshot:

      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  4. Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. CentOS/RHEL/Rocky Linux uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
  5. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue
  6. Run the following steps on the rescue instance.

    1. Mount the root volume of the new disk.

      export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount -o nouuid "$DEV" "$NEW_DISK_MOUNT_POINT"
      
    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      echo "== Installing Google guest environment for CentOS/RHEL/Rocky Linux =="
      sleep 30 # Wait for network.
      echo "Determining CentOS/RHEL/Rocky Linux version..."
      eval $(grep VERSION_ID /etc/os-release)
      if [[ -z $VERSION_ID ]]; then
        echo "ERROR: Could not determine version of CentOS/RHEL/Rocky Linux."
        exit 1
      fi
      echo "Updating repo file..."
      tee "/etc/yum.repos.d/google-cloud.repo" << EOM
      [google-compute-engine]
      name=Google Compute Engine
      baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el${VERSION_ID/.*}-x86_64-stable
      enabled=1
      gpgcheck=1
      repo_gpgcheck=0
      gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
      https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      EOM
      echo "Running yum makecache..."
      yum makecache
      echo "Running yum updateinfo..."
      yum updateinfo
      echo "Running yum install google-compute-engine..."
      yum install -y google-compute-engine
      rpm -q google-compute-engine
      if [[ $? -ne 0 ]]; then
        echo "ERROR: Failed to install ${pkg}."
      fi
      echo "Removing this rc.local script."
      rm /etc/rc.d/rc.local
      # Move back any previous rc.local:
      if [[ -f "/etc/moved-rc.local" ]]; then
        echo "Restoring a previous rc.local script."
        mv "/etc/moved-rc.local" "/etc/rc.d/rc.local"
      fi
      echo "Restarting the instance..."
      reboot
      EOF
      
    3. Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it when it finishes.

      if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" ]; then
        sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" \
        "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      
    4. Unmount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir \
      "$NEW_DISK_MOUNT_POINT"
    5. Exit the SSH session to the rescue instance.

  7. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
  8. Create an instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud console:

    1. In the Google Cloud console, go to the VM instances page.

      Go to VM instances

    2. Click the problematic instance, then click Create similar.

    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.

    4. Click Create. The replacement instance automatically starts after it is created.

    As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view logs, run the following command:

    gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME

    Replace REPLACEMENT_VM_NAME with the name you assigned the replacement instance.

    The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

  9. Verify that you can connect to the instance using SSH.

    When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.

Debian

  1. Ensure that the version of your operating system is supported.

  2. Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.

  3. Stop the problematic instance and create a copy of its boot disk.

    1. Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.

      export PROB_INSTANCE_NAME=VM_NAME

      Replace VM_NAME with the name of the problematic instance.

    2. Stop the problematic instance.

      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Get the name of the boot disk for the problem instance.

      export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
      "$PROB_INSTANCE_NAME" --format='json' |  jq -r \
      '.disks[] | select(.boot == true) | .source')"
      
    4. Create a snapshot of the boot disk.

      export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
         --snapshot-names "$DISK_SNAPSHOT"
      
    5. Create a new disk from the snapshot.

      export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      
      gcloud compute disks create "$NEW_DISK" \
         --source-snapshot="$DISK_SNAPSHOT"
      
    6. Delete the snapshot:

      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  4. Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Debian uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
  5. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue
  6. Run the following steps on the rescue instance.

    1. Mount the root volume of the new disk.

      export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
      
    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      echo "== Installing Google guest environment for Debian =="
      export DEBIAN_FRONTEND=noninteractive
      sleep 30 # Wait for network.
      echo "Determining Debian version..."
      eval $(grep VERSION_CODENAME /etc/os-release)
      if [[ -z $VERSION_CODENAME ]]; then
       echo "ERROR: Could not determine Debian version."
       exit 1
      fi
      echo "Adding GPG key for Google cloud repo."
      curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
      echo "Updating repo file..."
      tee "/etc/apt/sources.list.d/google-cloud.list" << EOM
      deb http://packages.cloud.google.com/apt google-compute-engine-${VERSION_CODENAME}-stable main
      deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-${VERSION_CODENAME} main
      EOM
      echo "Running apt update..."
      apt update
      echo "Installing packages..."
      for pkg in google-cloud-packages-archive-keyring google-compute-engine; do
       echo "Running apt install ${pkg}..."
       apt install -y ${pkg}
       if [[ $? -ne 0 ]]; then
          echo "ERROR: Failed to install ${pkg}."
       fi
      done
      echo "Removing this rc.local script."
      rm /etc/rc.local
      # Move back any previous rc.local:
      if [[ -f "/etc/moved-rc.local" ]]; then
       echo "Restoring a previous rc.local script."
       mv "/etc/moved-rc.local" "/etc/rc.local"
      fi
      echo "Restarting the instance..."
      reboot
      EOF
      
    3. Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it when it finishes.

      if [[ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]]; then
         sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
         "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      
    4. Unmount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
    5. Exit the SSH session to the rescue instance.

  7. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
  8. Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud console:

    1. In the Google Cloud console, go to the VM instances page.

      Go to VM instances

    2. Click the problematic instance, then click Create similar.

    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.

    4. Click Create. The replacement instance automatically starts after it is created.

    As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view logs, run the following command:

    gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME

    Replace REPLACEMENT_VM_NAME with the name you assigned the replacement instance.

    The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

  9. Verify that you can connect to the instance using SSH.

    When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.

Ubuntu

  1. Ensure that the version of your operating system is supported.

  2. Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.

  3. Stop the problematic instance and create a copy of its boot disk.

    1. Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.

      export PROB_INSTANCE_NAME=VM_NAME

      Replace VM_NAME with the name of the problematic instance.

    2. Stop the problematic instance.

      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Get the name of the boot disk for the problem instance.

      export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
      "$PROB_INSTANCE_NAME" --format='json' |  jq -r \
      '.disks[] | select(.boot == true) | .source')"
      
    4. Create a snapshot of the boot disk.

      export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
         --snapshot-names "$DISK_SNAPSHOT"
      
    5. Create a new disk from the snapshot.

      export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      
      gcloud compute disks create "$NEW_DISK" \
         --source-snapshot="$DISK_SNAPSHOT"
      
    6. Delete the snapshot:

      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  4. Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Ubuntu uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
  5. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue
  6. Run the following steps on the rescue instance.

    1. Mount the root volume of the new disk.

      export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
      
    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      echo "== Installing a Linux guest environment for Ubuntu =="
      sleep 30 # Wait for network.
      echo "Running apt update..."
      apt update
      echo "Installing packages..."
      echo "Running apt install google-compute-engine..."
      apt install -y google-compute-engine
      if [[ $? -ne 0 ]]; then
       echo "ERROR: Failed to install ${pkg}."
      fi
      echo "Removing this rc.local script."
      rm /etc/rc.local
      # Move back any previous rc.local:
      if [[ -f "/etc/moved-rc.local" ]]; then
       echo "Restoring a previous rc.local script."
       mv "/etc/moved-rc.local" "/etc/rc.local"
      fi
      echo "Restarting the instance..."
      reboot
      EOF
      
    3. Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it when it finishes.

      if [[ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]]; then
         sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
         "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      
    4. Unmount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
    5. Exit the SSH session to the rescue instance.

  7. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
  8. Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud console:

    1. In the Google Cloud console, go to the VM instances page.

      Go to VM instances

    2. Click the problematic instance, then click Create similar.

    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.

    4. Click Create. The replacement instance automatically starts after it is created.

    As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view logs, run the following command:

    gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME

    Replace REPLACEMENT_VM_NAME with the name you assigned the replacement instance.

    The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

  9. Verify that you can connect to the instance using SSH.

    When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.

Updating the guest environment

If you are getting a message that the guest environment is outdated, update the packages for your operating system.

CentOS/RHEL/Rocky

To update CentOS, RHEL, and Rocky Linux operating systems, run the following commands:

sudo yum makecache
sudo yum install google-compute-engine google-compute-engine-oslogin \
google-guest-agent google-osconfig-agent

Debian

To update Debian operating systems, run the following commands:

sudo apt update
sudo apt install google-compute-engine google-compute-engine-oslogin \
google-guest-agent google-osconfig-agent

Ubuntu

To update Ubuntu operating systems, run the following commands:

sudo apt update
sudo apt install google-compute-engine google-compute-engine-oslogin \
google-guest-agent google-osconfig-agent

SLES

To update SLES operating systems, run the following commands:

sudo zypper refresh
sudo zypper install google-guest-{agent,configs,oslogin} \
google-osconfig-agent

Windows

To update Windows operating systems, run the following command:

googet update

Validating the guest environment

To determine whether a guest environment is present, either inspect the system logs emitted to the console while an instance starts up, or list the installed packages while connected to the instance.

Expected console logs for the guest environment

The following list summarizes the expected console log output emitted by instances with working guest environments as they start up.

  • CentOS/RHEL/Rocky Linux, Debian, Ubuntu, SLES, and Container-Optimized OS 89 and newer (managed by systemd):

    google_guest_agent: GCE Agent Started (version YYYYMMDD.NN)
    google_metadata_script_runner: Starting startup scripts (version YYYYMMDD.NN)
    OSConfigAgent Info: OSConfig Agent (version YYYYMMDD.NN)

  • Container-Optimized OS 85 and older (managed by systemd):

    Started Google Compute Engine Accounts Daemon
    Started Google Compute Engine Network Daemon
    Started Google Compute Engine Clock Skew Daemon
    Started Google Compute Engine Instance Setup
    Started Google Compute Engine Startup Scripts
    Started Google Compute Engine Shutdown Scripts

  • Windows:

    GCEGuestAgent: GCE Agent Started (version YYYYMMDD.NN)
    GCEMetadataScripts: Starting startup scripts (version YYYYMMDD.NN)
    OSConfigAgent Info: OSConfig Agent (version YYYYMMDD.NN)

To view console logs for an instance, follow these steps.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click the instance you need to examine.
  3. Restart or reset the instance.
  4. Under Logs, click Serial port 1 (console).
  5. Search for the expected output by referencing the preceding list.

gcloud

  1. Restart or reset the instance.
  2. Use the gcloud compute instances get-serial-port-output subcommand to view the console logs with the Google Cloud CLI. For example:

    gcloud compute instances get-serial-port-output VM_NAME

    Replace VM_NAME with the name of the instance you need to examine.

  3. Search for the expected output by referencing the preceding list.

Loaded services for the guest environment

The following list summarizes the services that should be loaded on instances with working guest environments. The command to list services must be run after connecting to the instance, so this check can only be performed if you have access to it.

  • CentOS/RHEL/Rocky Linux and Debian

    Command to list services:

    sudo systemctl list-unit-files | grep google | grep enabled

    Expected output:

    google-disk-expand.service             enabled
    google-guest-agent.service             enabled
    google-osconfig-agent.service          enabled
    google-shutdown-scripts.service        enabled
    google-startup-scripts.service         enabled
    google-oslogin-cache.timer             enabled

  • Ubuntu

    Command to list services:

    sudo systemctl list-unit-files | grep google | grep enabled

    Expected output:

    google-guest-agent.service             enabled
    google-osconfig-agent.service          enabled
    google-shutdown-scripts.service        enabled
    google-startup-scripts.service         enabled
    google-oslogin-cache.timer             enabled

  • Container-Optimized OS

    Command to list services:

    sudo systemctl list-unit-files | grep google

    Expected output:

    var-lib-google.mount                   disabled
    google-guest-agent.service             disabled
    google-osconfig-agent.service          disabled
    google-osconfig-init.service           disabled
    google-oslogin-cache.service           static
    google-shutdown-scripts.service        disabled
    google-startup-scripts.service         disabled
    var-lib-google-remount.service         static
    google-oslogin-cache.timer             disabled

  • SLES 12 and later

    Command to list services:

    sudo systemctl list-unit-files | grep google | grep enabled

    Expected output:

    google-guest-agent.service              enabled
    google-osconfig-agent.service           enabled
    google-shutdown-scripts.service         enabled
    google-startup-scripts.service          enabled
    google-oslogin-cache.timer              enabled

  • Windows

    Commands to list services:

    Get-Service GCEAgent
    Get-ScheduledTask GCEStartup

    Expected output:

    Running    GCEAgent   GCEAgent
    \          GCEStartup Ready

Installed packages for the guest environment

The following list summarizes the packages that should be installed on instances with working guest environments. The command to list installed packages must be run after connecting to the instance, so this check can only be performed if you have access to it.

  • CentOS/RHEL/Rocky Linux

    Command to list packages:

    rpm -qa --queryformat '%{NAME}\n' | grep -iE 'google|gce'

    Expected output:

    google-osconfig-agent
    google-compute-engine-oslogin
    google-guest-agent
    gce-disk-expand
    google-cloud-sdk
    google-compute-engine

  • Debian

    Command to list packages:

    apt list --installed | grep -i google

    Expected output:

    gce-disk-expand
    google-cloud-packages-archive-keyring
    google-cloud-sdk
    google-compute-engine-oslogin
    google-compute-engine
    google-guest-agent
    google-osconfig-agent

  • Ubuntu

    Command to list packages:

    apt list --installed | grep -i google

    Expected output:

    google-compute-engine-oslogin
    google-compute-engine
    google-guest-agent
    google-osconfig-agent

  • SUSE (SLES)

    Command to list packages:

    rpm -qa --queryformat '%{NAME}\n' | grep -i google

    Expected output:

    google-guest-configs
    google-osconfig-agent
    google-guest-oslogin
    google-guest-agent

  • Windows

    Command to list packages:

    googet installed

    Expected output:

    certgen
    googet
    google-compute-engine-auto-updater
    google-compute-engine-driver-gga
    google-compute-engine-driver-netkvm
    google-compute-engine-driver-pvpanic
    google-compute-engine-driver-vioscsi
    google-compute-engine-metadata-scripts
    google-compute-engine-powershell
    google-compute-engine-sysprep
    google-compute-engine-vss
    google-compute-engine-windows
    google-osconfig-agent

What's next