Installing the guest environment

This page describes how to manually install the guest environment for VM instances that are running on Compute Engine.

In most cases, if your VM instances were created from Google-provided public images, you do not need to manually install a guest environment.

Before you manually install the guest environment, use the Validate the guest environment procedure to check if the guest environment is running on your instance. If the guest environment is available on your instance but is outdated, update the guest environment.

Otherwise, determine if you need to manually install the guest environment by reviewing When to manually install the guest environment.

Before you begin

Operating system support

Manual installation of the guest environment is available for the following operating systems:

  • Ubuntu 14.04 or later
  • CentOS 6 and 7
  • Red Hat Enterprise Linux (RHEL) 6 and 7
  • Debian 9
  • Windows Server 2019
  • Windows Server 1809 and 1803
  • Windows Server 1709
  • Windows Server 2016
  • Windows Server 2012 R2
  • Windows Server 2008 R2
  • SQL Server on Windows Server
  • Windows bring your own license (Beta):
    • Windows 7
    • Windows 10

Google recommends that you use the import tool to install the guest environment. For a list of installation options, review Installation methods.

You can't manually install guest environments for SUSE, CoreOS, or Container-Optimized OS. If you need one of these operating systems, we recommend that you use public images, because a guest environment is included as a core part of all public images.
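
If you want to see which public images are available, you can list them from Cloud Shell. A minimal sketch using the gcloud tool:

    # List Google-provided public images; instances created from any of
    # these images include a working guest environment out of the box.
    gcloud compute images list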

Installing the guest environment

Installation methods

There are three ways to install the guest environment: use the import tool, install the guest environment in-place over SSH, or clone the boot disk and install the guest environment with a startup script. Choose one of the following options:

Install the guest environment in-place

Use this method to install the guest environment if you can connect to the target instance using SSH. If you can't connect to the instance to install the guest environment, you can instead install the guest environment by cloning its boot disk and using a startup script.

This procedure is useful for imported images if you can connect using SSH password-based authentication. You can also use it to reinstall the guest environment if you have at least one user account with functional key-based SSH.
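
A quick way to decide between the two manual methods is to attempt an SSH connection from Cloud Shell. A minimal sketch, where instance-name is a placeholder for your instance:

    # If this command succeeds, use the in-place method below; if it
    # fails, use the boot disk cloning method instead.
    gcloud compute ssh instance-name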

CentOS/RHEL

  1. Ensure that the version of your operating system is supported.
  2. Determine the version of CentOS/RHEL, and create the source repo file, /etc/yum.repos.d/google-cloud.repo:

    OS_RELEASE_FILE="/etc/redhat-release"
    if [ ! -f $OS_RELEASE_FILE ]; then
      OS_RELEASE_FILE="/etc/centos-release"
    fi
    DIST=$(cat $OS_RELEASE_FILE | grep -o '[0-9].*' | awk -F'.' '{print $1}')
    sudo tee /etc/yum.repos.d/google-cloud.repo << EOM
    [google-cloud-compute]
    name=Google Cloud Compute
    baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el${DIST}-x86_64-stable
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
          https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOM
    
  3. Update package lists:

    sudo yum makecache
    sudo yum updateinfo
    
  4. Install the guest environment packages:

    declare -a PKG_LIST=(python-google-compute-engine
    google-compute-engine-oslogin
    google-compute-engine)
    for pkg in ${PKG_LIST[@]}; do
      sudo yum install -y ${pkg}
    done
    
  5. Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up (a command sketch for this and the next step follows this list).

  6. Verify that you can connect to the instance using SSH.
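
The last two steps can be run from Cloud Shell. A minimal sketch, assuming an instance named instance-name (the same commands apply to the Debian and Ubuntu procedures below):

    # Restart the instance, then check its console log for the guest
    # environment messages listed in the validation section of this page.
    gcloud compute instances reset instance-name
    gcloud compute instances get-serial-port-output instance-name
    # Finally, confirm that SSH still works.
    gcloud compute ssh instance-name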

Debian

  1. Ensure that the version of your operating system is supported.
  2. Install the public repo GPG key:

    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  3. Determine the name of the Debian distro, and create the source list file, /etc/apt/sources.list.d/google-cloud.list:

    DIST=$(cat /etc/os-release | grep "VERSION=" | sed "s/\"\|(\|)\|VERSION=//g" \
    | awk '{print tolower($NF)}')
    sudo tee /etc/apt/sources.list.d/google-cloud.list << EOM
    deb http://packages.cloud.google.com/apt google-compute-engine-${DIST}-stable main
    deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-${DIST} main
    EOM
    
  4. Update package lists:

    sudo apt-get update
  5. Install the guest environment packages:

    declare -a PKG_LIST=(google-cloud-packages-archive-keyring
    python-google-compute-engine
    python3-google-compute-engine
    google-compute-engine-oslogin
    google-compute-engine)
    for pkg in ${PKG_LIST[@]}; do
      sudo apt install -y ${pkg}
    done
    
  6. Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.

  7. Verify that you can connect to the instance using SSH.

Ubuntu

  1. Ensure that the version of your operating system is supported.

  2. Enable the Universe repository. Canonical publishes packages for its guest environment to the Universe repository.

    sudo apt-add-repository universe
  3. Update package lists:

    sudo apt-get update
  4. Install the guest environment packages:

    declare -a PKG_LIST=(python-google-compute-engine
    python3-google-compute-engine
    google-compute-engine-oslogin
    gce-compute-image-packages)
    for pkg in ${PKG_LIST[@]}; do
      sudo apt install -y ${pkg} || echo "Not available: ${pkg}"
    done
    
  5. Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.

  6. Verify that you can connect to the instance using SSH.

Windows

Before you begin, ensure that the version of your operating system is supported.

To install the Windows guest environment, run the following commands in an elevated PowerShell 3.0 or later prompt. The Invoke-WebRequest command used in the instructions below requires PowerShell 3.0 or later.

  1. Download and install GooGet.

    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    Invoke-WebRequest https://github.com/google/googet/releases/download/v2.13.0/googet.exe -OutFile $env:temp\googet.exe
    & "$env:temp\googet.exe" -root C:\ProgramData\GooGet -noconfirm install -sources https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable googet
    Remove-Item "$env:temp\googet.exe"
    

    During installation, GooGet updates the system environment (for example, the PATH variable). After installation completes, launch a new PowerShell console so that the changes take effect, or provide the full path to the googet.exe file (C:\ProgramData\GooGet\googet.exe).

  2. Open a new console and add the google-compute-engine-stable repository.

    googet addrepo google-compute-engine-stable https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable
  3. Install the core Windows guest environment packages.

    googet -noconfirm install google-compute-engine-windows google-compute-engine-sysprep google-compute-engine-metadata-scripts google-compute-engine-vss
    
  4. Install the optional Windows guest environment package.

    googet -noconfirm install google-compute-engine-auto-updater

    Using the googet command:

      • To view available packages, run the googet available command.
      • To view installed packages, run the googet installed command.
      • To update to the latest package version, run the googet update command.
      • To view additional commands, run googet help.
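
The Windows procedure has no console-log step like the Linux ones, but you can verify the agent afterward from Cloud Shell. A minimal sketch, where instance-name is a placeholder:

    # The Windows guest agent logs "GCE Agent Started" to the serial
    # console (see the validation section later on this page).
    gcloud compute instances get-serial-port-output instance-name | grep "GCE Agent Started"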

Clone boot disk and use startup script

If you cannot connect to an instance to manually install the guest environment, use this procedure to install it instead. The following steps can be completed in the Google Cloud Console and Cloud Shell.

This method shows the procedure for Linux distributions only. For Windows, use one of the other two installation methods.

Use Cloud Shell to run this procedure:
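
The procedures below assume that a rescue instance named rescue already exists. A minimal sketch of creating one, assuming a Debian 9 public image:

    # Create a small helper VM; the problematic boot disk is attached to
    # it in a later step.
    gcloud compute instances create rescue \
        --image-family=debian-9 \
        --image-project=debian-cloud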

CentOS/RHEL

  1. Ensure that the version of your operating system is supported.
  2. Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.

  3. Stop the problematic instance and create a copy of its boot disk.

    1. Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.

      export PROB_INSTANCE_NAME=instance-name

      where instance-name is the name of the problematic instance.

    2. Stop the problematic instance.

      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Get the name of the boot disk for the problem instance.

      export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
      "$PROB_INSTANCE_NAME" --format='json' |  jq -r '.disks[] | \
      select(.boot == true) | .source')"
      
    4. Create a snapshot of the boot disk.

      export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
         --snapshot-names "$DISK_SNAPSHOT"
      
    5. Create a new disk from the snapshot.

      export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      
      gcloud compute disks create "$NEW_DISK" \
         --source-snapshot="$DISK_SNAPSHOT"
      
    6. Delete the snapshot:

      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  4. Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. CentOS/RHEL uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
  5. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue
  6. Run the following steps on the rescue instance.

    1. Mount the root volume of the new disk.

      export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
      if [ "$?" != "0" ]; then
        # Handle XFS filesystem cases (CentOS/RHEL 7):
        sudo mount -o nouuid "$DEV" "$NEW_DISK_MOUNT_POINT"
      fi
    
    2. Create the rc.local script.
    cat <<'EOF' >/tmp/rc.local
    #!/bin/bash
    declare -a PKG_LIST=(python-google-compute-engine
    google-compute-engine-oslogin
    google-compute-engine)
    declare -x YUM_SERVER="packages.cloud.google.com"
    declare -x REPO_FILE="/etc/yum.repos.d/google-cloud.repo"
    echo "== Installing a Linux guest environment for CentOS/RHEL =="
    sleep 30 # Wait for network.
    echo "Determining CentOS/RHEL version..."
    OS_RELEASE_FILE="/etc/redhat-release"
    if [ ! -f "$OS_RELEASE_FILE" ]; then
       OS_RELEASE_FILE="/etc/centos-release"
    fi
    if [ ! -f "$OS_RELEASE_FILE" ]; then
       echo "ERROR: This system does not appear to be CentOS/RHEL."
       exit 1
    fi
    DIST=$(cat "$OS_RELEASE_FILE" | grep -o '[0-9].*' | awk -F'.' '{print $1}')
    if [ -z $DIST ]; then
       echo "ERROR: Could not determine version of CentOS/RHEL."
       exit 1
    fi
    echo "Updating $REPO_FILE..."
    tee "$REPO_FILE" << EOM
    [google-cloud-compute]
    name=Google Cloud Compute
    baseurl=https://$YUM_SERVER/yum/repos/google-compute-engine-el${DIST}-x86_64-stable
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://$YUM_SERVER/yum/doc/yum-key.gpg
    https://$YUM_SERVER/yum/doc/rpm-package-key.gpg
    EOM
    echo "Running yum makecache..."
    yum makecache
    echo "Running yum updateinfo..."
    yum updateinfo
    echo "Installing packages..."
    for pkg in ${PKG_LIST[@]}; do
       echo "Running yum install ${pkg}..."
       yum install -y ${pkg}
       if [ "$?" != "0" ]; then
          echo "ERROR: Failed to install ${pkg}."
       fi
    done
    echo "Removing this rc.local script."
    rm /etc/rc.d/rc.local
    # Move back any previous rc.local:
    if [ -f "/etc/moved-rc.local" ]; then
       echo "Restoring a previous rc.local script."
       mv "/etc/moved-rc.local" "/etc/rc.d/rc.local"
    fi
    echo "Restarting the instance..."
    reboot now
    EOF
    
    3. Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it after it finishes.

      if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" ]; then
        sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" \
        "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      
    4. Unmount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir \
      "$NEW_DISK_MOUNT_POINT"
    5. Exit the SSH session to the rescue instance.

  7. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
  8. Create an instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud Console:

    1. Go to the VM instances page.

      Go to the VM instances page

    2. Click the problematic instance, then click Clone.
    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
    4. Click Create. The replacement instance automatically starts after it is created.

    As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view these logs, run the following command:

    gcloud compute instances get-serial-port-output replacement-instance-name

    where replacement-instance-name is the name you assigned the replacement instance.

    The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

  9. Verify that you can connect to the instance using SSH.

    When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
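
As an example, you can clean up the leftover resources from Cloud Shell when you are done. A minimal sketch, reusing the variables set earlier (the same applies to the Debian and Ubuntu variants of this procedure):

    # Delete the problematic instance and, if it is no longer needed, the
    # rescue instance. By default, deleting an instance also deletes its
    # boot disk when the disk's auto-delete flag is set.
    gcloud compute instances delete "$PROB_INSTANCE_NAME"
    gcloud compute instances delete rescue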

Debian

  1. Ensure that the version of your operating system is supported.
  2. Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.

  3. Stop the problematic instance and create a copy of its boot disk.

    1. Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.

      export PROB_INSTANCE_NAME=instance-name

      where instance-name is the name of the problematic instance.

    2. Stop the problematic instance.

      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Get the name of the boot disk for the problem instance.

      export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
      "$PROB_INSTANCE_NAME" --format='json' |  jq -r '.disks[] | \
      select(.boot == true) | .source')"
      
    4. Create a snapshot of the boot disk.

      export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
         --snapshot-names "$DISK_SNAPSHOT"
      
    5. Create a new disk from the snapshot.

      export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      
      gcloud compute disks create "$NEW_DISK" \
         --source-snapshot="$DISK_SNAPSHOT"
      
    6. Delete the snapshot:

      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  4. Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Debian uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
  5. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue
  6. Run the following steps on the rescue instance.

    1. Mount the root volume of the new disk.

      export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
      
    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      declare -a PKG_LIST=(google-cloud-packages-archive-keyring
      python-google-compute-engine
      python3-google-compute-engine
      google-compute-engine-oslogin
      google-compute-engine)
      declare -x APT_SERVER="packages.cloud.google.com"
      declare -x REPO_FILE="/etc/apt/sources.list.d/google-cloud.list"
      echo "== Installing a Linux guest environment for Debian =="
      sleep 30 # Wait for network.
      echo "Determining Debian version..."
      DIST=$(cat /etc/os-release | grep "VERSION=" \
      | sed "s/\"\|(\|)\|VERSION=//g" | awk '{print tolower($NF)}')
      if [ -z $DIST ]; then
       echo "ERROR: Could not determine Debian version."
       exit 1
      fi
      echo "Adding GPG key for $APT_SERVER."
      curl https://$APT_SERVER/apt/doc/apt-key.gpg | apt-key add -
      echo "Updating $REPO_FILE..."
      tee "$REPO_FILE" << EOM
      deb http://$APT_SERVER/apt google-compute-engine-${DIST}-stable main
      deb http://$APT_SERVER/apt google-cloud-packages-archive-keyring-${DIST} main
      EOM
      echo "Running apt update..."
      apt-get update
      echo "Installing packages..."
      for pkg in ${PKG_LIST[@]}; do
       echo "Running apt install ${pkg}..."
       apt install -y ${pkg}
       if [ "$?" != "0" ]; then
          echo "ERROR: Failed to install ${pkg}."
       fi
      done
      echo "Removing this rc.local script."
      rm /etc/rc.local
      # Move back any previous rc.local:
      if [ -f "/etc/moved-rc.local" ]; then
       echo "Restoring a previous rc.local script."
       mv "/etc/moved-rc.local" "/etc/rc.local"
      fi
      echo "Restarting the instance..."
      reboot now
      EOF
      
    3. Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it after it finishes.

      if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]; then
         sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
         "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      
    4. Unmount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
    5. Exit the SSH session to the rescue instance.

  7. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
  8. Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud Console:

    1. Go to the VM instances page.

      Go to the VM instances page

    2. Click the problematic instance, then click Clone.
    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
    4. Click Create. The replacement instance automatically starts after it is created.

    As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view these logs, run the following command:

    gcloud compute instances get-serial-port-output replacement-instance-name

    where replacement-instance-name is the name you assigned the replacement instance.

    The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

  9. Verify that you can connect to the instance using SSH.

    When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.

Ubuntu

  1. Ensure that the version of your operating system is supported.
  2. Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.

  3. Stop the problematic instance and create a copy of its boot disk.

    1. Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.

      export PROB_INSTANCE_NAME=instance-name

      where instance-name is the name of the problematic instance.

    2. Stop the problematic instance.

      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Get the name of the boot disk for the problem instance.

      export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
      "$PROB_INSTANCE_NAME" --format='json' |  jq -r '.disks[] | \
      select(.boot == true) | .source')"
      
    4. Create a snapshot of the boot disk.

      export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
         --snapshot-names "$DISK_SNAPSHOT"
      
    5. Create a new disk from the snapshot.

      export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      
      gcloud compute disks create "$NEW_DISK" \
         --source-snapshot="$DISK_SNAPSHOT"
      
    6. Delete the snapshot:

      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  4. Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Ubuntu uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
  5. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue
  6. Run the following steps on the rescue instance.

    1. Mount the root volume of the new disk.

      export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
      
    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      declare -a PKG_LIST=(python-google-compute-engine
      python3-google-compute-engine
      google-compute-engine-oslogin
      gce-compute-image-packages)
      echo "== Installing a Linux guest environment for Ubuntu =="
      sleep 30 # Wait for network.
      echo "Determining Ubuntu version..."
      DIST=$(cat /etc/os-release | grep "VERSION_ID=" \
      | sed "s/\"\|(\|)\|VERSION_ID=//g" | awk -F. '{print tolower($1)}')
      if [ -z $DIST ]; then
       echo "ERROR: Could not determine Ubuntu version."
       exit 1
      fi
      if [ "$DIST" -lt "16" ]; then
       # Adjust package list for older Ubuntu:
       echo "Ubuntu version less than 16.04."
       declare -a PKG_LIST=(python-google-compute-engine \
       gce-compute-image-packages)
      fi
      echo "Ensuring Ubuntu universe repositories are enabled."
      apt-add-repository universe
      echo "Running apt update..."
      apt-get update
      echo "Installing packages..."
      for pkg in ${PKG_LIST[@]}; do
       echo "Running apt install ${pkg}..."
       apt install -y ${pkg}
       if [ "$?" != "0" ]; then
          echo "ERROR: Failed to install ${pkg}."
       fi
      done
      echo "Removing this rc.local script."
      rm /etc/rc.local
      # Move back any previous rc.local:
      if [ -f "/etc/moved-rc.local" ]; then
       echo "Restoring a previous rc.local script."
       mv "/etc/moved-rc.local" "/etc/rc.local"
      fi
      echo "Restarting the instance..."
      reboot now
      EOF
      
    3. Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it after it finishes.

      if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]; then
         sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
         "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      
    4. Unmount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
    5. Exit the SSH session to the rescue instance.

  7. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
  8. Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud Console:

    1. Go to the VM instances page.

      Go to the VM instances page

    2. Click the problematic instance, then click Clone.
    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
    4. Click Create. The replacement instance automatically starts after it is created.

    As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view these logs, run the following command:

    gcloud compute instances get-serial-port-output replacement-instance-name

    where replacement-instance-name is the name you assigned the replacement instance.

    The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

  9. Verify that you can connect to the instance using SSH.

    When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.

Updating the guest environment

If you are getting a message that the guest environment is outdated, update the packages for your operating system.

CentOS/RHEL

To update CentOS and RHEL operating systems, run the following commands:

sudo yum makecache
sudo yum install google-compute-engine google-compute-engine-oslogin python*-google-compute-engine

Debian/Ubuntu

To update Debian and Ubuntu operating systems, run the following commands:

sudo apt-get update
sudo apt install google-compute-engine google-compute-engine-oslogin python*-google-compute-engine
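
To confirm the result, you can list the installed guest environment packages and their versions. A minimal sketch for Debian and Ubuntu:

    # apt prints a CLI-stability warning on stderr, which is discarded here.
    apt list --installed 2>/dev/null | grep google-compute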

Windows

To update Windows operating systems, run the following command:

googet update

Validating the guest environment

You can determine whether a guest environment is present by inspecting the system logs the instance emits to the console as it starts up, or by listing the installed packages while connected to the instance.

Expected console logs for the guest environment

This table summarizes expected output for console logs emitted by instances with working guest environments as they start up.

CentOS/RHEL 7, Debian 9, and Ubuntu 16.04+ (systemd)

    Started Google Compute Engine Accounts Daemon
    Started Google Compute Engine Network Daemon
    Started Google Compute Engine Clock Skew Daemon
    Started Google Compute Engine Instance Setup
    Started Google Compute Engine Startup Scripts
    Started Google Compute Engine Shutdown Scripts

CentOS/RHEL 6 and Ubuntu 14.04 (upstart)

    google-accounts: INFO Starting Google Accounts daemon
    google-ip-forwarding: INFO Starting Google Compute Engine Network Daemon
    google-clock-skew: INFO Starting Google Clock Skew daemon

CoreOS (systemd and ignition)

    systemd[1]: Starting Ignition (files)...
    [finished] enabling unit "coreos-metadata-sshkeys@.service"
    [finished] enabling unit "oem-gce.service"
    [finished] enabling unit "oem-cloudinit.service"

Container-Optimized OS (systemd)

    Started Google Compute Engine Accounts Daemon
    Started Google Compute Engine Network Daemon
    Started Google Compute Engine Clock Skew Daemon
    Started Google Compute Engine Instance Setup
    Started Google Compute Engine Startup Scripts
    Started Google Compute Engine Shutdown Scripts

SUSE (SLES) 12+ (systemd)

    Started Google Compute Engine Accounts Daemon
    Started Google Compute Engine Network Daemon
    Started Google Compute Engine Clock Skew Daemon
    Started Google Compute Engine Instance Setup
    Started Google Compute Engine Startup Scripts
    Started Google Compute Engine Shutdown Scripts

Windows

    GCEWindowsAgent: GCE Agent Started
    GCEMetadataScripts: Starting startup scripts

To view console logs for an instance, follow these steps.

Console

  1. Go to the VM instances page.

    Go to the VM instances page

  2. Click the instance you need to examine.
  3. Restart or reset the instance.
  4. Under Logs, click Serial port 1 (console).
  5. Search for the expected output by referencing the table above.

gcloud

  1. Restart or reset the instance.
  2. Use the gcloud compute instances get-serial-port-output subcommand to connect using the gcloud command-line tool. For example:

    gcloud compute instances get-serial-port-output instance-name

    where instance-name is the name of the instance you need to examine.

  3. Search for the expected output by referencing the table above.

Loaded services for the guest environment

This table summarizes the services that should be loaded on instances with working guest environments. The command to list services must be run after connecting to the instance, so this check can only be performed if you have access to it.

CentOS/RHEL 7, Debian 9, and Ubuntu 16.04+

  Command to list services:

    sudo systemctl list-unit-files | grep google | grep enabled

  Expected output:

    google-accounts-daemon.service      enabled
    google-clock-skew-daemon.service    enabled
    google-instance-setup.service       enabled
    google-shutdown-scripts.service     enabled
    google-startup-scripts.service      enabled
    google-network-daemon.service       enabled

CentOS/RHEL 6 and Ubuntu 14.04

  Command to list services:

    initctl list | grep google

  Expected output:

    google-accounts-daemon              start/running
    google-network-daemon               start/running
    google-clock-skew-daemon            start/running
    google-instance-setup               stop/waiting
    google-startup-scripts              stop/waiting
    google-shutdown-scripts             stop/waiting

CoreOS

  Command to list services:

    sudo systemctl list-unit-files | grep "oem-cloudinit\|oem-gce\|coreos-metadata-ssh" | grep enabled

  Expected output:

    coreos-metadata-sshkeys@.service    enabled
    oem-cloudinit.service               enabled
    oem-gce.service                     enabled

Container-Optimized OS

  Command to list services:

    sudo systemctl list-unit-files | grep google

  Expected output:

    var-lib-google.mount                disabled
    google-accounts-daemon.service      disabled
    google-clock-skew-daemon.service    disabled
    google-instance-setup.service       disabled
    google-ip-forwarding-daemon.service disabled
    google-network-setup.service        disabled
    google-shutdown-scripts.service     disabled
    google-startup-scripts.service      disabled
    var-lib-google-remount.service      static

SUSE (SLES) 12+

  Command to list services:

    sudo systemctl list-unit-files | grep google | grep enabled

  Expected output:

    google-accounts-daemon.service      enabled
    google-network-daemon.service       enabled
    google-clock-skew-daemon.service    enabled
    google-instance-setup.service       enabled
    google-shutdown-scripts.service     enabled
    google-startup-scripts.service      enabled

Windows

  Command to list services:

    Get-Service GCEAgent
    Get-ScheduledTask GCEStartup

  Expected output:

    Running    GCEAgent   GCEAgent
    \          GCEStartup Ready

Installed packages for the guest environment

This table summarizes the packages that should be installed on instances with working guest environments. The command to list installed packages must be run after connecting to the instance, so this check can only be performed if you have access to it.

CentOS/RHEL 6 and 7

  Command to list packages:

    yum list installed | grep google-compute

  Expected output:

    google-compute-engine
    google-compute-engine-oslogin.x86_64
    python-google-compute-engine

Debian 9

  Command to list packages:

    apt list --installed | grep google-compute

  Expected output:

    google-compute-engine
    google-compute-engine-oslogin
    python-google-compute-engine
    python3-google-compute-engine

Ubuntu 14.04

  Command to list packages:

    apt list --installed | grep "google-compute\|gce-compute-image-packages"

  Expected output:

    gce-compute-image-packages
    google-compute-engine-oslogin
    python-google-compute-engine

Ubuntu 16.04+

  Command to list packages:

    apt list --installed | grep "google-compute\|gce-compute-image-packages"

  Expected output:

    gce-compute-image-packages
    google-compute-engine-oslogin
    python-google-compute-engine
    python3-google-compute-engine

SUSE (SLES) 12+

  Command to list packages:

    zypper se -i | grep package | grep "google-compute-engine\|gce\|ClientConfigGCE"

  Expected output:

    cloud-regionsrv-client-plugin-gce
    google-compute-engine-init
    python-gcemetadata
    regionServiceClientConfigGCE

Windows

  Command to list packages:

    googet installed

  Expected output:

    certgen
    googet
    google-compute-engine-auto-updater
    google-compute-engine-driver-gga
    google-compute-engine-driver-netkvm
    google-compute-engine-driver-pvpanic
    google-compute-engine-driver-vioscsi
    google-compute-engine-metadata-scripts
    google-compute-engine-powershell
    google-compute-engine-sysprep
    google-compute-engine-vss
    google-compute-engine-windows
