Installing the Linux Guest Environment

This page describes how to install the Linux Guest Environment for Compute Engine. Linux instances created from Google-provided public images already include a Guest Environment installed by the maintainer of the OS.

The Guest Environment is a set of processes and configuration on Linux Compute Engine instances that provides key Compute Engine features.

The following organizations build and maintain Guest Environment packages for specific Linux distributions:

  • Canonical:
    • Ubuntu
  • Google:
    • CentOS/Red Hat Enterprise Linux (RHEL) 6 and 7
    • Debian 8 and 9
  • SUSE:
    • SLES 11, SLES 12, SLES for SAP

CoreOS supports Guest Environment capabilities through its own tooling, such as Ignition and the oem-gce and oem-cloudinit services; see the validation section later on this page for the expected services and packages.

Installing the Guest Environment

Install the Guest Environment using one of the following options: install it in place over an SSH connection, or clone the root disk and use a startup script. Each option is described in the sections below.

Install Guest Environment In-Place

Use this method to install the Guest Environment if you can connect to the target instance using SSH. If you cannot connect to the instance, you can instead install the Guest Environment by cloning its root disk and using a startup script, as described in the next section.

This procedure is useful for imported images if you can connect using SSH password-based authentication. It might also be used to reinstall the Guest Environment if you have at least one user account with functional key-based SSH.

CentOS/RHEL

Determine the version of CentOS/RHEL, and create the source repo file, /etc/yum.repos.d/google-cloud.repo:

OS_RELEASE_FILE="/etc/redhat-release"
if [ ! -f $OS_RELEASE_FILE ]; then
   OS_RELEASE_FILE="/etc/centos-release"
fi
DIST=$(cat $OS_RELEASE_FILE | grep -o '[0-9].*' | awk -F'.' '{print $1}')
sudo tee /etc/yum.repos.d/google-cloud.repo << EOM
[google-cloud-compute]
name=Google Cloud Compute
baseurl=https://packages.cloud.google.com/yum/repos/google-cloud-compute-el${DIST}-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
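
Optionally, confirm that yum can now see the repository. This quick check is not part of the original procedure; yum repolist is a standard yum subcommand that lists enabled repositories:

sudo yum repolist | grep google-cloud-compute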

Update package lists:

sudo yum updateinfo

Install the Guest Environment packages:

declare -a PKG_LIST=(python-google-compute-engine \
google-compute-engine-oslogin \
google-compute-engine)
for pkg in ${PKG_LIST[@]}; do
   sudo yum install -y $pkg
done
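
To confirm that the packages installed, you can list them; this is the same check used in the Installed packages section later on this page:

yum list installed | grep google-compute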

Restart the instance and inspect its console log to make sure the Guest Environment loads as it starts back up.
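
For example, from a machine with the gcloud command-line tool configured (such as Cloud Shell), where INSTANCE_NAME is the name of this instance:

gcloud compute instances reset INSTANCE_NAME
gcloud compute instances get-serial-port-output INSTANCE_NAME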

Verify that you can connect to the instance using SSH.

Debian

Install the public repo GPG key:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Determine the name of the Debian distro, and create the source list file, /etc/apt/sources.list.d/google-cloud.list:

DIST=$(cat /etc/os-release | grep "VERSION=" | sed "s/\"\|(\|)\|VERSION=//g" \
| awk '{print tolower($NF)}')
sudo tee /etc/apt/sources.list.d/google-cloud.list << EOM
deb http://packages.cloud.google.com/apt google-compute-engine-${DIST}-stable main
deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-${DIST} main
EOM

Update package lists:

sudo apt-get update
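
Optionally, confirm that apt can now see the Google Cloud repository. This quick check is not part of the original procedure; apt-cache policy is a standard apt tool and should report packages.cloud.google.com as a source for the package:

apt-cache policy google-compute-engine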

Install the Guest Environment packages:

declare -a PKG_LIST=(google-cloud-packages-archive-keyring \
python-google-compute-engine \
python3-google-compute-engine \
google-compute-engine-oslogin \
google-compute-engine)
for pkg in ${PKG_LIST[@]}; do
   sudo apt install -y $pkg
done
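
As a quick check, you can list the installed Guest Environment packages; this matches the check used in the Installed packages section later on this page:

apt list --installed | grep google-compute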

Restart the instance and inspect its console log to make sure the Guest Environment loads as it starts back up.

Verify that you can connect to the instance using SSH.

Ubuntu

Canonical publishes packages for its Guest Environment to the Universe repository. Enable the Universe repository first:

sudo apt-add-repository universe

Update package lists:

sudo apt-get update

Install the Guest Environment packages:

declare -a PKG_LIST=(python-google-compute-engine \
python3-google-compute-engine \
google-compute-engine-oslogin \
gce-compute-image-packages)
for pkg in ${PKG_LIST[@]}; do
   sudo apt install -y $pkg || echo "Not available: $pkg"
done
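
As with the other distributions, you can confirm the installation by listing the packages; this matches the check used in the Installed packages section later on this page:

apt list --installed | grep "google-compute\|gce-compute-image-packages"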

Restart the instance and inspect its console log to make sure the Guest Environment loads as it starts back up.

Verify that you can connect to the instance using SSH.

Clone Root Disk & Use Startup Script

If you cannot connect to an instance to install the Guest Environment manually, use this procedure, which consists of the following steps:

  1. Create another instance, if necessary, to which the new disk can be attached. We will call this other instance the rescue instance.

  2. Stop the instance for which you need to reinstall the Guest Environment. We will refer to it as the problematic instance.

  3. While the instance is stopped, create a snapshot of the root disk on the problematic instance.

  4. Create a new persistent disk from that snapshot.

  5. Attach the new disk to the rescue instance and mount its root volume. Copy to that root volume a temporary rc.local script whose purpose is to install the Guest Environment.

  6. Detach the new disk from the rescue instance, and create a replacement instance using the new disk.

  7. Start the replacement instance.

  8. The replacement instance boots for the first time and runs the temporary rc.local script. The script installs the Guest Environment, deletes itself, and reboots the instance.

  9. Validate that the replacement instance is operational.

  10. Optionally, stop or delete the problematic instance.

Specific steps for this procedure are as follows.
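
In each variant, the cloned disk is attached to the rescue instance as /dev/sdb and its first volume, /dev/sdb1, is mounted as the root volume. If your disk layout differs, you can run lsblk (a standard util-linux tool) on the rescue instance after attaching the disk to identify the correct volume; a minimal example:

# Run on the rescue instance after attaching the new disk (step 3 of each procedure).
lsblk -f /dev/sdb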

CentOS/RHEL

  1. Create a new instance to serve as the rescue instance. Name this instance rescue for instructional clarity. It does not need to run the same Linux OS as the problematic instance. (We validated this procedure with Debian 9 as the rescue instance.)

  2. Stop the problematic instance and create a copy of its root disk.

    1. Set a variable name for the problematic instance. This will make future steps of this procedure easier to follow. Replace INSTANCE_NAME with the name of the problematic instance:
      PROB_INSTANCE_NAME=INSTANCE_NAME
    2. Stop the problematic instance:
      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Determine the name of its root disk:
      PROB_INSTANCE_DISK="$(gcloud compute instances describe "$PROB_INSTANCE_NAME" \
      --format='get(disks[0].deviceName)')"
    4. Create a snapshot of its root disk:
      DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" --snapshot-names "$DISK_SNAPSHOT"
    5. Create a new disk from the snapshot:
      NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      gcloud compute disks create "$NEW_DISK" --source-snapshot="$DISK_SNAPSHOT"
    6. Delete the snapshot:
      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  3. Attach the new disk to the rescue instance and mount its root volume. Because this procedure only attaches one additional disk, the device identifier of the new disk will be /dev/sdb. CentOS/RHEL use the first volume on their disk as the root volume by default, so the volume identifier should be /dev/sdb1. (For custom cases, use lsblk to determine the volume identifier.)

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"

  4. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue

  5. Perform these steps using the rescue instance:

    1. Mount the root volume of the new disk.

      NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
      if [ "$?" != "0" ]; then
         # Handle XFS filesystem cases (CentOS/RHEL 7):
         sudo mount -o nouuid "$DEV" "$NEW_DISK_MOUNT_POINT"
      fi

    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      declare -a PKG_LIST=(python-google-compute-engine \
      google-compute-engine-oslogin \
      google-compute-engine)
      declare -x YUM_SERVER="packages.cloud.google.com"
      declare -x REPO_FILE="/etc/yum.repos.d/google-cloud.repo"
      echo "== Installing a Linux Guest Environment for CentOS/RHEL =="
      sleep 30 # Wait for network.
      echo "Determining CentOS/RHEL version..."
      OS_RELEASE_FILE="/etc/redhat-release"
      if [ ! -f "$OS_RELEASE_FILE" ]; then
         OS_RELEASE_FILE="/etc/centos-release"
      fi
      if [ ! -f "$OS_RELEASE_FILE" ]; then
         echo "ERROR: This system does not appear to be CentOS/RHEL."
         exit 1
      fi
      DIST=$(cat "$OS_RELEASE_FILE" | grep -o '[0-9].*' | awk -F'.' '{print $1}')
      if [ -z $DIST ]; then
         echo "ERROR: Could not determine version of CentOS/RHEL."
         exit 1
      fi
      echo "Updating $REPO_FILE..."
      tee "$REPO_FILE" << EOM
      [google-cloud-compute]
      name=Google Cloud Compute
      baseurl=https://$YUM_SERVER/yum/repos/google-cloud-compute-el${DIST}-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://$YUM_SERVER/yum/doc/yum-key.gpg
      https://$YUM_SERVER/yum/doc/rpm-package-key.gpg
      EOM
      echo "Running yum updateinfo..."
      yum updateinfo
      echo "Installing packages..."
      for pkg in ${PKG_LIST[@]}; do
         echo "Running yum install $pkg..."
         yum install -y $pkg
         if [ "$?" != "0" ]; then
            echo "ERROR: Failed to install $pkg."
         fi
      done
      echo "Removing this rc.local script."
      rm /etc/rc.d/rc.local
      # Move back any previous rc.local:
      if [ -f "/etc/moved-rc.local" ]; then
         echo "Restoring a previous rc.local script."
         mv "/etc/moved-rc.local" "/etc/rc.d/rc.local"
      fi
      echo "Restarting the instance..."
      reboot now
      EOF

    3. Move the rc.local script to the root volume of the new disk and set permissions. (Move any existing rc.local script aside. The temporary script will restore it when it finishes.)

      if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" ]; then
         sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" \
         "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"

    4. Un-mount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"

    5. Exit the SSH session to the rescue instance.

  6. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"

  7. Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Cloud Console:

    1. Go to the VM instances page.
    2. Click the problematic instance, then click Clone.
    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
    4. Click Create. The replacement instance will start after it has been created.
  8. Let the replacement instance start up. While booting, it will run the temporary rc.local script, installing the Linux Guest Environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. For example, you can do this with gcloud as shown below. Change REPLACEMENT_INSTANCE_NAME to the name you assigned the replacement instance.

    gcloud compute instances get-serial-port-output REPLACEMENT_INSTANCE_NAME

  9. The replacement instance will automatically reboot one more time when the temporary rc.local script finishes. Starting with the second reboot, you can inspect its console log to make sure the Guest Environment loads. Verify that you can connect to the instance using SSH.

  10. Once you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
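
    For example, one way to delete it with gcloud, reusing the variable set at the beginning of this procedure:

    gcloud compute instances delete "$PROB_INSTANCE_NAME"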

Debian

  1. Create a new instance to serve as the rescue instance. Name this instance rescue for instructional clarity. It does not need to run the same Linux OS as the problematic instance. (We validated this procedure with Debian 9 as the rescue instance.)

  2. Stop the problematic instance and create a copy of its root disk.

    1. Set a variable name for the problematic instance. This will make future steps of this procedure easier to follow. Replace INSTANCE_NAME with the name of the problematic instance:
      PROB_INSTANCE_NAME=INSTANCE_NAME
    2. Stop the problematic instance:
      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Determine the name of its root disk:
      PROB_INSTANCE_DISK="$(gcloud compute instances describe "$PROB_INSTANCE_NAME" \
      --format='get(disks[0].deviceName)')"
    4. Create a snapshot of its root disk:
      DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" --snapshot-names "$DISK_SNAPSHOT"
    5. Create a new disk from the snapshot:
      NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      gcloud compute disks create "$NEW_DISK" --source-snapshot="$DISK_SNAPSHOT"
    6. Delete the snapshot:
      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  3. Attach the new disk to the rescue instance and mount its root volume. Because this procedure only attaches one additional disk, the device identifier of the new disk will be /dev/sdb. Debian uses the first volume on its disk as the root volume by default, so the volume identifier should be /dev/sdb1. (For custom cases, use lsblk to determine the volume identifier.)

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"

  4. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue

  5. Perform these steps using the rescue instance:

    1. Mount the root volume of the new disk.

      NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"

    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      declare -a PKG_LIST=(google-cloud-packages-archive-keyring \
      python-google-compute-engine \
      python3-google-compute-engine \
      google-compute-engine-oslogin \
      google-compute-engine)
      declare -x APT_SERVER="packages.cloud.google.com"
      declare -x REPO_FILE="/etc/apt/sources.list.d/google-cloud.list"
      echo "== Installing a Linux Guest Environment for Debian =="
      sleep 30 # Wait for network.
      echo "Determining Debian version..."
      DIST=$(cat /etc/os-release | grep "VERSION=" \
      | sed "s/\"\|(\|)\|VERSION=//g" | awk '{print tolower($NF)}')
      if [ -z $DIST ]; then
         echo "ERROR: Could not determine Debian version."
         exit 1
      fi
      echo "Adding GPG key for $APT_SERVER."
      curl https://$APT_SERVER/apt/doc/apt-key.gpg | apt-key add -
      echo "Updating $REPO_FILE..."
      tee "$REPO_FILE" << EOM
      deb http://$APT_SERVER/apt google-compute-engine-${DIST}-stable main
      deb http://$APT_SERVER/apt google-cloud-packages-archive-keyring-${DIST} main
      EOM
      echo "Running apt update..."
      apt-get update
      echo "Installing packages..."
      for pkg in ${PKG_LIST[@]}; do
         echo "Running apt install $pkg..."
         apt install -y $pkg
         if [ "$?" != "0" ]; then
            echo "ERROR: Failed to install $pkg."
         fi
      done
      echo "Removing this rc.local script."
      rm /etc/rc.local
      # Move back any previous rc.local:
      if [ -f "/etc/moved-rc.local" ]; then
         echo "Restoring a previous rc.local script."
         mv "/etc/moved-rc.local" "/etc/rc.local"
      fi
      echo "Restarting the instance..."
      reboot now
      EOF

    3. Move the rc.local script to the root volume of the new disk and set permissions. (Move any existing rc.local script aside. The temporary script will restore it when it finishes.)

      if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]; then
         sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
         "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"

    4. Un-mount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"

    5. Exit the SSH session to the rescue instance.

  6. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"

  7. Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Cloud Console:

    1. Go to the VM instances page.
    2. Click the problematic instance, then click Clone.
    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
    4. Click Create. The replacement instance will start after it has been created.
  8. Let the replacement instance start up. While booting, it will run the temporary rc.local script, installing the Linux Guest Environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. For example, you can do this with gcloud as shown below. Change REPLACEMENT_INSTANCE_NAME to the name you assigned the replacement instance.

    gcloud compute instances get-serial-port-output REPLACEMENT_INSTANCE_NAME

  9. The replacement instance will automatically reboot one more time when the temporary rc.local script finishes. Starting with the second reboot, you can inspect its console log to make sure the Guest Environment loads. Verify that you can connect to the instance using SSH.

  10. Once you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.

Ubuntu

  1. Create a new instance to serve as the rescue instance. Name this instance rescue for instructional clarity. It does not need to run the same Linux OS as the problematic instance. (We validated this procedure with Debian 9 as the rescue instance.)

  2. Stop the problematic instance and create a copy of its root disk.

    1. Set a variable name for the problematic instance. This will make future steps of this procedure easier to follow. Replace INSTANCE_NAME with the name of the problematic instance:
      PROB_INSTANCE_NAME=INSTANCE_NAME
    2. Stop the problematic instance:
      gcloud compute instances stop "$PROB_INSTANCE_NAME"
    3. Determine the name of its root disk:
      PROB_INSTANCE_DISK="$(gcloud compute instances describe "$PROB_INSTANCE_NAME" \
      --format='get(disks[0].deviceName)')"
    4. Create a snapshot of its root disk:
      DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
      gcloud compute disks snapshot "$PROB_INSTANCE_DISK" --snapshot-names "$DISK_SNAPSHOT"
    5. Create a new disk from the snapshot:
      NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
      gcloud compute disks create "$NEW_DISK" --source-snapshot="$DISK_SNAPSHOT"
    6. Delete the snapshot:
      gcloud compute snapshots delete "$DISK_SNAPSHOT"
  3. Attach the new disk to the rescue instance and mount its root volume. Because this procedure only attaches one additional disk, the device identifier of the new disk will be /dev/sdb. Ubuntu uses the first volume on its disk as the root volume by default, so the volume identifier should be /dev/sdb1. (For custom cases, use lsblk to determine the volume identifier.)

    gcloud compute instances attach-disk rescue --disk "$NEW_DISK"

  4. Connect to the rescue instance using SSH:

    gcloud compute ssh rescue

  5. Perform these steps using the rescue instance:

    1. Mount the root volume of the new disk.

      NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
      DEV="/dev/sdb1"
      sudo mkdir "$NEW_DISK_MOUNT_POINT"
      sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"

    2. Create the rc.local script.

      cat <<'EOF' >/tmp/rc.local
      #!/bin/bash
      declare -a PKG_LIST=(python-google-compute-engine \
      python3-google-compute-engine \
      google-compute-engine-oslogin \
      gce-compute-image-packages)
      echo "== Installing a Linux Guest Environment for Ubuntu =="
      sleep 30 # Wait for network.
      echo "Determining Ubuntu version..."
      DIST=$(cat /etc/os-release | grep "VERSION_ID=" \
      | sed "s/\"\|(\|)\|VERSION_ID=//g" | awk -F. '{print tolower($1)}')
      if [ -z $DIST ]; then
         echo "ERROR: Could not determine Ubuntu version."
         exit 1
      fi
      if [ "$DIST" -lt "16" ]; then
         # Adjust package list for older Ubuntu:
         echo "Ubuntu version less than 16.04."
         declare -a PKG_LIST=(python-google-compute-engine \
         gce-compute-image-packages)
      fi
      echo "Ensuring Ubuntu universe repositories are enabled."
      apt-add-repository universe
      echo "Running apt update..."
      apt-get update
      echo "Installing packages..."
      for pkg in ${PKG_LIST[@]}; do
         echo "Running apt install $pkg..."
         apt install -y $pkg
         if [ "$?" != "0" ]; then
            echo "ERROR: Failed to install $pkg."
         fi
      done
      echo "Removing this rc.local script."
      rm /etc/rc.local
      # Move back any previous rc.local:
      if [ -f "/etc/moved-rc.local" ]; then
         echo "Restoring a previous rc.local script."
         mv "/etc/moved-rc.local" "/etc/rc.local"
      fi
      echo "Restarting the instance..."
      reboot now
      EOF

    3. Move the rc.local script to the root volume of the new disk and set permissions. (Move any existing rc.local script aside. The temporary script will restore it when it finishes.)

      if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]; then
         sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
         "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
      fi
      sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
      sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"

    4. Un-mount the root volume of the new disk.

      sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"

    5. Exit the SSH session to the rescue instance.

  6. Detach the new disk from the rescue instance.

    gcloud compute instances detach-disk rescue --disk "$NEW_DISK"

  7. Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Cloud Console:

    1. Go to the VM instances page.
    2. Click the problematic instance, then click Clone.
    3. Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
    4. Click Create. The replacement instance will start after it has been created.
  8. Let the replacement instance start up. While booting, it will run the temporary rc.local script, installing the Linux Guest Environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. For example, you can do this with gcloud as shown below. Change REPLACEMENT_INSTANCE_NAME to the name you assigned the replacement instance.

    gcloud compute instances get-serial-port-output REPLACEMENT_INSTANCE_NAME

  9. The replacement instance will automatically reboot one more time when the temporary rc.local script finishes. Starting with the second reboot, you can inspect its console log to make sure the Guest Environment loads. Verify that you can connect to the instance using SSH.

  10. Once you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.

Validate the Guest Environment

The presence of a Guest Environment can be determined either by inspecting system logs emitted to the console while an instance starts up, or by listing the installed services and packages while connected to the instance.

Expected console logs for the Guest Environment

The following summarizes the expected console log output emitted by instances with working Guest Environments as they start up, grouped by operating system and service management framework.

CentOS/RHEL 7, Debian 8+, Ubuntu 15.04+ (systemd)
  Expected output:
    Started Google Compute Engine Accounts Daemon
    Started Google Compute Engine IP Forwarding Daemon
    Started Google Compute Engine Clock Skew Daemon
    Started Google Compute Engine Instance Setup
    Started Google Compute Engine Startup Scripts
    Started Google Compute Engine Shutdown Scripts
    Started Google Compute Engine Network Setup

CentOS/RHEL 6, Ubuntu 14.04 (upstart)
  Expected output:
    google-accounts: INFO Starting Google Accounts daemon
    google-ip-forwarding: INFO Starting Google IP Forwarding daemon
    google-clock-skew: INFO Starting Google Clock Skew daemon

CoreOS (systemd and Ignition)
  Expected output:
    systemd[1]: Starting Ignition (files)...
    [finished] enabling unit "coreos-metadata-sshkeys@.service"
    [finished] enabling unit "oem-gce.service"
    [finished] enabling unit "oem-cloudinit.service"

SUSE (SLES) 11 (SysV init)
  Expected output:
    Starting google-accounts-daemon
    Starting google-ip-forwarding-daemon
    Starting google-clock-skew-daemon
    Starting google-instance-setup

SUSE (SLES) 12 (systemd)
  Expected output:
    Started Google Compute Engine Accounts Daemon
    Started Google Compute Engine IP Forwarding Daemon
    Started Google Compute Engine Clock Skew Daemon
    Started Google Compute Engine Instance Setup
    Started Google Compute Engine Startup Scripts
    Started Google Compute Engine Shutdown Scripts

To view console logs for an instance, follow these steps.

Console

  1. Go to the VM instances page.
  2. Click the instance you need to examine.
  3. Restart or reset the instance.
  4. Under Logs, click on Serial port 1 (console).
  5. Search for the expected output listed above.

gcloud

  1. Restart or reset the instance.
  2. Use the gcloud compute instances get-serial-port-output subcommand of the gcloud command-line tool to retrieve the console logs. For example:

    gcloud compute instances get-serial-port-output [INSTANCE_NAME]
    

    where [INSTANCE_NAME] is the name of the instance you need to examine.

  3. Search for the expected output listed above.
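
    For systemd-based images, for example, you could filter the serial output for the expected lines; this is a convenience on top of the steps above, not part of them:

    gcloud compute instances get-serial-port-output [INSTANCE_NAME] | grep "Started Google Compute Engine"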

Loaded services for the Guest Environment

The following summarizes the services that should be loaded on instances with working Guest Environments, along with the command used to list them. The command must be run after connecting to the instance, so this check can only be performed if you have access to it.

CentOS/RHEL 7, Debian 8+, Ubuntu 15.04+
  Command:
    sudo systemctl list-unit-files | grep google | grep enabled
  Expected output:
    google-accounts-daemon.service      enabled
    google-ip-forwarding-daemon.service enabled
    google-clock-skew-daemon.service    enabled
    google-instance-setup.service       enabled
    google-shutdown-scripts.service     enabled
    google-startup-scripts.service      enabled
    google-network-setup.service        enabled

CentOS/RHEL 6, Ubuntu 14.04
  Command:
    initctl list | grep google
  Expected output:
    google-accounts-daemon              start/running
    google-ip-forwarding-daemon         start/running
    google-clock-skew-daemon            start/running
    google-instance-setup               stop/waiting
    google-startup-scripts              stop/waiting
    google-shutdown-scripts             stop/waiting
    google-network-setup                stop/waiting

CoreOS
  Command:
    sudo systemctl list-unit-files | grep "oem-cloudinit\|oem-gce\|coreos-metadata-ssh" | grep enabled
  Expected output:
    coreos-metadata-sshkeys@.service    enabled
    oem-cloudinit.service               enabled
    oem-gce.service                     enabled

SUSE (SLES) 11
  Command:
    ls /etc/init.d/google*
  Expected output:
    /etc/init.d/google-accounts-daemon
    /etc/init.d/google-ip-forwarding-daemon
    /etc/init.d/google-clock-skew-daemon
    /etc/init.d/google-instance-setup
    /etc/init.d/google-startup-scripts
    /etc/init.d/google-shutdown-scripts

SUSE (SLES) 12
  Command:
    sudo systemctl list-unit-files | grep google | grep enabled
  Expected output:
    google-accounts-daemon.service      enabled
    google-ip-forwarding-daemon.service enabled
    google-clock-skew-daemon.service    enabled
    google-instance-setup.service       enabled
    google-shutdown-scripts.service     enabled
    google-startup-scripts.service      enabled

Installed packages for the Guest Environment

The following summarizes the packages that should be installed on instances with working Guest Environments, along with the command used to list them. The command must be run after connecting to the instance, so this check can only be performed if you have access to it.

CentOS/RHEL 6 & 7
  Command:
    yum list installed | grep google-compute
  Expected output:
    google-compute-engine.noarch
    google-compute-engine-oslogin.x86_64
    python-google-compute-engine.noarch

Debian 8 & 9
  Command:
    apt list --installed | grep google-compute
  Expected output:
    google-compute-engine
    google-compute-engine-oslogin
    python-google-compute-engine
    python3-google-compute-engine

Ubuntu 14.04
  Command:
    apt list --installed | grep "google-compute\|gce-compute-image-packages"
  Expected output:
    gce-compute-image-packages
    google-compute-engine-oslogin
    python-google-compute-engine

Ubuntu 16.04 and 17.10
  Command:
    apt list --installed | grep "google-compute\|gce-compute-image-packages"
  Expected output:
    gce-compute-image-packages
    google-compute-engine-oslogin
    python-google-compute-engine
    python3-google-compute-engine

SUSE (SLES) 11
  Command:
    zypper se -i | grep package | grep "google-compute-engine\|plugin-gce\|ClientConfigGCE"
  Expected output:
    cloud-regionsrv-client-plugin-gce
    google-compute-engine-init
    regionServiceClientConfigGCE

SUSE (SLES) 12
  Command:
    zypper se -i | grep package | grep "google-compute-engine\|gce\|ClientConfigGCE"
  Expected output:
    cloud-regionsrv-client-plugin-gce
    google-compute-engine-init
    python-gcemetadata
    regionServiceClientConfigGCE
