This page describes how to manually install the guest environment for VM instances that are running on Compute Engine.
In most cases, if you are using VM instances that are created using Google-provided public images, you do not need to manually install a guest environment.
Before you manually install the guest environment, use the Validate the guest environment procedure to check if the guest environment is running on your instance. If the guest environment is available on your instance but is outdated, update the guest environment.
Otherwise, determine if you need to manually install the guest environment by reviewing When to manually install the guest environment.
Before you begin
- If you want to use the gcloud command-line tool examples in this guide:
  - You can either install local tools or use Cloud Shell.
    - Install local tools:
      - Install the gcloud command-line tool.
      - Install the jq command-line JSON processor, which lets you filter gcloud output.
    - Use Cloud Shell, which has gcloud and jq pre-installed.
  - Set default properties:

    gcloud config set compute/zone [ZONE]
    gcloud config set compute/region [REGION]
    gcloud config set project [PROJECT]
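For example, with placeholder values for the zone, region, and project (substitute your own):

gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1
gcloud config set project example-project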
Operating system support
Manual installation of the guest environment is available for the following operating systems:
- Ubuntu 16.04 or later
- CentOS 6, 7, and 8
- SUSE Linux Enterprise Server (SLES) 12 SP4, 12 SP5, and 15 SP1 or later
- Red Hat Enterprise Linux (RHEL) 6, 7, and 8
- Debian 9 and 10
- Windows Server 1909, 1903, and 1809
- Windows Server 2019
- Windows Server 2016
- Windows Server 2012 R2
- SQL Server on Windows Server
- Windows bring-your-own-license:
- Windows 7
- Windows 8
- Windows 10
Google recommends that you use the import tool to install the guest environment. For a list of installation options, review Installation methods.
You can't manually install guest environments for CoreOS or Container-Optimized OS. If you need one of these operating systems, we recommend that you use public images, because a guest environment is included as a core part of all public images.
Installing the guest environment
Installation methods
There are three ways to install the guest environment. Choose one of the following options:
- Use the import tool. For instructions, review Making an image bootable.
- Connect to your instance using SSH or RDP and install the guest environment in-place.
- Clone your boot disk and install the guest environment using a startup script.
Install the guest environment in-place
Use this method to install the guest environment if you can connect to the target instance using SSH. If you can't connect to the instance to install the guest environment, you can instead install the guest environment by cloning its boot disk and using a startup script.
This procedure is useful for imported images if you can connect using SSH password-based authentication. You can also use it to reinstall the guest environment if you have at least one user account with functional key-based SSH.
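If you are unsure whether SSH works, you can run a no-op command over SSH first. A minimal check, with VM_NAME as a placeholder for the target instance:

gcloud compute ssh VM_NAME --command 'echo SSH OK'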
CentOS/RHEL
- Ensure that the version of your operating system is supported.
Determine the version of CentOS/RHEL, and create the source repo file, /etc/yum.repos.d/google-cloud.repo:

OS_RELEASE_FILE="/etc/redhat-release"
if [ ! -f $OS_RELEASE_FILE ]; then
  OS_RELEASE_FILE="/etc/centos-release"
fi
DIST=$(cat $OS_RELEASE_FILE | grep -o '[0-9].*' | awk -F'.' '{print $1}')
sudo tee /etc/yum.repos.d/google-cloud.repo << EOM
[google-cloud-compute]
name=Google Cloud Compute
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el${DIST}-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
Update package lists:
sudo yum makecache
sudo yum updateinfo
Install the guest environment packages:
sudo yum install -y google-compute-engine
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
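To run the restart-and-verify steps from your workstation, a gcloud sketch might look like the following. VM_NAME is a placeholder, and the expected log lines are listed later on this page in Validating the guest environment:

gcloud compute instances reset VM_NAME
gcloud compute instances get-serial-port-output VM_NAME | grep "Google Compute Engine"
gcloud compute ssh VM_NAME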
Debian
- Ensure that the version of your operating system is supported.
Install the public repo GPG key:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Determine the name of the Debian distro, and create the source list file, /etc/apt/sources.list.d/google-cloud.list:

DIST=$(cat /etc/os-release | grep "VERSION=" | sed "s/\"\|(\|)\|VERSION=//g" \
  | awk '{print tolower($NF)}')
sudo tee /etc/apt/sources.list.d/google-cloud.list << EOM
deb http://packages.cloud.google.com/apt google-compute-engine-${DIST}-stable main
deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-${DIST} main
EOM
Update package lists:
sudo apt update
Install the guest environment packages:
sudo apt install -y google-cloud-packages-archive-keyring
sudo apt install -y google-compute-engine
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
Ubuntu
Ensure that the version of your operating system is supported.
Enable the Universe repository, where Canonical publishes the packages for its guest environment.
sudo apt-add-repository universe
Update package lists:
sudo apt update
Install the guest environment packages:
sudo apt install -y gce-compute-image-packages
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
SLES
Ensure that the version of your operating system is supported.
Activate the Public Cloud Module.
product=$(sudo SUSEConnect --list-extensions | grep -o "sle-module-public-cloud.*")
[[ -n "$product" ]] && sudo SUSEConnect -p "$product"
Update package lists:
sudo zypper refresh
Install the guest environment packages:
sudo zypper install -y google-compute-engine-{oslogin,init} rsyslog
sudo systemctl enable /usr/lib/systemd/system/google-*
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
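After the reboot, you can confirm that the guest environment services were enabled. This check mirrors the Loaded services for the guest environment table later on this page; the exact unit names can vary by SLES version:

sudo systemctl list-unit-files | grep google | grep enabled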
Windows
Before you begin, ensure that the version of your operating system is supported.
To install the Windows guest environment, run the following commands in an elevated PowerShell prompt. The Invoke-WebRequest command in these instructions requires PowerShell 3.0 or higher.
Download and install GooGet:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
Invoke-WebRequest https://github.com/google/googet/releases/download/v2.13.0/googet.exe -OutFile $env:temp\googet.exe;
& "$env:temp\googet.exe" -root C:\ProgramData\GooGet -noconfirm install -sources `
  https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable googet;
Remove-Item "$env:temp\googet.exe"
During installation, GooGet adds content to the system environment. After installation completes, launch a new PowerShell console or provide the full path to the googet.exe file (C:\ProgramData\GooGet\googet.exe).

Open a new console and add the google-compute-engine-stable repository:

googet addrepo google-compute-engine-stable https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable
Install the core Windows guest environment packages:

googet -noconfirm install google-compute-engine-windows `
  google-compute-engine-sysprep google-compute-engine-metadata-scripts `
  google-compute-engine-vss
Install the optional Windows guest environment package:
googet -noconfirm install google-compute-engine-auto-updater
Using the googet command

- To view available packages, run the googet available command.
- To view installed packages, run the googet installed command.
- To update to the latest package version, run the googet update command.
- To view additional commands, run googet help.
Clone boot disk and use startup script
If you cannot connect to an instance to manually install the guest environment, use this procedure instead. It clones the instance's boot disk and uses a startup script on the clone to install the guest environment; the steps can be completed in the Google Cloud Console and Cloud Shell.

This method applies to Linux distributions only. For Windows, use one of the other two installation methods.

Use Cloud Shell to run this procedure:
CentOS/RHEL
Ensure that the version of your operating system is supported.
Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.
Stop the problematic instance and create a copy of its boot disk.
Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.
export PROB_INSTANCE_NAME=VM_NAME
Replace VM_NAME with the name of the problematic instance.
Stop the problematic instance.
gcloud compute instances stop "$PROB_INSTANCE_NAME"
Get the name of the boot disk for the problematic instance.

export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
  "$PROB_INSTANCE_NAME" --format='json' | jq -r \
  '.disks[] | select(.boot == true) | .source')"
Create a snapshot of the boot disk.
export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
  --snapshot-names "$DISK_SNAPSHOT"
Create a new disk from the snapshot.
export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
gcloud compute disks create "$NEW_DISK" \
  --source-snapshot="$DISK_SNAPSHOT"
Delete the snapshot:
gcloud compute snapshots delete "$DISK_SNAPSHOT"
Attach the new disk to the rescue instance and mount its root volume. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. CentOS/RHEL uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier, as shown after this step.

gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
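For non-standard disk layouts, you can list the block devices on the rescue instance after attaching the disk to identify the root volume. A minimal sketch (the available columns vary by lsblk version):

sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT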
Connect to the rescue instance using SSH:
gcloud compute ssh rescue
Run the following steps on the rescue instance.
Mount the root volume of the new disk.
export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
DEV="/dev/sdb1"
sudo mkdir "$NEW_DISK_MOUNT_POINT"
sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
if [ "$?" != "0" ]; then
  # Handle XFS filesystem cases (CentOS/RHEL 7):
  sudo mount -o nouuid "$DEV" "$NEW_DISK_MOUNT_POINT"
fi
Create the rc.local script:

cat <<'EOF' >/tmp/rc.local
#!/bin/bash
YUM_SERVER="packages.cloud.google.com"
REPO_FILE="/etc/yum.repos.d/google-cloud.repo"
echo "== Installing a Linux guest environment for CentOS/RHEL =="
sleep 30 # Wait for network.
echo "Determining CentOS/RHEL version..."
OS_RELEASE_FILE="/etc/redhat-release"
if [ ! -f "$OS_RELEASE_FILE" ]; then
  OS_RELEASE_FILE="/etc/centos-release"
fi
if [ ! -f "$OS_RELEASE_FILE" ]; then
  echo "ERROR: This system does not appear to be CentOS/RHEL."
  exit 1
fi
DIST=$(cat "$OS_RELEASE_FILE" | grep -o '[0-9].*' | awk -F'.' '{print $1}')
if [ -z "$DIST" ]; then
  echo "ERROR: Could not determine version of CentOS/RHEL."
  exit 1
fi
echo "Updating $REPO_FILE..."
tee "$REPO_FILE" << EOM
[google-cloud-compute]
name=Google Cloud Compute
baseurl=https://$YUM_SERVER/yum/repos/google-compute-engine-el${DIST}-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://$YUM_SERVER/yum/doc/yum-key.gpg
       https://$YUM_SERVER/yum/doc/rpm-package-key.gpg
EOM
echo "Running yum makecache..."
yum makecache
echo "Running yum updateinfo..."
yum updateinfo
echo "Running yum install google-compute-engine..."
yum install -y google-compute-engine
if [ "$?" != "0" ]; then
  echo "ERROR: Failed to install google-compute-engine."
fi
echo "Removing this rc.local script."
rm /etc/rc.d/rc.local
# Move back any previous rc.local:
if [ -f "/etc/moved-rc.local" ]; then
  echo "Restoring a previous rc.local script."
  mv "/etc/moved-rc.local" "/etc/rc.d/rc.local"
fi
echo "Restarting the instance..."
reboot now
EOF
Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it when it finishes.

if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" ]; then
  sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" \
    "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
fi
sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
Unmount the root volume of the new disk:

sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
Exit the SSH session to the rescue instance.
Detach the new disk from the rescue instance.
gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
Create an instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud Console:
In the Google Cloud Console, go to the VM instances page.
Click the problematic instance, then click Create similar.
Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
Click Create. The replacement instance automatically starts after it is created.
As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view the logs, run the following command:

gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME

Replace REPLACEMENT_VM_NAME with the name you assigned to the replacement instance.
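For example, to filter the serial output down to the script's own log lines (a sketch; the marker string comes from the rc.local script above):

gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME \
  | grep "Installing a Linux guest environment"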
The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

Verify that you can connect to the instance using SSH.
When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
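When you are ready, you can stop or delete the instances from Cloud Shell. A cleanup sketch, assuming the variable names used above (deletion is permanent, so confirm the names first):

gcloud compute instances delete "$PROB_INSTANCE_NAME"
gcloud compute instances delete rescue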
Debian
Ensure that the version of your operating system is supported.
Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.
Stop the problematic instance and create a copy of its boot disk.
Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.
export PROB_INSTANCE_NAME=VM_NAME
Replace VM_NAME with the name of the problematic instance.
Stop the problematic instance.
gcloud compute instances stop "$PROB_INSTANCE_NAME"
Get the name of the boot disk for the problematic instance.

export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
  "$PROB_INSTANCE_NAME" --format='json' | jq -r \
  '.disks[] | select(.boot == true) | .source')"
Create a snapshot of the boot disk.
export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
  --snapshot-names "$DISK_SNAPSHOT"
Create a new disk from the snapshot.
export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
gcloud compute disks create "$NEW_DISK" \
  --source-snapshot="$DISK_SNAPSHOT"
Delete the snapshot:
gcloud compute snapshots delete "$DISK_SNAPSHOT"
Attach the new disk to the rescue instance and mount its root volume. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Debian uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
Connect to the rescue instance using SSH:
gcloud compute ssh rescue
Run the following steps on the rescue instance.
Mount the root volume of the new disk.
export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
DEV="/dev/sdb1"
sudo mkdir "$NEW_DISK_MOUNT_POINT"
sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
Create the rc.local script:

cat <<'EOF' >/tmp/rc.local
#!/bin/bash
declare -x APT_SERVER="packages.cloud.google.com"
declare -x REPO_FILE="/etc/apt/sources.list.d/google-cloud.list"
echo "== Installing a Linux guest environment for Debian =="
sleep 30 # Wait for network.
echo "Determining Debian version..."
DIST=$(cat /etc/os-release | grep "VERSION=" \
  | sed "s/\"\|(\|)\|VERSION=//g" | awk '{print tolower($NF)}')
if [ -z "$DIST" ]; then
  echo "ERROR: Could not determine Debian version."
  exit 1
fi
echo "Adding GPG key for $APT_SERVER."
curl https://$APT_SERVER/apt/doc/apt-key.gpg | apt-key add -
echo "Updating $REPO_FILE..."
tee "$REPO_FILE" << EOM
deb http://$APT_SERVER/apt google-compute-engine-${DIST}-stable main
deb http://$APT_SERVER/apt google-cloud-packages-archive-keyring-${DIST} main
EOM
echo "Running apt update..."
apt update
echo "Installing packages..."
for pkg in google-cloud-packages-archive-keyring google-compute-engine; do
  echo "Running apt install ${pkg}..."
  apt install -y ${pkg}
  if [ "$?" != "0" ]; then
    echo "ERROR: Failed to install ${pkg}."
  fi
done
echo "Removing this rc.local script."
rm /etc/rc.local
# Move back any previous rc.local:
if [ -f "/etc/moved-rc.local" ]; then
  echo "Restoring a previous rc.local script."
  mv "/etc/moved-rc.local" "/etc/rc.local"
fi
echo "Restarting the instance..."
reboot now
EOF
Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it when it finishes.

if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]; then
  sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
    "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
fi
sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
Unmount the root volume of the new disk:

sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
Exit the SSH session to the rescue instance.
Detach the new disk from the rescue instance.
gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud Console:
In the Google Cloud Console, go to the VM instances page.
Click the problematic instance, then click Create similar.
Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
Click Create. The replacement instance automatically starts after it is created.
As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view the logs, run the following command:

gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME

Replace REPLACEMENT_VM_NAME with the name you assigned to the replacement instance.
The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

Verify that you can connect to the instance using SSH.
When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
Ubuntu
Ensure that the version of your operating system is supported.
Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.
Stop the problematic instance and create a copy of its boot disk.
Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.
export PROB_INSTANCE_NAME=VM_NAME
Replace VM_NAME with the name of the problematic instance.
Stop the problematic instance.
gcloud compute instances stop "$PROB_INSTANCE_NAME"
Get the name of the boot disk for the problematic instance.

export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
  "$PROB_INSTANCE_NAME" --format='json' | jq -r \
  '.disks[] | select(.boot == true) | .source')"
Create a snapshot of the boot disk.
export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
  --snapshot-names "$DISK_SNAPSHOT"
Create a new disk from the snapshot.
export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
gcloud compute disks create "$NEW_DISK" \
  --source-snapshot="$DISK_SNAPSHOT"
Delete the snapshot:
gcloud compute snapshots delete "$DISK_SNAPSHOT"
Attach the new disk to the rescue instance and mount its root volume. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Ubuntu uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.

gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
Connect to the rescue instance using SSH:
gcloud compute ssh rescue
Run the following steps on the rescue instance.
Mount the root volume of the new disk.
export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
DEV="/dev/sdb1"
sudo mkdir "$NEW_DISK_MOUNT_POINT"
sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
Create the rc.local script:

cat <<'EOF' >/tmp/rc.local
#!/bin/bash
echo "== Installing a Linux guest environment for Ubuntu =="
sleep 30 # Wait for network.
echo "Ensuring Ubuntu universe repositories are enabled."
apt-add-repository universe
echo "Running apt update..."
apt update
echo "Installing packages..."
echo "Running apt install gce-compute-image-packages..."
apt install -y gce-compute-image-packages
if [ "$?" != "0" ]; then
  echo "ERROR: Failed to install gce-compute-image-packages."
fi
echo "Removing this rc.local script."
rm /etc/rc.local
# Move back any previous rc.local:
if [ -f "/etc/moved-rc.local" ]; then
  echo "Restoring a previous rc.local script."
  mv "/etc/moved-rc.local" "/etc/rc.local"
fi
echo "Restarting the instance..."
reboot now
EOF
Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside; the temporary script restores it when it finishes.

if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]; then
  sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
    "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
fi
sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
Unmount the root volume of the new disk:

sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
Exit the SSH session to the rescue instance.
Detach the new disk from the rescue instance.
gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud Console:
In the Google Cloud Console, go to the VM instances page.
Click the problematic instance, then click Create similar.
Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
Click Create. The replacement instance automatically starts after it is created.
As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view the logs, run the following command:

gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME

Replace REPLACEMENT_VM_NAME with the name you assigned to the replacement instance.
The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.

Verify that you can connect to the instance using SSH.
When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
Updating the guest environment
If you are getting a message that the guest environment is outdated, update the packages for your operating system.
CentOS/RHEL
To update CentOS and RHEL operating systems, run the following commands:
sudo yum makecache
sudo yum install google-compute-engine google-compute-engine-oslogin \
  google-guest-agent google-osconfig-agent
Debian
To update Debian operating systems, run the following commands:
sudo apt update
sudo apt install google-compute-engine google-compute-engine-oslogin \
  google-guest-agent google-osconfig-agent
Ubuntu
To update Ubuntu operating systems, run the following commands:
sudo apt update
sudo apt install gce-compute-image-packages google-compute-engine-oslogin \
  python3-google-compute-engine
SLES
To update SLES operating systems, run the following commands:
sudo zypper refresh
sudo zypper install google-compute-engine-{oslogin,init}
Windows
To update Windows operating systems, run the following command:
googet update
Validating the guest environment
You can determine whether a guest environment is present either by inspecting the system logs emitted to the console while an instance starts up, or by listing the installed packages while connected to the instance.
Expected console logs for the guest environment
The following summarizes the expected console log output for instances with working guest environments as they start up, grouped by operating system and service manager.

CentOS/RHEL 7+, Debian, and Ubuntu (systemd)
  Started Google Compute Engine Guest Agent.
  Started Google Compute Engine Shutdown Scripts.
  Started Google Compute Engine Startup Scripts.

CentOS/RHEL 6 (upstart)
  GCEGuestAgent Info: GCE Agent Started (version YYYYMMDD.NN)

CoreOS (systemd and ignition)
  systemd[1]: Starting Ignition (files)...
  [finished] enabling unit "coreos-metadata-sshkeys@.service"
  [finished] enabling unit "oem-gce.service"
  [finished] enabling unit "oem-cloudinit.service"

Container-Optimized OS (systemd)
  Started Google Compute Engine Accounts Daemon
  Started Google Compute Engine Network Daemon
  Started Google Compute Engine Clock Skew Daemon
  Started Google Compute Engine Instance Setup
  Started Google Compute Engine Startup Scripts
  Started Google Compute Engine Shutdown Scripts

SLES 12+ (systemd)
  Started Google Compute Engine Guest Agent.
  Started Google Compute Engine Shutdown Scripts.
  Started Google Compute Engine Startup Scripts.

Windows
  GCEWindowsAgent: GCE Agent Started
  GCEMetadataScripts: Starting startup scripts
To view console logs for an instance, follow these steps.

Console

- In the Google Cloud Console, go to the VM instances page.
- Click the instance that you need to examine.
- Restart or reset the instance, then view its serial port output from the instance details page.
- Search for the expected output by referencing the list above.

gcloud

- Restart or reset the instance.
- Use the gcloud compute instances get-serial-port-output subcommand to view the logs with the gcloud command-line tool. For example:

  gcloud compute instances get-serial-port-output VM_NAME

  Replace VM_NAME with the name of the instance you need to examine.
- Search for the expected output by referencing the list above.
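For example, on a systemd-based image you can filter the serial port output for the guest agent lines listed above. A sketch, with VM_NAME as a placeholder:

gcloud compute instances get-serial-port-output VM_NAME | grep "Google Compute Engine"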
Loaded services for the guest environment
The following summarizes the services that should be loaded on instances with working guest environments, along with the command to list them. The command must be run after connecting to the instance, so you can perform this check only if you have access to the instance.

CentOS/RHEL 7+ and Debian 9+
  Command: sudo systemctl list-unit-files | grep google | grep enabled
  Expected output:
  google-disk-expand.service enabled
  google-guest-agent.service enabled
  google-osconfig-agent.service enabled
  google-shutdown-scripts.service enabled
  google-startup-scripts.service enabled
  google-oslogin-cache.timer enabled

CentOS/RHEL 6
  Command: initctl list | grep google
  Expected output:
  google-guest-agent start/running, process NNNN
  google-startup-scripts stop/waiting
  google-shutdown-scripts stop/waiting

Ubuntu
  Command: sudo systemctl list-unit-files | grep google | grep enabled
  Expected output:
  google-accounts-daemon.service enabled
  google-clock-skew-daemon.service enabled
  google-instance-setup.service enabled
  google-network-daemon.service enabled
  google-shutdown-scripts.service enabled
  google-startup-scripts.service enabled
  google-oslogin-cache.timer enabled

CoreOS
  Command: sudo systemctl list-unit-files | grep "oem-cloudinit\|oem-gce\|coreos-metadata-ssh" | grep enabled
  Expected output:
  coreos-metadata-sshkeys@.service enabled
  oem-cloudinit.service enabled
  oem-gce.service enabled

Container-Optimized OS
  Command: sudo systemctl list-unit-files | grep google
  Expected output:
  var-lib-google.mount disabled
  google-accounts-daemon.service disabled
  google-clock-skew-daemon.service disabled
  google-instance-setup.service disabled
  google-ip-forwarding-daemon.service disabled
  google-network-setup.service disabled
  google-shutdown-scripts.service disabled
  google-startup-scripts.service disabled
  var-lib-google-remount.service static

SLES 12+
  Command: sudo systemctl list-unit-files | grep google | grep enabled
  Expected output:
  google-guest-agent.service enabled
  google-optimize-local-ssd.service enabled
  google-set-multiqueue.service enabled
  google-shutdown-scripts.service enabled
  google-startup-scripts.service enabled
  google-oslogin-cache.timer enabled

Windows
  Commands:
  Get-Service GCEAgent
  Get-ScheduledTask GCEStartup
  Expected output:
  Running GCEAgent GCEAgent
  \ GCEStartup Ready
Installed packages for the guest environment
The following summarizes the packages that should be installed on instances with working guest environments, along with the command to list them. The command must be run after connecting to the instance, so you can perform this check only if you have access to the instance.

CentOS/RHEL
  Command: rpm -qa --queryformat '%{NAME}\n' | grep -iE 'google-compute|google-guest'
  Expected output:
  google-compute-engine
  google-compute-engine-oslogin
  google-guest-agent

Debian
  Command: apt list --installed | grep -iE 'google-compute|google-guest'
  Expected output:
  gce-disk-expand
  google-compute-engine-oslogin
  google-compute-engine
  google-guest-agent
  google-osconfig-agent

Ubuntu
  Command: apt list --installed | grep "google-compute\|gce-compute-image-packages"
  Expected output:
  gce-compute-image-packages
  google-compute-engine-oslogin
  python3-google-compute-engine

SUSE (SLES)
  Command: rpm -qa --queryformat '%{NAME}\n' | grep -iE 'google|gce'
  Expected output:
  google-guest-configs
  regionServiceClientConfigGCE
  google-guest-agent
  google-opensans-fonts
  python3-gcemetadata
  google-guest-oslogin
  cloud-regionsrv-client-plugin-gce

Windows
  Command: googet installed
  Expected output:
  certgen
  googet
  google-compute-engine-auto-updater
  google-compute-engine-driver-gga
  google-compute-engine-driver-netkvm
  google-compute-engine-driver-pvpanic
  google-compute-engine-driver-vioscsi
  google-compute-engine-metadata-scripts
  google-compute-engine-powershell
  google-compute-engine-sysprep
  google-compute-engine-vss
  google-compute-engine-windows
What's next
- Read the troubleshooting tips.
- Learn more about applying metadata.
- Learn about SSH keys.