This page describes how to manually install the guest environment for virtual machine (VM) instances that are running custom images on Compute Engine.
In most cases, if you are using VMs that are created using Google-provided public images, you do not need to install a guest environment. For information on when to use the guest environment, see When to manually install or update the guest environment.
Before you manually install the guest environment, use the Validate the guest environment procedure to check if the guest environment is running on your VM. If the guest environment is available on your VM but is outdated, update the guest environment.
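For a quick first check on a Linux VM, you can list the Google services that the guest environment provides (this is the same check used in the validation tables later on this page):
sudo systemctl list-unit-files | grep google | grep enabled
If the command returns no enabled google-* services, the guest environment is likely missing and you should follow one of the installation methods below.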
Before you begin
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
- Set a default region and zone.
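For example, a minimal sketch of setting defaults with gcloud config (us-central1 and us-central1-a are placeholder values; substitute your preferred region and zone):
# Set default region and zone for subsequent gcloud compute commands.
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a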
Installation methods
There are multiple ways that you can install the guest environment. Choose one of the following options:
Import tool. This is the recommended option. However, keep in mind that the import tool not only installs the guest environment but also makes other configuration changes to the image, such as configuring networks, configuring the bootloader, and installing the Google Cloud CLI. For instructions on how to use the import tool, review Making an image bootable.
The import tool supports a wide variety of operating systems and versions. For more information, see operating system details.
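For reference, a minimal sketch of an import with the gcloud CLI, assuming a virtual disk file named disk.vmdk in a Cloud Storage bucket named my-bucket and a CentOS 7 guest (all three names are hypothetical; adjust the --os value for your image):
# Import a virtual disk as a bootable Compute Engine image.
gcloud compute images import my-imported-image \
    --source-file=gs://my-bucket/disk.vmdk \
    --os=centos-7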
Manual installation. Choose one of the following:
- Connect to your instance using SSH or RDP and install the guest environment in-place
- Clone your boot disk and install the guest environment using a startup script
Supported operating systems
You can manually install the guest environment on VMs that use OS image versions that are in the general availability (GA) lifecycle or extended support lifecycle stage. To review a list of OS image versions and their lifecycle stage on Compute Engine, see Operating system details.
Limitations
You can't use manual installation or the import tool to install guest environments on Fedora CoreOS or Container-Optimized OS. If you need one of these operating systems, we recommend that you use public images, because a guest environment is included as a core part of all public images.
Installing the guest environment
Install the guest environment in-place
Use this method to install the guest environment if you can connect to the target instance using SSH. If you can't connect to the instance to install the guest environment, you can instead install the guest environment by cloning its boot disk and using a startup script.
This procedure is useful for imported images if you can connect using SSH password-based authentication. It might also be used to reinstall the guest environment if you have at least one user account with functional key-based SSH.
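For example, to confirm SSH access before you begin, you can try connecting with the gcloud CLI (VM_NAME is a placeholder for your instance name):
gcloud compute ssh VM_NAME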
CentOS/RHEL/Rocky
- Ensure that the version of your operating system is supported.
Determine the version of CentOS/RHEL/Rocky Linux, and create the source repo file, /etc/yum.repos.d/google-cloud.repo:
eval $(grep VERSION_ID /etc/os-release)
sudo tee /etc/yum.repos.d/google-cloud.repo << EOM
[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el${VERSION_ID/.*}-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
Update package lists:
sudo yum makecache
sudo yum updateinfo
Install the guest environment packages:
sudo yum install -y google-compute-engine google-osconfig-agent
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
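As a quick check after the reboot, you can confirm the packages and the guest agent service from inside the VM (a minimal sketch; the service name matches the validation tables later on this page):
# Confirm the packages are installed and the guest agent is active.
rpm -q google-compute-engine google-osconfig-agent
sudo systemctl status google-guest-agent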
Debian
- Ensure that the version of your operating system is supported.
Install the public repo GPG key:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Determine the name of the Debian distro, and create the source list file, /etc/apt/sources.list.d/google-cloud.list:
eval $(grep VERSION_CODENAME /etc/os-release)
sudo tee /etc/apt/sources.list.d/google-cloud.list << EOM
deb http://packages.cloud.google.com/apt google-compute-engine-${VERSION_CODENAME}-stable main
deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-${VERSION_CODENAME} main
EOM
Update package lists:
sudo apt update
Install the guest environment packages:
sudo apt install -y google-cloud-packages-archive-keyring
sudo apt install -y google-compute-engine google-osconfig-agent
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
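To double-check from inside the VM, you can list the installed Google packages (the same check used in the validation section later on this page):
apt list --installed 2>/dev/null | grep -i google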
Ubuntu
Ensure that the version of your operating system is supported.
Enable the Universe repository. Canonical publishes packages for its guest environment to the Universe repository.
sudo apt-add-repository universe
Update package lists:
sudo apt update
Install the guest environment packages:
sudo apt install -y google-compute-engine google-osconfig-agent
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
SLES
Ensure that the version of your operating system is supported.
Activate the Public Cloud Module.
product=$(sudo SUSEConnect --list-extensions | grep -o "sle-module-public-cloud.*")
[[ -n "$product" ]] && sudo SUSEConnect -p "$product"
Update package lists:
sudo zypper refresh
Install the guest environment packages:
sudo zypper install -y google-guest-{agent,configs,oslogin} \
    google-osconfig-agent
sudo systemctl enable /usr/lib/systemd/system/google-*
Restart the instance and inspect its console log to make sure the guest environment loads as it starts back up.
Verify that you can connect to the instance using SSH.
Windows
Before you begin, ensure that the version of your operating system is supported.
To install the Windows guest environment, run the following commands in an elevated PowerShell prompt. The Invoke-WebRequest command in the following instructions requires PowerShell 3.0 or later.
Download and install GooGet:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
Invoke-WebRequest https://github.com/google/googet/releases/download/v2.18.3/googet.exe -OutFile $env:temp\googet.exe;
& "$env:temp\googet.exe" -root C:\ProgramData\GooGet -noconfirm install -sources `
  https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable googet;
Remove-Item "$env:temp\googet.exe"
During installation, GooGet adds content to the system environment. After installation completes, launch a new PowerShell console or provide the full path to the googet.exe file (C:\ProgramData\GooGet\googet.exe).
Open a new console and add the google-compute-engine-stable repository:
googet addrepo google-compute-engine-stable https://packages.cloud.google.com/yuck/repos/google-compute-engine-stable
Install the core Windows guest environment packages.
googet -noconfirm install google-compute-engine-windows `
  google-compute-engine-sysprep google-compute-engine-metadata-scripts `
  google-compute-engine-vss google-osconfig-agent
Install the optional Windows guest environment package.
googet -noconfirm install google-compute-engine-auto-updater
Using the googet command
To view available packages, run the googet available command.
To view installed packages, run the googet installed command.
To update to the latest package version, run the googet update command.
To view additional commands, run googet help.
Clone boot disk and use startup script
If you cannot connect to an instance to manually install the guest environment, install the guest environment using this procedure, which includes the following steps that can be completed in the Google Cloud console or Cloud Shell.
This method shows the procedure for Linux distributions only. For Windows, use one of the other two installation methods.
Use Cloud Shell to run this procedure. If you are not using Cloud Shell, install the jq command-line JSON processor, which you can use to filter gcloud CLI output. Cloud Shell has jq pre-installed.
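For example, the following filter (used later in this procedure) extracts the boot disk source from an instance description; VM_NAME is a placeholder:
gcloud compute instances describe VM_NAME --format='json' \
    | jq -r '.disks[] | select(.boot == true) | .source'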
CentOS/RHEL/Rocky
Ensure that the version of your operating system is supported.
Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.
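A minimal sketch of creating the rescue instance with the gcloud CLI, assuming a current Debian image family from the debian-cloud project is acceptable as the rescue OS:
gcloud compute instances create rescue \
    --image-family=debian-12 --image-project=debian-cloud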
Stop the problematic instance and create a copy of its boot disk.
Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.
export PROB_INSTANCE_NAME=VM_NAME
Replace VM_NAME with the name of the problematic instance.
Stop the problematic instance.
gcloud compute instances stop "$PROB_INSTANCE_NAME"
Get the name of the boot disk for the problem instance.
export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
    "$PROB_INSTANCE_NAME" --format='json' | jq -r \
    '.disks[] | select(.boot == true) | .source')"
Create a snapshot of the boot disk.
export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
    --snapshot-names "$DISK_SNAPSHOT"
Create a new disk from the snapshot.
export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
gcloud compute disks create "$NEW_DISK" \
    --source-snapshot="$DISK_SNAPSHOT"
Delete the snapshot:
gcloud compute snapshots delete "$DISK_SNAPSHOT"
Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. CentOS/RHEL/Rocky Linux uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.
gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
Connect to the rescue instance using SSH:
gcloud compute ssh rescue
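If you want to confirm the device and volume names before mounting, you can list the block devices from the rescue instance's shell (a quick check; the column flags are standard lsblk options):
sudo lsblk -o NAME,SIZE,TYPE,MOUNTPOINT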
Run the following steps on the rescue instance.
Mount the root volume of the new disk.
export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
DEV="/dev/sdb1"
sudo mkdir "$NEW_DISK_MOUNT_POINT"
sudo mount -o nouuid "$DEV" "$NEW_DISK_MOUNT_POINT"
Create the rc.local script.
cat <<'EOF' >/tmp/rc.local
#!/bin/bash
echo "== Installing Google guest environment for CentOS/RHEL/Rocky Linux =="
sleep 30 # Wait for network.
echo "Determining CentOS/RHEL/Rocky Linux version..."
eval $(grep VERSION_ID /etc/os-release)
if [[ -z $VERSION_ID ]]; then
  echo "ERROR: Could not determine version of CentOS/RHEL/Rocky Linux."
  exit 1
fi
echo "Updating repo file..."
tee "/etc/yum.repos.d/google-cloud.repo" << EOM
[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el${VERSION_ID/.*}-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
echo "Running yum makecache..."
yum makecache
echo "Running yum updateinfo..."
yum updateinfo
echo "Running yum install google-compute-engine..."
yum install -y google-compute-engine
rpm -q google-compute-engine
if [[ $? -ne 0 ]]; then
  echo "ERROR: Failed to install google-compute-engine."
fi
echo "Removing this rc.local script."
rm /etc/rc.d/rc.local
# Move back any previous rc.local:
if [[ -f "/etc/moved-rc.local" ]]; then
  echo "Restoring a previous rc.local script."
  mv "/etc/moved-rc.local" "/etc/rc.d/rc.local"
fi
echo "Restarting the instance..."
reboot
EOF
Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside. The temporary script will replace it when it finishes.
if [ -f "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" ]; then
  sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local" \
    "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
fi
sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.d/rc.local"
Unmount the root volume of the new disk.
sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
Exit the SSH session to the rescue instance.
Detach the new disk from the rescue instance.
gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
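If you prefer the gcloud CLI over the console for the next step, a sketch of creating the replacement instance from the new disk (replacement-vm is a hypothetical name; the --disk flag attaches the existing disk as the boot disk):
gcloud compute instances create replacement-vm \
    --disk=name="$NEW_DISK",boot=yes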
Create an instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud console:
In the Google Cloud console, go to the VM instances page.
Click the problematic instance, then click Create similar.
Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
Click Create. The replacement instance automatically starts after it is created.
As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view logs, run the following command:
gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME
Replace REPLACEMENT_VM_NAME with the name you assigned the replacement instance.
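To stream the logs continuously instead of fetching a one-time snapshot, you can use the tail subcommand:
gcloud compute instances tail-serial-port-output REPLACEMENT_VM_NAME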
The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.
Verify that you can connect to the instance using SSH.
When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
Debian
Ensure that the version of your operating system is supported.
Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.
Stop the problematic instance and create a copy of its boot disk.
Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.
export PROB_INSTANCE_NAME=VM_NAME
Replace VM_NAME with the name of the problematic instance.
Stop the problematic instance.
gcloud compute instances stop "$PROB_INSTANCE_NAME"
Get the name of the boot disk for the problem instance.
export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
    "$PROB_INSTANCE_NAME" --format='json' | jq -r \
    '.disks[] | select(.boot == true) | .source')"
Create a snapshot of the boot disk.
export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
    --snapshot-names "$DISK_SNAPSHOT"
Create a new disk from the snapshot.
export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
gcloud compute disks create "$NEW_DISK" \
    --source-snapshot="$DISK_SNAPSHOT"
Delete the snapshot:
gcloud compute snapshots delete "$DISK_SNAPSHOT"
Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Debian uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.
gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
Connect to the rescue instance using SSH:
gcloud compute ssh rescue
Run the following steps on the rescue instance.
Mount the root volume of the new disk.
export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
DEV="/dev/sdb1"
sudo mkdir "$NEW_DISK_MOUNT_POINT"
sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
Create the rc.local script.
cat <<'EOF' >/tmp/rc.local
#!/bin/bash
echo "== Installing Google guest environment for Debian =="
export DEBIAN_FRONTEND=noninteractive
sleep 30 # Wait for network.
echo "Determining Debian version..."
eval $(grep VERSION_CODENAME /etc/os-release)
if [[ -z $VERSION_CODENAME ]]; then
  echo "ERROR: Could not determine Debian version."
  exit 1
fi
echo "Adding GPG key for Google cloud repo."
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "Updating repo file..."
tee "/etc/apt/sources.list.d/google-cloud.list" << EOM
deb http://packages.cloud.google.com/apt google-compute-engine-${VERSION_CODENAME}-stable main
deb http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-${VERSION_CODENAME} main
EOM
echo "Running apt update..."
apt update
echo "Installing packages..."
for pkg in google-cloud-packages-archive-keyring google-compute-engine; do
  echo "Running apt install ${pkg}..."
  apt install -y ${pkg}
  if [[ $? -ne 0 ]]; then
    echo "ERROR: Failed to install ${pkg}."
  fi
done
echo "Removing this rc.local script."
rm /etc/rc.local
# Move back any previous rc.local:
if [[ -f "/etc/moved-rc.local" ]]; then
  echo "Restoring a previous rc.local script."
  mv "/etc/moved-rc.local" "/etc/rc.local"
fi
echo "Restarting the instance..."
reboot
EOF
Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside. The temporary script will replace it when it finishes.
if [[ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]]; then
  sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
    "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
fi
sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
Unmount the root volume of the new disk.
sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
Exit the SSH session to the rescue instance.
Detach the new disk from the rescue instance.
gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud console:
In the Google Cloud console, go to the VM instances page.
Click the problematic instance, then click Create similar.
Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
Click Create. The replacement instance automatically starts after it is created.
As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view logs, run the following command:
gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME
Replace REPLACEMENT_VM_NAME with the name you assigned the replacement instance.
The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.
Verify that you can connect to the instance using SSH.
When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
Ubuntu
Ensure that the version of your operating system is supported.
Create a new instance to serve as the rescue instance. Name this instance rescue. This rescue instance does not need to run the same Linux OS as the problematic instance. This example uses Debian 9 on the rescue instance.
Stop the problematic instance and create a copy of its boot disk.
Set a variable name for the problematic instance. This makes it easier to reference the instance in later steps.
export PROB_INSTANCE_NAME=VM_NAME
Replace VM_NAME with the name of the problematic instance.
Stop the problematic instance.
gcloud compute instances stop "$PROB_INSTANCE_NAME"
Get the name of the boot disk for the problem instance.
export PROB_INSTANCE_DISK="$(gcloud compute instances describe \
    "$PROB_INSTANCE_NAME" --format='json' | jq -r \
    '.disks[] | select(.boot == true) | .source')"
Create a snapshot of the boot disk.
export DISK_SNAPSHOT="${PROB_INSTANCE_NAME}-snapshot"
gcloud compute disks snapshot "$PROB_INSTANCE_DISK" \
    --snapshot-names "$DISK_SNAPSHOT"
Create a new disk from the snapshot.
export NEW_DISK="${PROB_INSTANCE_NAME}-new-disk"
gcloud compute disks create "$NEW_DISK" \
    --source-snapshot="$DISK_SNAPSHOT"
Delete the snapshot:
gcloud compute snapshots delete "$DISK_SNAPSHOT"
Attach the new disk to the rescue instance and mount the root volume for the rescue instance. Because this procedure only attaches one additional disk, the device identifier of the new disk is /dev/sdb. Ubuntu uses the first volume on a disk as the root volume by default, so the volume identifier should be /dev/sdb1. For custom cases, use lsblk to determine the volume identifier.
gcloud compute instances attach-disk rescue --disk "$NEW_DISK"
Connect to the rescue instance using SSH:
gcloud compute ssh rescue
Run the following steps on the rescue instance.
Mount the root volume of the new disk.
export NEW_DISK_MOUNT_POINT="/tmp/sdb-root-vol"
DEV="/dev/sdb1"
sudo mkdir "$NEW_DISK_MOUNT_POINT"
sudo mount "$DEV" "$NEW_DISK_MOUNT_POINT"
Create the rc.local script.
cat <<'EOF' >/tmp/rc.local
#!/bin/bash
echo "== Installing a Linux guest environment for Ubuntu =="
sleep 30 # Wait for network.
echo "Running apt update..."
apt update
echo "Installing packages..."
echo "Running apt install google-compute-engine..."
apt install -y google-compute-engine
if [[ $? -ne 0 ]]; then
  echo "ERROR: Failed to install google-compute-engine."
fi
echo "Removing this rc.local script."
rm /etc/rc.local
# Move back any previous rc.local:
if [[ -f "/etc/moved-rc.local" ]]; then
  echo "Restoring a previous rc.local script."
  mv "/etc/moved-rc.local" "/etc/rc.local"
fi
echo "Restarting the instance..."
reboot
EOF
Move the rc.local script to the root volume of the new disk and set permissions. Move any existing rc.local script aside. The temporary script will replace it when it finishes.
if [[ -f "$NEW_DISK_MOUNT_POINT/etc/rc.local" ]]; then
  sudo mv "$NEW_DISK_MOUNT_POINT/etc/rc.local" \
    "$NEW_DISK_MOUNT_POINT/etc/moved-rc.local"
fi
sudo mv /tmp/rc.local "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chmod 0755 "$NEW_DISK_MOUNT_POINT/etc/rc.local"
sudo chown root:root "$NEW_DISK_MOUNT_POINT/etc/rc.local"
Unmount the root volume of the new disk.
sudo umount "$NEW_DISK_MOUNT_POINT" && sudo rmdir "$NEW_DISK_MOUNT_POINT"
Exit the SSH session to the rescue instance.
Detach the new disk from the rescue instance.
gcloud compute instances detach-disk rescue --disk "$NEW_DISK"
Create a new instance to serve as the replacement. When you create the replacement instance, specify the new disk as the boot disk. You can create the replacement instance using the Google Cloud console:
In the Google Cloud console, go to the VM instances page.
Click the problematic instance, then click Create similar.
Specify a name for the replacement instance. In the Boot disk section, click Change, then click Existing Disks. Select the new disk.
Click Create. The replacement instance automatically starts after it is created.
As the replacement instance starts up, the temporary rc.local script runs and installs the guest environment. To watch the progress of this script, inspect the console logs for lines emitted by the temporary rc.local script. To view logs, run the following command:
gcloud compute instances get-serial-port-output REPLACEMENT_VM_NAME
Replace REPLACEMENT_VM_NAME with the name you assigned the replacement instance.
The replacement instance also automatically reboots when the temporary rc.local script finishes. During the second reboot, you can inspect the console log to make sure the guest environment loads.
Verify that you can connect to the instance using SSH.
When you are satisfied that the replacement instance is functional, you can stop or delete the problematic instance.
Updating the guest environment
If you are getting a message that the guest environment is outdated, update the packages for your operating system.
CentOS/RHEL/Rocky
To update CentOS, RHEL, and Rocky Linux operating systems, run the following commands:
sudo yum makecache
sudo yum install google-compute-engine google-compute-engine-oslogin \
    google-guest-agent google-osconfig-agent
Debian
To update Debian operating systems, run the following commands:
sudo apt update
sudo apt install google-compute-engine google-compute-engine-oslogin \
    google-guest-agent google-osconfig-agent
Ubuntu
To update Ubuntu operating systems, run the following commands:
sudo apt update
sudo apt install google-compute-engine google-compute-engine-oslogin \
    google-guest-agent google-osconfig-agent
SLES
To update SLES operating systems, run the following commands:
sudo zypper refresh
sudo zypper install google-guest-{agent,configs,oslogin} \
    google-osconfig-agent
Windows
To update Windows operating systems, run the following command:
googet update
Validating the guest environment
The presence of a guest environment can be determined either by inspecting system logs emitted to the console while an instance starts up, or by listing the installed packages while connected to the instance.
Expected console logs for the guest environment
This table summarizes expected output for console logs emitted by instances with working guest environments as they start up.
Operating system | Service management | Expected output |
---|---|---|
CentOS/RHEL/Rocky Linux, Debian, Ubuntu, SLES, Container-Optimized OS 89 and newer | systemd | google_guest_agent: GCE Agent Started (version YYYYMMDD.NN); google_metadata_script_runner: Starting startup scripts (version YYYYMMDD.NN); OSConfigAgent Info: OSConfig Agent (version YYYYMMDD.NN) |
Container-Optimized OS 85 and older | systemd | Started Google Compute Engine Accounts Daemon; Started Google Compute Engine Network Daemon; Started Google Compute Engine Clock Skew Daemon; Started Google Compute Engine Instance Setup; Started Google Compute Engine Startup Scripts; Started Google Compute Engine Shutdown Scripts |
Windows | | GCEGuestAgent: GCE Agent Started (version YYYYMMDD.NN); GCEMetadataScripts: Starting startup scripts (version YYYYMMDD.NN); OSConfigAgent Info: OSConfig Agent (version YYYYMMDD.NN) |
To view console logs for an instance, follow these steps.
Console
In the Google Cloud console, go to the VM instances page.
gcloud
- Restart or reset the instance.
- Use the gcloud compute instances get-serial-port-output subcommand to connect using the Google Cloud CLI. For example:
gcloud compute instances get-serial-port-output VM_NAME
Replace VM_NAME with the name of the instance you need to examine.
- Search for the expected output listed in the preceding table.
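As a shortcut, you can filter the serial-port output for the expected agent messages directly (a minimal sketch; the patterns come from the preceding table):
gcloud compute instances get-serial-port-output VM_NAME \
    | grep -E 'GCE Agent Started|Starting startup scripts|OSConfig Agent'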
Loaded services for the guest environment
This table summarizes the services that should be loaded on instances with working guest environments. The command to list services must be run after connecting to the instance, so this check can only be performed if you have access to it.
CentOS/RHEL/Rocky Linux and Debian
Command to list services: sudo systemctl list-unit-files | grep google | grep enabled
Expected output: google-disk-expand.service enabled; google-guest-agent.service enabled; google-osconfig-agent.service enabled; google-shutdown-scripts.service enabled; google-startup-scripts.service enabled; google-oslogin-cache.timer enabled
Ubuntu
Command to list services: sudo systemctl list-unit-files | grep google | grep enabled
Expected output: google-guest-agent.service enabled; google-osconfig-agent.service enabled; google-shutdown-scripts.service enabled; google-startup-scripts.service enabled; google-oslogin-cache.timer enabled
Container-Optimized OS
Command to list services: sudo systemctl list-unit-files | grep google
Expected output: var-lib-google.mount disabled; google-guest-agent.service disabled; google-osconfig-agent.service disabled; google-osconfig-init.service disabled; google-oslogin-cache.service static; google-shutdown-scripts.service disabled; google-startup-scripts.service disabled; var-lib-google-remount.service static; google-oslogin-cache.timer disabled
SLES 12+
Command to list services: sudo systemctl list-unit-files | grep google | grep enabled
Expected output: google-guest-agent.service enabled; google-osconfig-agent.service enabled; google-shutdown-scripts.service enabled; google-startup-scripts.service enabled; google-oslogin-cache.timer enabled
Windows
Commands to list services: Get-Service GCEAgent and Get-ScheduledTask GCEStartup
Expected output: Running GCEAgent GCEAgent; \ GCEStartup Ready
Installed packages for the guest environment
This table summarizes the packages that should be installed on instances with working guest environments. The command to list installed packages must be run after connecting to the instance, so this check can only be performed if you have access to it.
CentOS/RHEL/Rocky Linux
Command to list packages: rpm -qa --queryformat '%{NAME}\n' | grep -iE 'google|gce'
Expected output: google-osconfig-agent; google-compute-engine-oslogin; google-guest-agent; gce-disk-expand; google-cloud-sdk; google-compute-engine
Debian
Command to list packages: apt list --installed | grep -i google
Expected output: gce-disk-expand; google-cloud-packages-archive-keyring; google-cloud-sdk; google-compute-engine-oslogin; google-compute-engine; google-guest-agent; google-osconfig-agent
Ubuntu
Command to list packages: apt list --installed | grep -i google
Expected output: google-compute-engine-oslogin; google-compute-engine; google-guest-agent; google-osconfig-agent
SUSE (SLES)
Command to list packages: rpm -qa --queryformat '%{NAME}\n' | grep -i google
Expected output: google-guest-configs; google-osconfig-agent; google-guest-oslogin; google-guest-agent
Windows
Command to list packages: googet installed
Expected output: certgen; googet; google-compute-engine-auto-updater; google-compute-engine-driver-gga; google-compute-engine-driver-netkvm; google-compute-engine-driver-pvpanic; google-compute-engine-driver-vioscsi; google-compute-engine-metadata-scripts; google-compute-engine-powershell; google-compute-engine-sysprep; google-compute-engine-vss; google-compute-engine-windows; google-osconfig-agent
What's next
- Read the troubleshooting tips.
- Learn more about applying metadata.
- Learn about SSH keys.