Compute Engine


An instance is a virtual machine hosted on Google's infrastructure.

Instances can run Linux and Windows Server images provided by Google, or any customized versions of these images. You can also build and run images of other operating systems.

Google Compute Engine also lets you specify the machine properties of your instances, such as the number of CPUs and the amount of RAM, based on the machine type you use.

Instances are a per-zone resource.


Getting Started Guide

If you are new to Google Compute Engine, follow the Getting Started Guide to learn how to create a virtual machine instance using the Developers Console.



Every Google Compute Engine instance is a virtual machine that you can customize and manage using the Google Developers Console, the gcloud compute tool, or the REST API. To perform advanced configuration, you must connect to the instance using Secure Shell (SSH) or Remote Desktop Protocol (RDP) for Windows instances. By default, Linux instances support SSH capability for the instance creator, and optionally for other users. Windows instances support RDP connections after you generate a Windows username and password.

As an instance creator, you have full root privileges on any instances you have started. An instance administrator can also add system users using standard Linux commands or Windows User Account management.

To start an instance using the gcloud compute tool, use the instances create command. This reserves the instance, starts it, and then runs any startup scripts that you specify. You can check the status of an instance by running gcloud compute instances list, looking for your instance, and checking for a status of RUNNING. Use instances create to add an instance to a project and start it with the desired hardware, operating system image, zone, and any startup scripts that you want to run. Currently, adding and removing an instance is the same as starting and stopping an instance; you cannot add an instance to a project without starting it, or remove it without stopping it.

A project holds one or more instances but an instance can be a member of one and only one project. When you start an instance, you must specify which project and zone it should belong to. When you delete an instance, it is removed from the project. You can view project information using the gcloud compute project-info describe command, but you must use the Google Developers Console to create and manage projects.

Instances can communicate with other instances in the same network and with the rest of the world through the Internet. A Network object is restricted to a single project, and cannot communicate with other Network objects. See Networks and Instances for more information about network communication to and from an instance.


Creating and starting an instance

An instance is created and started in a single step. Google Compute Engine does not currently allow you to add an instance to a project without starting it. An instance takes a few moments to start up, so you must check the instance status to learn when it is actually running. See Checking Instance Status for more information.

Each instance must have a root persistent disk that stores the instance's root filesystem. It is possible to create this root persistent disk when you create your instance, or create it separately and attach it to the instance.

Start an instance using gcloud compute

Start an instance using the gcloud compute instances create command:

 $ gcloud compute instances create example-instance \
          --zone us-central1-a \
          --machine-type n1-standard-1 \
          --image debian-7-backports

 Created [https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances/example-instance].
 example-instance  us-central1-a n1-standard-1 RUNNING

The command creates a virtual machine with the following default settings:

  • The zone that you choose. All instances must live in a zone. You can select a zone at instance creation time by using the --zone flag. If you omit the --zone flag and you have not set the compute/zone property, gcloud compute prompts you for a zone.
  • The latest Debian 7 Backports image.
  • The n1-standard-1 machine type.

Starting an instance in the API

To start an instance in the API, construct a request with a source image:

body = {
  'name': <instance-name>,
  'machineType': <fully-qualified-machine-type-url>,
  'networkInterfaces': [{
    'accessConfigs': [{
      'type': 'ONE_TO_ONE_NAT',
      'name': 'External NAT'
    }],
    'network': <fully-qualified-network-url>
  }],
  'disks': [{
     'autoDelete': 'true',
     'boot': 'true',
     'type': 'PERSISTENT',
     'initializeParams': {
        'diskName': 'my-root-disk',
        'sourceImage': <fully-qualified-image-url>,
        'diskType': 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/diskTypes/pd-standard'
     }
  }]
}

When you provide the following in your request:

'disks': [{
   'autoDelete': 'true',
   'boot': 'true',
   'type': 'PERSISTENT',
   'initializeParams': {
      'diskName': 'my-root-disk',
      'sourceImage': <fully-qualified-image-url>,
      'diskType': 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/diskTypes/pd-standard'
   }
}]

Compute Engine creates a root persistent disk using the source image you indicated. You can provide initializeParams only for a root persistent disk, and only once per instance creation request. The autoDelete flag tells Compute Engine to delete the root persistent disk automatically when the instance is deleted.

When you are using the API to specify a root persistent disk:

  • You can only specify the boot field on one disk. You may attach multiple persistent disks but only one can be the root persistent disk.
  • You must attach the root persistent disk as the first disk for that instance.
  • When the source field is specified, you cannot specify the initializeParams field, because the two settings conflict. Providing a source indicates that the root persistent disk already exists, whereas specifying initializeParams tells Compute Engine to create the root persistent disk.
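Assembled into one well-formed dictionary, the request body above might be built like this. This is only a sketch: the helper name make_instance_body is not part of the API, and the URL arguments in the usage example are illustrative placeholders you must fill in for your own project.

```python
def make_instance_body(name, machine_type_url, network_url, image_url, disk_type_url):
    """Build a Compute Engine v1 instance-creation request body.

    All *_url arguments are fully-qualified resource URLs. This sketch
    only assembles the dictionary; it does not call the API.
    """
    return {
        'name': name,
        'machineType': machine_type_url,
        'networkInterfaces': [{
            'network': network_url,
            'accessConfigs': [{
                'type': 'ONE_TO_ONE_NAT',
                'name': 'External NAT',
            }],
        }],
        # The first disk is the root persistent disk; initializeParams
        # tells Compute Engine to create it from the source image.
        'disks': [{
            'autoDelete': 'true',
            'boot': 'true',
            'type': 'PERSISTENT',
            'initializeParams': {
                'diskName': name + '-root-disk',
                'sourceImage': image_url,
                'diskType': disk_type_url,
            },
        }],
    }

# Illustrative values; substitute your own project, zone, and image URLs.
body = make_instance_body(
    'example-instance',
    'https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/machineTypes/n1-standard-1',
    'https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default',
    'https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-7-backports',
    'https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/diskTypes/pd-standard')
```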

If you're using the API client library, you can start a new instance using the instances().insert function. Here is a snippet from the Python client library:

def addInstance(auth_http, gce_service):
  # Construct the request body
  body = {
    'name': NEW_INSTANCE_NAME,
    'machineType': <fully-qualified-machine-type-url>,
    'networkInterfaces': [{
      'accessConfigs': [{
        'type': 'ONE_TO_ONE_NAT',
        'name': 'External NAT'
      }],
      'network': <fully-qualified-network-url>
    }],
    'disks': [{
       'autoDelete': 'true',
       'boot': 'true',
       'type': 'PERSISTENT',
       'initializeParams': {
          'diskName': 'my-root-disk',
          'sourceImage': <fully-qualified-image-url>,
          'diskType': 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/diskTypes/pd-standard'
       }
    }]
  }

  # Create the instance
  request = gce_service.instances().insert(
      project=PROJECT_ID, body=body, zone=DEFAULT_ZONE)
  response = request.execute(auth_http)
  response = _blocking_call(gce_service, auth_http, response)

  print response
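The snippet above relies on a _blocking_call helper that is not shown. A minimal version might look like the following sketch. Here poll_operation is an injected callable so the loop can be exercised without a live API; in real code it would wrap a zoneOperations().get(...).execute() call, and the assumption that the operation dict carries name and status fields mirrors the zone-operation resource.

```python
import time

def blocking_call(poll_operation, response, interval=2.0, sleep=time.sleep):
    """Poll a zone operation until it reaches DONE.

    poll_operation takes an operation name and returns the operation's
    current state as a dict. interval is the delay between polls.
    """
    status = response.get('status')
    while status != 'DONE':
        sleep(interval)
        response = poll_operation(response['name'])
        status = response.get('status')
    return response
```

Injecting the poll and sleep functions keeps the helper testable and makes the retry policy explicit.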

You could also make a request to the API directly by sending a POST request to the instances URI with the same request body:

def addInstance(http, listOfHeaders):
  url = 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances'

  body = {
    'name': NEW_INSTANCE_NAME,
    'machineType': <fully-qualified-machine-type-url>,
    'networkInterfaces': [{
      'accessConfigs': [{
        'type': 'ONE_TO_ONE_NAT',
        'name': 'External NAT'
      }],
      'network': <fully-qualified-network-url>
    }],
    'disks': [{
       'autoDelete': 'true',
       'boot': 'true',
       'type': 'PERSISTENT',
       'initializeParams': {
          'diskName': 'my-root-disk',
          'sourceImage': <fully-qualified-image-url>,
          'diskType': 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/diskTypes/pd-standard'
       }
    }]
  }

  resp, content = http.request(uri=url, method="POST", body=dumps(body), headers=listOfHeaders)

  print resp
  print content

Checking instance status

When you first create an instance, check the instance status to see if it is running before you expect it to respond to requests. It can take a few seconds after the initial instances create request before your instance is fully up and running. You can also check the status of an instance at any time after creation.

To check the status of an instance, call gcloud compute instances list or gcloud compute instances describe INSTANCE.

Instances are marked with the following states:

  • PROVISIONING - Resources are being reserved for the instance. The instance isn't running yet.
  • STAGING - Resources have been acquired and the instance is being prepared for launch.
  • RUNNING - The instance is booting up or running. You should be able to ssh into the instance soon, though not immediately, after it enters this state.
  • STOPPING - The instance is being stopped either due to a failure, or the instance being shut down. This is a temporary status and the instance will move to either PROVISIONING or TERMINATED.
  • TERMINATED - The instance either failed for some reason or was shut down, either through the API or from inside the guest. You can choose to restart the instance or delete it.
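A status check can be turned into a simple wait loop. The sketch below takes an injected get_status callable so the loop can be exercised without a live API; in real code it would read the status field from gcloud compute instances describe or an instances.get API call.

```python
def wait_for_running(get_status, retries=30, sleep=lambda: None):
    """Poll until the instance reaches RUNNING.

    get_status() returns one of the documented instance states.
    Returns True once the instance is RUNNING, False if it reaches
    TERMINATED or the retry budget is exhausted.
    """
    for _ in range(retries):
        status = get_status()
        if status == 'RUNNING':
            return True
        if status == 'TERMINATED':
            return False  # the instance failed or was shut down
        sleep()  # still PROVISIONING, STAGING, or STOPPING
    return False
```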

The following diagram describes the progression of these statuses. Note that a terminated instance can be restarted, indicated by the arrow from the terminated state to the provisioning state.

[Diagram: the progression of instance states]

Windows instances experience a longer startup time because of the sysprep process, so you must run an additional check to verify that the Windows instance has started.

Connecting to an instance using ssh

By default, you can always connect to a Linux instance using ssh. For Windows instances, you must connect using RDP. The ssh connection allows you to manage and configure your instances beyond the basic configuration enabled by gcloud compute or the REST API. The easiest way to ssh into an instance is to use the gcloud compute ssh command from your local computer. For example, the following command connects to an instance named example-instance:

$ gcloud compute ssh example-instance

You can use ssh directly without using the gcloud compute wrapper, as described in Using standard ssh.

Setting up your ssh keys

Every time you run gcloud compute ssh, gcloud compute checks for ssh key files in the locations listed below. If there are no existing key files, gcloud compute will generate a new ssh key:

$ gcloud compute ssh example-instance
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):

You are not required to provide a passphrase but if you don't, the keys on your local machine will be unencrypted. We recommend providing a passphrase to encrypt your keys.

The newly-generated ssh keys are stored in the following locations:

  • $HOME/.ssh/google_compute_engine - Your private key
  • $HOME/.ssh/google_compute_engine.pub - Your public key

gcloud compute copies your public key to the project metadata which allows the virtual machine to see the new key. If you want to use existing keys that are stored in a different location, specify the file using the --ssh-key-file flag.

If you want, you can specify multiple keys for a virtual machine instance using the gcloud compute project-info add-metadata command with the sshKeys metadata key:

$ echo user1:$(cat ~/.ssh/key1.pub) > /tmp/a
$ echo user2:$(cat ~/.ssh/key2.pub) >> /tmp/a
$ gcloud compute project-info add-metadata --metadata-from-file sshKeys=/tmp/a

You can also add ssh keys through the Google Developers Console. See Setting up ssh keys for more information.

Being able to add multiple keys is useful for adding multiple users to an instance at startup time, but it also limits the set of ssh keys to be exactly those you specified.
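The two echo commands above just build a newline-separated list of username:public-key entries. The same value can be assembled programmatically, as in this sketch; the key material shown is a placeholder, not a real public key.

```python
def build_ssh_keys_metadata(entries):
    """Build the value for the sshKeys metadata key.

    entries is a list of (username, public_key) tuples; the result is
    one 'username:public-key' line per entry, newline-separated,
    matching what the echo commands above write to /tmp/a.
    """
    return '\n'.join('%s:%s' % (user, key) for user, key in entries)

value = build_ssh_keys_metadata([
    ('user1', 'ssh-rsa AAAA...key1'),  # placeholder key material
    ('user2', 'ssh-rsa AAAA...key2'),
])
```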

That's it! You have set up ssh access to your instances. If you install and run gcloud compute ssh from multiple computers, gcloud will generate a new public/private key pair for each computer. If you want to ssh from different ssh clients, see Connecting using standard ssh.

Granting access to project readers

All project editors and owners can ssh into your virtual machine instances. If you want to grant ssh access to project viewers, who have read-only access to Compute Engine, you can create user accounts. User accounts are created by owners or editors and managed by the respective project viewer. User accounts give ssh access to virtual machine instances, but restrict access to other Compute Engine features. Currently, user accounts are in Alpha.

For more information, see Adding User Accounts.

Connecting using standard ssh

After setting up your ssh keys and adding them to the project, you can use standard ssh rather than gcloud compute ssh with the following syntax:

ssh -i KEY_FILE -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no USER@IP_ADDRESS

Important Flags and Parameters:

  • KEY_FILE - [Required] The file where the keys are stored on the computer, e.g. ~/.ssh/google_compute_engine.
  • USER - [Required] The username to log in as on that instance. Typically, this is the username of the local user running gcloud compute.
  • IP_ADDRESS - [Required] The external IP address of the instance.
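The flags above can be assembled into an argument vector like this sketch, which mirrors the syntax shown; nothing here executes ssh, and the user and IP address in the usage example are illustrative.

```python
def standard_ssh_args(key_file, user, ip_address):
    """Build the argument list for the standard ssh invocation above."""
    return ['ssh',
            '-i', key_file,
            '-o', 'UserKnownHostsFile=/dev/null',
            '-o', 'CheckHostIP=no',
            '-o', 'StrictHostKeyChecking=no',
            '%s@%s' % (user, ip_address)]

# 203.0.113.10 is a documentation address, used here as a placeholder.
args = standard_ssh_args('~/.ssh/google_compute_engine', 'me', '203.0.113.10')
```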

Connecting from the browser (Beta)

You can also ssh into an instance directly from your web browser in the Google Developers Console. This feature is in Beta, which means that it is subject to change and is not covered by any SLA or deprecation policy.

Connecting from one instance to another

If your instance doesn't have an externally-visible IP address, you can still ssh into it by connecting to an instance on the network with an external address and then connecting to the internal-only instance from your externally-visible instance. You might need to do this if you start an instance without an external IP address.

To ssh from one instance to another:

  1. From your local machine, start ssh-agent using the following command to manage your keys for you:

    me@local:~$ eval `ssh-agent`
  2. Call ssh-add to load the gcloud compute keys from your local computer into the agent, and use them for all ssh commands for authentication:

    me@local:~$ ssh-add ~/.ssh/google_compute_engine
  3. Log into an instance with an external IP address while supplying the -A argument to enable authentication agent forwarding.

    me@local:~$ gcloud compute ssh --ssh-flag="-A" INSTANCE
  4. From this externally-addressable instance, you can now log into any other instance on the same network by calling ssh INSTANCE.

  5. When you are done, call exit repeatedly to log out of each instance in turn.
  6. You can continue to simply ssh into your internal instances through your external instance until you close your command window, which will close the ssh-agent context.


me@local:~$ eval `ssh-agent`
Agent pid 17666

me@local:~$ ssh-add ~/.ssh/google_compute_engine
Identity added: /home/user/.ssh/google_compute_engine (/home/user/.ssh/google_compute_engine)

me@local:~$ gcloud compute ssh --ssh-flag="-A" example-instance
INFO: Running command line: ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/user/.ssh/google_compute_engine -o LogLevel=QUIET -A -p 22 user@ --
Linux myinst 2.6.39-gcg-yyyymmdd1612 #18 SMP .... x86_64 GNU/Linux
me@example-instance:~$ ssh example-instance-2
1 package can be updated.
0 updates are security updates.
me@example-instance-2:~$ exit
Connection to example-instance-2 closed.

Root access and instance administrators

For security reasons, the standard Google images do not allow you to ssh in directly as root. The instance creator and any users added through the sshKeys metadata value are automatically instance administrators, with the ability to run sudo without a password.

Although it is not recommended, advanced users can modify /etc/ssh/sshd_config and restart sshd to change this policy.

Setting instance scheduling options

By default, Google Compute Engine automatically manages the scheduling decisions for your instances. For example, if your instance is terminated due to a system or hardware failure, Compute Engine automatically restarts that instance. You can modify this automatic behavior by changing the scheduling options for this instance.

Maintenance behavior

  • Live migrate

    By default, standard instances are set to live migrate, where Google Compute Engine will automatically migrate your instance away from an infrastructure maintenance event, and your instance remains running during the migration. Your instance may experience a short period of decreased performance, although generally most instances should not notice any difference. This is ideal for instances that require constant uptime, and can tolerate a short period of decreased performance.

    When Google Compute Engine migrates your instance, it reports a system event that is published to your list of zone operations. You can review this event by performing a gcloud compute operations list --zones ZONE request, by viewing the list of operations in the Google Developers Console, or through an API request. The event will appear with the following text:

  • Terminate and (Optionally) Restart

    If you would not like your instance to live migrate, you can choose to terminate and optionally restart your instance using the automatic restart setting. With this option, Google Compute Engine will signal your instance to shut down, wait for a short period of time for your instance to shut down cleanly, terminate the instance, and restart it away from the maintenance event. This option is ideal for instances that demand constant, maximum performance, and your overall application is built to handle instance failures or reboots.

    When Google Compute Engine terminates and reboots your instances, it reports a system event that is published to the list of zone operations. You can review this event by performing a gcloud compute operations list --zones ZONE request or by viewing the list of operations in the Google Developers Console, or through an API request. The event will appear with the following text:


    When your instance reboots, it will use the same persistent boot disk as before.

Persistent disks are preserved in both the migrate and the terminate cases. In the terminate-and-reboot case, your persistent disk is briefly detached from the instance while it is being rebooted, and then reattached once the instance restarts.

See How to Set Scheduling Options below for the default maintenance behavior values and also how to change this setting on existing instances.

Maintenance behavior for preemptible instances

Preemptible instances are instances that run at a much lower price than standard instances, but are subject to termination by Compute Engine at any time and are always terminated after the instance runs for 24 hours.

Because preemptible instances are a finite resource, you cannot live migrate these instances. The maintenance behavior for preemptible instances is always set to TERMINATE by default, and you cannot change this option. It is also not possible to set the automatic restart option for preemptible instances.

Automatic restart

If your instance is set to terminate when there is a maintenance event, or if your instance is terminated because of hardware or software failures, you can set up Google Compute Engine to automatically restart the instance by setting the automaticRestart field to true. This setting does not apply if the instance is taken offline through a user action, such as calling sudo shutdown.

When Google Compute Engine automatically restarts your instance, it reports a system event that is published to the list of zone operations. You can review this event by performing a gcloud compute operations list --zones ZONE request or by viewing the list of operations in the Google Developers Console, or through an API request. The event will appear with the following text:


How to set scheduling options

All instances are configured with default values for onHostMaintenance and automaticRestart settings. The default setting for instances is to set the onHostMaintenance flag to migrate, in which case Google Compute Engine will migrate the instance around infrastructure maintenance events. The default setting for automaticRestart is true so that instances that are terminated because of a system event are automatically restarted.

If you want to manually set scheduling options of an instance, you can do so when first creating the instance or after the instance is created, using the setScheduling method.

Specifying scheduling options during instance creation

To specify the maintenance behavior and automatic restart settings of a new instance in gcloud compute, use the --maintenance-policy flag. By default, instances are set to restart automatically on failure unless you specify the --no-restart-on-failure flag.

 $ gcloud compute instances create INSTANCE ... [--maintenance-policy MAINTENANCE_POLICY] [--no-restart-on-failure]

In the API, make a POST request to:


with the onHostMaintenance and automaticRestart parameters as part of the request body:

{
  "kind": "compute#instance",
  "name": "example-instance",
  "description": "Front-end for real-time ingest; don't migrate.",
  // User options for influencing this Instance's life cycle.
  "scheduling": {
    "onHostMaintenance": "migrate",
    // Specifies that Compute Engine should automatically restart the instance.
    "automaticRestart": "true"
  }
}

For more information, see the Instances reference documentation.

Updating scheduling options for an existing instance

To update the scheduling options of an instance, use the instances set-scheduling subcommand with the same parameters and flags used in the instance creation command above:

$ gcloud compute instances set-scheduling INSTANCE [--maintenance-policy BEHAVIOR] [--no-restart-on-failure | --restart-on-failure]

In the API, you can make a request to the following URL:


The body of your request must contain the new value for the scheduling options:

{
  "onHostMaintenance": "migrate",
  // Specifies that Compute Engine should automatically restart the instance.
  "automaticRestart": "true"
}

For more information, see the instances : setScheduling reference documentation.
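The setScheduling request body can be assembled like this sketch. Note one assumption: the fragments above quote "automaticRestart": "true" as a string, while this sketch uses a Python boolean, which serializes to JSON true; check the API reference for the exact type your client expects.

```python
def scheduling_body(on_host_maintenance='migrate', automatic_restart=True):
    """Build the request body for instances.setScheduling.

    Defaults mirror the documented defaults: migrate on a maintenance
    event, and restart automatically after a system-initiated
    termination.
    """
    return {
        'onHostMaintenance': on_host_maintenance,
        'automaticRestart': automatic_restart,
    }

# Opt an instance out of live migration and automatic restart.
body = scheduling_body(on_host_maintenance='terminate', automatic_restart=False)
```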

Installing packages and configuring an instance

The instance creator has administrator privileges on any instance that they add to a project, and is automatically added to the sudoers list.

When you are logged into an instance as the administrator, you can install packages and configure the instance the same way you would a normal Linux box. For example, you can install Apache, as shown here:

user@myinst:~$ sudo apt-get update
user@myinst:~$ sudo apt-get install apache2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:

You can move files between your local computer and instance using gcloud compute copy-files as described in Copying files between an instance and local computer.

Note that your machine needs access to the Internet to be able to run apt-get. This means that it needs either an external IP address, or access to an Internet proxy.

Compute Engine changes a special attribute in a virtual machine's metadata server shortly before any attempts to live migrate or terminate and restart the virtual machine as part of a pending infrastructure maintenance event. The maintenance-event attribute will be updated before and after an event, allowing you to detect when these events are imminent. You can use this information to help automate any scripts or commands you want to run before and/or after a maintenance event.

For more information, see the Transparent maintenance notice section in the Metadata server documentation page.
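A watcher for the maintenance-event attribute might look like the following sketch. The fetch callable is injected so the loop can be exercised without a live metadata server; in real code it would perform an HTTP GET of the instance's maintenance-event metadata path with the Metadata-Flavor: Google header. The value names used in the test are illustrative; see the Metadata server documentation for the actual values.

```python
def watch_maintenance_event(fetch, on_change, polls=10, sleep=lambda: None):
    """Call on_change(value) whenever the maintenance-event value changes.

    fetch() returns the current attribute value as a string. The loop
    runs a fixed number of polls so it is easy to test; a production
    watcher would loop indefinitely.
    """
    last = fetch()
    for _ in range(polls):
        current = fetch()
        if current != last:
            on_change(current)  # e.g. checkpoint state, drain traffic
            last = current
        sleep()
    return last
```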

Copying files between an instance and local computer

Use gcloud compute copy-files to transfer files between a Linux instance and your local computer. To copy files from your instance to your local machine:

$ gcloud compute copy-files my-instance:~/file-1 \
         my-instance:~/file-2 \
         ~/local-dir \
         --zone ZONE
To copy files from your local machine to your instance:

$ gcloud compute copy-files ~/local-dir/file-1 \
         my-instance:~/remote-destination \
         --zone ZONE

Detecting if you are running in Google Compute Engine

It is common for systems to want to detect if they are running within a specific cloud environment. Use the following instructions to help you detect if you are running in Compute Engine.


For Linux virtual machines, you can query the metadata server for a specific header that indicates you are running within Compute Engine. For more information, see Detecting if you are running in Compute Engine.

For Linux images v20131120 and newer, you can request more explicit confirmation using the dmidecode tool, which reads the DMI/SMBIOS information directly from memory. Run the following command and dmidecode should print "Google" to indicate that you are running in Google Compute Engine:

my@myinst:~$ sudo dmidecode -s bios-vendor | grep Google
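On Linux kernels that expose DMI data under sysfs, the same check can be made without root by reading /sys/class/dmi/id/bios_vendor. This is a sketch; the path parameter exists so the function can be tested against an arbitrary file, and the sysfs location is an assumption about your kernel's configuration.

```python
def running_on_gce(dmi_path='/sys/class/dmi/id/bios_vendor'):
    """Return True if the DMI BIOS vendor identifies Google."""
    try:
        with open(dmi_path) as f:
            return 'Google' in f.read()
    except IOError:
        # Path missing or unreadable: assume we are not on Compute Engine.
        return False
```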


On Windows instances, "Google" is listed as the system manufacturer and model. You can use utilities like msinfo32.exe to look up this information. For example, msinfo32 displays the following information:

[Screenshot: msinfo32 displaying Google as the system manufacturer and model]

If you need to programmatically determine this information on a Windows instance, you can create a Windows Management Instrumentation (WMI) application with some modifications.

Identifying an instance through the UUID

Each virtual machine has a universally unique identifier (UUID) that can be accessed through the dmidecode tool on Linux images. A UUID is calculated from the project ID, zone, and instance name of the virtual machine. An instance's UUID is unique among Compute Engine virtual machines and stable for the lifetime of the instance. UUIDs persist through virtual machine restarts, and if the virtual machine is deleted and recreated in the same project and zone with the same instance name, the UUID is also the same.

To find an instance's UUID on a Linux instance, run the following command from your virtual machine:

me@myinst:~$ sudo dmidecode -t system

The tool should print a response similar to the following:

# dmidecode 2.11
SMBIOS 2.4 present.

Handle 0x0100, DMI type 1, 27 bytes
System Information
  Manufacturer: Google
  Product Name: Google
  Version: Not Specified
  Serial Number: GoogleCloud-FE0C672D324F25F1052C6C50FA8B7397
  UUID: FE0C672D-324F-25F1-052C-6C50FA8B7397
  Wake-up Type: Power Switch
  SKU Number: Not Specified
  Family: Not Specified

Handle 0x2000, DMI type 32, 11 bytes
System Boot Information
  Status: No errors detected
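If you need the UUID programmatically, one approach is to parse it out of the dmidecode output, as in this sketch; the helper name is illustrative.

```python
import re

def parse_dmi_uuid(dmidecode_output):
    """Extract the UUID line from `dmidecode -t system` output."""
    match = re.search(r'^\s*UUID:\s*(\S+)', dmidecode_output, re.MULTILINE)
    return match.group(1) if match else None
```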

Setting network time protocol (NTP) for instances

Many software systems that depend on careful sequencing of events rely on a stable, consistent system clock. System logs written by most services include a timestamp, which helps debug issues that occur between various components of your system. To help keep system clocks in sync, Compute Engine instances are preconfigured to use network time protocol (NTP).

In addition to keeping server time in sync, NTP is helpful in the rare case of a leap second. A leap second is a one-second adjustment made to UTC time to account for changes in the Earth's rotation. Leap seconds don’t happen at routine intervals, because the Earth's rotation speed varies irregularly in response to climatic and geological events. Previous leap seconds have wreaked havoc on a variety of services and applications on the web. NTP servers help ensure that all servers report the same time during the event of a leap second.

This section describes how to configure NTP servers on your virtual machines to behave properly in the case of a leap second.

Google NTP servers and leap smearing

Leap seconds in Unix time are commonly implemented by repeating the last second of the day. This can cause problems for software that expects timestamps to only ever increase. To get around this problem, the time servers at Google “smear” the extra second over twenty hours (ten before and ten after the leap second event) so that computers do not see the extra second all at once as a repeated timestamp. This reduces risk in systems that depend on a consistent timestamp. We recommend configuring all Compute Engine virtual machine instances to use the internal Google NTP service.
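One simple way to picture the smear: spread the extra second linearly across the twenty-hour window, so the smeared clock absorbs it gradually instead of repeating a timestamp. This is illustrative only; the exact smear function Google's servers apply is not specified here, so the linear ramp below is an assumption.

```python
def linear_smear_offset(seconds_from_leap):
    """Fraction of the extra second absorbed by a smeared clock.

    Assumes the second is spread linearly over a 20-hour window
    centered on the leap (10 hours = 36000 s on each side).
    Returns 0.0 before the window and 1.0 after it.
    """
    window = 20 * 3600.0          # 72000 seconds
    start = -window / 2           # smear begins 10 hours before the leap
    if seconds_from_leap <= start:
        return 0.0
    if seconds_from_leap >= window / 2:
        return 1.0
    return (seconds_from_leap - start) / window
```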

Configure NTP for your instances

Google can’t predict how external NTP services, such as pool.ntp.org, will handle the leap second. If at all possible, it is recommended that you do not use external NTP sources with Compute Engine virtual machines. Even worse, using both Google's NTP service and an external service can result in unpredictable changes in the system time. Using only a single external NTP source is preferable to using a mix, but external NTP services, such as pool.ntp.org, will likely use stepping to handle the leap second. As a result, your virtual machines may see a repeated timestamp.

The safest approach is to configure your Compute Engine virtual machines to use a single NTP server—the internal NTP server provided by Google. Do not mix external NTP servers and Google NTP servers, as this could result in unexpected behavior.

To ensure your virtual machines are correctly configured, follow these instructions.


  1. ssh into your virtual machine.

    Developers Console

    1. Go to the VM instances page in the Developers Console.
    2. Click the ssh button next to the instance you want to configure.

      Screenshot of SSH button


    Alternatively, on the command line, run:

    $ gcloud compute ssh INSTANCE

  2. On your instance, run ntpq -p to check the current state of your NTP configuration:

    $ ntpq -p
          remote           refid      st t when poll reach   delay   offset  jitter
    *metadata.google     2 u   27   64    1    0.634   -2.537   2.285
    *     2 u  191 1024  176   79.245    3.589  27.454

    If you see a single record pointing at metadata.google or metadata.google.internal, you do not need to make any changes. If you see multiple sources, mixed between metadata.google and a public source such as pool.ntp.org, you need to update your sources to remove any external NTP servers.

    For this example, there are two records: one pointing at metadata.google and another pointing to an external address. Because there are multiple sources, you would need to update your NTP configuration to remove the external address.

  3. Configure your NTP servers to remove external sources.

    To configure your NTP servers, edit the /etc/ntp.conf file in your favorite text editor. Find the servers section of the configuration, and remove all non-Google NTP sources:

    $ vim /etc/ntp.conf
    # You do need to talk to an NTP server or two (or three).
    #server ntp.your-provider.example
    server metadata.google.internal iburst

    After editing your /etc/ntp.conf file, restart the NTP service. The command to restart may vary based on the Linux distribution:

    $ sudo service ntp reload
  4. Verify your configuration.

    Run the ntpq -p command again to verify that your changes took effect:

    $ ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    *metadata.google     2 u   27   64    1    0.634   -2.537   2.285


To check the NTP configuration on a Windows instance:

  1. Go to the VM instances page in the Developers Console.
  2. Click the RDP button next to the Windows instance you want to log into.

    Screenshot of RDP button

  3. Once logged in, right-click the PowerShell icon and select Run as administrator.

    Screenshot of PowerShell icon

  4. When the command-prompt loads, run the following command to see the current NTP configuration:

    PS C:\Windows\system32> w32tm /query /configuration
    Type: NTP (Local)
    NtpServer: metadata.google.internal,

    If you see a single record pointing at metadata.google or metadata.google.internal, you do not need to make any changes. If you see multiple sources, mixed between metadata.google and a public source, you need to remove the external server. Follow the Windows guide for configuring your NTP server.

The leap smearing feature of Google's NTP server is a convenient way to manage the risk involved with repeating a second (a leap second) on time-sensitive systems. Other NTP services may provide an acceptable workaround for most software systems. It is important, however, not to mix Google's leap-smearing NTP service with public NTP services that step the clock.

Enabling network traffic

By default, all new instances have the following connections enabled:

  • Traffic between instances in the same network, over any port and any protocol.
  • Incoming ssh connections (port 22) from anywhere.
  • Incoming connections to port 3389 for Remote Desktop Protocol (RDP).

Any other incoming traffic to an instance is blocked. To enable other connections, you must explicitly add firewall rules to the network. See Connecting to an instance using ssh to learn how to ssh into your instance, or Networks and Firewalls to learn how instances communicate with each other over IP and how to set up an externally accessible HTTP connection to an instance.

Using instance tags

You can assign tags to your instances to help coordinate or group instances that may share common traits or attributes. For example, if there are several instances that perform the same task, such as serving a large website, you may consider tagging these instances with a shared word or term. Instance tags are also used by networks and firewalls to identify which instances firewall rules may apply to. Tags are also reflected in the metadata server, so you can use them for applications running on your instances.

To assign tags to a running instance using gcloud compute, use the instances add-tags command:

$ gcloud compute instances add-tags INSTANCE --tags tag-1,tag-2
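Because tags are reflected in the metadata server, applications running on the instance can read them over HTTP. The following sketch only assembles the request (the metadata server is reachable only from inside an instance, so nothing is fetched here; the helper name is illustrative):

```python
METADATA_ROOT = 'http://metadata.google.internal/computeMetadata/v1'

def tags_request():
    # URL and required header for reading this instance's tags from the
    # metadata server; usable with any HTTP client from inside the VM.
    url = METADATA_ROOT + '/instance/tags'
    headers = {'Metadata-Flavor': 'Google'}
    return url, headers
```

From inside the instance, issuing a GET to this URL with the Metadata-Flavor header returns the instance's tags.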

Moving an instance between zones

Compute Engine can move your instances between zones by taking a snapshot of your persistent disks, using the snapshots to launch new instances in the desired zone, and deleting your original resources. If a zone is deprecated, you can use this method to move your instances from the deprecated zone.

In detail, Compute Engine will:

  • Take snapshots of persistent disks attached to the source instance.
  • Create copies of the persistent disks in the destination zone.
  • For instances moving within the same region, temporarily promote any ephemeral external IP addresses assigned to the instance to a static external IP address.
  • Create a new instance in the destination zone.
  • Attach the newly created persistent disks to your new instance.
  • Assign an external IP address to the new instance. If necessary, demote the address back to an ephemeral external IP address.
  • Delete the snapshots, original disks, and original instance.

If you want to manually move your instance, you can also perform these steps by hand.


Before you can use this feature to move your instance, make sure that:

  • There is enough quota in your project for Compute Engine to create new snapshots and promote any ephemeral external IP addresses, and enough quota for the new instance and disks in the destination zone. For example, if you have three disks attached to the instance you want to move, Compute Engine will need enough quota to create three temporary persistent disk snapshots, which will be removed after the new disks have been successfully created.
  • The persistent disks that are attached to the instance you want to move are not attached to more than one instance.
  • The source instance does not contain a local SSD. Currently, it is not possible to migrate an instance that has a local SSD; the contents of a local SSD are discarded when the instance is moved.

Once your instance and disks have been moved, you will still need to update any existing references you have to the original resource, such as any target instances or target pools that are pointing to the old instance. This will not be done for you automatically.

Caution: During the move, do not modify any of the resources being moved or modify any of the temporary resources that are created for the purposes of the move, as this could cause the move to fail. Any temporary resources created for the move will have unique names that are automatically generated from the name of the source persistent disk or the source ephemeral external IP address.

If the move is interrupted, Compute Engine will stop the move and no further action will be taken. At any point during this process, your data is preserved either in snapshots or on the persistent disk itself. However, once a move has begun, Compute Engine cannot roll back any changes already made.

Resource properties

During the move, some server-generated properties of your instance and disks will change.

Properties that change for instances

Property name Changes
Network IP address A new network IP address will be assigned.
External IP address
  • If the instance is moving between zones in the same region, the external IP address remains the same.
  • If the instance is moving between zones in different regions, the new instance will be assigned an ephemeral external IP address, regardless of whether the original instance had a static or ephemeral external IP address.
CPU platform Depending on the available CPU platform in your destination zone, your instance might have a different CPU platform after it has been moved. For example, an instance in us-central1-a that is being moved to us-central1-f will see its CPU platform change from a Sandy Bridge processor to an Ivy Bridge processor.

For a full list of CPU platforms, see Zones.

Properties that change for disks

Property name Changes
Source snapshot The source snapshot of the new disk will be set to the temporary snapshot that is created during the move.
Source snapshot ID The source snapshot ID is set to the temporary snapshot's ID.
Source image The source image field will be empty.
Image ID The image ID will be empty.
Last detached timestamp The last detached timestamp will be empty.
Last attached timestamp The last attached timestamp will update to the timestamp when the newly-moved disk was attached to the newly-moved instance.

Properties that change for both instances and disks

Property name Changes
ID A new resource ID will be generated.
Creation timestamp A new creation timestamp will be generated.
Zone resource URLs All zone resource URLs will change to reflect the destination zone. The following is a list of resource URLs that will change when moving instances:
  • An instance's source disk URL
  • An instance's machine type URL
  • Self-link URLs
  • Zone URLs
  • Disk type URLs
  • Any URLs of instances listed in a disk's users[] list

Move your instance

You can move your instance using the instructions below. You can move only one instance per request, but you can make multiple requests in parallel to move several instances at once. Keep in mind that you need enough quota in your destination zones to move all the instances you want.

Before you move your instance, it is important that you understand the Requirements for using this feature.

gcloud compute

In gcloud compute, use the instances move subcommand to move your instance:

$ gcloud compute instances move INSTANCE_NAME \
    --zone ZONE --destination-zone DESTINATION_ZONE


In the API, make a POST request to the moveInstance method with a request body that contains the targetInstance and the destinationZone. For example:

   {
     "targetInstance": "zones/us-central1-a/instances/source-instance",
     "destinationZone": "zones/europe-west1-d"
   }
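If you are constructing the request programmatically, the body can be assembled with a small helper. This is a sketch; the helper name is illustrative:

```python
def build_move_request(source_zone, instance, destination_zone):
    # Builds the JSON body for the moveInstance call shown above.
    return {
        'targetInstance': 'zones/%s/instances/%s' % (source_zone, instance),
        'destinationZone': 'zones/%s' % destination_zone,
    }
```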


You can also choose to manually move your instance between zones.

The following example describes how to move an instance with two persistent disks, myrootdisk and mydatadisk, from the europe-west1-c zone to europe-west1-b. The example instance looks like the following:

$ gcloud compute instances list
myinstance europe-west1-c n1-standard-4 RUNNING

To move the instance to another zone:

  1. First, set the auto-delete state of myrootdisk and mydatadisk to ensure that they are not automatically deleted when the instance is deleted.

    $ gcloud compute instances set-disk-auto-delete myinstance --zone europe-west1-c \
             --disk myrootdisk --no-auto-delete
    Updated [...].
    $ gcloud compute instances set-disk-auto-delete myinstance --zone europe-west1-c \
             --disk mydatadisk --no-auto-delete
    Updated [...].

    If the state was updated, gcloud compute returns the response Updated [...]. If the auto-delete state was already set to false, then gcloud compute returns an empty response.

  2. (Optional) Save your instance metadata.

    When you delete your instance, the instance metadata will also be removed. You can save that information in a separate file so that you can reapply the instance metadata to the new instance.

    $ gcloud compute instances describe myinstance --zone europe-west1-c | \
             tee myinstance.describe
  3. Create backups of your data.

    As a precaution, create backups of your data using persistent disk snapshots. Since the persistent disks are still attached to the instance, clear your disk buffers (for example, by running sudo sync on Linux) before you take a snapshot to ensure that the snapshot is consistent with the state of your persistent disk.

    Once you have cleared your disk buffers, create the snapshots as follows:

    $ gcloud compute disks snapshot myrootdisk mydatadisk \
             --snapshot-names backup-myrootsnapshot,backup-mydatasnapshot \
             --zone europe-west1-c
  4. Delete your instance.

    Deleting your instance will shut it down cleanly and detach any persistent disks.

    $ gcloud compute instances delete myinstance --zone europe-west1-c
    The following instances will be deleted. Attached disks configured to
    be auto-deleted will be deleted unless they are attached to any other
    instances. Deleting a disk is irreversible and any data on the disk
    will be lost.
     - [myinstance] in [europe-west1-c]
    Do you want to continue (Y/n)?

    The warning does not apply in this case because we turned off the auto-delete state for the disks attached to myinstance in step 1. Enter Y to continue, which returns the response:

    Deleted [...].
  5. Next, create another snapshot of both the root disk and the data disk.

    $ gcloud compute disks snapshot myrootdisk mydatadisk \
             --snapshot-names myrootsnapshot,mydatasnapshot \
             --zone europe-west1-c
    Created [.../mydatasnapshot].
    Created [.../myrootsnapshot].
  6. (Optional) Delete your persistent disks.

    If you plan to reuse the same persistent disk names for the new disks, you will need to delete the existing disks to release the names, because Compute Engine requires unique resource names within a project. Deleting your disks also saves on persistent disk storage costs.

    You can choose to delete your persistent disks later, after you have successfully moved your instance to the new zone, if you do not plan to reuse the same disk names. This gives you the added benefit of keeping your data around to make sure it is successfully migrated.

    $ gcloud compute disks delete myrootdisk mydatadisk --zone europe-west1-c
  7. Create new persistent disks in europe-west1-b from the snapshots you just created.

    $ gcloud compute disks create myrootdiskb --source-snapshot myrootsnapshot \
             --zone europe-west1-b
    Created [.../myrootdiskb].
    NAME        ZONE           SIZE_GB TYPE        STATUS
    myrootdiskb europe-west1-b 100     pd-standard READY
    $ gcloud compute disks create mydatadiskb --source-snapshot mydatasnapshot \
             --zone europe-west1-b
    Created [.../mydatadiskb].
    NAME        ZONE           SIZE_GB TYPE        STATUS
    mydatadiskb europe-west1-b 4000    pd-standard READY
  8. Recreate your instance in europe-west1-b.

    If you saved your instance metadata in a myinstance.describe file, you can use it to set the same metadata on your instance.

    If your instance had a reserved IP address, you can also reassign that address to your new instance by specifying the --address ADDRESS option.

    $ gcloud compute instances create myinstanceb --machine-type n1-standard-4 \
             --zone europe-west1-b \
             --disk name=myrootdiskb,boot=yes,mode=rw \
             --disk name=mydatadiskb,mode=rw
    Created [.../myinstanceb].
    myinstanceb europe-west1-b n1-standard-4 RUNNING
  9. (Optional) Delete your persistent disk snapshots.

    Once you have confirmed that your virtual machines were moved over successfully and you no longer need the snapshots for other purposes, you can choose to delete the snapshots you created to save on storage costs.

    $ gcloud compute snapshots delete myrootsnapshot mydatasnapshot

    You can also delete your backup snapshots, but make sure you no longer need these snapshots, since there is no way to retrieve snapshots after they have been deleted:

    $ gcloud compute snapshots delete backup-myrootsnapshot backup-mydatasnapshot
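The core of the sequence above (steps 1, 4, 5, and 7; the metadata, backup, and cleanup steps are omitted) can be condensed into a helper that assembles the gcloud invocations without executing anything. A sketch using the example's naming convention:

```python
def move_commands(instance, disks, src_zone, dest_zone):
    cmds = []
    for disk in disks:
        # Step 1: make sure the disks survive instance deletion.
        cmds.append(['gcloud', 'compute', 'instances', 'set-disk-auto-delete',
                     instance, '--zone', src_zone, '--disk', disk,
                     '--no-auto-delete'])
    # Step 4: delete the instance (the disks are kept because of step 1).
    cmds.append(['gcloud', 'compute', 'instances', 'delete', instance,
                 '--zone', src_zone])
    # Step 5: snapshot the now-detached disks.
    snapshots = [d.replace('disk', 'snapshot') for d in disks]
    cmds.append(['gcloud', 'compute', 'disks', 'snapshot'] + list(disks) +
                ['--snapshot-names', ','.join(snapshots), '--zone', src_zone])
    # Step 7: recreate each disk in the destination zone from its snapshot.
    for disk, snap in zip(disks, snapshots):
        cmds.append(['gcloud', 'compute', 'disks', 'create', disk + 'b',
                     '--source-snapshot', snap, '--zone', dest_zone])
    return cmds
```

Nothing is executed here; the helper only makes the order of operations explicit.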

Stopping or deleting an instance

If you are no longer using an instance and do not want to continue to pay for it, you can either stop the instance, keeping it around in case you want to start it again later, or delete the instance entirely.

Stopping an instance retains all resources attached to that instance and you will continue to be charged for these resources, such as persistent disks.

Deleting an instance deletes the associated resources, unless you explicitly define otherwise.

Stopping an instance

To stop an instance, you can use the Stop button in the Google Developers Console, use the gcloud compute instances stop command, or call the instances.stop() API directly. You also have the option of bypassing the API and shutting down the instance from inside the guest, such as running sudo shutdown on Linux.

Stopping an instance from the Developers Console, gcloud compute, or the API causes Compute Engine to send the ACPI Power Off signal to the instance. Modern guest operating systems are configured to perform a clean shutdown before powering off in response to the power off signal. Compute Engine waits a short time for the guest to finish shutting down and then transitions the instance to the TERMINATED state.

A terminated instance still exists but is not accessible unless it is restarted. Any resources still attached to the terminated instance remain attached until they are manually released or the instance is deleted. Instance configuration settings, such as instance metadata, are maintained.

Instances that are in a TERMINATED state are not charged for per-minute running virtual machine usage and do not count towards your regional CPU quota, so you can choose to stop instances that you are not using, saving you from being charged for instances that aren't active. Once you are ready, you can come back and start the same instances again, with the same instance properties, metadata, and resources.

While instances in the TERMINATED state do not accrue per-minute usage charges, any resources attached to the virtual machine, such as static IPs and persistent disks, continue to be charged until they are deleted.

Almost all instances can be stopped and later restarted, except for virtual machines with local SSDs, which cannot be restarted.

When you stop an instance, it has the following effects on the instance and on resources related to the instance:

Resource Effect
Persistent disks

Persistent disks are maintained when an instance is stopped, even persistent disks that are marked for auto-delete.

You will continue to be charged for persistent disks associated with stopped virtual machines, just as you would be for a persistent disk that is not associated with any virtual machine.

RAM and virtual machine state

Reset to power-on state, no data is saved.

External ephemeral IPs

Ephemeral IPs are released when an instance is stopped, but a new ephemeral IP address is acquired when the instance is restarted.

External static IPs

Static external IPs are maintained.

Static IPs assigned to stopped instances are charged as if they are not attached to any instance. See pricing page for details.

Internal IPs / MAC address

Internal IPs and MAC addresses are maintained.


Stopped instances are not billed, but when a stopped instance is restarted, the 10 minute minimum billing duration will be enforced.

When an instance is stopped, you can still perform actions that can affect the stopped instance, such as:

  • Adding or removing attached disks
  • Updating a disk’s auto-delete setting
  • Modifying instance tags
  • Modifying custom instance metadata
  • Modifying project-wide metadata
  • Removing or setting a new static IP
  • Modifying the instance’s scheduling options

However, you cannot change an instance’s machine type or image.

Stop an instance

To stop an instance:

  1. Using the API, make a request to the stop() method, which powers off the instance. In the Google Developers Console, you can stop an instance using the Shutdown button. In gcloud compute, make this request using the instances stop subcommand:

    $ gcloud compute instances stop INSTANCE

    You can use gcloud from outside or within the instance.

    In the API directly, you can make a POST request to the following URI:


    We recommend stopping an instance via API using one of the above methods.

  2. Make a request to the guest OS using sudo poweroff, while you are logged into the virtual machine.

    Although this method works, it can be risky because there is no mechanism in place to check whether the instance you are shutting down has a local SSD device attached. If it does and you stop the instance, you will not be able to restart the instance later. For these reasons, we recommend using the API to stop an instance.

    If you still would like to use this method, run the following command on your instance:

    me@my-instance:~$ sudo poweroff
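The API route from step 1 can also be driven through the client libraries, in the same style as the other Python snippets in this document. A sketch, assuming a service object built from the standard Python API client; instances().stop() is the API method behind the gcloud command:

```python
def stop_instance(gce_service, project, zone, instance):
    # Issues instances().stop() and returns the resulting zone
    # operation resource, which you can poll for completion.
    request = gce_service.instances().stop(
        project=project, zone=zone, instance=instance)
    return request.execute()
```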

If you want to restart the instance after stopping it, see Restarting a stopped instance.

You could choose to leave an instance in a terminated state indefinitely but keep in mind that terminated instances still count against your quota. If you do not plan to restart the instance, you should consider deleting it.

Deleting an instance

You can permanently delete an instance using the gcloud compute instances delete command. When you delete an instance in this way, the instance is shut down, removed from the list of instances, and all resources attached to the instance are released, such as persistent disks and any static IP addresses.

To delete an instance, use the following command:

$ gcloud compute instances delete INSTANCE [INSTANCE ...]

Instance shutdown period

When you shut down or delete an instance, Compute Engine sends the ACPI Power Off signal to the instance and waits a short period of time for your instance to shut down cleanly. This period usually lasts at least 90 seconds, but could be longer. After this grace period, if your instance is still running, Compute Engine will forcefully terminate it, even if your shutdown script is still running.

If you choose to run a shutdown script during this period, we recommend that your shutdown script finishes running within this time period, so that the operating system has time to complete its shutdown, including flushing buffers to disk.
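As an illustration, a minimal shutdown script might look like the following sketch. The log path is an assumption; the point is to do only quick work and flush it to disk well inside the grace period:

```python
#!/usr/bin/env python
"""A minimal shutdown-script sketch; /tmp/shutdown.log is illustrative."""
import os
import time

LOG = '/tmp/shutdown.log'

with open(LOG, 'a') as f:
    f.write('shutdown started at %s\n' % time.ctime())
    # Do any quick cleanup here, then force the log entry out to disk
    # so it survives the power-off.
    f.flush()
    os.fsync(f.fileno())
    f.write('shutdown finished\n')
```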

Restart an instance

It is possible to restart or reset your instance in multiple ways.

Reset an instance

You can perform a hard reset on an instance by using the instances reset command or by making a POST request to the following URI:


Performing a reset on your instance is similar to pressing the reset button on your computer: it forces a restart, wiping the memory contents of the machine and resetting the virtual machine to its initial state. Note that your instance remains in RUNNING mode through the reset.

To reset your instance using gcloud compute:

$ gcloud compute instances reset INSTANCE

To reset your instance using the client libraries, construct a request to the instances().reset method:

def resetInstance(auth_http, gce_service):
  request = gce_service.instances().reset(project=PROJECT_ID, zone=ZONE_NAME, instance=INSTANCE_NAME)
  response = request.execute(auth_http)

  print(response)

For more information on this method, see the instances().reset reference documentation.

Starting a stopped instance

You can restart a stopped instance with its original configuration using the instances().start method. This method boots up a powered-down virtual machine instance that is currently in TERMINATED state.

Using the start method is different from reset() and other restart methods because it allows you to restart an instance in a TERMINATED state, whereas methods such as reset() and sudo reboot work only on instances that are currently running.

Almost all instances can be stopped and restarted, except for virtual machines with local SSDs, which cannot be restarted. If an instance is marked as TERMINATED and does not have a local SSD device attached, you can restart it.

To restart a stopped instance:

$ gcloud compute instances start INSTANCE --zone ZONE

In the API, you can make a POST request to the following URI:


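With the client libraries, the equivalent call is instances().start(). A sketch in the same style as the other snippets here, assuming a service object from the Python API client:

```python
def start_instance(gce_service, project, zone, instance):
    # instances().start() boots a TERMINATED instance back into the
    # RUNNING state; the call returns a zone operation resource.
    request = gce_service.instances().start(
        project=project, zone=zone, instance=instance)
    return request.execute()
```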
Other restart methods

You can also reset your instance using the following methods:

  • sudo reboot (Linux only) - Called from within the instance. Wipes the memory and re-initializes the instance with the original metadata, image, and persistent disks. It will not pick up any updated versions of the image, and the instance will retain the same ephemeral IP address. This is similar to restarting your computer.
  • Rebooting a Windows instance - You can reboot a Windows instance, similar to sudo reboot above, using the Start menu. In the Start menu, click on the arrow next to Log off and click Restart.
  • gcloud compute instances delete followed by gcloud compute instances create - This is a completely destructive restart, and will initialize the instance with any information passed into gcloud compute instances create. You can then select any new images or other resources you'd like to use. The restarted instance will probably have a different IP address. This method potentially swaps the physical machine hosting the instance.

Listing all instances

You can see a list of all instances in a project by calling instances list:

$ gcloud compute instances list
example-instance   us-central1-a n1-standard-1 RUNNING
example-instance-2 us-central1-a n1-standard-1 RUNNING

By default, gcloud compute provides an aggregate listing of all your resources across all available zones. If you want a list of resources from just a single zone, provide the --zones flag in your request.

In the API, you need to make requests to two different methods to get a list of aggregate resources or a list of resources within a zone. To make a request for an aggregate list, make a GET request to that resource's aggregatedList URI:


In the client libraries, make a request to the instances().aggregatedList function:

def listAllInstances(auth_http, gce_service):
  request = gce_service.instances().aggregatedList(project=PROJECT_ID)
  response = request.execute(auth_http)

  print(response)

To make a request for a list of instances within a zone, make a GET request to the following URI:


In the API client libraries, make a instances().list request:

def listInstances(auth_http, gce_service):
  request = gce_service.instances().list(project=PROJECT_ID, zone=ZONE_NAME)
  response = request.execute(auth_http)

  print(response)
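For reference, an aggregatedList response groups instances by zone under an items map. The following sketch flattens such a response into a sorted list of instance names; field names follow the v1 API response body:

```python
def instance_names(aggregated_response):
    # Collect instance names from an instances.aggregatedList response,
    # where items maps "zones/ZONE" keys to per-zone instance lists.
    names = []
    for scope in aggregated_response.get('items', {}).values():
        for inst in scope.get('instances', []):
            names.append(inst['name'])
    return sorted(names)
```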

Handling instance failures

Unfortunately, individual instances will experience failures from time to time, for a variety of reasons: unexpected outages, hardware errors, or other system failures. To mitigate such situations, use persistent disks and back up your data routinely.

If an instance fails, it will be restarted automatically, with the same root persistent disk, metadata, and instance settings as when it failed. To control the automatic restart behavior for an instance, see How to set scheduling options. However, if an instance is terminated for a zone maintenance window it will stay terminated and will not be restarted when the zone exits the maintenance window.

In general, you should design your system to be robust enough that the failure of a single instance should not be catastrophic to your application. For more information, see Designing Robust Systems.

Creating a custom image

You can create a custom instance image by customizing a provided image, and then loading it onto new instances as they are brought up. See Creating an image from a root persistent disk for more information.

Reducing Compute Engine costs with Preemptible Instances

Google Compute Engine offers preemptible instances that you can create and run at a much lower price than regular instances. If your applications are fault-tolerant and can withstand instance instability, then preemptible instances can reduce your Compute Engine costs significantly. See the Preemptible Instances documentation for more information.