Enabling Nested Virtualization for VM Instances

This document describes how to enable support for nested virtualization on Compute Engine VM instances. It also covers the basic steps of starting and configuring a nested VM.

Nested virtualization adds support for Intel VT-x processor virtualization instructions to Compute Engine VMs. With nested virtualization, you start a VM instance as normal on Compute Engine and then install a KVM-compatible hypervisor on that instance so you can run another VM on top of it. Other hypervisors, such as Hyper-V, ESX, and Xen, are not currently supported. You can use nested virtualization on any Linux VM instance running on a Haswell or newer CPU platform.

Nested virtualization is well suited to VM-based applications and workloads where converting or importing your VM images to Compute Engine is not feasible. For example, you can use nested virtualization to build a disaster recovery solution for an on-premises workload running on KVM-based virtual machines that fails over seamlessly to VMs running on Compute Engine, without the extra time or orchestration needed to convert your KVM-based VMs to native Compute Engine images. Another workload that is well suited to nested virtualization is a software-validation framework that needs to test and validate new versions of a software package on numerous versions of different KVM-compatible OSes. Running nested VMs removes the need to convert and manage a large library of Compute Engine images.

How nested virtualization works

Compute Engine VMs run on top of physical hardware (the host server), referred to as the L0 environment. On each host server, a pre-installed hypervisor allows a single server to host multiple Compute Engine VMs, which are referred to as L1 or native VMs. When you use nested virtualization, you install another hypervisor on top of the L1 guest OS and use that L1 hypervisor to create nested VMs, referred to as L2 VMs. L1 or native Compute Engine VMs that run a guest hypervisor and nested VMs are also referred to as host VMs.

[Diagram: nested virtualization, showing the L0 host hardware and hypervisor, an L1 Compute Engine VM running a guest hypervisor, and an L2 nested VM]

Restrictions

  • Nested virtualization can only be enabled for L1 VMs running on Haswell processors or later. If the default processor for a zone is Sandy Bridge or Ivy Bridge, you can use minimum CPU selection to choose Haswell or later for a particular instance (see the example after this list). Review the Regions and Zones page to determine which zones support Haswell or later processors.
  • Nested virtualization is only supported for KVM-based hypervisors running on Linux instances. ESX and Xen hypervisors are not supported.
  • Nested virtualization does not currently support Windows instances.
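
For example, to pin a new instance to Haswell or later at creation time, you can set a minimum CPU platform with the gcloud command-line tool. This is a sketch; the instance name and zone are placeholders:

    gcloud compute instances create example-l1-vm --zone us-central1-b \
        --min-cpu-platform "Intel Haswell"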

Tested KVM versions

Google runs basic nested virtualization boot and integration tests using the following Linux distros and kernel/KVM versions on the Compute Engine instance:

  • CentOS 7 with kernel version 3.10
  • Debian 9 with kernel version 4.9
  • Debian 8 with kernel version 3.16
  • RHEL 7 with kernel version 3.10
  • SLES 12.2 with kernel version 4.4
  • SLES 12.1 with kernel version 3.12
  • Ubuntu 16.04 LTS with kernel version 4.4
  • Ubuntu 14.04 LTS with kernel version 3.13

If you are having trouble running nested VMs on distributions and kernel/KVM versions not listed here, reproduce your issue using one of the above environments as the guest operating system on the host Compute Engine instance before reporting the issue as a bug.
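
To check which kernel version an instance is running before comparing it against this list, you can use standard Linux tools:

    # Print the running kernel version
    uname -r

    # Confirm that the KVM kernel modules are loaded; on some images they
    # load only after you install a KVM package such as qemu-kvm
    lsmod | grep kvm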

Performance

Even with hardware-assisted nested virtualization, there will be a performance penalty for the nested VMs themselves and any applications or workloads running inside them. While it is impossible to predict the exact performance degradation for a given application or workload, expect at least a 10% penalty for CPU-bound workloads and possibly much more for I/O bound workloads.
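
While exact numbers depend on the workload, you can measure the penalty yourself by running the same CPU-bound benchmark on the L1 VM and inside the L2 nested VM and comparing throughput. The following is an illustrative sketch, assuming the sysbench package is available in both environments (flag syntax varies across sysbench versions):

    # Install the benchmark tool (Debian/Ubuntu); run these in both the L1 and L2 VM
    sudo apt-get update && sudo apt-get install -y sysbench

    # CPU-bound benchmark: compare the reported events per second
    sysbench cpu --cpu-max-prime=20000 run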

Enabling nested virtualization on an instance

You can enable nested virtualization using the API or the gcloud command-line tool. To enable nested virtualization, you must create a custom image with a special license key that enables VMX in the L1 or host VM instance, and then use that image on an instance that meets the restrictions for nested virtualization. The license key does not incur additional charges.

  1. Create a boot disk from a public image or from a custom image with an operating system. Alternatively, you can skip this step and apply the license to an existing disk from one of your VM instances.

    gcloud

    Use the gcloud command-line tool to create the disk from the boot disk image of your choice. For this example, create a disk named disk1 from the debian-9 image family:

    gcloud compute disks create disk1 --image-project debian-cloud --image-family debian-9

    API

    Create a disk named disk1 from the debian-9 image family:

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/us-central1-b/disks
    
    {
     "name": "disk1",
     "sourceImage": "/projects/debian-cloud/global/images/family/debian-9"
    }

    where [PROJECT_ID] is your project ID.

  2. Using the boot disk that you created or a boot disk from an existing instance, create a custom image with the special license key required for virtualization.

    gcloud

    If you are creating an image using the `gcloud` command-line tool, provide the following license URL using the `--licenses` flag:

    https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx

    For example, the following command creates an image named `nested-vm-image` from an example disk named `disk1`:

    gcloud compute images create nested-vm-image \
      --source-disk disk1 --source-disk-zone us-central1-b \
      --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

    API

    In the API, include the licenses property in your API request:

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/images
    
    {
       "licenses": ["projects/vm-options/global/licenses/enable-vmx"],
       "name": "nested-vm-image",
       "sourceDisk": "zones/us-central1-b/disks/disk1"
    }

    where [PROJECT_ID] is your project ID.

  3. After you create the image with the necessary license, you can delete the source disk if you no longer need it.
  4. Create a VM instance using the new custom image with the license. You must create the instance in a zone that supports the Haswell CPU platform or newer. For example:

    gcloud compute instances create example-nested-vm --zone us-central1-b \
        --image nested-vm-image

  5. Confirm that nested virtualization is enabled in the VM.
    1. Connect to the VM instance. For example:

      gcloud compute ssh example-nested-vm

    2. Check that nested virtualization is enabled by running the following command. A non-zero response confirms that nested virtualization is enabled.

      grep -cw vmx /proc/cpuinfo
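
You can also confirm that the license is attached to the custom image itself. For example:

    gcloud compute images describe nested-vm-image \
        --format="value(licenses)"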

Starting a nested VM

You can start a nested VM in many different ways. This section provides an example of starting a nested VM using qemu-system-x86_64 on an L1 VM running Debian. If you are having trouble running nested VMs using methods other than this documented process, reproduce your issue using this process before reporting the issue as a bug.

  1. Connect to the VM instance. For example:

    gcloud compute ssh example-nested-vm
    
  2. Update the VM instance and install some necessary packages:

    sudo apt-get update && sudo apt-get install qemu-kvm -y
    
  3. Download an OS image:

    wget https://people.debian.org/~aurel32/qemu/amd64/debian_squeeze_amd64_standard.qcow2
    
  4. Run screen:

    screen
    
  5. Press Enter at the screen welcome prompt.

  6. Start the nested VM. When prompted, log in with user: root, password: root.

    sudo qemu-system-x86_64 -enable-kvm -hda debian_squeeze_amd64_standard.qcow2 -m 512 -curses
    
  7. Test that your VM has external access:

    user@nested-vm:~$ wget google.com && cat index.html

  8. When you are done, exit the nested VM by pressing Ctrl + a, Ctrl + d.
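
If you prefer not to interact with the guest through -curses, you can run the nested VM headless and reach it over SSH using QEMU's user-mode networking. This is a sketch outside the tested process above; the forwarded port 2222 is an arbitrary choice, and it assumes the guest image runs an SSH server:

    # Run the guest in the background, forwarding host port 2222 to guest port 22
    sudo qemu-system-x86_64 -enable-kvm \
        -hda debian_squeeze_amd64_standard.qcow2 -m 512 \
        -display none -daemonize \
        -net nic -net user,hostfwd=tcp::2222-:22

    # Connect to the nested VM (user: root, password: root)
    ssh -p 2222 root@localhost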

Starting a private bridge between the host and nested VMs

To enable connections between the host and nested VM, you can create a private bridge. This sample procedure is intended for an L1 VM running Debian.

  1. Connect to the VM instance. For example:

    gcloud compute ssh example-nested-vm
    
  2. Update the VM instance and install some necessary packages:

    sudo apt-get update && sudo apt-get install uml-utilities qemu-kvm bridge-utils virtinst libvirt-bin -y
    
  3. Start the default network that comes with the libvirt package:

    sudo virsh net-start default
    
  4. Check that you now have the virbr0 bridge:

    sudo ifconfig -a
    
     eth0      Link encap:Ethernet  HWaddr 42:01:0a:80:00:02
               inet addr:10.128.0.2  Bcast:10.128.0.2  Mask:255.255.255.255
               UP BROADCAST RUNNING MULTICAST  MTU:1460  Metric:1
               RX packets:14799 errors:0 dropped:0 overruns:0 frame:0
               TX packets:9294 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:97352041 (92.8 MiB)  TX bytes:1483175 (1.4 MiB)

     lo        Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING  MTU:65536  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

     virbr0    Link encap:Ethernet  HWaddr 5a:fa:7e:d2:8b:0d
               inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
               UP BROADCAST MULTICAST  MTU:1500  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  5. Create a dummy interface:

    sudo modprobe dummy
    
  6. Add the dummy interface to the bridge:

    sudo brctl addif virbr0 dummy0
    
  7. Check that the interface was added:

    sudo brctl show
    
    bridge name    bridge id          STP enabled    interfaces
    virbr0         8000.6a20fa136bb6  yes            dummy0

  8. Create a tun interface to go from the host VM to the nested VM:

    sudo tunctl -t tap0
    sudo ifconfig tap0 up
    
  9. Attach the tap interface to the bridge:

    sudo brctl addif virbr0 tap0
    
  10. Double-check that the bridge network is properly set up:

    sudo brctl show
    
    bridge name     bridge id               STP enabled     interfaces
    virbr0          8000.5254005085fe       yes              dummy0
                                                             tap0
  11. Download an OS image:

    wget https://people.debian.org/~aurel32/qemu/amd64/debian_squeeze_amd64_standard.qcow2
    
  12. Run screen:

    screen
    
  13. Press Enter at the screen welcome prompt.

  14. Start your nested VM. When prompted, log in with user: root, password: root.

    sudo qemu-system-x86_64 -enable-kvm -hda debian_squeeze_amd64_standard.qcow2 -m 512 -net nic -net tap,ifname=tap0,script=no -curses
    
  15. On the nested VM, run ifconfig to confirm that the VM has an address in the virbr0 space, such as 192.168.122.89:

    user@nested-vm:~$ ifconfig

  16. Start a dummy webserver on port 8000:

    user@nested-vm:~$ python -m SimpleHTTPServer

  17. Exit the nested VM by pressing Ctrl + a, Ctrl + d.

  18. Test that your host VM can reach the nested VM:

    curl 192.168.122.89:8000
    

    The nested VM should return something similar to:

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ol>
    <li><a href=".aptitude/">.aptitude/</a>
    <li><a href=".bashrc">.bashrc</a>
    <li><a href=".profile">.profile</a>
    </ol>
    <hr>
    </body>
    </html>
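
The tunctl and brctl tools used above are deprecated on newer distributions. On those systems, the same tap-and-bridge setup can be approximated with the iproute2 ip command; a sketch, assuming the virbr0 bridge already exists:

    # Create and bring up a tap device, then attach it to the existing bridge
    sudo ip tuntap add dev tap0 mode tap
    sudo ip link set tap0 up
    sudo ip link set tap0 master virbr0

    # Verify that tap0 is attached to virbr0
    ip link show master virbr0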
    

Configuring a nested VM to be accessible from outside the host VM

You can set up an instance with multiple network interfaces, or with an alias IP, so that VMs outside the host VM can reach the nested VM.

The following sample procedure sets up your host and nested VM so that the nested VM is accessible from other VMs on the same network using alias IPs. This procedure is intended for an L1 VM running Debian.

  1. Create a VM with nested virtualization enabled, making sure to include an alias IP range and to allow HTTP/HTTPS traffic. For example:

    gcloud compute instances create example-nested-vm --image nested-vm-image \
        --tags http-server,https-server --can-ip-forward \
        --network-interface subnet=subnet1,aliases=/30
    
  2. Connect to the VM instance. For example:

    gcloud compute ssh example-nested-vm
    
  3. Update the VM instance and install some necessary packages:

    sudo apt-get update && sudo apt-get install uml-utilities qemu-kvm bridge-utils virtinst libvirt-bin -y
    
  4. Start the default network that comes with the libvirt package:

    sudo virsh net-start default
    
  5. Check that you now have the virbr0 bridge:

    sudo ifconfig -a
     
     eth0      Link encap:Ethernet  HWaddr 42:01:0a:80:00:02
               inet addr:10.128.0.2  Bcast:10.128.0.2  Mask:255.255.255.255
               UP BROADCAST RUNNING MULTICAST  MTU:1460  Metric:1
               RX packets:14799 errors:0 dropped:0 overruns:0 frame:0
               TX packets:9294 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:97352041 (92.8 MiB)  TX bytes:1483175 (1.4 MiB)

     lo        Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING  MTU:65536  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

     virbr0    Link encap:Ethernet  HWaddr 5a:fa:7e:d2:8b:0d
               inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
               UP BROADCAST MULTICAST  MTU:1500  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  6. Create a dummy interface:

    sudo modprobe dummy
    
  7. Add the dummy interface to the bridge:

    sudo brctl addif virbr0 dummy0
    
  8. Check that the interface was added:

    sudo brctl show
    bridge name    bridge id          STP enabled    interfaces
    virbr0         8000.6a20fa136bb6  yes            dummy0

  9. Create a tun interface to go from the host VM to the nested VM:

    sudo tunctl -t tap0
    sudo ifconfig tap0 up
    
  10. Attach the tap interface to the bridge:

    sudo brctl addif virbr0 tap0
    
  11. Double-check that the bridge network is properly set up:

    sudo brctl show
    
    bridge name     bridge id               STP enabled     interfaces
    virbr0          8000.5254005085fe       yes              dummy0
                                                             tap0
  12. Download an OS image:

    wget https://people.debian.org/~aurel32/qemu/amd64/debian_squeeze_amd64_standard.qcow2
    
  13. Run screen:

    screen
    
  14. Press Enter at the screen welcome prompt.

  15. Start your nested VM. When prompted, log in with user: root, password: root.

    sudo qemu-system-x86_64 -enable-kvm -hda debian_squeeze_amd64_standard.qcow2 -m 512 -net nic -net tap,ifname=tap0,script=no -curses
    
  16. On the nested VM, run ifconfig to confirm that the VM has an address in the virbr0 space, such as 192.168.122.89:

    user@nested-vm:~$ ifconfig

  17. Start a dummy webserver on port 8000:

    user@nested-vm:~$ python -m SimpleHTTPServer

  18. Exit the nested VM by pressing Ctrl + a, Ctrl + d.

  19. Test that your host VM can reach the nested VM:

    curl 192.168.122.89:8000
    

    The nested VM should return something similar to:

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ol>
    <li><a href=".aptitude/">.aptitude/</a>
    <li><a href=".bashrc">.bashrc</a>
    <li><a href=".profile">.profile</a>
    </ol>
    <hr>
    </body>
    </html>
    
  20. On your host VM, set up iptables to allow forwarding from your host VM to your nested VM. For example, if you want to use an alias IP of 10.128.0.13 to forward traffic to your nested VM at 192.168.122.89:

    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo iptables -t nat -A PREROUTING -d 10.128.0.13 -j DNAT --to-destination 192.168.122.89
    sudo iptables -t nat -A POSTROUTING -d 192.168.122.89 -j MASQUERADE
    sudo iptables -A INPUT -p udp -j ACCEPT
    sudo iptables -A FORWARD -p tcp -j ACCEPT
    sudo iptables -A OUTPUT -p tcp -j ACCEPT
    sudo iptables -A OUTPUT -p udp -j ACCEPT
    

    Note: You might need to flush your iptables if this step does not work for you.

  21. Next, log into another VM that is on the same network as the host VM and make a curl request to the alias IP. For example:

    user@another-vm:~$ curl 10.128.0.13:8000

    The nested VM should return something similar to:

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ol>
    <li><a href=".aptitude/">.aptitude/</a>
    <li><a href=".bashrc">.bashrc</a>
    <li><a href=".profile">.profile</a>
    </ol>
    <hr>
    </body>
    </html>
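
If the curl request from the other VM fails, you can check both halves of the setup from the host VM. The iptables packet counters show whether the DNAT rule is matching, and gcloud can confirm the alias IP range assigned to the instance; a quick sketch:

    # List NAT rules with packet counters to confirm the DNAT rule is matching
    sudo iptables -t nat -L PREROUTING -n -v

    # Confirm the alias IP range on the instance's network interface
    gcloud compute instances describe example-nested-vm \
        --format="value(networkInterfaces[0].aliasIpRanges)"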
    

Troubleshooting

Google runs basic nested virtualization boot and integration tests using specific Linux distros and kernel/KVM versions on the Compute Engine instance, and those tests follow the specific processes documented on this page. Before reporting an issue as a bug, reproduce the issue using one of the tested environments and the documented process.

Running grep -cw vmx /proc/cpuinfo returns 0, which means my VM is not enabled for nesting.

  1. Make sure you have started your VM with a CPU platform of Haswell or higher.
  2. Make sure you are using the correct license with your VM image.

I can't exit out of my nested VM.

If you did not run screen before starting the nested VM, you cannot detach from it. Log into the host VM in another terminal, kill the qemu process, and then run screen on the host VM before you start the next nested VM.
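
For example, from a second SSH session on the host VM (note that this sketch terminates every QEMU guest running on the host):

    # Kill the running QEMU guest(s) from another terminal on the host VM
    sudo pkill -f qemu-system-x86_64

    # Start a screen session before launching the next nested VM
    screen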

My iptables rules aren't forwarding traffic to my nested VM.

  • iptables resolves rules in order, from top to bottom; make sure your forwarding rules come before any other rules that could match the same packets.
  • Check that there are no conflicting rules intercepting your packets.
  • Consider flushing your iptables:

    1. First, set the default policies:

      sudo iptables -P INPUT ACCEPT
      sudo iptables -P FORWARD ACCEPT
      sudo iptables -P OUTPUT ACCEPT
      
    2. Next, flush all tables and chains, and delete non-default chains:

      sudo iptables -t nat -F
      sudo iptables -t mangle -F
      sudo iptables -F
      sudo iptables -X
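
    After flushing, re-create the DNAT, MASQUERADE, and forwarding rules from the alias IP procedure above, because the flush removes them along with any conflicting rules.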
      