This document describes how to enable support for nested virtualization on Compute Engine VM instances. It also covers the basic steps of starting and configuring a nested VM.
Nested virtualization adds support for Intel VT-x processor virtualization instructions to Compute Engine VMs. Using nested virtualization, you start a VM instance as normal on Compute Engine and then install a KVM-compatible hypervisor on the VM instance so that you can run another VM instance on top of that hypervisor. You can use nested virtualization on any Linux VM instance running on a Haswell or newer CPU platform. For other constraints, see Restrictions below.
Nested virtualization is well suited to VM-based applications and workloads for which converting or importing your VM images to Compute Engine is not feasible. For example, you can use nested virtualization to build a disaster recovery solution for an on-premises workload that runs on KVM-based virtual machines and can fail over seamlessly to VMs running on Compute Engine, without the extra time or orchestration needed to convert the KVM-based VMs to native Compute Engine images. Another workload that could be well suited to nested virtualization is a software-validation framework that needs to test and validate new versions of a software package on numerous versions of different KVM-compatible OSes. Running nested VMs removes the need to convert and manage a large library of Compute Engine images.
Before you begin
- If you want to use the command-line examples in this guide:
- Install or update to the latest version of the gcloud command-line tool.
- Set a default region and zone.
- If you want to use the API examples in this guide, set up API access.
- Read about Nested virtualization.
How nested virtualization works
Compute Engine VMs run on top of physical hardware (the host server), which is referred to as the L0 environment. Within the host server, a pre-installed hypervisor enables a single server to host multiple Compute Engine VMs, which are referred to as L1 or native VMs on Compute Engine. When you use nested virtualization, you install another hypervisor on top of the L1 guest OS and create nested VMs, referred to as L2 VMs, using the L1 hypervisor. An L1 or native Compute Engine VM that runs a guest hypervisor and nested VMs is also referred to as a host VM.
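For example, once you complete the setup described below, you can observe this layering from inside the VMs. A minimal sketch (the nested-VM behavior assumes the L1 hypervisor does not re-expose VT-x to its guests, which is the default):

# On the L1 (host) VM, the VT-x extensions appear as the vmx CPU flag:
grep -m1 -o vmx /proc/cpuinfo
# On an L2 (nested) VM, the same check typically prints nothing, because
# the L1 hypervisor does not expose VT-x to its guests by default.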
Restrictions
Nested virtualization can only be enabled for L1 VMs running on Haswell processors or later. If the default processor for a zone is Sandy Bridge or Ivy Bridge, you can use minimum CPU selection to choose Haswell or later for a particular instance. Review the Regions and Zones page to determine which zones support Haswell or later processors.
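For example, you can check which CPU platforms a given zone offers by inspecting the zone resource with the gcloud tool; this sketch assumes the availableCpuPlatforms field exposed by the Compute Engine API:

gcloud compute zones describe us-central1-b --format="value(availableCpuPlatforms)"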
E2 and N2D machine types do not support nested virtualization; all other machine types support nested virtualization.
Nested virtualization is supported only for KVM-based hypervisors running on Linux instances. Hyper-V, ESX, and Xen hypervisors are not supported.
Windows VMs do not support nested virtualization; that is, host VMs must run a Linux OS. However, nested VMs can run certain Windows OSes (described below).
Tested KVM versions
Google runs basic nested virtualization boot and integration tests using the following host VM and nested VM combinations:
- Debian 9 with kernel version 4.9 hosting the following nested VMs:
- CentOS 6.5 with kernel version 2.6
- Debian 9 with kernel version 4.9
- RHEL 5.11 with kernel version 2.6
- SLES 12 SP3 with kernel version 4.4
- Ubuntu 16.04 LTS with kernel version 4.15
- Windows Server 2016 Datacenter Edition
- SLES 12 SP3 with kernel version 4.4 hosting the following nested VMs:
- SLES 12 SP3 with kernel version 4.4
- Ubuntu 16.04 LTS with kernel version 4.15 hosting the following nested VMs:
- Ubuntu 16.04 LTS with kernel version 4.15
If you are having trouble running nested VMs on distributions and kernel/KVM versions not listed here, reproduce your issue using one of the above environments as the guest operating system on the host Compute Engine instance before reporting the issue.
Performance
Even with hardware-assisted nested virtualization, there will be a performance penalty for the nested VMs themselves and any applications or workloads running inside them. While you can't predict the exact performance degradation for a given application or workload, expect at least a 10% penalty for CPU-bound workloads and possibly much more for I/O bound workloads.
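Because the penalty depends heavily on the workload, consider measuring it directly. As a rough sketch, you could run the same CPU benchmark on the L1 VM and again inside the nested VM and compare the results; this assumes the sysbench package is available in your distribution's repositories:

# Run on the L1 VM, then repeat inside the nested VM and compare the timings.
sudo apt update && sudo apt install -y sysbench
sysbench --test=cpu --cpu-max-prime=20000 run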
Enabling nested virtualization on an instance
There are two steps required to use nested virtualization:
Nested virtualization must be allowed at the project, folder, or organization level. This is the default setting, so unless someone in your organization has disabled nested virtualization, you do not need to do anything to enable it.
The VM instances for which you want to use nested virtualization must use a custom image with a special license key. This is covered in the steps that follow.
To enable nested virtualization on a VM instance, create a custom image with a special license key that enables VMX in the L1 or host VM instance and then use that image on an instance that meets the restrictions for nested virtualization. The license key does not incur additional charges.
- Create a boot disk from a public image or from a custom image with an operating system. Alternatively, you can skip this step and apply the license to an existing disk from one of your VM instances.
gcloud

Use the gcloud command-line tool to create the disk from the boot disk image of your choice. For this example, create a disk named disk1 from the debian-9 image family:

gcloud compute disks create disk1 --image-project debian-cloud --image-family debian-9 --zone us-central1-b
API

Create a disk named disk1 from the debian-9 image family:

POST https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/us-central1-b/disks

{
  "name": "disk1",
  "sourceImage": "/projects/debian-cloud/global/images/family/debian-9"
}

where [PROJECT_ID] is your project ID.

- Using the boot disk that you created or a boot disk from an existing instance, create a custom image with the special license key required for virtualization.
gcloud

If you are creating an image using the gcloud command-line tool, provide the following license URL using the --licenses flag:

--licenses https://compute.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx

For example, the following command creates an image named nested-vm-image from an example disk named disk1:

gcloud compute images create nested-vm-image \
    --source-disk disk1 --source-disk-zone us-central1-b \
    --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
API

In the API, include the licenses property in your API request:

POST https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/images

{
  "licenses": ["projects/vm-options/global/licenses/enable-vmx"],
  "name": "nested-vm-image",
  "sourceDisk": "zones/us-central1-b/disks/disk1"
}

where [PROJECT_ID] is your project ID.

- After you create the image with the necessary license, you can delete the source disk if you no longer need it. For example, as shown below, you can confirm that the license is attached and then remove the example disk.
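The names below are the example disk1 and nested-vm-image from the previous steps; substitute your own:

gcloud compute images describe nested-vm-image --format="value(licenses)"
gcloud compute disks delete disk1 --zone us-central1-b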
- Create a VM instance using the new custom image with the license. You must create the instance in a zone that supports the Haswell CPU Platform or newer. For example:
gcloud compute instances create example-nested-vm --zone us-central1-b \
    --min-cpu-platform "Intel Haswell" \
    --image nested-vm-image
- Confirm that nested virtualization is enabled in the VM.
- Connect to the VM instance. For example:
gcloud compute ssh example-nested-vm
- Check that nested virtualization is enabled by running the following command. A nonzero response confirms that nested virtualization is enabled.
grep -cw vmx /proc/cpuinfo
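Optionally, you can also check with lscpu, which reports the virtualization extension by name; on a VM with nested virtualization enabled, expect a line such as Virtualization: VT-x:

lscpu | grep -i virtualization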
Starting a nested VM
You can start a nested VM in many different ways. This section provides an example of starting a nested VM using qemu-system-x86_64 on an L1 VM running Debian. If you are having trouble running nested VMs using methods other than this documented process, reproduce your issue using this process before reporting the issue as a bug.

Connect to the VM instance. For example:
gcloud compute ssh example-nested-vm
Update the VM instance and install some necessary packages:
sudo apt update && sudo apt install qemu-kvm -y
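Optionally, you can verify that KVM acceleration is usable with the kvm-ok tool; this assumes the cpu-checker package that provides it is available in your distribution's repositories:

sudo apt install -y cpu-checker && sudo kvm-ok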
Download an OS image:
wget https://people.debian.org/~aurel32/qemu/amd64/debian_squeeze_amd64_standard.qcow2
Run screen:

screen

Press Enter at the screen welcome prompt.

Start the nested VM. When prompted, log in with user: root, password: root.

sudo qemu-system-x86_64 -enable-kvm -hda debian_squeeze_amd64_standard.qcow2 -m 512 -curses
Test that your VM has external access:
user@nested-vm:~$ wget google.com && cat index.html
When you have finished, detach from the screen session with Ctrl+A, Ctrl+D.
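To return to the nested VM later, reattach to the screen session. From inside the guest, you can shut the nested VM down cleanly:

# Reattach to the detached screen session:
screen -r
# Then, inside the nested VM, as root:
poweroff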
Starting a private bridge between the host and nested VMs
To enable connections between the host and nested VM, you can create a private bridge. This sample procedure is intended for an L1 VM running Debian.
Note: VPC networks have a default maximum transmission unit (MTU) of 1460 bytes. However, the network MTU can be set to 1500 bytes. See Maximum transmission unit for background on network MTUs.

Connect to the VM instance. For example:
gcloud compute ssh example-nested-vm
Update the VM instance and install some necessary packages:
sudo apt update && sudo apt install uml-utilities qemu-kvm bridge-utils virtinst libvirt-daemon-system libvirt-clients -y
Start the default network that comes with the libvirt package:

sudo virsh net-start default
Check that you now have the virbr0 bridge:
sudo ifconfig -a

eth0      Link encap:Ethernet  HWaddr 42:01:0a:80:00:02
          inet addr:10.128.0.2  Bcast:10.128.0.2  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1460  Metric:1
          RX packets:14799 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9294 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:97352041 (92.8 MiB)  TX bytes:1483175 (1.4 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

virbr0    Link encap:Ethernet  HWaddr 5a:fa:7e:d2:8b:0d
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Create a tap interface to go from the host VM to the nested VM:
sudo tunctl -t tap0
sudo ifconfig tap0 up
Bond the tap interface to the virbr0 bridge:
sudo brctl addif virbr0 tap0
Double check that the bridge network is properly set up:
sudo brctl show

bridge name     bridge id               STP enabled     interfaces
virbr0          8000.5254005085fe       yes             tap0
Download an OS image:
wget https://people.debian.org/~aurel32/qemu/amd64/debian_squeeze_amd64_standard.qcow2
Run screen:

screen

Press Enter at the screen welcome prompt.

Start your nested VM:
sudo qemu-system-x86_64 -enable-kvm -hda debian_squeeze_amd64_standard.qcow2 -m 512 -net nic -net tap,ifname=tap0,script=no -curses
When prompted, log in with user: root, password: root.

On the nested VM, run ifconfig to confirm that the VM has an address in the virbr0 space, such as 192.168.122.89:

user@nested-vm:~$ ifconfig
Start a placeholder web server on port 8000:
user@nested-vm:~$ python -m SimpleHTTPServer
Detach from the screen session with Ctrl+A, Ctrl+D.

Test that your host VM can reach the nested VM:

curl 192.168.122.89:8000
The nested VM should return something similar to:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ol>
<li><a href=".aptitude/">.aptitude/</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".profile">.profile</a>
</ol>
<hr>
</body>
</html>
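You can also verify connectivity in the other direction. From inside the nested VM, the host end of the bridge should be reachable at the virbr0 address shown earlier:

user@nested-vm:~$ ping -c 2 192.168.122.1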
Configuring a nested VM to be accessible from outside the host VM
You can set up an instance with multiple network interfaces or set up an instance using an alias IP so that VMs outside the host VM can ping the nested VM.
The following sample procedure sets up your host and nested VM so that the nested VM is accessible from other VMs on the same network using alias IPs. This procedure uses an imaginary subnet called subnet1 that was previously created. You can replace subnet1 with the name of your own subnet, or create a new subnet called subnet1. Finally, note that this procedure is intended for an L1 VM running Debian.
Create a VM with nested virtualization enabled but make sure to include an alias IP range and support for HTTP/HTTPS traffic. For example:
gcloud compute instances create example-nested-vm --image nested-vm-image \
    --tags http-server,https-server --can-ip-forward \
    --min-cpu-platform "Intel Haswell" \
    --network-interface subnet=subnet1,aliases=/30
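Optionally, confirm which alias range was assigned by inspecting the instance's network interface; this sketch assumes the networkInterfaces and aliasIpRanges fields exposed by the Compute Engine API:

gcloud compute instances describe example-nested-vm \
    --format="value(networkInterfaces[0].aliasIpRanges)"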
Connect to the VM instance. For example:
gcloud compute ssh example-nested-vm
Update the VM instance and install some necessary packages:
sudo apt update && sudo apt install uml-utilities qemu-kvm bridge-utils virtinst libvirt-daemon-system libvirt-clients -y
Start the default network that comes with the libvirt package:

sudo virsh net-start default
Check that you now have the virbr0 bridge:
sudo ifconfig -a

eth0      Link encap:Ethernet  HWaddr 42:01:0a:80:00:02
          inet addr:10.128.0.2  Bcast:10.128.0.2  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1460  Metric:1
          RX packets:14799 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9294 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:97352041 (92.8 MiB)  TX bytes:1483175 (1.4 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

virbr0    Link encap:Ethernet  HWaddr 5a:fa:7e:d2:8b:0d
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Create a tap interface to go from the host VM to the nested VM:

sudo tunctl -t tap0
sudo ifconfig tap0 up
Bond the tap interface to the virbr0 bridge:
sudo brctl addif virbr0 tap0
Double check that the bridge network is properly set up:
sudo brctl show

bridge name     bridge id               STP enabled     interfaces
virbr0          8000.5254005085fe       yes             tap0
Download an OS image:
wget https://people.debian.org/~aurel32/qemu/amd64/debian_squeeze_amd64_standard.qcow2
Run screen:

screen

Press Enter at the screen welcome prompt.

Start your nested VM:
sudo qemu-system-x86_64 -enable-kvm -hda debian_squeeze_amd64_standard.qcow2 -m 512 -net nic -net tap,ifname=tap0,script=no -curses
When prompted, log in with user: root, password: root.

On the nested VM, run ifconfig to confirm that the VM has an address in the virbr0 space, such as 192.168.122.89:

user@nested-vm:~$ ifconfig
Start a placeholder web server on port 8000:
user@nested-vm:~$ python -m SimpleHTTPServer
Detach from the screen session with Ctrl+A, Ctrl+D.

Test that your host VM can reach the nested VM:

curl 192.168.122.89:8000
The nested VM should return something similar to:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ol>
<li><a href=".aptitude/">.aptitude/</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".profile">.profile</a>
</ol>
<hr>
</body>
</html>
On your host VM, set up iptables to allow forwarding from your host VM to your nested VM. For the L2 guest image used in these instructions (debian_squeeze_amd64_standard.qcow2), it is necessary to first flush the IP tables:

sudo iptables -F
Next, determine the VM's alias IP:
ip route show table local
The VM should return output similar to the following. In this example, there are two IP addresses associated with the VM's ethernet device, eth0. The first, 10.128.0.2, is the VM's primary IP address, returned by sudo ifconfig -a. The second, 10.128.0.13, is the VM's alias IP address.

local 10.128.0.2 dev eth0 proto kernel scope host src 10.128.0.2
broadcast 10.128.0.2 dev eth0 proto kernel scope link src 10.128.0.2
local 10.128.0.13/30 dev eth0 proto 66 scope host
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.122.0 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
local 192.168.122.1 dev virbr0 proto kernel scope host src 192.168.122.1
broadcast 192.168.122.255 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
To forward traffic from the alias IP (10.128.0.13) to the nested VM (192.168.122.89):
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -t nat -A PREROUTING -d 10.128.0.13 -j DNAT --to-destination 192.168.122.89
sudo iptables -t nat -A POSTROUTING -s 192.168.122.89 -j MASQUERADE
sudo iptables -A INPUT -p udp -j ACCEPT
sudo iptables -A FORWARD -p tcp -j ACCEPT
sudo iptables -A OUTPUT -p tcp -j ACCEPT
sudo iptables -A OUTPUT -p udp -j ACCEPT
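Optionally, list the NAT table to confirm that the DNAT and MASQUERADE rules were added:

sudo iptables -t nat -L -n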
Next, log into another VM that is on the same network as the host VM and make a curl request to the alias IP. For example:

user@another-vm:~$ curl 10.128.0.13:8000
The nested VM should return something similar to:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ol>
<li><a href=".aptitude/">.aptitude/</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".profile">.profile</a>
</ol>
<hr>
</body>
</html>
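Note that iptables rules configured this way do not persist across reboots. A minimal sketch of saving them, assuming you restore the saved file from a startup mechanism of your choosing:

sudo iptables-save | sudo tee /etc/iptables.rules
# On boot, restore with:
# sudo iptables-restore < /etc/iptables.rules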
Managing nested virtualization in an organization

If you have the orgPolicy.policyAdmin role, you can control whether projects and folders within the organization can use nested virtualization by updating the organization policy. To allow or prevent nested virtualization, update the Disable VM nested virtualization constraint.

As a general best practice, explicitly set the value of the Disable VM nested virtualization boolean constraint so that your projects, folders, or entire organization do not rely on the default setting.
Console
Go to the Organization policies page in the Google Cloud Console.
Go to the Organization policies page

From the drop-down menu at the top of the page, select the project, folder, or organization for which you want to edit organization policies. The Organization policies page displays a list of the organization policy constraints that are available.
Select the Disable VM nested virtualization constraint from the list on the Organization policies page. The Policy details page that appears describes the constraint and provides information about how the constraint is currently applied.
Click Edit to customize the constraint.
On the Edit page, select Customize.
Under Enforcement, select an enforcement option:
To enable enforcement of this constraint, select On.
To disable enforcement of this constraint, select Off.
Click Save.
gcloud
Use the gcloud command-line tool to run either the enable-enforce or disable-enforce command to prevent or allow nested virtualization, respectively.

To allow nested virtualization
gcloud beta resource-manager org-policies disable-enforce \
    compute.disableNestedVirtualization \
    [--project=[PROJECT_ID] | --folder=[FOLDER_ID] | --organization=[ORGANIZATION_ID]]
To prevent nested virtualization
gcloud beta resource-manager org-policies enable-enforce \
    compute.disableNestedVirtualization \
    [--project=[PROJECT_ID] | --folder=[FOLDER_ID] | --organization=[ORGANIZATION_ID]]
Provide one of --project, --folder, or --organization, where:

- [PROJECT_ID] is the project ID for which to allow or prevent nested virtualization.
- [FOLDER_ID] is the folder ID for which to allow or prevent nested virtualization.
- [ORGANIZATION_ID] is the organization ID for which to allow or prevent nested virtualization.
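For example, a hypothetical call that allows nested virtualization for a single project with the ID example-project:

gcloud beta resource-manager org-policies disable-enforce \
    compute.disableNestedVirtualization --project=example-project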
API
To enable or disable nested virtualization for a project, folder, or organization, make a POST request to the following URL:

https://cloudresourcemanager.googleapis.com/v1/[RESOURCE]/[RESOURCE_ID]:setOrgPolicy

where:

- [RESOURCE] is one of projects, folders, or organizations.
- [RESOURCE_ID] is the project ID, the folder ID, or the organization ID, based on the resource.
Provide the following in the request body, setting enforced to either true (prevents nested virtualization) or false (allows nested virtualization):

{
  "policy": {
    "booleanPolicy": {
      "enforced": false
    },
    "constraint": "constraints/compute.disableNestedVirtualization"
  }
}
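A minimal sketch of sending this request with curl, assuming you use gcloud to obtain an access token:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"policy": {"booleanPolicy": {"enforced": false}, "constraint": "constraints/compute.disableNestedVirtualization"}}' \
    "https://cloudresourcemanager.googleapis.com/v1/projects/[PROJECT_ID]:setOrgPolicy"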
You can determine whether your project, folder, or organization is blocked from using nested virtualization by checking whether the organization policy is set to enforced. For example, using the gcloud tool:

gcloud beta resource-manager org-policies describe \
    constraints/compute.disableNestedVirtualization \
    --project=[PROJECT_ID] --effective

where [PROJECT_ID] is your project ID. If the response has an empty booleanPolicy: {}, then nested virtualization is allowed. Otherwise, the response includes enforced: true. For more information, see the documentation on boolean constraints.
[{ "type": "thumb-down", "id": "hardToUnderstand", "label":"Hard to understand" },{ "type": "thumb-down", "id": "incorrectInformationOrSampleCode", "label":"Incorrect information or sample code" },{ "type": "thumb-down", "id": "missingTheInformationSamplesINeed", "label":"Missing the information/samples I need" },{ "type": "thumb-down", "id": "otherDown", "label":"Other" }] [{ "type": "thumb-up", "id": "easyToUnderstand", "label":"Easy to understand" },{ "type": "thumb-up", "id": "solvedMyProblem", "label":"Solved my problem" },{ "type": "thumb-up", "id": "otherUp", "label":"Other" }] Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.