This document explains how to enable the Data Plane Development Kit (DPDK) on a virtual machine (VM) instance for faster network packet processing.
DPDK is a framework for performance-intensive applications that require fast packet processing, low latency, and consistent performance. DPDK provides a set of data plane libraries and network interface controller (NIC) drivers that bypass the kernel networking stack and run directly in user space. For example, enabling DPDK on your VM is useful when running the following:
Network function virtualization (NFV) deployments
Software-defined networking (SDN) applications
Video streaming or voice over IP applications
You can run DPDK on a VM using one of the following virtual NIC (vNIC) types:
Recommended: gVNIC
A high-performance, secure, and scalable virtual network interface designed specifically for Compute Engine that succeeds VirtIO as the next-generation vNIC.
VirtIO-Net
An open source Ethernet driver that lets VMs efficiently access physical hardware, such as block storage and networking adapters.
One issue with running DPDK in a virtual environment, rather than on physical hardware, is that virtual environments lack support for SR-IOV and the I/O Memory Management Unit (IOMMU), which high-performing applications rely on. To overcome this limitation, you must run DPDK on guest physical addresses rather than host virtual addresses by using one of the following drivers:
Userspace I/O (UIO) driver
IOMMU-less Virtual Function I/O (VFIO) driver
Before you begin
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Requirements
When creating a VM to run DPDK on, make sure of the following:
To avoid a lack of network connectivity when running your applications, use two Virtual Private Cloud networks:
A VPC network for the control plane
A VPC network for the data plane
The two VPC networks must both specify the following:
A subnet with a unique IP address range
The same region for their subnets
The same vNIC type, either gVNIC or VirtIO-Net
When creating the VM:
You must specify the same region as the two VPC networks' subnets.
You must specify the vNIC type you plan to use with DPDK.
You must specify a supported machine series for gVNIC or VirtIO-Net.
Restrictions
Running DPDK on a VM has the following restrictions:
You can only use single-stack subnets for the two VPC networks used in the VM.
If you're using gVNIC as the vNIC type for the two VPC networks, make sure of the following:
You must use DPDK version 22.11 or later.
You can only use supported disk images.
If you want to enable per VM Tier_1 networking performance for higher network performance when creating the VM, you must specify the following:
gVNIC as the vNIC type
A supported machine type with 30 vCPUs or more
Configure a VM to run DPDK
This section explains how to create a VM to run DPDK on.
Create the VPC networks
Create two VPC networks, for the data plane and control plane, using the Google Cloud console, Google Cloud CLI, or Compute Engine API. You can later specify these networks when creating the VM.
Console
Create a VPC network for the data plane:
In the Google Cloud console, go to VPC networks.
The VPC networks page opens.
Click Create VPC network. The Create a VPC network page opens.
In the Name field, enter a name for your network.
In the New subnet section, do the following:
In the Name field, enter a name for your subnet.
In the Region menu, select a region for your subnet.
Select IPv4 (single-stack) (default).
In the IPv4 range, enter a valid IPv4 range address in CIDR notation.
Click Done.
Click Create.
The VPC networks page opens. It can take up to a minute before the creation of the VPC network completes.
Create a VPC network for the control plane with a firewall rule to allow SSH connections into the VM:
Click Create VPC network again. The Create a VPC network page opens.
In the Name field, enter a name for your network.
In the New subnet section, do the following:
In the Name field, enter a name for the subnet.
In the Region menu, select the same region you specified for the subnet of the data plane network.
Select IPv4 (single-stack) (default).
In the IPv4 range, enter a valid IPv4 range address in CIDR notation.
Click Done.
In the IPv4 firewall rules tab, select the NETWORK_NAME-allow-ssh checkbox.
Where NETWORK_NAME is the network name you specified in the previous steps.
Click Create.
The VPC networks page opens. It can take up to a minute before the creation of the VPC network completes.
gcloud
To create a VPC network for the data plane, follow these steps:
Create a VPC network with a manually created subnet using the gcloud compute networks create command with the --subnet-mode flag set to custom:
gcloud compute networks create DATA_PLANE_NETWORK_NAME \
    --bgp-routing-mode=regional \
    --mtu=MTU \
    --subnet-mode=custom
Replace the following:
DATA_PLANE_NETWORK_NAME: the name for the VPC network for the data plane.
MTU: the maximum transmission unit (MTU), which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.
Create a subnet for the data plane VPC network you just created using the gcloud compute networks subnets create command:
gcloud compute networks subnets create DATA_PLANE_SUBNET_NAME \
    --network=DATA_PLANE_NETWORK_NAME \
    --range=DATA_PRIMARY_RANGE \
    --region=REGION
Replace the following:
DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.
DATA_PLANE_NETWORK_NAME: the name of the data plane network you specified in the previous steps.
DATA_PRIMARY_RANGE: a valid IPv4 range for the subnet in CIDR notation.
REGION: the region in which to create the subnet.
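For example, to create a data plane network named data with the default MTU and a subnet, also named data, in the us-central1 region, you might run the following commands. The names and the 10.1.0.0/24 range are placeholder values chosen to match the VM example later in this document; replace them with values for your environment.
gcloud compute networks create data \
    --bgp-routing-mode=regional \
    --mtu=1460 \
    --subnet-mode=custom
gcloud compute networks subnets create data \
    --network=data \
    --range=10.1.0.0/24 \
    --region=us-central1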
To create a VPC network for the control plane with a firewall rule to allow SSH connections into the VM, follow these steps:
Create a VPC network with a manually created subnet using the gcloud compute networks create command with the --subnet-mode flag set to custom:
gcloud compute networks create CONTROL_PLANE_NETWORK_NAME \
    --bgp-routing-mode=regional \
    --mtu=MTU \
    --subnet-mode=custom
Replace the following:
CONTROL_PLANE_NETWORK_NAME: the name for the VPC network for the control plane.
MTU: the MTU, which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.
Create a subnet for the control plane VPC network you just created using the gcloud compute networks subnets create command:
gcloud compute networks subnets create CONTROL_PLANE_SUBNET_NAME \
    --network=CONTROL_PLANE_NETWORK_NAME \
    --range=CONTROL_PRIMARY_RANGE \
    --region=REGION
Replace the following:
CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.
CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you specified in the previous steps.
CONTROL_PRIMARY_RANGE: a valid IPv4 range for the subnet in CIDR notation.
REGION: the region in which to create the subnet, which must match the region you specified for the data plane network's subnet.
Create a VPC firewall rule that allows SSH into the control plane network using the gcloud compute firewall-rules create command with the --action flag set to allow and the --rules flag set to tcp:22:
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
    --action=allow \
    --network=CONTROL_PLANE_NETWORK_NAME \
    --rules=tcp:22
Replace the following:
FIREWALL_RULE_NAME: the name of the firewall rule.
CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.
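For example, to create a control plane network named control with a subnet, also named control, in the us-central1 region, and a firewall rule named allow-ssh-control, you might run the following commands. The names and the 10.0.0.0/24 range are placeholder values; use values that fit your environment.
gcloud compute networks create control \
    --bgp-routing-mode=regional \
    --mtu=1460 \
    --subnet-mode=custom
gcloud compute networks subnets create control \
    --network=control \
    --range=10.0.0.0/24 \
    --region=us-central1
gcloud compute firewall-rules create allow-ssh-control \
    --action=allow \
    --network=control \
    --rules=tcp:22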
API
To create a VPC network for the data plane, follow these steps:
Create a VPC network with a manually created subnet by making a POST request to the networks.insert method with the autoCreateSubnetworks field set to false:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "autoCreateSubnetworks": false,
  "name": "DATA_PLANE_NETWORK_NAME",
  "mtu": MTU
}

Replace the following:
PROJECT_ID: the project ID of the current project.
DATA_PLANE_NETWORK_NAME: the name for the network for the data plane.
MTU: the maximum transmission unit (MTU), which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.
Create a subnet for the data plane VPC network by making a POST request to the subnetworks.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "ipCidrRange": "DATA_PRIMARY_RANGE",
  "name": "DATA_PLANE_SUBNET_NAME",
  "network": "projects/PROJECT_ID/global/networks/DATA_PLANE_NETWORK_NAME"
}

Replace the following:
PROJECT_ID: the project ID of the project where the data plane network is located.
REGION: the region where you want to create the subnet.
DATA_PRIMARY_RANGE: the primary IPv4 range for the new subnet in CIDR notation.
DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.
DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous step.
To create a VPC network for the control plane with a firewall rule to allow SSH into the VM, follow these steps:
Create a VPC network with a manually created subnet by making a POST request to the networks.insert method with the autoCreateSubnetworks field set to false:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "autoCreateSubnetworks": false,
  "name": "CONTROL_PLANE_NETWORK_NAME",
  "mtu": MTU
}

Replace the following:
PROJECT_ID: the project ID of the current project.
CONTROL_PLANE_NETWORK_NAME: the name for the network for the control plane.
MTU: the MTU, which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.
Create a subnet for the control plane VPC network by making a POST request to the subnetworks.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "ipCidrRange": "CONTROL_PRIMARY_RANGE",
  "name": "CONTROL_PLANE_SUBNET_NAME",
  "network": "projects/PROJECT_ID/global/networks/CONTROL_PLANE_NETWORK_NAME"
}

Replace the following:
PROJECT_ID: the project ID of the project where the control plane network is located.
REGION: the region where you want to create the subnet.
CONTROL_PRIMARY_RANGE: the primary IPv4 range for the new subnet in CIDR notation.
CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.
CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous step.
Create a VPC firewall rule that allows SSH into the control plane network by making a POST request to the firewalls.insert method. In the request, set the IPProtocol field to tcp and the ports field to 22:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [ "22" ]
    }
  ],
  "network": "projects/PROJECT_ID/global/networks/CONTROL_PLANE_NETWORK_NAME"
}

Replace the following:
PROJECT_ID: the project ID of the project where the control plane network is located.
CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.
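If you're sending these REST requests from the command line, one common approach is to use curl with an access token from the gcloud CLI. The following sketch shows the first request in this section (network creation) and assumes a project named example-project and a network named control; adjust the URL and body for the request you're making:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "autoCreateSubnetworks": false,
        "name": "control",
        "mtu": 1460
      }' \
  "https://compute.googleapis.com/compute/v1/projects/example-project/global/networks"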
For more configuration options when creating a VPC network, see Create and manage VPC networks.
Create a VM that uses the VPC networks for DPDK
Create a VM that enables gVNIC or VirtIO-Net on the two VPC networks that you created previously by using the Google Cloud console, the gcloud CLI, or the Compute Engine API.
Recommended: Specify Ubuntu LTS or Ubuntu Pro as the operating system image because of their package manager support for the UIO and IOMMU-less VFIO drivers. If you don't want to specify either of these operating systems, then Debian 11 or later is recommended for faster packet processing.
Console
Create a VM that uses the two VPC network subnets you created in the previous steps by doing the following:
In the Google Cloud console, go to VM instances.
The VM instances page opens.
Click Create instance. The Create an instance page opens.
In the Name field, enter a name for your VM.
In the Region menu, select the same region where you created your networks in the previous steps.
In the Zone menu, select a zone for your VM.
In the Machine configuration section, do the following:
Select one of the following options:
For common workloads, select the General purpose tab (default).
For performance-intensive workloads, select the Compute optimized tab.
For high memory-to-vCPUs ratios workloads, select the Memory optimized tab.
For workloads that use Graphics processing units (GPUs), select the GPUs tab.
Optional. If you specified GPUs in the previous step and you want to change the GPU to attach to the VM, do one or more of the following:
In the GPU type menu, select a type of GPU.
In the Number of GPUs menu, select the number of GPUs.
In the Series menu, select a machine series.
In the Machine type menu, select a machine type.
Optional: Expand Advanced configurations, and follow the prompts to further customize the machine for this VM.
Optional: In the Boot disk section, click Change, and then follow the prompts to change the disk image.
Expand the Advanced options section.
Expand the Networking section.
In the Network performance configuration section, do the following:
In the Network interface card menu, select one of the following:
To use gVNIC, select gVNIC.
To use VirtIO-Net, select VirtIO.
Optional: For higher network performance and reduced latency, select the Enable Tier_1 networking checkbox.
In the Network interfaces section, do the following:
In the default row, click Delete item "default".
Click Add network interface.
The New network interface section appears.
In the Network menu, select the control plane network you created in the previous steps.
Click Done.
Click Add network interface again.
The New network interface section appears.
In the Network menu, select the data plane network you created in the previous steps.
Click Done.
Click Create.
The VM instances page opens. It can take up to a minute before the creation of the VM completes.
gcloud
Create a VM that uses the two VPC network subnets you created
in the previous steps by using the
gcloud compute instances create
command
with the following flags:
gcloud compute instances create VM_NAME \
--image-family=IMAGE_FAMILY \
--image-project=IMAGE_PROJECT \
--machine-type=MACHINE_TYPE \
--network-interface=network=CONTROL_PLANE_NETWORK_NAME,subnet=CONTROL_PLANE_SUBNET_NAME,nic-type=VNIC_TYPE \
--network-interface=network=DATA_PLANE_NETWORK_NAME,subnet=DATA_PLANE_SUBNET_NAME,nic-type=VNIC_TYPE \
--zone=ZONE
Replace the following:
VM_NAME: the name of the VM.
IMAGE_FAMILY: the image family for the operating system that the boot disk will be initialized with. Alternatively, you can specify the --image=IMAGE flag and replace IMAGE with a specific version of an image. Learn how to view a list of images available in the Compute Engine images project.
IMAGE_PROJECT: the name of the image project that contains the disk image.
MACHINE_TYPE: a machine type, predefined or custom, for the VM.
VNIC_TYPE: the vNIC type to use for the control plane and data plane networks. The value must be one of the following:
To use gVNIC, specify GVNIC.
To use VirtIO-Net, specify VIRTIO_NET.
CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.
CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network you created in the previous steps.
DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous steps.
DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network you created in the previous steps.
ZONE: the zone in which to create the VM. Specify a zone within the same region as the subnets you created in the previous steps.
For example, to create a VM named dpdk-vm in the us-central1-a zone that specifies an SSD persistent disk of 512 GB, a predefined C2 machine type with 60 vCPUs, Tier_1 networking, and a data plane and a control plane network that both use gVNIC, run the following command:
gcloud compute instances create dpdk-vm \
--boot-disk-size=512GB \
--boot-disk-type=pd-ssd \
--image-project=ubuntu-os-cloud \
--image-family=ubuntu-2004-lts \
--machine-type=c2-standard-60 \
--network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
--network-interface=network=control,subnet=control,nic-type=GVNIC \
--network-interface=network=data,subnet=data,nic-type=GVNIC \
--zone=us-central1-a
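After the VM is created, you can optionally confirm that both network interfaces use the vNIC type you specified by describing the instance. The command below assumes the dpdk-vm example above and restricts the output to the network interfaces:
gcloud compute instances describe dpdk-vm \
    --zone=us-central1-a \
    --format="yaml(networkInterfaces)"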
API
Create a VM that uses the two VPC network subnets you created
in the previous steps by making a POST
request to the
instances.insert
method
with the following fields:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances
{
"name": "VM_NAME",
"machineType": "MACHINE_TYPE",
"disks": [
{
"initializeParams": {
"sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE_FAMILY"
}
}
],
"networkInterfaces": [
{
"network": "global/networks/CONTROL_PLANE_NETWORK_NAME",
"subnetwork": "regions/REGION/subnetworks/CONTROL_PLANE_SUBNET_NAME",
"nicType": "VNIC_TYPE"
},
{
"network": "global/networks/DATAPLANE_NETWORK_NAME",
"subnetwork": "regions/REGION/subnetworks/DATA_PLANE_SUBNET_NAME",
"nicType": "VNIC_TYPE"
}
]
}
Replace the following:
PROJECT_ID: the project ID of the project where the control plane VPC network and the data plane VPC network are located.
ZONE: the zone in which to create the VM.
VM_NAME: the name of the VM.
MACHINE_TYPE: a machine type, predefined or custom, for the VM.
IMAGE_PROJECT: the name of the image project that contains the disk image.
IMAGE_FAMILY: the image family for the operating system that the boot disk will be initialized with. Alternatively, you can specify a specific version of an image. Learn how to view a list of images in the Compute Engine images project.
CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.
REGION: the region where the subnets of the control plane and data plane networks exist.
CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network you created in the previous steps.
VNIC_TYPE: the vNIC type to use for the control plane and data plane networks. The value must be one of the following:
To use gVNIC, specify GVNIC.
To use VirtIO-Net, specify VIRTIO_NET.
DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous steps.
DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network you created in the previous steps.
For example, to create a VM named dpdk-vm in the us-central1-a zone that specifies an SSD persistent disk of 512 GB, a predefined C2 machine type with 60 vCPUs, Tier_1 networking, and a data plane and a control plane network that both use gVNIC, make the following POST request:
POST https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/instances
{
"name": "dpdk-vm",
"machineType": "c2-standard-60",
"disks": [
{
"initializeParams": {
"diskSizeGb": "512GB",
"diskType": "pd-ssd",
"sourceImage": "projects/ubuntu-os-cloud/global/images/ubuntu-2004-lts"
},
"boot": true
}
],
"networkInterfaces": [
{
"network": "global/networks/control",
"subnetwork": "regions/us-central1/subnetworks/control",
"nicType": "GVNIC"
},
{
"network": "global/networks/data",
"subnetwork": "regions/us-central1/subnetworks/data",
"nicType": "GVNIC"
}
],
"networkPerformanceConfig": {
"totalEgressBandwidthTier": "TIER_1"
}
}
For more configuration options when creating a VM, see Create and start a VM instance.
Install DPDK on your VM
To install DPDK on your VM, follow these steps:
Connect to the VM you created in the previous section using SSH.
Configure the dependencies for DPDK installation:
sudo apt-get update && sudo apt-get upgrade -yq
sudo apt-get install -yq build-essential ninja-build python3-pip \
    linux-headers-$(uname -r) pkg-config libnuma-dev
sudo pip install pyelftools meson
Install DPDK:
wget https://fast.dpdk.org/rel/dpdk-23.07.tar.xz
tar xvf dpdk-23.07.tar.xz
cd dpdk-23.07
To build DPDK with the examples:
meson setup -Dexamples=all build
sudo ninja -C build install
sudo ldconfig
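To optionally confirm that the DPDK libraries are installed and visible, you can query the libdpdk pkg-config entry that the install step creates; depending on where meson installed the .pc file on your image, you might need to adjust PKG_CONFIG_PATH first:
pkg-config --modversion libdpdk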
Install driver
To prepare DPDK to run on a driver, install the driver by selecting one of the following methods:
Install an IOMMU-less VFIO driver
To install the IOMMU-less VFIO driver, follow these steps:
Check if VFIO is enabled:
cat /boot/config-$(uname -r) | grep NOIOMMU
If VFIO isn't enabled, then follow the steps in Install UIO.
Enable the No-IOMMU mode in VFIO:
sudo bash -c 'echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode'
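The echo command above enables No-IOMMU mode only until the next reboot. If you want the setting to persist across reboots, one common approach on Linux images is to set the module option in a modprobe configuration file; the file name below is a hypothetical example:
echo "options vfio enable_unsafe_noiommu_mode=1" | sudo tee /etc/modprobe.d/vfio-noiommu.conf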
Install UIO
To install the UIO driver on DPDK, select one of the following methods:
Install UIO using git
To install the UIO driver on DPDK using git, follow these steps:
Clone the igb_uio git repository to a disk in your VM:
git clone https://dpdk.org/git/dpdk-kmods
From the parent directory of the cloned git repository, build the module and install the UIO driver on DPDK:
pushd dpdk-kmods/linux/igb_uio
sudo make
sudo depmod && sudo insmod igb_uio.ko
popd
Install UIO using Linux packages
To install the UIO driver on DPDK using Linux packages, follow these steps:
Install the dpdk-igb-uio-dkms package:
sudo apt-get install -y dpdk-igb-uio-dkms
Install the UIO driver on DPDK:
sudo modprobe igb_uio
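Regardless of which installation method you used, you can optionally verify that the igb_uio kernel module is loaded:
lsmod | grep igb_uio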
Bind DPDK to a driver and test it
To bind DPDK to the driver you installed in the previous section, follow these steps:
Get the Peripheral Component Interconnect (PCI) slot number for the current network interface:
sudo lspci | grep -e "gVNIC" -e "Virtio network device"
For example, if the VM is using ens4 as the network interface, the PCI slot number is 00:04.0.
Stop the network interface connected to the network adapter:
sudo ip link set NETWORK_INTERFACE_NAME down
Replace NETWORK_INTERFACE_NAME with the name of the network interface specified in the VPC networks. To see which network interface the VM is using, view the configuration of the network interface:
sudo ifconfig
Bind DPDK to the driver:
sudo dpdk-devbind.py --bind=DRIVER PCI_SLOT_NUMBER
Replace the following:
DRIVER: the driver to bind DPDK on. Specify one of the following values:
UIO driver: igb_uio
IOMMU-less VFIO driver: vfio-pci
PCI_SLOT_NUMBER: the PCI slot number of the current network interface, formatted as 00:0NUMBER.0.
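For example, to bind the IOMMU-less VFIO driver to the interface at PCI slot 00:04.0 from the earlier example, and then confirm the binding, you might run:
sudo dpdk-devbind.py --bind=vfio-pci 00:04.0
dpdk-devbind.py --status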
Create the /mnt/huge directory, and then create some hugepages for DPDK to use for buffers:
sudo mkdir /mnt/huge
sudo mount -t hugetlbfs -o pagesize=1G none /mnt/huge
sudo bash -c 'echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
sudo bash -c 'echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
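Before running DPDK applications, you can optionally check that the hugepages were reserved:
grep Huge /proc/meminfo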
Test that DPDK can use the network interface you created in the previous steps by running the testpmd example application that is included with the DPDK libraries:
sudo ./build/app/dpdk-testpmd
For more information about testing DPDK, see Testpmd Command-line Options.
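If the default invocation doesn't suit your setup, testpmd also accepts EAL and application options. As a sketch, the following assumes you want to pin the application to the first two vCPUs and print port statistics every two seconds; see the Testpmd documentation linked above for the authoritative option list:
sudo ./build/app/dpdk-testpmd -l 0-1 -- --stats-period 2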
Unbind DPDK
After using DPDK, you can unbind it from the driver you've installed in the previous section. To unbind DPDK, follow these steps:
Unbind DPDK from the driver:
sudo dpdk-devbind.py -u PCI_SLOT_NUMBER
Replace PCI_SLOT_NUMBER with the PCI slot number you specified in the previous steps. If you want to verify the PCI slot number for the current network interface, run:
sudo lspci | grep -e "gVNIC" -e "Virtio network device"
For example, if the VM is using ens4 as the network interface, the PCI slot number is 00:04.0.
Reload the Compute Engine network driver:
sudo bash -c 'echo PCI_SLOT_NUMBER > /sys/bus/pci/drivers/VNIC_DIRECTORY/bind'
sudo ip link set NETWORK_INTERFACE_NAME up
Replace the following:
PCI_SLOT_NUMBER: the PCI slot number you specified in the previous steps.
VNIC_DIRECTORY: the directory of the vNIC. Depending on the vNIC type you're using, specify one of the following values:
gVNIC: gvnic
VirtIO-Net: virtio-pci
NETWORK_INTERFACE_NAME: the name of the network interface you specified in the previous section.
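For example, to return the ens4 interface at PCI slot 00:04.0 from the earlier examples to the gVNIC driver, you might run:
sudo dpdk-devbind.py -u 00:04.0
sudo bash -c 'echo 00:04.0 > /sys/bus/pci/drivers/gvnic/bind'
sudo ip link set ens4 up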
What's next
Review the network bandwidth rates for your machine type.
Learn more about creating and managing VPC networks.
Learn more about higher MTU settings with jumbo frames.
Learn more about TCP optimization for network performance.