Virtual Switching in User Space: Forwarding and Control with DPDK and OVS
Andrei Negulescu, Behnam Dezfouli
Santa Clara University
Keywords and acronyms: configurability, DPDK, IoT, NFV, OvS, SDN, virtual switching
1. Background and Motivation
The National Institute of Standards and Technology defines telecommunications as “the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received.” Building an infrastructure that provides telecommunication services to users (human beings or machines) has always been an arduous process, and unfortunately one in which the users could not directly participate. The progress made by computing technology creates new opportunities for users to get more involved, giving them the tools to build communication infrastructures and services that are flexible and open for customization. Virtual switching plays a vital role in giving the networking infrastructure the flexibility needed by end users who want to give their applications the desired level of configurability. New networking ecosystems, such as SDN, NFV, and IoT, have emerged, and they arguably have been enabled by the development of virtual switches. We are interested in evaluating the performance impact of virtual switching on specific networking services and applications, and we start with virtual switches running in user space, such as OVS-DPDK.
2. Introduction
This document describes the steps required to build a virtual switching environment that runs in the Linux user space of a virtual machine. The approach can be extended to different hardware platforms as well as to different operating systems and virtualization technologies. From a qualitative perspective, the overall performance offered by a virtual switch running both its forwarding and control planes in user space is expected to be superior to the performance achieved when the non-DPDK version of OVS and the associated NICs, with their respective drivers, run in kernel space.
The performance characterization of general-purpose physical switches relies on throughput and latency measurements. Due to their increased configurability, the performance characterization of virtual switches is a more onerous enterprise. A quantitative performance evaluation is difficult to achieve, although it is necessary in order to assess the gain realized by the user-space virtual switching approach. Defining performance and how it is influenced by factors such as the underlying hardware or the operating system is the first step towards a better understanding of the advantages and disadvantages of using virtual switching to build modern networking infrastructures.
The workflow highlighted by this document assumes a successful installation of the DPDK library and Open vSwitch with DPDK support, as presented in our previous technical report: “DPDK with OVS and Open Flow Installation: Getting Ready for User Space Switching in a Virtual Environment”.
3. General DPDK Setup
The memory allocation for packet buffers requires hugepage support (the hugetlbfs option) in the running kernel. This is recommended by the DPDK developers in order to improve performance by reducing memory access time (more details are presented in the DPDK System Requirements document, https://doc.dpdk.org/guides/linux_gsg/sys_reqs.html). Since hugepages reduce the number of virtual-to-physical address translations the processor must perform (fewer, larger pages mean fewer TLB entries and fewer TLB misses), we consider the number and size of hugepages to be two factors that should be part of any virtual switching performance evaluation.
Configure Hugepages
The following example sets up hugepage support per the DPDK System Requirements. The parameters depend on the available hardware; we present the configuration used for our intended application. In this example we allocate 512 pages, each with a size of 2048 kB (2 MB), and prepare hugetlbfs mount points (a single hugetlbfs mount point is sufficient for DPDK; we list the mount points used in our setup).
#cd ~/dpdk
#echo 512 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
#sudo mkdir -p /mnt/huge
#sudo mkdir -p /mnt/huge_2mb
#sudo mount -t hugetlbfs nodev /mnt/huge
#sudo mount -t hugetlbfs nodev /mnt/huge_2mb -o pagesize=2MB
#sudo mount -t hugetlbfs nodev /dev/hugepages
The following verification step confirms the creation of the hugepages, i.e. their number and size. Notice that the total number of pages equals the number of free pages, which means that the DPDK forwarding service does not yet use the hugepages.
#grep Huge /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 512
HugePages_Free: 512
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
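The hugetlbfs mount can also be made persistent across reboots with an fstab entry. A minimal sketch, assuming the /mnt/huge mount point used above:
#echo 'nodev /mnt/huge hugetlbfs defaults 0 0' | sudo tee -a /etc/fstab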
Bind desired network devices to a DPDK compatible driver
In this step we “move” the desired interfaces to user space, i.e. we associate the network interfaces with a DPDK-compatible driver. Linux kernel modules, such as uio or vfio, are used to map the device memory into user space (https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html). In the following example we bind the eth2 interface to the DPDK-compatible driver. This involves loading the DPDK driver, disabling the target interface, and binding it to the driver, as shown in the following example:
#ifconfig eth2
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 100.1.2.10 netmask 255.255.255.0 broadcast 100.1.2.255
inet6 fe80::20c:29ff:fe35:b911 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:35:b9:11 txqueuelen 1000 (Ethernet)
RX packets 1 bytes 258 (258.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12 bytes 936 (936.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
#cd ~/dpdk
#./usertools/dpdk-devbind.py --status
Network devices using kernel driver
===================================
0000:03:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth0 drv=vmxnet3 unused= *Active*
0000:04:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth1 drv=vmxnet3 unused= *Active*
0000:0b:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth2 drv=vmxnet3 unused= *Active*
0000:13:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth3 drv=vmxnet3 unused= *Active*
0000:1b:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth4 drv=vmxnet3 unused= *Active*
#systemctl status dpdk.service
dpdk.service – DPDK runtime environment
Loaded: loaded (/lib/systemd/system/dpdk.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2019-08-09 04:45:52 UTC; 8min ago
Process: 764 ExecStart=/lib/dpdk/dpdk-init start (code=exited, status=0/SUCCESS)
Main PID: 764 (code=exited, status=0/SUCCESS)
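The dpdk.service shown above comes with the distribution's DPDK packaging; on Ubuntu it applies /etc/dpdk/dpdk.conf and /etc/dpdk/interfaces at boot. As a hedged sketch (assuming that packaging), the manual binding performed below can be made persistent with an entry such as:
#echo 'pci 0000:0b:00.0 vfio-pci' | sudo tee -a /etc/dpdk/interfaces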
The DPDK-compatible driver that we load in this case is vfio-pci. Other options are available and can be chosen based on the intended use cases and network design; an alternative binding example is sketched after the status output below. The choice of DPDK driver should be considered another factor in evaluating the performance of a virtual switching solution.
#cd ~/dpdk
#sudo modprobe vfio-pci
#sudo chmod 777 /dev/vfio
#sudo chmod 777 /dev/vfio/*
#sudo ifconfig eth2 down
#sudo ./usertools/dpdk-devbind.py --bind=vfio-pci eth2
#./usertools/dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
=>0000:0b:00.0 ‘VMXNET3 Ethernet Controller 07b0’ drv=vfio-pci unused=vmxnet3
Network devices using kernel driver
===================================
0000:03:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth0 drv=vmxnet3 unused=vfio-pci *Active*
0000:04:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth1 drv=vmxnet3 unused=vfio-pci *Active*
0000:13:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth3 drv=vmxnet3 unused=vfio-pci *Active*
0000:1b:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth4 drv=vmxnet3 unused=vfio-pci *Active*
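The same binding procedure applies to the other DPDK-compatible drivers. As an alternative sketch (not needed in our setup), the in-tree uio_pci_generic module can be used on platforms where VFIO/IOMMU support is not available:
#sudo modprobe uio_pci_generic
#sudo ./usertools/dpdk-devbind.py --bind=uio_pci_generic eth2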
We verify the Hugepages status and notice that at this point in the configuration process they are all available.
#grep Huge /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 512
HugePages_Free: 512
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
We also verify the initial bootloader settings. Notice that the boot parameters request only 16 pages, while we now have 512 after the manual configuration step; a sketch for making the larger allocation the boot-time default follows the output below.
#cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.15.0-54-generic root=UUID=2e294961-ce03-483e-a53e-ff3fc4514bd4 ro default_hugepagesz=2M hugepagesz=2M hugepages=16 iommu=pt intel_iommu=on
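To make the larger allocation the default at boot, the kernel command line can be updated. A minimal sketch, assuming an Ubuntu system booting through GRUB and that the hugepages=16 parameter above comes from /etc/default/grub (edit, regenerate the configuration, then reboot):
#sudo sed -i 's/hugepages=16/hugepages=512/' /etc/default/grub
#sudo update-grub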
4. Hardware platform discovery
This step follows a minimal checklist of parameters needed for the subsequent Open vSwitch configuration. A deeper hardware discovery can be performed here depending on the intended use cases.
#numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 1993 MB
node 0 free: 1436 MB
node distances:
node 0
0: 10
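The numactl output above shows a single NUMA node with four cores. Since DPDK allocates its hugepage memory per NUMA node, it can also be useful to confirm how the configured hugepages are distributed across nodes; on this single-node VM the per-node counter simply mirrors the global value. The sysfs path below assumes the 2048 kB page size configured earlier:
#cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages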
#taskset -c -p 1
pid 1’s current affinity list: 0-3
#sudo ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.9.2
DPDK 17.11.2
The following command shows that the DPDK-specific Open vSwitch parameters have not been set yet.
#sudo ovs-vsctl get Open_vSwitch . other_config
{}
5. OVS Setup
- Start the Open vSwitch setup with a clean database installation.
#sudo /usr/share/openvswitch/scripts/ovs-ctl stop
#sudo mkdir -p /etc/openvswitch
#sudo mkdir -p /var/run/openvswitch
#sudo rm /etc/openvswitch/conf.db
#sudo ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
There are two start options, “no SSL support” and “SSL support”, presented as follows:
- No SSL support
#sudo ovsdb-server --remote=punix:/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
- SSL support
#sudo ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options \
--private-key=db:Open_vSwitch,SSL,private_key \
--certificate=db:Open_vSwitch,SSL,certificate \
--bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
- Initialize the OVS database:
#sudo ovs-vsctl --no-wait init
- Configure the DPDK-specific OVS options in the database (they take effect when ovs-vswitchd is started):
#sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x4
#sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x4
#sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="256"
With dpdk-socket-mem we have requested 256 MB of hugepage memory to be pre-allocated for the DPDK data path on NUMA socket 0; the value is specified in megabytes.
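Both pmd-cpu-mask and dpdk-lcore-mask are hexadecimal CPU bitmasks: 0x4 is binary 100, i.e. core 2 of the four cores discovered earlier with numactl. As a minimal sketch (not part of our configuration), spreading the PMD threads over cores 2 and 3 would instead use:
#sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xC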
Finally, our OVS DPDK settings look as follows:
#sudo ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="0x4", dpdk-socket-mem="256", pmd-cpu-mask="0x4"}
Another check on the number of free hugepages should be performed at this step:
#grep Huge /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 512
HugePages_Free: 512
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
- Start Open vSwitch ovs-vswitchd with the dpdk-init option set to true.
The DPDK forwarding capability is activated in this step, as presented in the following example.
#export DB_SOCK=/var/run/openvswitch/db.sock
#sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
#sudo ovs-vswitchd unix:$DB_SOCK --pidfile --detach
An execution trace example follows:
2019-08-09T17:27:23Z|00001|ovs_numa|INFO|Discovered 4 CPU cores on NUMA node 0
2019-08-09T17:27:23Z|00002|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 CPU cores
2019-08-09T17:27:23Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting…
2019-08-09T17:27:23Z|00004|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2019-08-09T17:27:23Z|00005|dpdk|INFO|Using DPDK 17.11.2
2019-08-09T17:27:23Z|00006|dpdk|INFO|DPDK Enabled – initializing…
2019-08-09T17:27:23Z|00007|dpdk|INFO|No vhost-sock-dir provided – defaulting to /var/run/openvswitch
2019-08-09T17:27:23Z|00008|dpdk|INFO|IOMMU support for vhost-user-client disabled.
2019-08-09T17:27:23Z|00009|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x4 --socket-mem 256
2019-08-09T17:27:23Z|00010|dpdk|INFO|EAL: Detected 4 lcore(s)
2019-08-09T17:27:24Z|00011|dpdk|WARN|EAL: No free hugepages reported in hugepages-1048576kB
2019-08-09T17:27:24Z|00012|dpdk|INFO|EAL: Probing VFIO support…
2019-08-09T17:27:24Z|00013|dpdk|INFO|EAL: VFIO support initialized
2019-08-09T17:27:24Z|00014|dpdk|INFO|EAL: PCI device 0000:03:00.0 on NUMA socket -1
2019-08-09T17:27:24Z|00015|dpdk|WARN|EAL: Invalid NUMA socket, default to 0
2019-08-09T17:27:24Z|00016|dpdk|INFO|EAL: probe driver: 15ad:7b0 net_vmxnet3
2019-08-09T17:27:24Z|00017|dpdk|INFO|EAL: PCI device 0000:04:00.0 on NUMA socket -1
2019-08-09T17:27:24Z|00018|dpdk|WARN|EAL: Invalid NUMA socket, default to 0
2019-08-09T17:27:24Z|00019|dpdk|INFO|EAL: probe driver: 15ad:7b0 net_vmxnet3
2019-08-09T17:27:24Z|00020|dpdk|INFO|EAL: PCI device 0000:0b:00.0 on NUMA socket -1
2019-08-09T17:27:24Z|00021|dpdk|WARN|EAL: Invalid NUMA socket, default to 0
2019-08-09T17:27:24Z|00022|dpdk|INFO|EAL: probe driver: 15ad:7b0 net_vmxnet3
2019-08-09T17:27:24Z|00023|dpdk|INFO|EAL: using IOMMU type 1 (Type 1)
2019-08-09T17:27:24Z|00024|dpdk|INFO|EAL: Ignore mapping IO port bar(3)
2019-08-09T17:27:24Z|00025|dpdk|INFO|EAL: PCI device 0000:13:00.0 on NUMA socket -1
2019-08-09T17:27:24Z|00026|dpdk|WARN|EAL: Invalid NUMA socket, default to 0
2019-08-09T17:27:24Z|00027|dpdk|INFO|EAL: probe driver: 15ad:7b0 net_vmxnet3
2019-08-09T17:27:24Z|00028|dpdk|INFO|EAL: PCI device 0000:1b:00.0 on NUMA socket -1
2019-08-09T17:27:24Z|00029|dpdk|WARN|EAL: Invalid NUMA socket, default to 0
2019-08-09T17:27:24Z|00030|dpdk|INFO|EAL: probe driver: 15ad:7b0 net_vmxnet3
Zone 0: name:<rte_eth_dev_data>, IO:0x24bb64c0, len:0x34900, virt:0x1003b64c0, socket_id:0, flags:0
2019-08-09T17:27:24Z|00031|dpdk|INFO|DPDK pdump packet capture enabled
2019-08-09T17:27:24Z|00032|dpdk|INFO|DPDK Enabled – initialized
2019-08-09T17:27:24Z|00033|timeval|WARN|Unreasonably long 1170ms poll interval (15ms user, 421ms system)
2019-08-09T17:27:24Z|00034|timeval|WARN|faults: 589 minor, 49 major
2019-08-09T17:27:24Z|00035|timeval|WARN|disk: 9568 reads, 0 writes
2019-08-09T17:27:24Z|00036|timeval|WARN|context switches: 109 voluntary, 3 involuntary
2019-08-09T17:27:24Z|00037|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=d5517297:
2019-08-09T17:27:24Z|00038|coverage|INFO|bridge_reconfigure 0.0/sec 0.000/sec 0.0000/sec total: 1
2019-08-09T17:27:24Z|00039|coverage|INFO|cmap_expand 0.0/sec 0.000/sec 0.0000/sec total: 10
2019-08-09T17:27:24Z|00040|coverage|INFO|miniflow_malloc 0.0/sec 0.000/sec 0.0000/sec total: 25
2019-08-09T17:27:24Z|00041|coverage|INFO|hmap_expand 0.0/sec 0.000/sec 0.0000/sec total: 399
2019-08-09T17:27:24Z|00042|coverage|INFO|txn_unchanged 0.0/sec 0.000/sec 0.0000/sec total: 2
2019-08-09T17:27:24Z|00043|coverage|INFO|txn_incomplete 0.0/sec 0.000/sec 0.0000/sec total: 1
2019-08-09T17:27:24Z|00044|coverage|INFO|poll_create_node 0.0/sec 0.000/sec 0.0000/sec total: 54
2019-08-09T17:27:24Z|00045|coverage|INFO|poll_zero_timeout 0.0/sec 0.000/sec 0.0000/sec total: 1
2019-08-09T17:27:24Z|00046|coverage|INFO|seq_change 0.0/sec 0.000/sec 0.0000/sec total: 76
2019-08-09T17:27:24Z|00047|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0000/sec total: 1
2019-08-09T17:27:24Z|00048|coverage|INFO|stream_open 0.0/sec 0.000/sec 0.0000/sec total: 1
2019-08-09T17:27:24Z|00049|coverage|INFO|util_xalloc 0.0/sec 0.000/sec 0.0000/sec total: 7970
2019-08-09T17:27:24Z|00050|coverage|INFO|netdev_get_hwaddr 0.0/sec 0.000/sec 0.0000/sec total: 5
2019-08-09T17:27:24Z|00051|coverage|INFO|netlink_received 0.0/sec 0.000/sec 0.0000/sec total: 3
2019-08-09T17:27:24Z|00052|coverage|INFO|netlink_sent 0.0/sec 0.000/sec 0.0000/sec total: 1
2019-08-09T17:27:24Z|00053|coverage|INFO|89 events never hit
- Verify the number of free hugepages. HugePages_Free has dropped from 512 to 256, i.e. hugepage memory is now in use by the DPDK data path, which confirms that ovs-vswitchd has initialized DPDK and that OVS is now acting as its control plane.
# grep Huge /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 512
HugePages_Free: 256
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
- Verify that the ovsdb-server and ovs-vswitchd processes are running:
#ps aux | grep ovs
root ………. ovsdb-server --remote=punix:/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
root ………. ovs-vswitchd unix:/var/run/openvswitch/db.sock --pidfile --detach
- Additional check point:
#sudo /usr/share/openvswitch/scripts/ovs-ctl stop
#sudo /usr/share/openvswitch/scripts/ovs-ctl start
* ovsdb-server is already running
* ovs-vswitchd is already running
* Enabling remote OVSDB managers
6. OVS with DPDK Virtual Bridge, Ports and Interfaces Configuration Verification
- Execution trace examples
In this section we give a couple of examples and execution traces. These should be used as a reference for setup validation and configuration verification purposes.
#sudo ovs-vsctl show
92958a30-cfae-400f-acc5-7399fa1c048e
#sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
#sudo ovs-vsctl show
92958a30-cfae-400f-acc5-7399fa1c048e
Bridge “br0”
Port “br0”
Interface “br0”
type: internal
#sudo ovs-vsctl add-port br0 phy0 -- set Interface phy0 type=dpdk options:dpdk-devargs=0000:0b:00.0 ofport_request=1
#sudo ovs-vsctl add-port br0 phy1 -- set Interface phy1 type=dpdk options:dpdk-devargs=0000:1b:00.0 ofport_request=2
#sudo ovs-vsctl show
e3744b87-a781-428a-b9a7-34408b41126f
Bridge “br0”
Port “phy0”
Interface “phy0”
type: dpdk
options: {dpdk-devargs="0000:0b:00.0"}
Port “phy1”
Interface “phy1”
type: dpdk
options: {dpdk-devargs="0000:1b:00.0"}
Port “br0”
Interface “br0”
type: internal
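A virtual machine or container can be attached to the same bridge through a vhost-user port, added in the same manner as the physical DPDK ports. The following is a hedged sketch; the port name vhost0 and OpenFlow port number 3 are our own choices and are not part of the configuration shown above. The corresponding socket is created under /var/run/openvswitch, the default vhost-sock-dir reported in the vswitchd log.
#sudo ovs-vsctl add-port br0 vhost0 -- set Interface vhost0 type=dpdkvhostuser ofport_request=3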
#sudo ovs-ofctl del-flows br0
#sudo ovs-ofctl add-flow br0 in_port=1,action=output:2
#sudo ovs-ofctl add-flow br0 in_port=2,action=output:1
#sudo ovs-ofctl dump-flows br0
Based on the existing network architecture, we can now start a traffic test between the desired user space interfaces. Execution trace examples follow:
- Example 1
2019-08-09T17:40:10.052Z|00033|dpdk|INFO|DPDK Enabled – initialized
2019-08-09T17:40:10.057Z|00034|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
[...]
2019-08-09T17:40:10.058Z|00049|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_orig_tuple6
2019-08-09T17:40:10.605Z|00050|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 3 created.
2019-08-09T17:40:10.788Z|00051|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 2 created.
2019-08-09T17:40:10.788Z|00052|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
[...]
2019-08-09T17:40:10.818Z|00059|dpif_netdev|INFO|Core 3 on numa node 0 assigned port ‘phy0’ rx queue 0 (measured processing cycles 0).
[...]
2019-08-09T18:39:16Z|00050|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 3 created.
2019-08-09T18:39:16Z|00051|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 2 created.
2019-08-09T18:39:16Z|00052|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
[...]
2019-08-09T18:39:16Z|00057|netdev_dpdk|ERR|Interface phy1(rxq:1 txq:3 lsc interrupt mode:false) configure error: Operation not supported
2019-08-09T18:39:16Z|00058|dpif_netdev|INFO|Core 3 on numa node 0 assigned port ‘phy1’ rx queue 0 (measured processing cycles 0).
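The rx queue to PMD core assignments reported above are made automatically by OVS. They can also be pinned explicitly, which is another configuration factor worth recording in performance evaluations. A minimal sketch (not part of our setup), pinning rx queue 0 of phy0 to core 3:
#sudo ovs-vsctl set Interface phy0 other_config:pmd-rxq-affinity="0:3"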
- Example 2
#sudo ovs-vsctl show
e3744b87-a781-428a-b9a7-34408b41126f
Bridge “br0”
Port “phy0”
Interface “phy0”
type: dpdk
options: {dpdk-devargs="0000:0b:00.0"}
Port “phy1”
Interface “phy1”
type: dpdk
options: {dpdk-devargs="0000:1b:00.0"}
Port “br0”
Interface “br0”
type: internal
#./usertools/dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:0b:00.0 ‘VMXNET3 Ethernet Controller 07b0’ drv=vfio-pci unused=vmxnet3
0000:1b:00.0 ‘VMXNET3 Ethernet Controller 07b0’ drv=vfio-pci unused=vmxnet3
Network devices using kernel driver
===================================
0000:03:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth0 drv=vmxnet3 unused=vfio-pci *Active*
0000:04:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth1 drv=vmxnet3 unused=vfio-pci *Active*
0000:13:00.0 ‘VMXNET3 Ethernet Controller 07b0’ if=eth3 drv=vmxnet3 unused=vfio-pci *Active*
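Once traffic is flowing, the user-space datapath can be checked from OVS itself, since the kernel tools no longer see the bound interfaces. The following commands, a minimal sketch, report per-port counters and PMD thread statistics respectively:
#sudo ovs-ofctl dump-ports br0
#sudo ovs-appctl dpif-netdev/pmd-stats-show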
7. Conclusion and Future Work
In this document we have detailed the workflow required to set up a virtual switching environment in Linux user space using Open vSwitch and DPDK. The combination of DPDK-based forwarding and an OVS-based control plane running in user space is a compelling proposition for infrastructure users, due to the increased configurability and performance it offers to the network services and applications that use it.
The installation, verification, and validation process revealed a number of factors that we consider to influence the performance of a virtual switching environment: a) hugepage size and number, b) the DPDK interface drivers, and c) hardware platform choices. We consider these factors to be the minimum set required for the performance characterization of a virtual switching solution. This work also provides the background needed for the quantitative evaluations that networking infrastructure users require when deciding on service configurability and customization.
References and Recommended Reading
National Institute of Standards and Technology, https://www.nist.gov/
System Requirements for DPDK, https://doc.dpdk.org/guides/linux_gsg/sys_reqs.html
Open vSwitch* with DPDK Overview, https://software.intel.com/en-us/articles/open-vswitch-with-dpdk-overview
Configure Open vSwitch* with Data Plane Development Kit on Ubuntu* Server 17.04, https://software.intel.com/en-us/articles/set-up-open-vswitch-with-dpdk-on-ubuntu-server
OVS-DPDK Parameters: Dealing with multi-NUMA, https://developers.redhat.com/blog/2017/06/28/ovs-dpdk-parameters-dealing-with-multi-numa/
Open vSwitch with DPDK, http://docs.openvswitch.org/en/latest/intro/install/dpdk/
Open vSwitch, Release 2.8.0, https://buildmedia.readthedocs.org/media/pdf/ovs-istokes/stable/ovs-istokes.pdf
Troubleshooting Resources
OVS-DPDK
OVS-DPDK End to End Troubleshooting Guide, https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/ovs-dpdk_end_to_end_troubleshooting_guide/index
DPDK users mailing list archive, http://mails.dpdk.org/archives/users/2018-August/003326.html