Basically, a single hardware device like a network card pretends to be a whole bunch of virtual devices (say 16). Each one can be passed through to a guest VM as its own PCIe device and is handled inside the guest as real hardware. So your Windows VM will need the Broadcom driver or whatnot, rather than using the VirtIO one.
Why do this? Partly because it turns out that putting your VM host's hardware interface into a Linux software bridge disables part of the hardware acceleration, which can keep you from reaching the full bandwidth of the device. On slower CPUs that may mean you can't get to 10 Gbps.
Partly because there's overhead in transitioning between guest and host, and passing the device through greatly reduces that too.
I also like that it doesn't require you to fiddle with the network configuration on the host.
It's well supported, including on some consumer motherboards, but you have to do a bunch of fiddling in the BIOS config to enable it.
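To make that concrete, here's roughly how the virtual functions get created on a Linux host through the standard sysfs knobs. Just a sketch: the interface name and VF count are placeholders, and you need root plus SR-IOV/IOMMU switched on in that BIOS config.

```python
#!/usr/bin/env python3
# Rough sketch: create SR-IOV virtual functions (VFs) on a Linux host
# via the standard sysfs interface. "eth0" is a placeholder name for
# your physical function (PF); run as root on an SR-IOV-capable NIC.
from pathlib import Path

IFACE = "eth0"        # assumption: your physical interface
WANTED_VFS = 16       # how many virtual devices to expose

dev = Path(f"/sys/class/net/{IFACE}/device")

# How many VFs the card supports at most.
total = int((dev / "sriov_totalvfs").read_text())
print(f"{IFACE} supports up to {total} VFs")

# Writing N here makes the card enumerate N extra PCIe functions.
(dev / "sriov_numvfs").write_text(str(min(WANTED_VFS, total)))
```

Each VF then shows up as its own PCIe function (visible in lspci), which you can bind to vfio-pci and hand to a guest like any other passthrough device.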
Christophe Massiot gave a great talk at FOSDEM last year about the pros and cons of different network options, including SR-IOV, in a VM environment, especially the challenges of multicast.
Oh my god, thank you. I've been trying to figure out why my VM-to-VM bandwidth is capped at 30 Gbit/s. I'm using multi-threaded iperf to benchmark, so it doesn't seem to be a data generation or consumption bottleneck. I'm going to have to do a bit more experimenting.
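For what it's worth, one way to narrow that down is to sweep the parallel-stream count and see whether the aggregate scales or flatlines. A rough sketch, assuming iperf3 on both ends and a placeholder server address:

```python
#!/usr/bin/env python3
# Sketch: sweep iperf3 parallel-stream counts between two VMs to see
# whether aggregate throughput scales with streams or hits a wall.
# "10.0.0.2" is a placeholder; that VM must be running "iperf3 -s".
import json
import subprocess

SERVER = "10.0.0.2"   # assumption: the receiving VM's address

for streams in (1, 2, 4, 8):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    # Field layout per iperf3's JSON output for TCP tests.
    gbps = result["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{streams:2d} streams: {gbps:6.1f} Gbit/s")
```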
If both VMs are on the same host, is there any way to essentially achieve RDMA? VM1 says to VM2, "It's in memory at this location", and VM2 just reads directly from that memory location without a copy by the CPU?
I'm no expert, obviously, but I fail to see why VM-to-VM memory operations should be any slower than ordinary RAM access, apart from some latency increase from setting up the operation.
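For processes on the same host, that's essentially what POSIX shared memory gives you, and I believe hypervisors can expose something similar to co-located guests (QEMU has a shared-memory device, ivshmem, in that spirit). A tiny process-level sketch of the principle, not a VM-to-VM recipe:

```python
#!/usr/bin/env python3
# Sketch of the zero-copy idea at process level: two parties map the same
# memory, so the "receiver" reads the data where it already sits instead of
# the CPU copying it through a socket. Between co-located VMs the analogous
# trick needs a shared-memory device exposed by the hypervisor; this only
# illustrates the principle.
from multiprocessing import shared_memory

# "Sender": put a payload into a named shared segment.
payload = b"It's in memory at this location"
shm = shared_memory.SharedMemory(create=True, size=len(payload), name="vm_demo")
shm.buf[: len(payload)] = payload

# "Receiver": attach to the same segment by name and read straight
# out of the shared mapping; the sender never copies it anywhere else.
peer = shared_memory.SharedMemory(name="vm_demo")
print(bytes(peer.buf[: len(payload)]))

# Cleanup.
peer.close()
shm.close()
shm.unlink()
```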