Making super fast virtual machines with passthrough
Reader beware: this text is highly technical! It’s meant for fellow developers and tech enthusiasts. We’ll take a look at using virtualization as a no-compromise replacement for dual booting between operating systems, with emphasis on the word no-compromise. Many tasks are quite feasible even with a basic VM, for example testing websites in a legacy browser that is not available on current operating systems. For more demanding tasks, though, booting into another natively running OS and back again just for one specific app or a gaming break is cumbersome. So what can be done?
Getting up to speed
Emulating a CPU, graphics and I/O has a heavy performance cost. We can get away with it when running old applications on modern hardware (think of DOSBox and video game console emulators), but for a snappy VM running modern operating systems and apps, we need something faster than emulation. To achieve this, the virtual machine monitor has to bring the guest system closer to bare metal. Hardware-assisted virtualization has been available on server and consumer CPUs for years (as Intel VT-x and AMD-V). It allows executing guest instructions on the real CPU with far less overhead than emulation, which is crucial for running x86 virtual machines at nearly native performance. It’s also widely supported nowadays.
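On Linux, a quick way to confirm that your CPU advertises hardware-assisted virtualization is to look for the relevant flags in /proc/cpuinfo. A minimal sketch (note that a count of zero can also mean the feature is simply disabled in the BIOS/UEFI):

```shell
# Count processor entries whose flags include vmx (Intel VT-x)
# or svm (AMD-V). Zero means no support, or that it is disabled
# in the firmware settings.
vm_flags=$(grep -Ec 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
vm_flags=${vm_flags:-0}
if [ "$vm_flags" -gt 0 ]; then
    echo "hardware virtualization flags found on $vm_flags logical CPUs"
else
    echo "no hardware virtualization flags found"
fi
```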
On the I/O side of things, various paravirtualization methods have been used to boost performance compared to emulation. This means the virtual machine manager exposes special devices that give the guest fairly direct access to host hardware through an API, providing faster disk, networking and timing support in virtualized environments. Even limited hardware-accelerated graphics support exists in various virtual machine managers, typically covering OpenGL and Direct3D up to version 9 through API translation similar to Wine. Paravirtualized devices often need specific device drivers to be installed on the guest OS.
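With QEMU, paravirtualized disk and network devices are requested with virtio device flags. The following is only an illustrative sketch of such an invocation (the disk image name, memory size and network IDs are placeholders), assembled into a variable rather than executed:

```shell
# Illustrative QEMU invocation using paravirtualized (virtio) devices.
# Paths and sizes are placeholders; the guest OS needs virtio drivers
# installed to use the disk and NIC at full speed.
QEMU_CMD="qemu-system-x86_64 \
 -enable-kvm -m 4G \
 -drive file=disk.qcow2,if=virtio \
 -netdev user,id=net0 \
 -device virtio-net-pci,netdev=net0"
echo "$QEMU_CMD"
```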
Another step further is to give a guest system dedicated, exclusive access to actual devices on the host machine. This method is called PCI passthrough. Since the guest has direct access to the device, it is controlled by the same device drivers and has the potential to provide the same performance and device-specific functionality as on bare metal. For instance, TRIM commands can be sent directly to an SSD connected to a passed-through disk controller. Dedicated storage and networking hardware can be useful for demanding server use cases where the best performance is needed without giving up the benefits of virtualization. On the desktop, an interesting use case is dedicating a graphics card (VGA passthrough) to a guest OS and running graphics-intensive applications with high performance.
The software side
Support for PCI passthrough exists in various virtualization software. However, for VGA passthrough specifically, the common and well-documented approach is to run a VM using the Linux kernel’s KVM as the hypervisor, QEMU as the userspace emulator and OVMF as the UEFI firmware. Trying out QEMU is relatively straightforward: virtual machines can be fired up from the command line, with all the needed configuration options given as arguments. Host devices can then be handed to a VM using a helper driver called vfio-pci.
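As a rough sketch of that handover (the PCI address 0000:01:00.0 and the OVMF path are placeholders; real addresses come from `lspci -nn`, and the sysfs writes need root), rebinding a device to vfio-pci via the kernel’s driver_override mechanism looks something like this:

```shell
# Rebind an example device from its host driver to vfio-pci.
# Guarded so it is a no-op on machines where the placeholder
# address does not exist or sysfs is not writable.
DEV=0000:01:00.0   # placeholder address; find yours with: lspci -nn
if [ -w "/sys/bus/pci/devices/$DEV/driver_override" ]; then
    echo vfio-pci > "/sys/bus/pci/devices/$DEV/driver_override"
    echo "$DEV" > /sys/bus/pci/drivers_probe
    echo "bound $DEV to vfio-pci"
else
    echo "skipping: $DEV not present (or not running as root)"
fi

# The VM is then started with the device attached, for example:
#   qemu-system-x86_64 -enable-kvm -cpu host -m 8G \
#     -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
#     -device vfio-pci,host=01:00.0
```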
If all goes well, you’ll have a VM with direct access to the hardware, with minimal overhead. Pretty much any PCI-E device can be passed through, with caveats (we’ll get back to these in a moment). Many motherboards have their SATA and USB ports spread across more than one controller, so one of them can be dedicated to a VM. My own VM setup has its own graphics card, a USB controller (the mouse and keyboard can be toggled between the host and guest with a switch), an add-on SATA controller and the onboard audio passed through. After some tinkering, optimization and figuring out what works best, I’ve given up on dual booting because the VM is simply free of compromises.
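To see which separate controllers your own board has, listing PCI devices and filtering for USB and SATA is a good start. A small sketch, assuming the `lspci` tool from pciutils is installed:

```shell
# List USB and SATA controllers with their PCI addresses and
# vendor:device IDs; each separate controller is a potential
# passthrough candidate.
out=$(lspci -nn 2>/dev/null | grep -Ei 'usb|sata' || true)
if [ -n "$out" ]; then
    printf '%s\n' "$out"
else
    echo "no matching controllers listed (or lspci unavailable)"
fi
```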
The fine print
Let’s look at the caveats, then. Doing this obviously requires appropriate hardware. Most importantly, both the CPU and the motherboard need IOMMU virtualization support (Intel VT-d and AMD-Vi). Luckily, these features have been available on many if not most consumer platforms for years. However, a working implementation on a motherboard is not a given even if VT-d or AMD-Vi support is advertised. Though in many cases fixes have been provided via BIOS/UEFI updates, virtualization features are hardly a priority on mainstream consumer hardware.
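A quick sanity check on Linux is to see whether the kernel actually initialized the IOMMU. If nothing shows up even though the firmware option is enabled, booting with intel_iommu=on (or amd_iommu=on) on the kernel command line is usually the next step. A sketch:

```shell
# Count the IOMMU groups exposed by the kernel. An empty directory
# means the IOMMU is off: enable VT-d/AMD-Vi in the firmware and
# check the kernel command line (intel_iommu=on / amd_iommu=on).
groups=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)
echo "IOMMU groups found: $groups"
```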
Another common issue is something known as IOMMU grouping, which deals with the separation between devices. To put it simply, a device cannot be passed through if other devices belong to the same group, unless you pass through all of them; otherwise they could interfere with each other and nasty things could happen. How your onboard devices and add-on card slots are grouped depends on the motherboard and the chipset it’s based on.
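The grouping on a given board can be inspected directly from sysfs. A minimal sketch that prints each device together with its group number, decoding the PCI address with `lspci -nns` when the tool is available:

```shell
# Walk /sys/kernel/iommu_groups and print every device with its
# group number. Devices sharing a group with your target device
# must be passed through together. Prints nothing if the IOMMU
# is disabled.
count=0
for path in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$path" ] || continue
    group=${path#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    dev=${path##*/}
    desc=$(lspci -nns "$dev" 2>/dev/null || echo "$dev")
    printf 'group %s: %s\n' "$group" "$desc"
    count=$((count + 1))
done
echo "$count devices listed"
```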
Finally, the PCI-E devices themselves can have firmware bugs that cause them to behave badly when passed through to a VM. Some hardware vendors are also known to implement VM detection in their (consumer hardware) drivers, preventing the device from working when it is found to be running inside one. With server-grade, and to some extent enthusiast consumer hardware, you might avoid these issues. Still, it doesn’t hurt to do some research on your prospective components before building a setup like this.
All in all, we have just scratched the surface here; this is only intended as an introduction. If you’re interested in this kind of thing, I recommend checking out the links below for more details. You might also ask: what’s the point? Considering the time spent tinkering and possibly the cost of additional extension cards for VMs, one could just buy a whole extra machine and be done with it. But where’s the challenge and fun in that? 😉
Passthrough and virtualization in general will quite probably remain a niche for regular desktop/laptop users, but time will tell what happens in the business and server world.