GPU Passthrough using Qemu and KVM

By Richard Szibele
23 May, 2017

I have been using Windows 10 inside a Qemu/KVM virtual machine with a passed-through GPU on NixOS for a while now, and the performance is very close to native. It's great to be able to fire up a Windows 10 machine which integrates seamlessly into my workflow with Synergy. Not to mention, it is also possible to play games without lag! In my case I pass through my Nvidia GTX 650 Ti and it works very well in the virtual machine, but YMMV. Below is a detailed description of how I got it working so you can get it working too.

Requirements

You will require two GPUs: one for your host and one for your guest. I use an AMD Radeon HD 5470 for my host and pass through my Nvidia GTX 650 Ti to my guest. You will also require a motherboard which supports virtualization and an IOMMU, like my Gigabyte GA-990FXA-UD3, and a recent kernel with VFIO support; I'm currently using 4.9.27 without issues.
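
A quick way to check whether your CPU supports hardware virtualization at all is to grep the CPU flags (svm on AMD, vmx on Intel); a non-zero count means it does:

	
$ grep -E -c '(svm|vmx)' /proc/cpuinfo
	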

Enabling Virtualization in the BIOS

The first thing we must do is enable virtualization in the BIOS. How to do this depends on your motherboard; I had to enable it explicitly on mine. Look for a setting called virtualization, and for IOMMU support look for AMD-Vi on AMD boards and VT-d on Intel boards.

Once you have ensured that it is indeed available and enabled, you may have to add intel_iommu=on for Intel boards or amd_iommu=on for AMD boards to your bootloader kernel options.
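
On NixOS, for example, the parameter goes into configuration.nix; on Debian, one way (assuming the default GRUB bootloader) is to edit /etc/default/grub and run update-grub afterwards:

	
# NixOS: /etc/nixos/configuration.nix
boot.kernelParams = [ "amd_iommu=on" ]; # or "intel_iommu=on"

# Debian: /etc/default/grub, then run `update-grub` as root
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
	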

After rebooting, check to see if AMD-Vi or Vt-D is available:

	
$ dmesg | grep -e DMAR -e IOMMU
AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
AMD IOMMUv2 driver by Joerg Roedel 
AMD IOMMUv2 functionality not available on this system
	

The important line for AMD boards is AMD-Vi: Found IOMMU... and for Intel boards Intel-IOMMU: enabled.
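
It is also worth checking that the GPU and its audio function end up in a sane IOMMU group, since all devices within a group must be passed through together. Each symlink maps a device to its group number (the output below is illustrative):

	
$ find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/15/devices/0000:04:00.0
/sys/kernel/iommu_groups/15/devices/0000:04:00.1
	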

Preventing GPU Drivers from Loading

Next we must blacklist all drivers for the GPU which should be passed to the guest. In my case I must blacklist the drivers nouveau and nvidia. If you are passing through an AMD graphics card, you must blacklist the drivers radeon and fglrx.

Blacklisting Drivers on Debian

To blacklist a driver on Debian, create the file /etc/modprobe.d/guest-gpu-blacklist.conf with the drivers to be blacklisted:

	
blacklist nouveau
blacklist nvidia
	
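
Since the contents of /etc/modprobe.d are copied into the initramfs, regenerate it after editing the file so that the blacklist also applies at early boot:

	
$ sudo update-initramfs -u
	
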
Blacklisting Drivers on NixOS

To blacklist a driver on NixOS, add the following line to your /etc/nixos/configuration.nix:

	
boot.blacklistedKernelModules = [ "nouveau" "nvidia" ];
	
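
The change takes effect after switching to the new configuration and rebooting, so that the modules are not already loaded:

	
$ sudo nixos-rebuild switch
	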

Binding the GPU to the vfio-pci Driver

Next we create a bash script to bind the vfio-pci driver to the GPU device 0000:04:00.0 and the GPU's audio device 0000:04:00.1.
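
These addresses are specific to my machine; you can look up your own with lspci (the -nn flag also prints the vendor:device ID pairs, and the grep pattern is of course specific to Nvidia cards):

	
$ lspci -nn | grep -i nvidia
	

With the addresses in hand, the script itself: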

	
#!/bin/bash

# make sure the vfio-pci driver is available before writing to its new_id file
modprobe vfio-pci

# GPU
id="0000:04:00.0"
vendor=$(cat /sys/bus/pci/devices/$id/vendor)
device=$(cat /sys/bus/pci/devices/$id/device)
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id

# GPU's audio device
id="0000:04:00.1"
vendor=$(cat /sys/bus/pci/devices/$id/vendor)
device=$(cat /sys/bus/pci/devices/$id/device)
# unbind snd_hda_intel first, otherwise vfio-pci cannot claim the device
echo $id > /sys/bus/pci/devices/$id/driver/unbind
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
	

After creating the file, make it executable and run it as root; here I'll assume it was saved as vfio-bind.sh:
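
	
$ chmod +x vfio-bind.sh
$ sudo ./vfio-bind.sh
	

Now the vfio-pci driver should be loaded for both the GPU and the GPU's audio device: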

	
$ lspci -v
...
04:00.0 VGA compatible controller: NVIDIA Corporation GK106 [GeForce GTX 650 Ti] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: Palit Microsystems Inc. Device 11c6
        Flags: fast devsel, NUMA node 0
        Memory at fb000000 (32-bit, non-prefetchable) [disabled] [size=16M]
        Memory at d0000000 (64-bit, prefetchable) [disabled] [size=128M]
        Memory at d8000000 (64-bit, prefetchable) [disabled] [size=32M]
        I/O ports at c000 [disabled] [size=128]
        [virtual] Expansion ROM at fc000000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau

04:00.1 Audio device: NVIDIA Corporation GK106 HDMI Audio Controller (rev a1)
        Subsystem: Palit Microsystems Inc. Device 11c6
        Flags: fast devsel, IRQ 16, NUMA node 0
        Memory at fc080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
...
	
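
Note that this binding does not persist across reboots, so the script has to be run again after each boot. A common alternative is to have vfio-pci claim the devices at module load time via a modprobe options file; the IDs below are placeholders, substitute the vendor:device pairs that lspci -nn reports for your card:

	
# /etc/modprobe.d/vfio.conf
# placeholder IDs -- substitute the output of `lspci -nn` for your devices
options vfio-pci ids=10de:XXXX,10de:YYYY
	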

Launching Qemu

Finally, we must create the script which launches Qemu with our preferred arguments.

	
#!/bin/bash

export QEMU_AUDIO_DRV=pa

qemu-kvm \
-cpu host,kvm="off" \
-smp sockets=1,cores=2,threads=1 \
-m 3G \
-vga none \
-boot d \
-drive format=raw,file=/dev/sde \
-device intel-hda -device hda-duplex \
-usb \
-usbdevice host:0416:5020 \
-net nic -net 'user,hostfwd=tcp::7777-:5900,hostfwd=tcp::7778-:22' \
-device vfio-pci,host=04:00.0,addr=09.0,multifunction=on,x-vga=on \
-device vfio-pci,host=04:00.1,addr=09.1 #\
#-device usb-host,hostbus=9,hostaddr=2 \
#-device usb-host,hostbus=8,hostaddr=2
	

The above snippet launches a virtual machine with a 2-core CPU, 3 GB of RAM, no built-in VGA adapter, and direct access to the drive /dev/sde, and passes through the host's devices 04:00.0 and 04:00.1 to 09.0 and 09.1 on the guest, respectively.
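
The user-mode networking line also forwards two host ports into the guest: 7777 to the guest's VNC port 5900 and 7778 to its SSH port 22. Assuming the guest actually runs the corresponding servers, you can then connect from the host like this:

	
# SSH into the guest through the forwarded port
$ ssh -p 7778 user@localhost

# for VNC, point your viewer at localhost, TCP port 7777
	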

It's important to note that newer Nvidia drivers on Windows deliberately check for a hypervisor like KVM and refuse to work (the infamous Code 43 error) if one is detected. The CPU option kvm="off" does not turn off virtualization; it merely hides the hypervisor from the guest system so that the Nvidia drivers keep working.
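
If kvm=off alone is not enough for your driver version, Qemu 2.5 and later can additionally spoof the Hyper-V vendor ID reported to the guest; a variant of the -cpu line worth trying:

	
-cpu host,kvm=off,hv_vendor_id=whatever \
	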
