I have a Microserver Gen8 too and want to pass through my GPU to my VM.
I ended up compiling my own kernel for UnRAID with the RMRR check patched out, as the Proxmox guys did. Attached is the kernel for "Linux 4.14.16-unRAID". Copy 'bzimage-new' into '/boot' and modify your syslinux config to use the new kernel. I expect this kernel will only work on the specific UnRAID version I'm running right now (v6.4.1).
My config is as follows (I didn't need the PCIe ACS override or 'vfio_iommu_type1.allow_unsafe_interrupts=1'):
label unRAID OS (PCIe passthrough - No RMRR chk)
menu default
kernel /bzimage-new
append isolcpus=2,3,6,7 initrd=/bzroot
The ACS override separated my GPU from the PCI bridge (IOMMU group 1) into its own IOMMU group 11.
IOMMU group 0: [8086:0108] 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 Processor Family DRAM Controller (rev 09)
IOMMU group 1: [8086:0101] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
...
[snip]
...
IOMMU group 11: [10de:104a] 07:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
[10de:0e08] 07:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)
...
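For reference, the group listing above can be reproduced by walking sysfs. This is just a sketch of how I check the grouping; the path argument only exists so the function can be exercised outside of a real host, and on the UnRAID box you'd call it with no argument (and run 'lspci -nns <address>' on each device to get the descriptions shown above):

```shell
#!/bin/bash
# List every IOMMU group and the PCI addresses of the devices in it.
list_iommu_groups() {
    local root="${1:-/sys/kernel/iommu_groups}" group dev
    for group in "$root"/*/; do
        [ -d "$group" ] || continue
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group/devices"/*; do
            [ -e "$dev" ] || continue
            echo "  $(basename "$dev")"
        done
    done
}
```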
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</hostdev>
I then installed the NVIDIA drivers in my VM... though after a restart, Windows says the driver for my GPU can't be loaded because of a problem. I'll need to look into this some more, but the custom kernel does seem to bypass the "Device is ineligible for IOMMU domain attach due to platform RMRR requirement" error.
If you want to compile your own next time (i.e. every time UnRAID gets updated, I think), I used CHBMB's kernel compile script here and the Proxmox kernel patch here. I've also attached my modified script and patch. I compile on the UnRAID machine itself. My modified script just copies the Proxmox patch to '/usr/src/linux-*' and comments out the fetching of CHBMB's DVB '.config' on line 44. Thanks CHBMB!
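The relevant change boils down to something like this (a sketch, not the script itself; the source directory and patch location are assumptions based on my setup and will differ on yours):

```shell
#!/bin/bash
# Copy the Proxmox RMRR patch into the kernel tree and apply it.
# This is roughly what my modified compile script adds before building.
apply_rmrr_patch() {
    local src_dir="${1:?kernel source dir}" patch_file="${2:?patch file}"
    cp "$patch_file" "$src_dir/" &&
        (cd "$src_dir" && patch -p1 < "$(basename "$patch_file")")
}
```

After the patch applies cleanly, the script builds the kernel image as usual and I copy the result to '/boot/bzimage-new'.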
bzimage-new
kernel-compile-module.sh
remove_rmrr_check.patch
Update 14/03/18: After overcoming the RMRR hurdle, I found out the GPU driver could not be loaded (Code 43) because NVIDIA intentionally blocks the driver if it detects it's being loaded in a VM, to push you towards buying their workstation Quadro cards. This is widely documented and can be bypassed with:
<features>
  ...
  <hyperv>
    ...
    <vendor_id state='on' value='blahblahblah'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
However, this still didn't work; I still got the Code 43 problem. After more googling, dumping my GPU BIOS, and whatnot, I think it's because my GPU BIOS doesn't have UEFI support. When I substituted my GPU BIOS in the VM config with a similar variant that has UEFI support (I couldn't find one for my exact model), the driver loaded correctly and I got output through the HDMI! This was great until my whole host system crashed, probably due to the incompatible GPU BIOS.
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
  </source>
  <rom file='/mnt/user/vm-iso/Gigabyte.GT610.1024.130107-UEFI.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</hostdev>
Update 16/03/18: I got my Gigabyte GT 610 Silent passthrough partially working. As suspected, the GPU BIOS did not contain UEFI support. I dumped my vBIOS (I did set the GPU to "secondary optional" in the HP RBSU) and confirmed the ROM was not corrupt (hopefully; the MD5 happened to match TechPowerUp's database and rom-parser didn't complain). I then used the GOPUpd tool to inject the UEFI part of the firmware (the GOP) into my BIOS image; the GOP for supported GPUs was included in the tool, and luckily mine, a GF119, was among them. I specified the new ROM file in my VM config, as above, and the Code 43 error was gone. Everything appears to work, but when the VM is rebooted, the whole host crashes. I wanted to actually try flashing my patched vBIOS to my GPU, but the EEPROM on the GPU appears to be too small to fit the patched vBIOS.
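For anyone wanting to dump their own vBIOS the same way, the sysfs method I used looks roughly like this. A sketch only: the device directory is a parameter purely so the function can be tried outside sysfs; on the real host it would be '/sys/bus/pci/devices/0000:07:00.0' (my GT 610), where writing 1 to 'rom' enables reading the expansion ROM rather than overwriting anything:

```shell
#!/bin/bash
# Dump a GPU's expansion ROM via the sysfs 'rom' attribute.
dump_vbios() {
    local dev="${1:?sysfs device dir}" out="${2:?output rom file}"
    echo 1 > "$dev/rom"        # enable reading the ROM
    cat "$dev/rom" > "$out"    # copy it out
    echo 0 > "$dev/rom"        # disable it again
    md5sum "$out"              # hash to compare against TechPowerUp's database
}
```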
I did come across a QEMU bug with similar symptoms to mine, except mine crashes regardless of how long the VM has been running. I am running a Windows Server 2016 guest. I did try enabling MSI to see if it would solve my host-crashing problem, to no avail...
Update 25/03/18: I believe my GPU is inherently incompatible because it doesn't support being reset properly. This in turn causes my host to crash and reboot when the GPU is detached from the VM and re-attached again.
Update 10/04/18: Late update, but I tried using SeaBIOS on Ubuntu with the GT 610; there was some unrelated bug, and I got a login loop with the Linux NVIDIA drivers installed. In the end, I swapped the GPU for a GT 520 (not fanless, unfortunately), followed the same steps above for dumping the vBIOS, and it no longer crashes the host under UEFI Windows Server 2016.