TheBekker

Members
  • Posts: 26

Everything posted by TheBekker

  1. That makes sense, thanks a lot for a very good explanation. Just waiting on disks and some other gear to arrive, then it's gonna be time. Thanks again!
  2. I've got the Plus license. They are formatted as XFS. I do have a full backup in the cloud, though I wouldn't want to do a full re-download if possible, that would be rough. Really it's a price/need thing: currently I've got a 6 TB array, which is really good enough for a while, so just going 4x3 TB would be 9 TB (1 parity), which would be way more than enough for a long while for me. I could even just go 3x3 TB to save a bit initially, since I don't really need the storage increase just yet. I really want to stay on WD Reds, and they are about 121 USD per 3 TB drive (converted from DKK) here in DK. So I could get away with an initial spend of 363 USD versus 605 USD (2x8 TB WD Reds). But I do get your point. Also, maybe an important note: my current server only has support for 2.5" drives, where the new one can do both (also one of the reasons for migrating to a new server). Though at the moment the new one only has bays for 4 disks, the plan is to upgrade that to 8 or 9 if/when needed.
  3. Hi. So I'm planning to migrate my current Unraid server to new hardware, and in the same go I would like to move from my current array of 6 disks (incl. parity) that are 1-2 TB each to maybe just 4 disks, but 3 TB each instead. Is there a good way to do this? The size upgrade seems easy, but I could see an issue in going to a lower number of disks, since data is split between all of them.
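(For context, the approach usually suggested in these threads is: consolidate the data from the disks you plan to drop onto the disks you keep, then use Tools > New Config to redefine the array with fewer slots and let parity rebuild. A minimal sketch of the per-disk copy step — disk numbers here are illustrative, not from the post:)

```shell
#!/bin/bash
# Sketch: move data off a disk slated for removal onto one that stays.
# On Unraid the individual array disks are mounted at /mnt/diskN;
# the disk numbers below are hypothetical.
SRC=/mnt/disk5   # hypothetical disk being removed
DST=/mnt/disk1   # hypothetical disk that stays

# Copy preserving permissions, timestamps and xattrs; safe to re-run.
rsync -avX "$SRC/" "$DST/"

# Sanity check: same set of files on both sides before clearing the source.
(cd "$SRC" && find . -type f | sort) > /tmp/src.lst
(cd "$DST" && find . -type f | sort) > /tmp/dst.lst
diff /tmp/src.lst /tmp/dst.lst && echo "copy verified"
```

Only after the copy is verified would you do the New Config with the reduced disk set.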
  4. No idea how to change it? Where do I do that? Black would probably be better; the Unraid website works pretty well with the colors, but on the Unraid GUI it seems like it's just not fully implemented and thought through. As I read it, it seems like a work in progress, so hopefully there will be some patches soon to make the new design feel like a whole thing, and not just some paint on top here and there, which is how it feels to me right now. This is truly meant as constructive criticism: I work as a developer and am by no means a designer, but I can appreciate good UI and UX, and I'm looking forward to future versions.
  5. The upgrade went without any issues, thanks! Regarding the "new" GUI, it really doesn't work for me. I understand trying to match the website's branding, but it seems a bit half-baked to me. Those brightly colored buttons/text on white backgrounds just aren't pretty or nice to read; I'd rather see an upgrade to a responsive framework instead of a random splash of paint.
  6. Quite interesting, I'll look into that and do some tests, thanks!
  7. My server has 48 GB of RAM, and it doesn't seem to fill up. The drives are all from Feb. this year. I'm copying about 40 files, ranging from 1.5 GB to 4 GB in size. I just tried enabling my cache drive on the share I'm copying to, and that seems to fix the issue. It just seems strange that I would need an SSD cache to achieve a stable 80-100 MB/s.
  8. I just noticed this today when I had to copy a large file to my Unraid array. It started out well at about 80-100 MB/s, but after a while it drops down to 5-15 MB/s. It's a 4-disk array with 1 being parity, plus 1 SSD cache (though the cache is not enabled on the share I'm copying to). All drives are WD Blacks, 7200 rpm. Any ideas what could be causing this?
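(The fast-start-then-collapse pattern is usually the Linux page cache at work: the first gigabytes land in RAM at near network speed, and once the dirty-page limit is hit throughput drops to what the parity array can actually sustain. That's an assumption about the cause here, but it's easy to observe read-only:)

```shell
#!/bin/sh
# Writes are buffered in RAM until these thresholds are reached; after that,
# throughput falls back to the parity array's real sustained write speed.
echo "dirty_ratio:            $(cat /proc/sys/vm/dirty_ratio)%"
echo "dirty_background_ratio: $(cat /proc/sys/vm/dirty_background_ratio)%"
# Watch the buffered ("Dirty") data drain while a copy is running:
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

If the copy speed collapses right around the point where the Dirty counter stops growing, that's the page cache filling up, which also explains why an SSD cache drive masks it.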
  9. Are there any supported FC HBA cards in Unraid?
  10. Hi. I got a really good price on an "LSI 1932", and I have an LPE12002 FC HBA lying around unused. I initially thought everything would be fine and just work, but after researching I now see that it might not be that simple, and that I maybe should have gone for a SAS-connected solution instead. I'm still waiting to receive the "1932" to test, but I have some questions in the meantime. Would this HBA work with Unraid? Does Unraid support any FC HBA cards out of the box? There are drivers available for Red Hat and SUSE; would they work on Unraid? Any help would be appreciated.
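(On the driver question, a quick local check from the console: whether the Emulex lpfc driver for the LPE12002 is present in the running kernel. Whether Unraid actually ships it is exactly the open question, so this is a test to run, not an answer:)

```shell
#!/bin/sh
# Check whether the Emulex FC driver (lpfc) exists for the running kernel,
# and whether any Fibre Channel HBA is visible on the PCI bus.
modinfo lpfc >/dev/null 2>&1 && echo "lpfc module available" \
  || echo "lpfc module not found"
lspci 2>/dev/null | grep -i 'fibre' || echo "no FC HBA detected on PCI bus"
```

If `modinfo` finds the module, the card should at least enumerate; out-of-tree Red Hat/SUSE driver packages won't install on Unraid's Slackware-based kernel without a matching rebuild.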
  11. Ahh, I think I found the issue: simply me needing to change settings when connecting with RDP, to use the host audio instead of playing on the remote. (Doh!) Thanks anyway!
  12. I tried both the virtual audio cable and adding the new device to the XML, but it still looks like I can't use those devices unless I'm connected with RDP.
  13. Hi. So I have a Windows VM running on my server (a 2U with no audio interface); it's used to consume some RTMP video streams in OBS Studio and stream them as one single stream to Twitch. It also mixes in some audio playing on the VM. Now my problem is that OBS won't get any audio unless I'm connected to the VM via RDP. Is that because the server has no audio device, and therefore can't process audio unless the virtual RDP audio device is there? So would I need to get a cheap USB sound card and pass it through? Any advice before I waste money on hardware?
  14. Okay, so I just put in a GT710, booted up my VM with the GT730, and now it seems to work. I will try a BIOS dump later in the weekend maybe, with the cards swapped, so I can dump it correctly and run without the GT710.
  15. I just bought another GPU to put in the server, to see if it makes a difference, since it looks like the integrated one turns off when a dedicated one is added. I'll return with my findings.
  16. So I've tried a lot of stuff now. I tried dumping the ROM from the GPU, which seems successful. I then load the VM with the ROM, as described in the video guide, but I still get error code 43. I tried reinstalling a couple of times, no luck. I also noticed that I get the error even before the Nvidia driver was installed, when it's just recognized as something like "Basic display driver...". I then tried making a fresh VM with the newest VirtIO drivers and switched to SeaBIOS with the loaded ROM. I then got "Basic display driver" without error code 43, so I actually thought everything was going well, until I installed the driver; then I got error code 43 again.
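(For reference, attaching a dumped vBIOS in the libvirt XML is done with a `<rom file=.../>` element inside the GPU's `<hostdev>`. The GPU address matches the XML posted below; the ROM file path is illustrative, not from the original post:)

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- GPU address as in the posted VM XML -->
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <!-- Illustrative path to the dumped vBIOS file -->
  <rom file='/boot/vbios/gt730.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
```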
  17. Okay, so I just got home and tried plugging a monitor into the iGPU: reboot, no signal. So I plugged into the GPU, and there was the boot sequence and Unraid. Then I removed the GPU and tried the iGPU again, and then it shows on the iGPU. So it seems like the server chooses the GPU over the iGPU? I've been through the BIOS and don't see any settings about it. The only thing I can find is to disable the iGPU, but that doesn't really help me. Any suggestions? Quote: "So activate the igpu so that you can passthrough the 730... Not sure if the passthrough will work directly, without rom. If not working, dump the rom and use it. You might be able to disable the igpu after that..."
  18. There is an integrated one on the server, but it's not plugged in at the moment; it was only used when I initially installed Unraid, and I'm just running "headless" now. The 730 was added later on.
  19. Hi. I'm trying to pass through my GT730 card to my Win10 VM. It shows up, but the driver fails with error code 43. I searched around both here and on Google and found a couple of older issues with the same error, but had no luck getting it working. Some were talking about changing the XML to keep the Nvidia driver from detecting it's a VM, but as far as I could read that seemed to be outdated and no longer necessary? Some also fixed it by changing the PCI slot; I tried that, with no luck. I would be grateful for any help, I'm kinda out of ideas right now. It's running on a Fujitsu RX300 S6, with Unraid 6.2.4. VM XML: <domain type='kvm' id='10'> <name>Windows 10</name> <uuid>d5107eec-6391-33a8-4069-3c8d8580b402</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <memoryBacking> <nosharepages/> <locked/> </memoryBacking> <vcpu placement='static'>6</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='1'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='3'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='5'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/d5107eec-6391-33a8-4069-3c8d8580b402_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor id='none'/> </hyperv> </features> <cpu mode='host-passthrough'> <topology sockets='1' cores='6' threads='1'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> 
<emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Windows 10/vdisk1.img'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/> <backingStore/> <target dev='hdb' bus='ide'/> <readonly/> <alias name='ide0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller> <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:b4:46:48'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' 
function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target port='0'/> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Windows 10/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </hostdev> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> </devices> </domain> PCI devices 04:00.0 VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 730] (rev a1) 04:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1) IOMMU groups: find /sys/kernel/iommu_groups/ -type l /sys/kernel/iommu_groups/0/devices/0000:ff:00.0 /sys/kernel/iommu_groups/0/devices/0000:ff:00.1 /sys/kernel/iommu_groups/1/devices/0000:ff:02.0 /sys/kernel/iommu_groups/1/devices/0000:ff:02.1 /sys/kernel/iommu_groups/1/devices/0000:ff:02.4 /sys/kernel/iommu_groups/1/devices/0000:ff:02.5 /sys/kernel/iommu_groups/2/devices/0000:ff:03.0 /sys/kernel/iommu_groups/2/devices/0000:ff:03.1 /sys/kernel/iommu_groups/2/devices/0000:ff:03.2 /sys/kernel/iommu_groups/2/devices/0000:ff:03.4 /sys/kernel/iommu_groups/3/devices/0000:ff:04.0 /sys/kernel/iommu_groups/3/devices/0000:ff:04.1 /sys/kernel/iommu_groups/3/devices/0000:ff:04.2 /sys/kernel/iommu_groups/3/devices/0000:ff:04.3 /sys/kernel/iommu_groups/4/devices/0000:ff:05.0 /sys/kernel/iommu_groups/4/devices/0000:ff:05.1 
/sys/kernel/iommu_groups/4/devices/0000:ff:05.2 /sys/kernel/iommu_groups/4/devices/0000:ff:05.3 /sys/kernel/iommu_groups/5/devices/0000:ff:06.0 /sys/kernel/iommu_groups/5/devices/0000:ff:06.1 /sys/kernel/iommu_groups/5/devices/0000:ff:06.2 /sys/kernel/iommu_groups/5/devices/0000:ff:06.3 /sys/kernel/iommu_groups/6/devices/0000:fe:00.0 /sys/kernel/iommu_groups/6/devices/0000:fe:00.1 /sys/kernel/iommu_groups/7/devices/0000:fe:02.0 /sys/kernel/iommu_groups/7/devices/0000:fe:02.1 /sys/kernel/iommu_groups/7/devices/0000:fe:02.4 /sys/kernel/iommu_groups/7/devices/0000:fe:02.5 /sys/kernel/iommu_groups/8/devices/0000:fe:03.0 /sys/kernel/iommu_groups/8/devices/0000:fe:03.1 /sys/kernel/iommu_groups/8/devices/0000:fe:03.2 /sys/kernel/iommu_groups/8/devices/0000:fe:03.4 /sys/kernel/iommu_groups/9/devices/0000:fe:04.0 /sys/kernel/iommu_groups/9/devices/0000:fe:04.1 /sys/kernel/iommu_groups/9/devices/0000:fe:04.2 /sys/kernel/iommu_groups/9/devices/0000:fe:04.3 /sys/kernel/iommu_groups/10/devices/0000:fe:05.0 /sys/kernel/iommu_groups/10/devices/0000:fe:05.1 /sys/kernel/iommu_groups/10/devices/0000:fe:05.2 /sys/kernel/iommu_groups/10/devices/0000:fe:05.3 /sys/kernel/iommu_groups/11/devices/0000:fe:06.0 /sys/kernel/iommu_groups/11/devices/0000:fe:06.1 /sys/kernel/iommu_groups/11/devices/0000:fe:06.2 /sys/kernel/iommu_groups/11/devices/0000:fe:06.3 /sys/kernel/iommu_groups/12/devices/0000:00:00.0 /sys/kernel/iommu_groups/13/devices/0000:00:01.0 /sys/kernel/iommu_groups/14/devices/0000:00:03.0 /sys/kernel/iommu_groups/15/devices/0000:00:05.0 /sys/kernel/iommu_groups/16/devices/0000:00:07.0 /sys/kernel/iommu_groups/17/devices/0000:00:08.0 /sys/kernel/iommu_groups/18/devices/0000:00:09.0 /sys/kernel/iommu_groups/19/devices/0000:00:0a.0 /sys/kernel/iommu_groups/20/devices/0000:00:10.0 /sys/kernel/iommu_groups/20/devices/0000:00:10.1 /sys/kernel/iommu_groups/21/devices/0000:00:11.0 /sys/kernel/iommu_groups/21/devices/0000:00:11.1 /sys/kernel/iommu_groups/22/devices/0000:00:14.0 
/sys/kernel/iommu_groups/22/devices/0000:00:14.1 /sys/kernel/iommu_groups/22/devices/0000:00:14.2 /sys/kernel/iommu_groups/22/devices/0000:00:14.3 /sys/kernel/iommu_groups/23/devices/0000:00:15.0 /sys/kernel/iommu_groups/24/devices/0000:00:1a.0 /sys/kernel/iommu_groups/24/devices/0000:00:1a.1 /sys/kernel/iommu_groups/24/devices/0000:00:1a.2 /sys/kernel/iommu_groups/24/devices/0000:00:1a.7 /sys/kernel/iommu_groups/25/devices/0000:00:1c.0 /sys/kernel/iommu_groups/25/devices/0000:08:00.0 /sys/kernel/iommu_groups/25/devices/0000:08:00.1 /sys/kernel/iommu_groups/26/devices/0000:00:1d.0 /sys/kernel/iommu_groups/26/devices/0000:00:1d.1 /sys/kernel/iommu_groups/26/devices/0000:00:1d.2 /sys/kernel/iommu_groups/26/devices/0000:00:1d.7 /sys/kernel/iommu_groups/27/devices/0000:00:1e.0 /sys/kernel/iommu_groups/28/devices/0000:00:1f.0 /sys/kernel/iommu_groups/28/devices/0000:00:1f.2 /sys/kernel/iommu_groups/28/devices/0000:00:1f.3 /sys/kernel/iommu_groups/28/devices/0000:00:1f.5 /sys/kernel/iommu_groups/29/devices/0000:01:00.0 /sys/kernel/iommu_groups/30/devices/0000:04:00.0 /sys/kernel/iommu_groups/30/devices/0000:04:00.1 /sys/kernel/iommu_groups/31/devices/0000:06:00.0 /sys/kernel/iommu_groups/32/devices/0000:06:00.1 /sys/kernel/iommu_groups/33/devices/0000:07:00.0 /sys/kernel/iommu_groups/34/devices/0000:07:00.1
  20. Definitely magic. But that makes sense, thanks again. Now I'm just getting error code 43 in the Device Manager, unfortunately.
  21. This worked like a charm! Thank you very much! And if you have time, I would love an explanation of what this just did?
  22. Hoping this is what you were asking for: /sys/kernel/iommu_groups/30/devices/0000:02:00.0 /sys/kernel/iommu_groups/30/devices/0000:02:00.1 Full list: find /sys/kernel/iommu_groups/ -type l /sys/kernel/iommu_groups/0/devices/0000:ff:00.0 /sys/kernel/iommu_groups/0/devices/0000:ff:00.1 /sys/kernel/iommu_groups/1/devices/0000:ff:02.0 /sys/kernel/iommu_groups/1/devices/0000:ff:02.1 /sys/kernel/iommu_groups/1/devices/0000:ff:02.4 /sys/kernel/iommu_groups/1/devices/0000:ff:02.5 /sys/kernel/iommu_groups/2/devices/0000:ff:03.0 /sys/kernel/iommu_groups/2/devices/0000:ff:03.1 /sys/kernel/iommu_groups/2/devices/0000:ff:03.2 /sys/kernel/iommu_groups/2/devices/0000:ff:03.4 /sys/kernel/iommu_groups/3/devices/0000:ff:04.0 /sys/kernel/iommu_groups/3/devices/0000:ff:04.1 /sys/kernel/iommu_groups/3/devices/0000:ff:04.2 /sys/kernel/iommu_groups/3/devices/0000:ff:04.3 /sys/kernel/iommu_groups/4/devices/0000:ff:05.0 /sys/kernel/iommu_groups/4/devices/0000:ff:05.1 /sys/kernel/iommu_groups/4/devices/0000:ff:05.2 /sys/kernel/iommu_groups/4/devices/0000:ff:05.3 /sys/kernel/iommu_groups/5/devices/0000:ff:06.0 /sys/kernel/iommu_groups/5/devices/0000:ff:06.1 /sys/kernel/iommu_groups/5/devices/0000:ff:06.2 /sys/kernel/iommu_groups/5/devices/0000:ff:06.3 /sys/kernel/iommu_groups/6/devices/0000:fe:00.0 /sys/kernel/iommu_groups/6/devices/0000:fe:00.1 /sys/kernel/iommu_groups/7/devices/0000:fe:02.0 /sys/kernel/iommu_groups/7/devices/0000:fe:02.1 /sys/kernel/iommu_groups/7/devices/0000:fe:02.4 /sys/kernel/iommu_groups/7/devices/0000:fe:02.5 /sys/kernel/iommu_groups/8/devices/0000:fe:03.0 /sys/kernel/iommu_groups/8/devices/0000:fe:03.1 /sys/kernel/iommu_groups/8/devices/0000:fe:03.2 /sys/kernel/iommu_groups/8/devices/0000:fe:03.4 /sys/kernel/iommu_groups/9/devices/0000:fe:04.0 /sys/kernel/iommu_groups/9/devices/0000:fe:04.1 /sys/kernel/iommu_groups/9/devices/0000:fe:04.2 /sys/kernel/iommu_groups/9/devices/0000:fe:04.3 /sys/kernel/iommu_groups/10/devices/0000:fe:05.0 
/sys/kernel/iommu_groups/10/devices/0000:fe:05.1 /sys/kernel/iommu_groups/10/devices/0000:fe:05.2 /sys/kernel/iommu_groups/10/devices/0000:fe:05.3 /sys/kernel/iommu_groups/11/devices/0000:fe:06.0 /sys/kernel/iommu_groups/11/devices/0000:fe:06.1 /sys/kernel/iommu_groups/11/devices/0000:fe:06.2 /sys/kernel/iommu_groups/11/devices/0000:fe:06.3 /sys/kernel/iommu_groups/12/devices/0000:00:00.0 /sys/kernel/iommu_groups/13/devices/0000:00:01.0 /sys/kernel/iommu_groups/14/devices/0000:00:03.0 /sys/kernel/iommu_groups/15/devices/0000:00:05.0 /sys/kernel/iommu_groups/16/devices/0000:00:07.0 /sys/kernel/iommu_groups/17/devices/0000:00:08.0 /sys/kernel/iommu_groups/18/devices/0000:00:09.0 /sys/kernel/iommu_groups/19/devices/0000:00:0a.0 /sys/kernel/iommu_groups/20/devices/0000:00:10.0 /sys/kernel/iommu_groups/20/devices/0000:00:10.1 /sys/kernel/iommu_groups/21/devices/0000:00:11.0 /sys/kernel/iommu_groups/21/devices/0000:00:11.1 /sys/kernel/iommu_groups/22/devices/0000:00:14.0 /sys/kernel/iommu_groups/22/devices/0000:00:14.1 /sys/kernel/iommu_groups/22/devices/0000:00:14.2 /sys/kernel/iommu_groups/22/devices/0000:00:14.3 /sys/kernel/iommu_groups/23/devices/0000:00:15.0 /sys/kernel/iommu_groups/24/devices/0000:00:1a.0 /sys/kernel/iommu_groups/24/devices/0000:00:1a.1 /sys/kernel/iommu_groups/24/devices/0000:00:1a.2 /sys/kernel/iommu_groups/24/devices/0000:00:1a.7 /sys/kernel/iommu_groups/25/devices/0000:00:1c.0 /sys/kernel/iommu_groups/25/devices/0000:08:00.0 /sys/kernel/iommu_groups/25/devices/0000:08:00.1 /sys/kernel/iommu_groups/26/devices/0000:00:1d.0 /sys/kernel/iommu_groups/26/devices/0000:00:1d.1 /sys/kernel/iommu_groups/26/devices/0000:00:1d.2 /sys/kernel/iommu_groups/26/devices/0000:00:1d.7 /sys/kernel/iommu_groups/27/devices/0000:00:1e.0 /sys/kernel/iommu_groups/28/devices/0000:00:1f.0 /sys/kernel/iommu_groups/28/devices/0000:00:1f.2 /sys/kernel/iommu_groups/28/devices/0000:00:1f.3 /sys/kernel/iommu_groups/28/devices/0000:00:1f.5 
/sys/kernel/iommu_groups/29/devices/0000:01:00.0 /sys/kernel/iommu_groups/30/devices/0000:02:00.0 /sys/kernel/iommu_groups/30/devices/0000:02:00.1 /sys/kernel/iommu_groups/31/devices/0000:05:00.0 /sys/kernel/iommu_groups/32/devices/0000:05:00.1 /sys/kernel/iommu_groups/33/devices/0000:06:00.0 /sys/kernel/iommu_groups/34/devices/0000:06:00.1 Also yes, you are correct about the video card and audio.
  23. Hi. I'm pretty new to Unraid, and I recently set up my Fujitsu RX300 S6 with it. I just added an Nvidia GT730 GPU, as I needed a small bit of GPU power in one of my VMs. I powered down my Win10 VM, chose the GPU from the dropdown, saved, and tried to start the VM again. But I get the following error: internal error: early end of file from monitor, possible problem: 2017-02-13T18:47:55.090296Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio: failed to set iommu for container: Operation not permitted 2017-02-13T18:47:55.090349Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio: failed to setup container for group 30 2017-02-13T18:47:55.090361Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio: failed to get group 30 2017-02-13T18:47:55.090379Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x5: Device initialization failed My XML for the VM: <domain type='kvm'> <name>Windows 10</name> <uuid>d5107eec-6391-33a8-4069-3c8d8580b402</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <memoryBacking> <nosharepages/> <locked/> </memoryBacking> <vcpu placement='static'>4</vcpu> <cputune> <vcpupin vcpu='0' cpuset='4'/> <vcpupin vcpu='1' cpuset='5'/> <vcpupin vcpu='2' cpuset='6'/> <vcpupin vcpu='3' cpuset='7'/> </cputune> <os> <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/d5107eec-6391-33a8-4069-3c8d8580b402_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor id='none'/> </hyperv> </features> <cpu mode='host-passthrough'> <topology sockets='1' 
cores='4' threads='1'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Windows 10/vdisk1.img'/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:b4:46:48'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> 
<target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <source mode='connect'/> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </hostdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </memballoon> </devices> </domain> Any help would be greatly appreciated
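(For anyone landing here with the same "vfio: failed to set iommu for container: Operation not permitted" error: on older platforms like this X58-generation Fujitsu, a commonly reported cause is missing interrupt remapping support, and the workaround usually discussed in passthrough threads is the vfio_iommu_type1 allow_unsafe_interrupts flag. Whether that is what resolved this particular thread is an assumption; the sketch below only checks the relevant state, read-only:)

```shell
#!/bin/sh
# Read-only diagnostic for the vfio "Operation not permitted" container error.
# 1) Is the vfio_iommu_type1 module loaded, and what are its parameters?
grep -r . /sys/module/vfio_iommu_type1/parameters/ 2>/dev/null \
  || echo "vfio_iommu_type1 not loaded"
# 2) Does the kernel log mention interrupt remapping? Missing support is the
#    classic trigger for this exact error on older chipsets.
dmesg 2>/dev/null | grep -i 'remapping' || echo "no remapping messages found"
```

If remapping really is unsupported, the usual (security-relaxing) workaround is `echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts`, or the equivalent modprobe option, before starting the VM.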