Endy

Members
  • Posts: 212
  • Joined
  • Last visited

Everything posted by Endy

  1. I just picked up a PCIe USB card and passed it through to my Windows 10 VM. Mouse and keyboard are working just fine, but I am not getting any audio from my USB speakers. I tried changing the VM settings from EHCI to XHCI to see if that would make a difference, and that gave me a "no bootable disk" message. Any ideas? Here's the XML:

<domain type='kvm' id='4'>
  <name>Windows 10 Gaming</name>
  <uuid>ef65f514-29aa-9a99-8e4f-6144f41ef670</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 Gaming/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 Gaming/vdisk2.img'/>
      <backingStore/>
      <target dev='hdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:ad:19:3f'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-Windows 10 Gaming/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
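For reference, switching from EHCI to xHCI by hand means replacing all four USB controller entries (the ich9-ehci1 plus its three ich9-uhci companions) with a single xHCI controller; a leftover UHCI entry after a partial edit is one plausible cause of odd behavior, and since the USB controller by itself shouldn't affect booting, it's also worth checking whether the edit changed the disk definitions. A minimal sketch of the replacement block, assuming the same index and the existing slot 0x07 address:

```xml
<!-- Hypothetical sketch: this single controller replaces the
     ich9-ehci1 + three ich9-uhci companion controllers above.
     'nec-xhci' is the xHCI model available in QEMU/libvirt of
     this era; the PCI slot reuses the existing 0x07 address. -->
<controller type='usb' index='0' model='nec-xhci'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
```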
  2. I've recently done a little upgrading to my Unraid server hardware so that I could do more with VM's (no VT-d previously). This has allowed me to combine my main pc and Unraid server. So far it's been going very well (minus a few problems along the way). Current hardware specs are in my sig. A problem with this build is that I did not plan out and buy hardware specifically for Unraid and VM's. It wasn't in my budget to go out and purchase a whole new system, so I am using what I had on hand. Where this becomes most noticeable is with the processor and the fact that it doesn't do hyperthreading. Currently for Dockers I mostly use Emby and HDHomeRun DVR. Currently no transcoding involved, but 1 or 2 transcoded streams might be possible later. Nothing heavy as far as plugins. For VM's I have a Windows 10 VM for gaming. I will probably do some testing with pfSense that would run at the same time as Windows 10. I would also probably create a few other VM's, but I am not sure what exactly, or if any of them would run at the same time as the Windows 10 VM. I don't think I'll get very far trying to do that with my current i5 processor, so I am thinking of getting an i7 7700 or i7 7700K (I am pretty sure my motherboard supports these with a BIOS update, which I have already done). This would get me a slightly faster processor and, more importantly, would add hyperthreading. Is there any reason to pick one of those 2 processors over the other? (I believe some of the older K models lacked VT-d, which would rule out passthrough.) Just don't want any surprises. (Like how I missed that the i5's don't do hyperthreading.) I'm leaning towards the base i7 7700. It's cheaper, comes with a heatsink and fan, and I don't plan on doing any overclocking. I would look at other options, but they would need to fall into the same price range, and gaming performance really shouldn't be less than my current i5. I am also overdue to upgrade my graphics card.
I am looking at this one: https://www.amazon.com/dp/B01JD2OSX0/ref=wl_it_dp_o_pC_nS_ttl?_encoding=UTF8&colid=1NVAZ99I8CCFB&coliid=I2OOLP8GA3W2I I'm looking at that one because of its smaller size. My current graphics card almost didn't fit. Any problems that I might run into with a smaller card? (Such as heat or performance, etc.) It should be obvious that I am not trying to squeeze out every last drop of performance, but I do want a decent performing rig. While I am at it, a USB card to pass through to Windows 10 would also be a good idea. Have there been any compatibility issues with USB cards? I think this one has been recommended: https://www.amazon.com/dp/B00FPIMJEW/ref=wl_it_dp_o_pC_nS_ttl?_encoding=UTF8&colid=1NVAZ99I8CCFB&coliid=I26AC8PCBC64KG But this one is a little cheaper: https://www.amazon.com/dp/B011LZY20G/ref=wl_it_dp_o_pC_nS_ttl?_encoding=UTF8&colid=1NVAZ99I8CCFB&coliid=I3M7UARF3YVM21&psc=1 While I am pretty sure of what I am getting, sometimes it really helps just to get someone else's opinion, especially if there's anything I may not have thought about.
  3. My understanding is that first you would want to try another slot on the motherboard if there is one and it's available. The next thing you could try would be enabling the pcie acs override in vm manager settings in the advanced view. I do not know if there are any dangers in doing this and I think it's considered a last resort option. I am new to this so someone with more knowledge might have a better idea or be able to clarify anything I've mentioned.
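For context, the PCIe ACS Override toggle in VM Manager ultimately amounts to a kernel boot parameter written into syslinux.cfg on the flash drive. A sketch of roughly what the resulting entry looks like (the label and the rest of the append line are the stock defaults and may differ per install):

```
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream initrd=/bzroot
```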
  4. Gotcha. So right now, the onboard video is the primary and that's why I didn't need to get the rom to get it working so far. If I were to switch the Nvidia card to be the primary, then I would need to get the rom. I've read something about how a video card without UEFI support won't work with OVMF, and my card is an older one, so that could be it. I didn't find a lot of information about that, so I'm not confident in that info at the moment. I may be able to pick up a new video card sooner than I thought, and at that time I'll probably do some more playing around.
  5. Thanks for replying. I was saying that it does NOT work with OVMF. My converted vdisk (yes, I had followed the guide you linked to and created the vdisk image from a physical disk) would only work as OVMF, but passthrough only works with Seabios. So I could have a fresh install with Seabios and passthrough working, or I could have my old install with OVMF and passthrough not working.

Quoting -d: "Some motherboards allow selection of the primary GPU from the BIOS. Some added this feature in newer BIOS firmware updates. YMMV. Not sure if it helps, but your GTX 660 might require you to specify the rom in the VM XML (that was the case with my 550 Ti). For this, you need to have the rom as a file on the array so that Unraid can read it and use it during boot of the VM (so you need to dump it yourself, using the guides in this forum, or obtain it from some place - techpowerup - again, YMMV in terms of compatibility). Also try to put the 660 in the second slot. Mine didn't work at all for passthrough in the first slot (closest to the CPU) even after the rom was used."

In the BIOS I can select whether or not to use the onboard video as primary. I thought the rom was only needed if there was just 1 video source in the system and you were trying to pass it through? There is no room to move the video card to the second slot. (Or the 3rd, for that matter.) There was a bit of a struggle to get enough room to put it in the first slot. At this point, it's mostly academic. I needed to get something up and running, so I went ahead and created a new VM, did a fresh install, and am in the process of getting it all set up. I do still have the other VM and I wouldn't mind trying to get it working just for the sake of figuring it out.
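For anyone who wants to try the rom approach being discussed here, libvirt's syntax for it is a <rom file=.../> element inside the GPU's hostdev entry. A sketch, assuming the card is at 01:00.0 as elsewhere in this thread (the rom path is hypothetical):

```xml
<hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- hypothetical path to a dumped GTX 660 vbios stored on the array -->
  <rom file='/mnt/user/Data/gtx660.rom'/>
</hostdev>
```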
  6. Quoting the question: "If the game has been moved from the cache disk to disk3, should I specify in my share settings to use only disk3? At the beginning I had: included disks: all; Use cache disk: only. But now do I need to change it to: included disks: all (I assume Steam will access disk3 through /mnt/user/games); Use cache disk: no (so that all the writes while playing this game (i.e. downloading plugins) happen on disk3)?"

Nope. Keep it as cache only. (Assuming that you want new games to install to the cache first.) If a game is modifying files, it will modify them where they are. As far as downloading plugins, that gets beyond me. At worst, if a plugin you downloaded for a game did get put on the cache drive and you wanted it on disk3, you would just move it to disk3 manually. (This may also be where I need to mention the user share copy bug? Only copy files from user share to user share or disk to disk. Do not copy files from user share to disk or from disk to user share, or you risk losing data.)
  7. I am pretty sure that the behavior doesn't persist in safe mode since safe mode doesn't include the GUI. I can't test this behavior with another browser or pc because I do not currently have another one built.
  8. It's set for Real-time. So according to Help I should try lowering it. I'll try that and report back later. Nope. Still have to manually refresh. Changing it to Regular just added a little countdown at the bottom of the screen, but the page didn't refresh at the end of the countdown.
  9. It's set for Real-time. So according to Help I should try lowering it. I'll try that and report back later.
  10. I'm booting into the Unraid GUI and the GUI does not seem to be refreshing correctly. Like if I start the array, the screen should refresh to show the array has now started, but instead the drive assignments get to the point where they are grayed out and then just sits there. If I then go to Dashboard and then come back to Main, it will now show that the array has started. (I can also just click on Main to refresh the page and that works as well.) This happens when shutting down the array and I notice the same type of thing when starting and stopping VM's. (The VM's were doing that in 6.2.4 as well.) Everything else seems to be working normally with the update to 6.3.0 and it's just a small annoyance at this point. turtle-diagnostics-20170205-1220.zip
  11. By setting a share as cache only, that's telling Unraid to put new stuff on the cache only. It doesn't tell Unraid to do anything with the old stuff. It's like having a share that starts off including all disks with none excluded. Later on you change it to exclude disk3. Unraid will no longer put new stuff on disk3, but if the folder structure for that share is still on disk3, it's still included in the share. Unraid is not going to delete or move stuff off of disk3 just because you have now excluded it.
  12. Not an expert, but here's how I think this works. The /mnt/disk1/games/ is not a separate share. You can't have 2 shares called games. You create a cache only share called games. Then you can manually add a /mnt/disk1/games folder and anything in that folder will automatically be part of the games share.
  13. I know pretty much nothing, but I have been playing around with this a bit today. I think maybe what you want to use is pci-stub.ids=1106:3403. I could be wrong. If you're not sure whether you have VT-d or not, in the Unraid GUI click on Info in the upper right hand corner just below Uptime. IOMMU should say Enabled. If not, you either have to enable it in the bios or you don't have it.
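For anyone following along, pci-stub.ids goes on the append line in syslinux.cfg on the flash drive. A sketch of roughly what that looks like (1106:3403 is the vendor:device ID from this discussion; the label and the rest of the line are the stock defaults and may differ per install):

```
label unRAID OS
  menu default
  kernel /bzimage
  append pci-stub.ids=1106:3403 initrd=/bzroot
```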
  14. Thanks to some help from the people here, I was able to get a fresh Windows 10 VM install working great. Video card passthrough is working just fine. What I was trying to do was to take the Windows 10 install from my main pc (the components of which are now being used for Unraid) and convert that to a VM. I have it converted to a VM and it works with VNC, but when I add in the video card to the mix nothing is getting sent to the monitor. Initially I had created the VM using Seabios, but that got stuck at "Booting from hard disk" so I recreated the VM using OVMF. That got the VM working, but like I said, not when trying to use a video card. I tried enabling the pcie ACS override, but that didn't work either. (Apparently I still don't understand the IOMMU groups that well, because it seems to be unnecessary anyways.) That's when I created the fresh Windows 10 VM install. I used Seabios to see if that made a difference and sure enough, it worked. Does this mean that I am out of luck trying to get my old Windows 10 install working or is there something else I can do? I just upgraded to Unraid 6.3.0 and I noticed that it now gives the option to use the onboard Intel video. I'm currently using the onboard video for the Unraid GUI and the GTX 660 for VM's. If I can't get the GTX 660 working, can I swap and get Unraid to use the GTX 660 and the VM's to use the Intel onboard video and would that make a difference? I'll include the VM XML and if there's anything else that's needed just let me know. (And yes, I know I am running the VM on the array... I don't have the space on the cache drive for the image. I have another SSD that is the disk that the original Windows 10 install is on that I will use to store the VM later.) 
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>ea4e5277-c3eb-6c0f-e95c-60c5e2e86838</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/ea4e5277-c3eb-6c0f-e95c-60c5e2e86838_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Data/ISO/virtio-win-0.1.130.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disk1/Data/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='ide'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:6a:ab:e7'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='4'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc313'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04d9'/>
        <product id='0xfa50'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1130'/>
        <product id='0x1620'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Note: I am doing a lot of this as a test and, provided this all goes well, I will be updating some of the hardware, but I just can't do that for another month or two at least.
  15. I took out the isolcpus lines, rebooted, and started the VM to see what would happen, and the VM is still ok. Programs open up like they should and the mouse is behaving normally. If it's working now, should I just leave out the isolcpus lines? I guess if this starts happening again I should post the full diagnostics.
  16. It was a big enough difference that I doubt it was a placebo. I'll do some testing tomorrow just to double check and I'll report back. Thank you for the help.
  17. Was that yes to both? Initially I did it just like in my previous post and rebooted. Before starting the VM, there was still some activity on cores 2 and 3. Started up the VM and it seemed much better, more like it should be. Assuming that saarg was saying yes to both, I added the isolcpus bit to the unRAID OS GUI mode entry as well. Rebooted, and cores 2 and 3 were at 0 before the VM was started. So if it did indeed need to be added to both, that would mean that the problem was something else and for some reason rebooting the first time fixed it. I don't know what that could be, because I didn't make any other changes. (There was an error about irq 16 in the logs that seems to be gone after the first reboot. Something about "nobody cared"?) I am so confused. I guess I'll use this for a little bit and see if it stays like this.
  18. Thanks Squid. Ok so here is what I did. Is this correct? I have the VM set to use just cores 2 and 3, so those are the 2 cores I wanted to isolate. Because I have been booting into the GUI mode, do I have to add that to the unRAID OS GUI mode as well?
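To make the steps being discussed concrete, here's roughly what syslinux.cfg looks like with cores 2 and 3 isolated; since isolcpus is a per-entry kernel parameter, booting the GUI mode entry means it has to be on that entry's append line too. A sketch (the stock labels and initrd values may differ per install):

```
label unRAID OS
  kernel /bzimage
  append isolcpus=2,3 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append isolcpus=2,3 initrd=/bzroot,/bzroot-gui
```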
  19. Ok, where do I find out how to do that? I must not be searching for the right thing.
  20. I wasn't sure if there were thread pairs without hyperthreading. I'll try that. What do you mean about isolating the cores? Is there something to do besides selecting which cores to use in the VM settings? I've only had 1, 2, and 3 selected leaving 0 for Unraid.
  21. When I first installed this VM, the video card was using the Nvidia driver that Windows 10 installed. That had a lot of issues: not being able to start the VM every time, crashing, and some weird visual corruption. Managed to get the most current driver from Nvidia, 378.49. No more crashing or anything. Plex, Emby, TVHeadend, DelugeVPN for dockers. Then some of the basic plugins like CA, UD, Preclear, Nerd Tools, User Scripts, Dynamix Local Master, Dynamix Cache Directories (installed, but not set up), Dynamix System Temp (which disappeared on me at one point yesterday and I had to reinstall it). CPU utilization is currently sitting around 15-25% on the 3 cores assigned to the VM. About 5-9% on the leftover core. There were issues yesterday where 1 assigned core would peg 100%, but I can't recall whether that was on both VM's or just one of them. It does not seem to be happening now. Sometimes I try to move the mouse and it doesn't want to move right. Extremely slow, not steady, and not in the exact direction I try to move it. At the same time, trying to open a program will take forever, even just opening the start menu.
  22. I switched USB Mode to XHCI and that helped. It is no longer painfully slow to browse the web. There is still noticeable lag, though. (I also moved the vm to the correct /mnt/cache/domains/Windows 10 Test/vdisk1.img location, but of course that didn't affect anything.)
  23. Heh... that's the default location and I apparently forgot to change it to the appropriate location. But it's on my SSD cache drive, no cache pool. Unraid version is 6.2.4
  24. Today I finally had the chance to swap out the motherboard and processor that I've been using with Unraid. This gives me the option to run VM's with hardware passthrough which I wasn't able to do before. My plan is to consolidate my Windows 10 pc and my Unraid server. First I converted the Windows 10 install to an image following the wiki. I started by using Seabios, but that had the "Booting from Hard Disk" error so I switched to OVMF. That worked, except that I couldn't get it to passthrough my video card. This may be because of IOMMU so I enabled the PCIe Override. I still wasn't getting any output from the video card. So next I created a Windows 10 VM from scratch and used Seabios. The video card does work with this one. However, the performance is pretty awful. Everything is moving very slowly. It feels very laggy. I'm not sure what to do from here. I guess I have 2 questions, what can I do to improve the performance and is there some way I can get the Windows 10 install that's currently using OVMF to work with the video card passthrough? Not sure what all info is needed so if I miss something lt me know. (It should be noted that I do plan to upgrade the processor to one with Hyperthreading and I also plan to upgrade the video card as well. however, that won't be for awhile yet and I wanted to do some testing with my current hardware first.) Model: Custom M/B: Gigabyte Technology Co., Ltd. - Z170X-Gaming 7 CPU: Intel® Core™ i5-6500 CPU @ 3.20GHz HVM: Enabled IOMMU: Enabled Cache: 256 kB, 1024 kB, 6144 kB Memory: 16 GB (max. 
installable capacity 64 GB) Network: eth0: 1000 Mb/s, full duplex, mtu 1500 eth1: not connected eth2: not connected eth3: not connected Video card is a Nvidia GeForce GTX 660 PCIe Devices: 00:00.0 Host bridge [0600]: Intel Corporation Skylake Host Bridge/DRAM Registers [8086:191f] (rev 07) 00:01.0 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x16) [8086:1901] (rev 07) 00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 530 [8086:1912] (rev 06) 00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31) 00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31) 00:17.0 SATA controller [0106]: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] [8086:a102] (rev 31) 00:1b.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Root Port #17 [8086:a167] (rev f1) 00:1b.2 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Root Port #19 [8086:a169] (rev f1) 00:1b.3 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Root Port #20 [8086:a16a] (rev f1) 00:1c.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #1 [8086:a110] (rev f1) 00:1c.1 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #2 [8086:a111] (rev f1) 00:1c.2 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #3 [8086:a112] (rev f1) 00:1c.4 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #5 [8086:a114] (rev f1) 00:1d.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #9 [8086:a118] (rev f1) 00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a145] (rev 31) 00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31) 00:1f.3 Audio device [0403]: Intel Corporation Sunrise Point-H HD Audio [8086:a170] (rev 31) 00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus 
[8086:a123] (rev 31) 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8] (rev 31) 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106 [GeForce GTX 660] [10de:11c0] (rev a1) 01:00.1 Audio device [0403]: NVIDIA Corporation GK106 HDMI Audio Controller [10de:0e0b] (rev a1) 03:00.0 PCI bridge [0604]: Pericom Semiconductor Device [12d8:2304] (rev 05) 04:01.0 PCI bridge [0604]: Pericom Semiconductor Device [12d8:2304] (rev 05) 04:02.0 PCI bridge [0604]: Pericom Semiconductor Device [12d8:2304] (rev 05) 05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 0c) 06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 0c) 07:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02) 0a:00.0 Ethernet controller [0200]: Qualcomm Atheros Killer E2400 Gigabit Ethernet Controller [1969:e0a1] (rev 10) 0b:00.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578] 0c:00.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578] 0c:01.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578] 0c:02.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578] 0c:04.0 PCI bridge [0604]: Intel Corporation DSL6540 Thunderbolt 3 Bridge [Alpine Ridge 4C 2015] [8086:1578] 0f:00.0 USB controller [0c03]: Intel Corporation DSL6540 USB 3.1 Controller [Alpine Ridge] [8086:15b6] IOMMU Groups: /sys/kernel/iommu_groups/0/devices/0000:00:00.0 /sys/kernel/iommu_groups/1/devices/0000:00:01.0 /sys/kernel/iommu_groups/1/devices/0000:01:00.0 /sys/kernel/iommu_groups/1/devices/0000:01:00.1 /sys/kernel/iommu_groups/2/devices/0000:00:02.0 
/sys/kernel/iommu_groups/3/devices/0000:00:14.0
/sys/kernel/iommu_groups/4/devices/0000:00:16.0
/sys/kernel/iommu_groups/5/devices/0000:00:17.0
/sys/kernel/iommu_groups/6/devices/0000:00:1b.0
/sys/kernel/iommu_groups/6/devices/0000:00:1b.2
/sys/kernel/iommu_groups/6/devices/0000:00:1b.3
/sys/kernel/iommu_groups/6/devices/0000:03:00.0
/sys/kernel/iommu_groups/6/devices/0000:04:01.0
/sys/kernel/iommu_groups/6/devices/0000:04:02.0
/sys/kernel/iommu_groups/6/devices/0000:05:00.0
/sys/kernel/iommu_groups/6/devices/0000:06:00.0
/sys/kernel/iommu_groups/6/devices/0000:07:00.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.1
/sys/kernel/iommu_groups/7/devices/0000:00:1c.2
/sys/kernel/iommu_groups/7/devices/0000:00:1c.4
/sys/kernel/iommu_groups/7/devices/0000:0a:00.0
/sys/kernel/iommu_groups/7/devices/0000:0b:00.0
/sys/kernel/iommu_groups/7/devices/0000:0c:00.0
/sys/kernel/iommu_groups/7/devices/0000:0c:01.0
/sys/kernel/iommu_groups/7/devices/0000:0c:02.0
/sys/kernel/iommu_groups/7/devices/0000:0c:04.0
/sys/kernel/iommu_groups/7/devices/0000:0f:00.0
/sys/kernel/iommu_groups/8/devices/0000:00:1d.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.2
/sys/kernel/iommu_groups/9/devices/0000:00:1f.3
/sys/kernel/iommu_groups/9/devices/0000:00:1f.4
/sys/kernel/iommu_groups/10/devices/0000:00:1f.6

XML from the fresh Windows 10 install:

<domain type='kvm' id='5'>
  <name>Windows 10 Test</name>
  <uuid>ad76896b-af8d-17f4-9007-b07b6d5d573c</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='3' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/appdata/Windows 10 Test/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Data/ISO/Windows 10/Windows.iso'/>
      <backingStore/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Data/ISO/virtio-win-0.1.130.iso'/>
      <backingStore/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:cd:65:97'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Windows 10 Test/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc313'/>
        <address bus='1' device='9'/>
      </source>
      <alias name='hostdev2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04d9'/>
        <product id='0xfa50'/>
        <address bus='1' device='7'/>
      </source>
      <alias name='hostdev3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1130'/>
        <product id='0x1620'/>
        <address bus='1' device='5'/>
      </source>
      <alias name='hostdev4'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>
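For reference, the vendor/product IDs in the USB hostdev blocks above come from `lsusb` output (e.g. `ID 046d:c313`). This is a rough sketch of turning one such line into the XML stanza libvirt expects; the sample line is hardcoded so it runs without real hardware, and the `sed` pattern is just an illustration, not the way unRAID itself builds the template:

```shell
# Sample lsusb line (hypothetical device name for the 046d:c313 keyboard above)
line='Bus 001 Device 009: ID 046d:c313 Logitech, Inc. Keyboard'

# Pull out the vendor:product pair after "ID "
ids=$(echo "$line" | sed -n 's/.*ID \([0-9a-f]*\):\([0-9a-f]*\).*/\1 \2/p')
vendor=$(echo "$ids" | cut -d' ' -f1)
product=$(echo "$ids" | cut -d' ' -f2)

# Emit the <source> children used in a USB <hostdev> entry
printf "<vendor id='0x%s'/>\n<product id='0x%s'/>\n" "$vendor" "$product"
# prints:
# <vendor id='0x046d'/>
# <product id='0xc313'/>
```

On a live system you would replace the hardcoded `line` with real `lsusb` output; matching by vendor/product ID (rather than bus/device address) survives replugging the device on a different port.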
  25. What trurl said. Here's a link with some info you might find useful. https://lime-technology.com/wiki/index.php/Shrink_array