Shadz

Everything posted by Shadz

  1. I tried adjusting "interrupt moderation" on the W11 computer I'm transferring from, per this thread, but it made no real difference. Any other ideas?
  2. I switched to W10 and now it's even worse, capped at 1 Gbps, even though everything is using virtio drivers. Help please! Transferring within the VM (980 -> SX8200) gives me 1.8 GBps, so it's not the drive itself.
  3. Hi all, I'm racking my brain trying to figure out why my W11 Pro VM is only transferring ~250 MBps to my Unraid cache. As I understand it, the SX8200PNP drives sustain ~3000 MBps even after the cache is full, so this result is much slower than I'd expect. I'll list components below along w/ things I've tried:
Components
- 11700K on Asus Strix Z590-E
- 2.5 Gbps Intel I-225V network port
- W11 VM: 10 cores, 24 GB DDR4 RAM
- Samsung 980 Pro 2 TB passed through via config (not IOMMU)
- AData SX8200PNP 2 TB passed through via config
- Unraid cache: AData SX8200PNP 2 TB, btrfs
- Array: 13x WD 14 TB (shucked, aka Red Plus)
- Switch: Netgear 2.5 GbE
Tried
- Using the Netgear 2.5 GbE switch
- Removing the cache pool (used to be RAID0, 2x 2 TB AData)
- Switching from e1000 to virtio-net to virtio
Some questions:
1) The virtio driver says 10G, which is ofc emulated, so it shouldn't be affected by the hardware 2.5 GbE, correct?
2) Transfers from the VM to the Unraid cache aren't actually going through my network card, correct?
Thanks for the help!
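For reference, a quick back-of-the-envelope check on the numbers in the post (the ~6% protocol-overhead figure is an assumption, not a measurement): ~250 MBps sits almost exactly at a 2.5 GbE ceiling, which hints the transfer is being limited by a network path at that rate rather than by the SSDs themselves.

```python
# Convert link rates (Gbit/s) to approximate usable MB/s ceilings.
# The observed ~250 MB/s is right at the 2.5 GbE line rate, suggesting
# the VM -> cache copy is taking a network-rate-limited path rather
# than being bottlenecked by the NVMe drives (~3000 MB/s).
def usable_mb_per_s(gbit_per_s, overhead=0.06):
    """Approximate usable MB/s for a link, assuming ~6% protocol overhead."""
    return gbit_per_s * 1000 / 8 * (1 - overhead)

for rate in (1.0, 2.5, 10.0):
    print(f"{rate} GbE ~= {usable_mb_per_s(rate):.0f} MB/s usable")
```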
  4. I did this but now transfers are limited to 1 Gbps. Is there a way to revert this to virtio-net afterwards (and still have it work)? When I changed to virtio, Windows complains it doesn't have a network card ><"
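For anyone following along: switching the NIC model back is a one-attribute change in the VM's XML. A sketch below, with the MAC, bridge, and PCI slot copied from the interface block quoted in these posts as placeholders; if Windows loses the NIC after the change, reinstalling the NetKVM driver from the virtio ISO usually brings it back.

```xml
<interface type='bridge'>
  <mac address='52:54:00:1a:f3:1b'/>
  <source bridge='br0'/>
  <!-- change type='virtio' back to type='virtio-net' here -->
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
```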
  5. Hi, I'm having trouble after passing through an NVMe drive (I used to pass through a SATA drive instead) to my W10 VM.
Asus Z270-A Prime, Nvidia 1030, Intel i7-6700K, NVMe: Samsung 970 EVO Plus 500 GB
Some debugging info:
- VT-d is on
- The NVMe drive has a bare-metal installed version of W10 Pro (same as the SATA drive prior)
- I followed SpaceInvaderOne's instructions to set function 0x1 properly for the GPU.
Here's the template:
<?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>Windows 10 Gaming</name> <uuid>4395501d-a8b5-1d47-2da6-8e76fad67cd1</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>6</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='5'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='6'/> <vcpupin vcpu='4' cpuset='3'/> <vcpupin vcpu='5' cpuset='7'/> </cputune> <os> <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/4395501d-a8b5-1d47-2da6-8e76fad67cd1_VARS-pure-efi.fd</nvram> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none' migratable='on'> <topology sockets='1' dies='1' cores='3' threads='2'/> <cache mode='passthrough'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <controller type='pci' index='0' model='pci-root'/> <controller type='virtio-serial' 
index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <controller type='usb' index='0' model='qemu-xhci' ports='15'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:1a:f3:1b'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <address type='usb' bus='0' port='2'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x046d'/> 
<product id='0xc52b'/> </source> <address type='usb' bus='0' port='1'/> </hostdev> <memballoon model='none'/> </devices> </domain>
Here's the error log:
-blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \ -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/4395501d-a8b5-1d47-2da6-8e76fad67cd1_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \ -machine pc-i440fx-5.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \ -cpu host,migratable=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off \ -m 16384 \ -overcommit mem-lock=off \ -smp 6,sockets=1,dies=1,cores=3,threads=2 \ -uuid 4395501d-a8b5-1d47-2da6-8e76fad67cd1 \ -display none \ -no-user-config \ -nodefaults \ -chardev socket,id=charmonitor,fd=32,server,nowait \ -mon chardev=charmonitor,id=monitor,mode=control \ -rtc base=localtime \ -no-hpet \ -no-shutdown \ -boot strict=on \ -device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x7 \ -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \ -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 \ -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:1a:f3:1b,bus=pci.0,addr=0x2 \ -chardev pty,id=charserial0 \ -device isa-serial,chardev=charserial0,id=serial0 \ -chardev socket,id=charchannel0,fd=36,server,nowait \ -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ -device usb-tablet,id=input0,bus=usb.0,port=2 \ -device 
vfio-pci,host=0000:02:00.0,id=hostdev0,bus=pci.0,addr=0x3 \ -device vfio-pci,host=0000:02:00.1,id=hostdev1,bus=pci.0,multifunction=on,addr=0x5 \ -device vfio-pci,host=0000:06:00.0,id=hostdev2,bus=pci.0,addr=0x5.0x1 \ -device vfio-pci,host=0000:07:00.0,id=hostdev3,bus=pci.0,addr=0x8 \ -device usb-host,hostbus=1,hostaddr=4,id=hostdev4,bus=usb.0,port=1 \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on 2022-02-23 23:27:56.152+0000: Domain id=5 is tainted: high-privileges 2022-02-23 23:27:56.152+0000: Domain id=5 is tainted: host-cpu char device redirected to /dev/pts/0 (label charserial0) 2022-02-23T23:27:57.688818Z qemu-system-x86_64: -device vfio-pci,host=0000:02:00.0,id=hostdev0,bus=pci.0,addr=0x3: Failed to mmap 0000:02:00.0 BAR 3. Performance may be slow 2022-02-23T23:28:01.135519Z qemu-system-x86_64: vfio_err_notifier_handler(0000:06:00.0) Unrecoverable error detected. Please collect any data possible and then kill the guest IOMMU: IOMMU group 0: [8086:191f] 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers (rev 07) IOMMU group 1: [8086:1901] 00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 07) IOMMU group 2: [8086:1905] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 07) IOMMU group 3: [8086:a2af] 00:14.0 USB controller: Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 30de:6544 Kingston DataTraveler 2.0 Bus 001 Device 003: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS Bus 001 Device 004: ID 046d:c52b Logitech, Inc. 
Unifying Receiver Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub IOMMU group 4: [8086:a2ba] 00:16.0 Communication controller: Intel Corporation 200 Series PCH CSME HECI #1 IOMMU group 5: [8086:a282] 00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode] [2:0:0:0] disk ATA WDC WD140EDGZ-11 0A85 /dev/sdj 14.0TB [3:0:0:0] disk ATA WDC WD60EFRX-68M 0A82 /dev/sdk 6.00TB [4:0:0:0] disk ATA WDC WD140EDGZ-11 0A85 /dev/sdl 14.0TB [5:0:0:0] disk ATA WDC WD140EDGZ-11 0A85 /dev/sdm 14.0TB IOMMU group 6: [8086:a2e7] 00:1b.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #17 (rev f0) IOMMU group 7: [8086:a2eb] 00:1b.4 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #21 (rev f0) IOMMU group 8: [8086:a290] 00:1c.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #1 (rev f0) IOMMU group 9: [8086:a294] 00:1c.4 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #5 (rev f0) IOMMU group 10: [8086:a298] 00:1d.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #9 (rev f0) IOMMU group 11: [8086:a2c5] 00:1f.0 ISA bridge: Intel Corporation 200 Series PCH LPC Controller (Z270) [8086:a2a1] 00:1f.2 Memory controller: Intel Corporation 200 Series/Z370 Chipset Family Power Management Controller [8086:a2f0] 00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio [8086:a2a3] 00:1f.4 SMBus: Intel Corporation 200 Series/Z370 Chipset Family SMBus Controller IOMMU group 12: [8086:15b8] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V IOMMU group 13: [1000:0087] 01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05) [1:0:0:0] disk ATA WDC WD140EDFZ-11 0A81 /dev/sdb 14.0TB [1:0:1:0] disk ATA WDC WD140EDFZ-11 0A81 /dev/sdc 14.0TB [1:0:2:0] disk ATA WDC WD140EDFZ-11 0A81 /dev/sdd 14.0TB [1:0:3:0] disk ATA WDC WD120EDAZ-11 0A81 /dev/sde 12.0TB [1:0:4:0] disk ATA WDC WD140EDFZ-11 0A81 
/dev/sdf 14.0TB [1:0:5:0] disk ATA WDC WD120EFAX-68 0A81 /dev/sdg 12.0TB [1:0:6:0] disk ATA WDC WD120EMFZ-11 0A81 /dev/sdh 12.0TB [1:0:7:0] disk ATA WDC WD140EDFZ-11 0A81 /dev/sdi 14.0TB IOMMU group 14: [10de:1d01] 02:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1) [10de:0fb8] 02:00.1 Audio device: NVIDIA Corporation GP108 High Definition Audio Controller (rev a1) IOMMU group 15: [1cc1:8201] 03:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03) [N:0:1:1] disk ADATA SX8200PNP__1 /dev/nvme0n1 2.04TB IOMMU group 16: [1cc1:8201] 04:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03) [N:1:1:1] disk ADATA SX8200PNP__1 /dev/nvme1n1 2.04TB IOMMU group 17: [1b21:2142] 06:00.0 USB controller: ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller This controller is bound to vfio, connected USB devices are not visible. IOMMU group 18: [144d:a808] 07:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 This controller is bound to vfio, connected drives are not visible. Any help would be greatly appreciated!
  6. Thanks! I didn't read that far b/c I wasn't going to manually make the USB. I'll try this!
  7. I did do a backup way back when, but idiotic me put it on the server ><' And no, I wasn't using CA Backup / My Servers :(
  8. Hi, I was using a Cruzer Ultra Fit 3.1 (16 GB) and it suddenly died on me. It said it couldn't read/write... and I stupidly turned it off rather than backing it up immediately. Now it's completely dead (tried chkdsk) and I cannot recover the drive order. Help would be much appreciated. I've ordered 2x DataTraveler SE9s to prevent this from happening in the future.
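For anyone in the same spot: a minimal sketch of archiving the flash before it dies, so the drive order survives a dead stick. The paths are assumptions (Unraid mounts the flash at /boot; the destination share is hypothetical), and the demo below runs on a temporary directory rather than a real flash drive.

```python
# Archive a directory (e.g. the Unraid flash mounted at /boot) into a
# dated tarball elsewhere, preserving config files like the drive order.
import os
import tarfile
import tempfile
import time

def backup_flash(src="/boot", dest="/mnt/user/backups"):
    """Write src as a dated .tar.gz under dest and return the archive path."""
    os.makedirs(dest, exist_ok=True)
    out = os.path.join(dest, f"flash-backup-{time.strftime('%Y%m%d')}.tar.gz")
    with tarfile.open(out, "w:gz") as tar:
        tar.add(src, arcname=".")
    return out

# Demo on a temp dir; on Unraid you would just call backup_flash().
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "network.cfg"), "w") as f:
    f.write("demo config")
print(backup_flash(src=tmp, dest=tempfile.mkdtemp()))
```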
  9. Gotcha! I'll turn on turbo-write for large transfers. I recall reading that the virtio driver emulates a 10G NIC but my motherboard only has a 1G NIC. Am I technically bottlenecking myself at 1G because of it? Specifically, if I changed to a 10G NIC (and network switch), would I transfer to my cache drive at 10G instead?
  10. Hi all, I've been setting up UnRAID for the first time and having a blast. Now that the initial drive setup is done and my files are transferred, I've set up my parity drive. However, transfers to the drives are very slow (~50 MBps), even between the Win10 VM and the server itself. For comparison, disk-to-disk transfer via Krusader is ~150 MBps, and preclear/parity build was ~190 MBps. Transfer over the network before adding the parity drive was 130 MBps (in line w/ gigabit ethernet). These are all numbers without a cache drive. So my questions are: 1) How do I get past 50 MBps, or is this a limitation of the parity process? 2) Is it worth getting a faster NIC? It seems that when using the VM to send data to the share, I'm technically sending data to the router and back, and thus capped at my ethernet speed. If I got a 2.5 or 10 GbE NIC+switch, would I transfer closer to the max speed of the drives?
Relevant hardware:
CPU: Intel 6700k
RAM: 32 GB (16 allocated to W10 VM)
Mobo: Asus Z270-A Prime
NIC: integrated gigabit
Array: 5x 12 TB WD Red (incl. 1 parity)
Thanks!!
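For what it's worth, ~50 MBps is roughly what the standard single-parity read-modify-write cost predicts. A rough model (the factor of 4 is the textbook RMW operation count, not a measured value for this hardware):

```python
# Read-modify-write cost model for writes to a single-parity array:
# each write costs read(old data) + read(old parity) + write(data) +
# write(parity) = 4 disk operations, so sustained writes tend toward
# raw_speed / 4 unless turbo-write (reconstruct-write) is enabled.
raw_speed = 190  # MB/s, the preclear/parity-build speed quoted in the post

rmw_speed = raw_speed / 4
print(f"expected RMW write speed: ~{rmw_speed:.0f} MB/s")
```

That lands at ~48 MB/s, in line with the observed ~50 MBps, so the parity process itself (not the NIC) is the likely limit here.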
  11. I was able to get 6.9beta30 to boot w/ GUI mode only. I can't produce a diagnostic b/c it crashes/reboots immediately when booting headless. I imagine it's the new kernel reacting to the iGPU, which would explain why it works under GUI mode?
System:
CPU: Intel i7-6700k, integrated GPU
Mobo: Asus Z270-A Prime
GPU: Asus GT 1030
Drives: Lite-On 256 GB SATA M2, 2 TB AData XPG 8200 Pro, 5x 12 TB WD Red
  12. Hi, I think it would be really useful to see what kinds of CPU/motherboard/GPU/HBA combinations people are using [successfully] with Unraid, so we have an idea of what has worked and what will likely work. Sifting through forums is possible but often misses certain details. It shouldn't be hard to create a poll/profile where you enter your components and then see the results?
  13. I'm having a similar problem: the VM worked in 6.8.3, and upon upgrading to 6.9beta30, it completely dies. Even downgrading back to 6.8.3 doesn't fix it. After further restarts, Unraid doesn't even start...
  14. I tried reverting back to 6.8.3, but that doesn't fix it. As far as I can tell, the only difference when upgrading from 6.8.3 to 6.9beta30 was that the machine type went from i440fx-4.2 to 5.1. Going back to 6.8.3 changed it back to 4.2, but it doesn't work anymore. In a previous test, I had gotten this to work in 6.8.3, upgraded to 6.9, and it died. I thought it was something wrong w/ me, so I downgraded to 6.8.3 and it still didn't work. HOWEVER, a completely clean 6.8.3 will set up the VM properly and run it. And when I try to make a 6.9beta30 flash drive, it refuses to boot in my system...
  15. If I understand correctly, you could run two SSDs in your cache pool (mirrored), which effectively backs up your VM? That could get expensive, of course... Another option is to keep an SSD as an unassigned device and put your Plex cache data on that?
  16. Hiya, I'm new to Unraid 6.8.3 and just got my bare-metal W10 passthrough to boot, via SpaceInvaderOne's videos. The system is completely clean and works properly. Since the system is brand new anyway, I thought I'd upgrade to 6.9beta30 to make sure everything works. Unfortunately, the beta completely breaks my VM even though there aren't any crazy modifications (I only passed through an NVMe). Instead I get a mapping table? Please help!
System: Unraid 6.9beta30
CPU: Intel 6700k (Skylake, which supposedly means no ACS override, but multifunction+downstream does change IOMMU groups in practice)
Mobo: Asus Z270-A Prime
RAM: 32 GB
Cache: 256 GB M2 SATA
Array: 2x 3 TB WD Red
VM: Adata XPG 8200 Pro 2 TB, passed through
<?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>Windows 10</name> <uuid>8d89ce80-d7e2-11dd-90c0-b06ebf2d865d</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>4</vcpu> <cputune> <vcpupin vcpu='0' cpuset='2'/> <vcpupin vcpu='1' cpuset='6'/> <vcpupin vcpu='2' cpuset='3'/> <vcpupin vcpu='3' cpuset='7'/> </cputune> <os> <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/8d89ce80-d7e2-11dd-90c0-b06ebf2d865d_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features> <cpu mode='host-passthrough' check='none' migratable='on'> <topology sockets='1' dies='1' cores='2' threads='2'/> <cache mode='passthrough'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> 
<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source dev='/dev/disk/by-id/nvme-ADATA_SX8200PNP_2K2929262GJG'/> <target dev='hdc' bus='sata'/> <boot order='1'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source dev='/dev/sdd'/> <target dev='hdd' bus='sata'/> <address type='drive' controller='0' bus='0' target='0' unit='3'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.185.iso'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='qemu-xhci' ports='15'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='0' model='pci-root'/> <interface type='bridge'> <mac address='52:54:00:0d:cc:5e'/> <source bridge='br0'/> <model type='virtio-net'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input 
type='tablet' bus='usb'> <address type='usb' bus='0' port='2'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'> <listen type='address' address='0.0.0.0'/> </graphics> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x046d'/> <product id='0xc52b'/> </source> <address type='usb' bus='0' port='1'/> </hostdev> <memballoon model='none'/> </devices> </domain> voyager-diagnostics-20201028-1857.zip
  17. I tried this but it didn't work. Just making sure I did the correct thing: it was a completely new install. I had stubbed the NVMe, then unstubbed it, and referred to the NVMe by its id (the whole drive). Did I do that correctly? Should I refer to one of the partitions in particular?
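For anyone comparing notes: "stubbing" a device is typically done by binding its vendor:device id to vfio-pci on the kernel command line. A sketch of the syslinux.cfg append line, using the Samsung NVMe controller's id (144d:a808) from the IOMMU listing earlier in these posts as an example; the label/kernel lines are the stock Unraid boot entry and may differ on your stick (check your own ids with lspci -n).

```
label Unraid OS
  kernel /bzimage
  append vfio-pci.ids=144d:a808 initrd=/bzroot
```

To "unstub", remove the vfio-pci.ids entry and reboot, after which the drive can be referenced by its /dev/disk/by-id/ path instead.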
  18. Hiya, I'm having the same dilemma: Windows qbittorrent has more features and is easier to use than the WebGUI (which is what the docker requires). And then I'd want Plex running somewhere, but I'm not sure where to put it. Like you, I'm thinking of just running everything in the Windows VM to share resources etc. In my particular case, I don't ever see myself transcoding while I'm using the VM for gaming, so there shouldn't be any undue effects. It also simplifies my hardware setup: right now I have a little 256 GB M2 drive for the Unraid cache, but if I were to run Plex in a docker, I'd definitely have to increase that to 500 GB or more.
  19. Yeah, the community seems great. Precisely! Do motherboard USB ports not count as separate controllers? I have several (3) front-panel connectors for at least 6 USB devices. In Windows, they tend to show up as separate controllers; do they also show up that way in Unraid? re: GPU - you're suggesting I let Unraid/Docker have the iGPU for Plex and then use a cheap GPU for the Windows VM? Do I have to designate the iGPU to the Plex docker (or is it a shared resource of Unraid)? And second, if Plex never has to transcode, it doesn't need a GPU at all, right?
  20. hehe, I understand your concerns. I'm a little on the fence too... But: 1) Unraid seems like a lot of fun. 2) Storage Spaces is horrible for parity; performance drops to literally 10 MB/s. 3) I am considering RAID5, but that means I lose the ability to incrementally increase my storage; at the very least, I'd have to add 4 HDDs at a time to grow the array.
  21. Hi all, I'm likely to set up UnRAID for the first time and am looking for feedback/suggestions!
Pressing questions:
1) How big of a cache do you think is optimal? What format should it be (M2 NVMe, M2 SATA, SATA SSD)?
2) How would you organize HDDs for torrenting to minimize wear on the array? For example, I have a spare 500 GB HDD that I could designate as the location for incomplete torrents before they transfer to cache/array.
3) Do I need a discrete GPU for the Windows VM? Specifically, can Unraid run w/o using the iGPU so my simple Windows VM can run on the iGPU?
Hardware
Case: Fractal Design Define 7
CPU: Intel i7-6700k
GPU: Integrated from 6700k
Mobo: Asus Z270-A Prime
RAM: 32 GB (4x8 GB) G.Skill 3200@16
Addon: LSI 2308-based 8i
Array: 3x 12 TB WD Red
Add'l available hardware: 500 GB 2.5" SATA HDD, 256 GB M2 SATA, 5x 3 TB WD Red
Mobo supports 2x M2, and up to 6 SATA (depending on M2)
Use cases
1) NAS for movies, TV, etc.
2) Hooked up directly to the TV for Zoom, and playing anime that Plex doesn't handle well
3) Windows VM for qbittorrent*, Plex server**
Future upgrades
4) Windows VM w/ discrete graphics for VR
Thanks!!
* I really prefer to torrent via the Windows GUI b/c I can't rename/handle torrent files in the WebGUI easily
** Since I only have the iGPU atm, I can't run the VM and the Plex docker at the same time, so Plex will have to be run within Windows