SxC97


  1. I would like to cast a vote for some kind of clustering or high availability feature. It probably won't be useful to most people, but I have 3 Unraid servers: a networking NUC on a dedicated UPS that runs the most essential containers and VMs, a storage server that handles most backup and storage tasks, and a hypervisor that runs most of my VMs. Having a way to automatically keep the most important VMs and containers up even when their host goes down for maintenance would be a huge help! I believe VMware and TrueNAS already have these features for VMs and containers respectively, and I think this would make Unraid a much more "professional" product, even if most of us are just using it at home. Over time, as hardware gets cheaper and more accessible, I think more people will end up with multiple old computers they can use for their home lab. Maybe you have 2 old laptops, or a couple of thin clients you bought in bulk on eBay. These machines can be put to better use in a home lab environment if they can be clustered together. Just my 2 cents!
  2. Hi there! I have recently set up a second Unraid machine and would like to move some of my containers to the new machine. I found a thread on Reddit suggesting that the best way to do this would be to use this plugin to create a backup, copy the backup over to the new server, and then restore it. The backup on the old server and the restore on the new server worked flawlessly! But I don't see the containers in the Docker UI or under the "Previous Apps" tab in Community Applications. I checked the appdata share via SSH and all of the folders do seem to be there. Am I missing something? Did I misunderstand how this plugin or its restore function works? Thank you in advance!
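(For anyone following along, the copy step itself was nothing special — roughly something like this; the hostname and paths here are placeholders for my setup, adjust for your own servers:)

    # on the old server: copy the backup archive to the new server over ssh
    rsync -avh --progress /mnt/user/backups/appdata-backup.tar.gz root@tower-new:/mnt/user/backups/

    # on the new server: sanity-check that the restored folders actually landed in appdata
    ls -lah /mnt/user/appdata/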
  3. I tried this method but it did not work. I ended up having to switch to markusmcnugen/qbittorrentvpn container. I set the Network Type to Bridge and the LAN_NETWORK to 192.168.0.0/24. I had to make sure no other containers on my machine were using port 8080 or 8999, but other than that it worked like a charm.
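(The rough docker-run equivalent of my Unraid template, from memory — check the image's README for the full list of variables, and note it also expects your VPN config/credentials under /config:)

    docker run -d \
      --name=qbittorrentvpn \
      --cap-add=NET_ADMIN \
      -p 8080:8080 -p 8999:8999 -p 8999:8999/udp \
      -e VPN_ENABLED=yes \
      -e LAN_NETWORK=192.168.0.0/24 \
      -e NAME_SERVERS=1.1.1.1,8.8.8.8 \
      -v /mnt/user/appdata/qbittorrentvpn:/config \
      -v /mnt/user/downloads:/downloads \
      markusmcnugen/qbittorrentvpn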
  4. I looked through @WenzelComputing's pastebin and can confirm that my logs look almost identical. I also followed the instructions in Q2 and Q4 of the Docker VPN GitHub page. I executed /sbin/modprobe iptable_mangle on my Unraid machine and made sure my LAN_NETWORK was properly configured.
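(For anyone else doing this: the modprobe doesn't survive a reboot on Unraid, so the usual advice is to also append it to the go file — double-check the path on your install:)

    # load the module now
    /sbin/modprobe iptable_mangle
    # confirm it's actually loaded
    lsmod | grep iptable_mangle
    # make it stick across reboots
    echo "/sbin/modprobe iptable_mangle" >> /boot/config/go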
  5. I'm having some trouble getting the webUI to work. Here's what the logs show with DEBUG enabled:

2022-06-02 10:53:11,482 DEBG 'watchdog-script' stdout output: [debug] Checking we can resolve name 'www.google.com' to address...
2022-06-02 10:53:11,609 DEBG 'watchdog-script' stdout output: [debug] DNS operational, we can resolve name 'www.google.com' to address '172.253.115.103 172.253.115.99 172.253.115.105 172.253.115.104 172.253.115.147 172.253.115.106'
2022-06-02 10:53:11,610 DEBG 'watchdog-script' stdout output: [debug] Waiting for iptables chain policies to be in place...
2022-06-02 10:53:11,620 DEBG 'watchdog-script' stdout output: [debug] iptables chain policies are in place
2022-06-02 10:53:11,625 DEBG 'watchdog-script' stdout output: [debug] VPN incoming port is 42119
[debug] qBittorrent incoming port is 42119
[debug] VPN IP is x.x.x.x
[debug] qBittorrent IP is x.x.x.x

The VPN IP and qBittorrent IP are identical, but I redacted them just in case. These same lines are output to the log every 30 seconds or so. Not sure where to go from here 🤷‍♂️
  6. I finally managed to fix it by deleting the template, making a new one pointing to the same vdisk file, and re-adding the multifunction flag to the XML (see the snippet below). Thanks for all of your help!
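(For future readers: "the multifunction flag" means the multifunction='on' attribute on the guest-side PCI address of the passed-through GPU. A common layout puts the GPU and its HDMI audio on the same guest bus/slot as functions 0x0 and 0x1 — a sketch using my card's host address 2e:00.x; your addresses will differ:)

    <!-- GPU video function: multifunction='on' goes on the guest address here -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x2e' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <!-- matching HDMI audio function: same guest bus/slot, function 0x1 -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x2e' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
    </hostdev>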
  7. It worked! Kind of... I added the multifunction flag to the new VM template for both 2e:00.0 and 2e:00.1, started the VM, and it worked! The monitors lit up and I was able to get into Windows. I then realized that I couldn't use my mouse and keyboard, since I pass a USB card through to my VM to connect my peripherals. I stopped the VM, added the USB card, started the VM up again, and I was back to a black screen. ☹️ I then tried everything to get it working again. I stopped passing through the USB card and it still wouldn't work. I tried adding the multifunction flag to the USB card, and that didn't fix it either. I tried rebooting several times to see if that would fix the issue, but that didn't help either. The logs don't say anything different than usual, so no help there. This is what the XML for the VM looks like:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10 - New</name>
  <uuid>67c54cae-7a1e-9150-00ae-b59cb1c07be5</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>12</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='10'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='12'/>
    <vcpupin vcpu='6' cpuset='5'/>
    <vcpupin vcpu='7' cpuset='13'/>
    <vcpupin vcpu='8' cpuset='6'/>
    <vcpupin vcpu='9' cpuset='14'/>
    <vcpupin vcpu='10' cpuset='7'/>
    <vcpupin vcpu='11' cpuset='15'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='6' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 - New/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='7' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:3a:71:3a'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x2e' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/RX5700XT.rom'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x2e' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x27' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc52b'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

I also made sure that the USB cards were bound to vfio at boot:

IOMMU group 30: [1912:0015] 27:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 31: [1912:0015] 28:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 32: [1912:0015] 29:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 33: [1912:0015] 2a:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.

I've also attached new diagnostics in case that helps. galaxy-diagnostics-20220401-0950.zip
  8. The output of `cat /proc/iomem` is:

00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000dffff : PCI Bus 0000:00
000c0000-000ce5ff : Video ROM
000f0000-000fffff : System ROM
00100000-09d81fff : System RAM
04000000-04a00816 : Kernel code
04c00000-04e4afff : Kernel rodata
05000000-05127f7f : Kernel data
05471000-055fffff : Kernel bss
09d82000-09ffffff : Reserved
0a000000-0a1fffff : System RAM
0a200000-0a20afff : ACPI Non-volatile Storage
0a20b000-0affffff : System RAM
0b000000-0b01ffff : Reserved
0b020000-9cbf6017 : System RAM
9cbf6018-9cc15e57 : System RAM
9cc15e58-9cc16017 : System RAM
9cc16018-9cc2f257 : System RAM
9cc2f258-9cc30017 : System RAM
9cc30018-9cc3e057 : System RAM
9cc3e058-aac82fff : System RAM
aac83000-aafdffff : Reserved
aafe0000-ab043fff : ACPI Tables
ab044000-ac742fff : ACPI Non-volatile Storage
ac743000-addfefff : Reserved
addff000-aeffffff : System RAM
af000000-afffffff : Reserved
b0000000-fec2ffff : PCI Bus 0000:00
b0000000-c20fffff : PCI Bus 0000:2f
b0000000-bfffffff : 0000:2f:00.0
c0000000-c1ffffff : 0000:2f:00.0
c2000000-c203ffff : 0000:2f:00.2
c2000000-c203ffff : xhci-hcd
c2040000-c204ffff : 0000:2f:00.2
d0000000-e01fffff : PCI Bus 0000:2c
d0000000-e01fffff : PCI Bus 0000:2d
d0000000-e01fffff : PCI Bus 0000:2e
d0000000-dfffffff : 0000:2e:00.0
d0000000-dfffffff : vfio-pci
e0000000-e01fffff : 0000:2e:00.0
e0000000-e01fffff : vfio-pci
f0000000-f7ffffff : PCI MMCONFIG 0000 [bus 00-7f]
f0000000-f7ffffff : Reserved
f0000000-f7ffffff : pnp 00:00
fb000000-fc0fffff : PCI Bus 0000:2f
fb000000-fbffffff : 0000:2f:00.0
fb000000-fbffffff : nvidia
fc000000-fc07ffff : 0000:2f:00.0
fc080000-fc083fff : 0000:2f:00.1
fc084000-fc084fff : 0000:2f:00.3
fc200000-fc8fffff : PCI Bus 0000:03
fc200000-fc7fffff : PCI Bus 0000:20
fc200000-fc5fffff : PCI Bus 0000:25
fc200000-fc5fffff : PCI Bus 0000:26
fc200000-fc2fffff : PCI Bus 0000:2a
fc200000-fc201fff : 0000:2a:00.0
fc300000-fc3fffff : PCI Bus 0000:29
fc300000-fc301fff : 0000:29:00.0
fc400000-fc4fffff : PCI Bus 0000:28
fc400000-fc401fff : 0000:28:00.0
fc500000-fc5fffff : PCI Bus 0000:27
fc500000-fc501fff : 0000:27:00.0
fc600000-fc6fffff : PCI Bus 0000:2b
fc600000-fc607fff : 0000:2b:00.0
fc600000-fc607fff : xhci-hcd
fc700000-fc7fffff : PCI Bus 0000:22
fc700000-fc703fff : 0000:22:00.0
fc704000-fc704fff : 0000:22:00.0
fc704000-fc704fff : r8169
fc800000-fc87ffff : 0000:03:00.1
fc880000-fc89ffff : 0000:03:00.1
fc880000-fc89ffff : ahci
fc8a0000-fc8a7fff : 0000:03:00.0
fc8a0000-fc8a7fff : xhci-hcd
fc900000-fcbfffff : PCI Bus 0000:30
fc900000-fc9fffff : 0000:30:00.3
fc900000-fc9fffff : xhci-hcd
fca00000-fcafffff : 0000:30:00.2
fca00000-fcafffff : ccp
fcb00000-fcb01fff : 0000:30:00.2
fcb00000-fcb01fff : ccp
fcc00000-fcdfffff : PCI Bus 0000:2c
fcc00000-fccfffff : PCI Bus 0000:2d
fcc00000-fccfffff : PCI Bus 0000:2e
fcc00000-fcc7ffff : 0000:2e:00.0
fcc00000-fcc7ffff : vfio-pci
fcca0000-fcca3fff : 0000:2e:00.1
fcca0000-fcca3fff : vfio-pci
fcd00000-fcd03fff : 0000:2c:00.0
fce00000-fcefffff : PCI Bus 0000:31
fce00000-fce07fff : 0000:31:00.3
fce08000-fce08fff : 0000:31:00.2
fce08000-fce08fff : ahci
fcf00000-fcffffff : PCI Bus 0000:01
fcf00000-fcf03fff : 0000:01:00.0
fcf00000-fcf03fff : nvme
fcf04000-fcf040ff : 0000:01:00.0
fcf04000-fcf040ff : nvme
fd100000-fd1fffff : Reserved
fea00000-fea0ffff : Reserved
feb80000-fec01fff : Reserved
feb80000-febfffff : amd_iommu
fec00000-fec003ff : IOAPIC 0
fec01000-fec013ff : IOAPIC 1
fec10000-fec10fff : Reserved
fec10000-fec10fff : pnp 00:05
fec30000-fec30fff : Reserved
fec30000-fec30fff : AMDIF030:00
fed00000-fed00fff : Reserved
fed00000-fed003ff : HPET 0
fed00000-fed003ff : PNP0103:00
fed40000-fed44fff : Reserved
fed80000-fed8ffff : Reserved
fed81500-fed818ff : AMDI0030:00
fedc0000-fedc0fff : pnp 00:05
fedc2000-fedcffff : Reserved
fedd4000-fedd5fff : Reserved
fee00000-ffffffff : PCI Bus 0000:00
fee00000-fee00fff : Local APIC
fee00000-fee00fff : pnp 00:05
ff000000-ffffffff : Reserved
ff000000-ffffffff : pnp 00:05
100000000-104f37ffff : System RAM
104f380000-104fffffff : Reserved

It looks like the relevant snippet is here:

d0000000-e01fffff : PCI Bus 0000:2c
d0000000-e01fffff : PCI Bus 0000:2d
d0000000-e01fffff : PCI Bus 0000:2e
d0000000-dfffffff : 0000:2e:00.0
d0000000-dfffffff : vfio-pci
e0000000-e01fffff : 0000:2e:00.0
e0000000-e01fffff : vfio-pci

It looks like the devices are 2c, 2d, and 2e. When I go to Tools -> System Devices, it looks like those devices are the same AMD GPU that I'm trying to pass through:

IOMMU group 35: [1002:1478] 2c:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch (rev c1)
IOMMU group 36: [1002:1479] 2d:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch
IOMMU group 37: [1002:731f] 2e:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] (rev c1)
IOMMU group 38: [1002:ab38] 2e:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio

IOMMU groups 37 and 38 are already bound to VFIO at boot. When I go to the Windows 10 - New VM template, the GPU I'm passing through is 2e:00.0 and the sound card is 2e:00.1. I'm not sure where to go from here. Sorry if these are stupid questions, I'm still pretty new to this stuff 😅
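(Side note for anyone else digging through a dump like this: you don't have to eyeball the whole thing — filtering for the GPU's address works too. Run as root; 2e:00 is my card, substitute your own:)

    # show only the lines for the GPU and anything claimed by vfio-pci
    grep -E '2e:00|vfio' /proc/iomem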
  9. I had a data loss scare due to the constantly dropping drives. I assumed this was due to either faulty cables or a lack of RAM in the system when VMs and Docker services were running. I ordered some replacement cables and some more RAM, and things appear to be rock solid now. I've also tried to get the Windows VM in question working again, but now, when I try to boot the VM, even with nothing passed through, I get a blue screen saying the OS is corrupted and I can't even boot into it. This is alright though, since I kept all important documents on the array and not in the actual VM. I created a new VM called "Windows 10 - New" and went through the entire setup process through VNC. But when I try to pass through the GPU, I get the following error in the logs:

2022-03-29T22:56:19.887355Z qemu-system-x86_64: -device vfio-pci,host=0000:2e:00.0,id=hostdev0,x-vga=on,bus=pci.4,addr=0x0,romfile=/mnt/disk1/isos/RX5700XT.rom: Failed to mmap 0000:2e:00.0 BAR 0. Performance may be slow

I think this is due to the aforementioned ROM dumping issue. I've tried to look up tutorials online but haven't made any progress. I first tried the method outlined by SpaceInvaderOne here. At the step where it says to press the power button on your computer once to complete the dumping process, I press the button but nothing happens. I then found this tutorial online to dump the vBIOS from the command line, but the resulting file was much smaller than the one I had downloaded from TechPowerUp (512K vs. 60K), and it still didn't work. Do you have any tips for dumping my vBIOS? Or a tutorial that I can follow?

Edit: I managed to fix the "Failed to mmap 0000:2e:00.0 BAR 0" error by following the commands posted on this thread:

echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

Unraid says that the VM is running, but it still just shows a black screen on my monitor. galaxy-diagnostics-20220329-1805.zip
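(For reference, the command-line dump I attempted was the usual sysfs method — roughly this, in case the linked tutorial dies. Run as root while nothing is using the card; 0000:2e:00.0 and the output path are from my setup:)

    cd /sys/bus/pci/devices/0000:2e:00.0
    echo 1 > rom        # enable reading the ROM
    cat rom > /mnt/user/isos/RX5700XT-dump.rom
    echo 0 > rom        # disable again when done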
  10. Ok, there have been some updates. First of all, thank you for helping me out with this, I really appreciate it! Secondly, I'm sorry for not mentioning which VM was acting up; I could have sworn I mentioned it was the Windows 10 one, but it looks like I just forgot 😅

1. I changed multifunction to 'on' and it did in fact fix the GPU passthrough issue. I could have sworn I set PCIe ACS override to 'Both' in the VM settings, but I recently added the Nvidia GPU to the system and it looks like it might have screwed with some settings?

2. I don't have another computer to dump the .rom file from for the 5700 XT, so I just downloaded it from online, and it has been a little finicky. I booted the Windows 10 VM and tried to download the GPU-Z utility, but I kept getting an error message: Error 0xc000007b, "Application was unable to start correctly". This message appears when I try to launch any application on the VM. I checked online and it looks like this can happen when you have a corrupted file system. I tried to download Restoro (which every article online recommended) but I can't run it because it gives me the same error message. I then tried to just restore Windows from the settings, but it didn't work due to some unspecified error. I also tried using VNC instead of having a GPU passed through, but no dice. Is there any hope for this VM, or should I just make a new one?

3. I bound the GPU to VFIO, another thing that I'm pretty sure I did before but apparently got reset when I added the second GPU. I also tried to set the Nvidia GPU as the boot VGA device, but I couldn't find the option in my motherboard BIOS (an MSI B450 Gaming Max Plus).

I've tried to outline the steps I took in detail here just in case someone else has a similar issue and stumbles on this thread in the future. Also, sorry it took so long to reply: occasionally, when the system locks up due to a misbehaving VM, Unraid drops one of my drives, and I then have to remove it from the array, start the array, stop the array, re-add the disk, start the array again, and wait for a parity check to finish, which can take ~24 hours. Apparently this is a known issue in Unraid.
  11. My bad, I had deleted the VM template to get the server working again. I've re-created the template and downloaded the diagnostics again. I generated these diagnostics files while the VM was not running, though; hopefully this still provides some insight into what's happening. If I need to get the diagnostics files while the VM is running, is there a way to do it from SSH, since the webUI stops working while the VM is running? galaxy-diagnostics-20220306-1320.zip
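(Answering my own question for future readers: as far as I can tell, Unraid ships a `diagnostics` command, so from an SSH session something like this should work:)

    # run on the server over ssh; writes a timestamped zip you can grab afterwards
    diagnostics
    ls /boot/logs/      # the zip lands here on my install -- verify on yours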
  12. I've been having some issues with my gaming VM recently. I'll outline the steps that I've taken so far. I booted up my gaming VM (with a GPU passed through) a few days ago, and the webUI stopped responding and there was no video output. I eventually shut the server down hard by holding the power button and rebooted it after a few seconds. The webUI was operational again, but after a few seconds it locked up again. I assumed this was because I had Unraid configured to automatically start the array and the gaming VM when the system booted up. I still had SSH access, though, so I deleted the VM template from /etc/libvirt/qemu and rebooted again. I now had access to the webUI, so I quickly booted into safe mode and disabled the array auto-start.

I then downloaded the diagnostics and found that the SSD that the VM domains were stored on had some corrupted blocks, and the check recommended running xfs_repair, so I went into maintenance mode and ran the xfs_repair tool from the GUI with the -v option. I now had a lost+found folder filled with files that were just long strings of numbers, and I had no idea how or where to restore them to. I figured I would just use the system as normal and see if those files were even necessary.

I restarted the VM service from the settings tab and created a new VM with most of the same options (including the original vdisk file), except I didn't pass it a GPU or any USB devices. After VNCing into the Windows VM, it told me that the file system was corrupted and that it would take a while to fix. After that finished, I rebooted the VM and found that I was back into the VM like normal. I then tried to pass through a GPU and found that once again the webUI started crashing and there was still no video output from the VM. I again had to delete the VM template from /etc/libvirt/qemu and hard reboot to fix it. I also noticed that after the webUI stopped responding, the virsh list command just hung and didn't return anything. Perhaps the VM is crashing the libvirt or QEMU process? All of this just because I wanted to play some Elden Ring. Anyways, I then downloaded the diagnostics and decided to post here because I'm all out of ideas at this point. galaxy-diagnostics-20220305-2002.zip
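(In case it helps anyone retracing this, the command-line version of the repair step looks roughly like the following — /dev/sdX1 is a placeholder for your actual SSD partition, and the filesystem must be unmounted, i.e. array stopped or in maintenance mode:)

    xfs_repair -n /dev/sdX1   # dry run first: report problems without writing anything
    xfs_repair -v /dev/sdX1   # the actual repair, verbose
    virsh list --all          # afterwards, check that libvirt responds again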
  13. Haha, too true! I've only had Unraid for ~6 months, so I'm still getting used to basic things like navigating the UI. While I'm here, I figured I'd post my setup, as I hadn't seen anything like this posted before.

I have an Unraid box running several VMs and a laptop in clamshell mode on my desk, and I want to be able to quickly switch my input/output devices between the laptop and any one of the VMs. My solution is to pass each of the 4 ports on the Renesas card through to a different VM (each port is its own controller). I then connect each of the 4 ports and the laptop to a USB switch. You can then connect anything to the USB switch and, just by changing the output on the switch, connect your USB peripherals to any of the VMs or the laptop. My switch has 4 inputs for peripherals and 4 outputs for different computers. The 4 inputs were not enough for all of the things that I wanted to plug in, so I bought a 16-port USB hub to plug into one of the input ports so I could switch more than 4 USB devices between the VMs.

An important thing to keep in mind: if you are also planning on adding things that need USB power, make sure you buy a hub that is powered from a wall outlet rather than drawing power from the connected devices! I also bought a hub that has on/off switches for each of the ports, so you can quickly disable devices without having to unplug them, but this is not a necessary feature. I have a lamp that attaches to my monitor, and it's easier to turn it on and off by disabling the USB port it's plugged into than by fiddling with the buttons on the lamp itself.

Now I can hot-plug things like my amp & DAC stack, webcam, flash drives, etc. to any VM or my laptop and switch between them in a second! This even makes moving files between the laptop and VMs very easy, using a flash drive as temporary storage. Another thing to keep in mind: if you want to connect things like USB HDDs or SSDs to multiple VMs, don't connect them to the USB hub! I saw several people on Amazon mention that going through several USB hubs and controllers is not great for hard drives; you could end up losing or corrupting data! You are better off passing the drive directly to the VM, going through the fewest number of USB controllers. I feel like I might have done a poor job explaining this setup, so I've made a simple diagram to explain things visually. I hope this post helps other people who want to use VMs in a multi-OS desktop setup!
  14. Ok, so I bought this card, plugged it in, and was wondering why it didn't work as expected. Turns out... I'm an idiot. You have to go to Tools -> System Devices and select the checkboxes next to the USB controllers. This is how mine looks:

IOMMU group 29: [1912:0015] 27:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 30: [1912:0015] 28:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 31: [1912:0015] 29:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 32: [1912:0015] 2a:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
This controller is bound to vfio, connected USB devices are not visible.

Then you have to click the `Bind Selected to VFIO at Boot` button. Reboot, and the controllers should show up in the VM's edit page. I also couldn't boot my VM after putting the card in, but that was easily fixed by creating a new VM pointing to the same old .img file.
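(Under the hood, my understanding is that the button just writes the selected devices to a config file on the flash drive — on my system it looks something like the below. I wouldn't hand-edit it unless the UI isn't an option:)

    # /boot/config/vfio-pci.cfg -- written by "Bind Selected to VFIO at Boot"
    # format (as I understand it): BIND=<pci address>|<vendor:device>, space-separated
    BIND=0000:27:00.0|1912:0015 0000:28:00.0|1912:0015 0000:29:00.0|1912:0015 0000:2a:00.0|1912:0015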