Ep1cPl4yz


  1. Thanks for the information. I attempted to do that but my motherboard doesn’t like it. I have since assumed that was the case and gave up on trying to resurrect my project.
  2. I made the mistake of updating the BIOS on my X370 Ryzen board, which completely killed my VM project. Here was my little predicament: the BIOS update changed all of the PCIe device IDs (e.g. 29:00.0 -> 0a:00.0). Unraid went nuts, and I eventually resorted to starting completely from scratch.

     I created two fresh Windows 10 VMs using VNC, which worked. When I went to pass through my RX 580 to a VM, it started but then paused itself. Attempting to resume it resulted in this error:

         internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required

     Force stopping the VM and attempting to start it again resulted in this error:

         internal error: Unknown PCI header type '127'

     I receive the same errors when trying to pass through my RX 570 to the other VM, and neither will do anything until the system is rebooted. However, my cheap Nvidia GT 710 will pass through and run the VM for a few minutes before giving me a black screen with just the cursor. Attached are my diagnostics.

     Motherboard: ASUS Crosshair VI Hero (Wi-Fi AC)
     CPU: AMD Ryzen 7 1700
     RAM: 32GB DDR4-2400
     GPUs:
       - Nvidia GeForce GT 710
       - AMD RX 570 (primary)
       - AMD RX 580

     Any ideas as to what is wrong with the new BIOS?

     diagnostics-20190615-1715.zip
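     For anyone hitting the same ID shuffle after a BIOS update, a quick way to find the new PCIe addresses from the console is sketched below. These are standard pciutils commands; the address in the second command is just an example, and output will differ per system.

         # List display controllers with their new PCIe addresses and [vendor:device] IDs
         lspci -nn | grep -Ei 'vga|3d|display'

         # Full detail for one device at its new address (example address)
         lspci -vnn -s 0a:00.0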
  3. If you have a recent Wi-Fi router close to where you plan to place your thin clients, you may be able to get away with 5 GHz Wi-Fi for them, but connecting the Unraid box itself to Wi-Fi is never a good idea.
  4. Whatever is using the integrated graphics will probably show up on the built-in display. An external display may be required to see the output of the dedicated GPU.
  5. I have had success with passing through both the primary GPU and a secondary one on a desktop, then switching between them using the OS display settings. Install your OS using VNC, then pass through both GPUs to the same VM and see what happens. You may need to attach an external display to a video output port for it to work.
  6. Your GPUs are probably bottlenecked by that hard drive. If your vdisks are in their own share, you can set that share to use only the cache disk(s) and then run the mover. If you're worried about losing data that is already on your SSD, use cloning software to make a copy of it onto another disk first.
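     If you want to confirm the drive is the culprit before moving anything, a rough sequential-read check from the console looks like this (hdparm is a standard Linux tool; replace the device paths with your actual array disk and cache device):

         # Rough sequential read speed of the spinning disk holding the vdisks
         hdparm -t /dev/sdX

         # Compare against the cache SSD
         hdparm -t /dev/nvme0n1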
  7. Scenario 1, hence the name, is pretty basic and can easily be done with the correct hardware. A three-monitor setup is easy if there are enough video outputs on your GPU. Yes, it would function almost identically to a standalone desktop with similar specs, and it would not interfere with Dockers provided that there are enough resources left over for Unraid to run them.

     Scenario 2 is feasible with the correct hardware. Just make sure there is a solid Ethernet connection between the server and the thin clients. Look into using Thunderbolt to connect a dock to the server, such as this one from Elgato. Otherwise, same as above.

     Scenario 3 is completely out of the question unless you have an insanely fast internet connection for the server, and a pretty decent one for the remote clients. Chances are your ISP does not sell a connection fast enough to support multiple remote users simultaneously; see the rough numbers below. It would probably be cheaper to build and send each of your team members a new mid-range system than to pay for an insanely fast internet connection every month.
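     As a back-of-the-envelope check (the 30 Mbit/s per stream figure is an assumption for a decent 1080p60 game stream, not a measured value; adjust for your codec and quality settings):

         # Rough sustained upload needed for N simultaneous remote gaming clients
         clients=3
         mbps_per_stream=30   # assumed per-client bitrate
         echo "$((clients * mbps_per_stream)) Mbit/s sustained upload required"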
  8. According to the diagnostics you attached at the beginning of this thread, the GPU in IOMMU group 25 is on its own in that group. However, that error message suggests otherwise. Try setting the ACS override to "Both" and reboot. Also make sure the VM is shut down first. That error message specifically refers to the HDMI audio device associated with that GPU, so make sure the GPU and its audio device are assigned to the same VM and that neither is being used by another running VM.
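     If you want to double-check the grouping yourself, this generic sysfs loop (plain Linux, nothing Unraid-specific) prints every PCI device alongside its IOMMU group:

         # Print each PCI device together with the IOMMU group it belongs to
         for d in /sys/kernel/iommu_groups/*/devices/*; do
             g=${d#*/iommu_groups/}; g=${g%%/*}
             printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
         done | sort -V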
  9. Most NVMe SSDs use 4 PCIe lanes, so if you plug one of those adapters into an x4 slot, there should be a negligible performance hit. After all, PCIe is PCIe, no matter what physical connector is used. I don't think you can pass through a storage device to a VM, so you will probably have to assign the VM another vdisk and add the 1TB drive to the cache pool alongside the 2TB one.
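     To confirm the adapter actually negotiated x4, you can compare the link capability and status lines (standard lspci usage; the address is an example, substitute your NVMe drive's):

         # LnkCap is what the device supports; LnkSta is what was actually negotiated
         lspci -vv -s 0a:00.0 | grep -E 'LnkCap:|LnkSta:'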
  10. To load the VirtIO drivers, click the Load driver button in the Windows installer, then browse to the folder on the VirtIO ISO whose name contains "w10" (e.g. viostor\w10\amd64 for the disk driver) and select it.
  11. I do not know a whole lot about what that does, so change it at your own risk. I don't think it would impact getting Windows to boot, though.
  12. I would also suggest rebooting after downloading it, in case an old copy is still loaded into RAM.
  13. In that case, your ISO is probably corrupted. Download a fresh one from Microsoft and try again. (I'll save you a few clicks.)
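     It's also worth verifying the new download before reinstalling (sha256sum is a standard Linux tool and the filename pattern here is just an example; compare the result against the hash Microsoft publishes for your ISO):

         # Compute the ISO's SHA-256 and compare it with the published checksum
         sha256sum Win10_*.iso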
  14. At the UEFI Interactive Shell prompt, type exit and press Enter.
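      For reference, the exchange at the prompt looks like this (Shell> is the standard prompt of the EDK2 UEFI shell that OVMF-based VMs drop into):

          Shell> exit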