Turnspit

Everything posted by Turnspit

  1. Did you also edit the downloaded ROM file? https://forums.serverbuilds.net/t/guide-remote-gaming-on-unraid/4248/6 See 6.7.
  2. There are also Tailscale Docker containers for UnRAID, and it doesn't seem to need any port forwarding, similar to ZeroTier... So you could give it a try.
  3. Well, without the option of forwarding any port, the only chance of remote access that comes to my mind spontaneously would be ZeroTier. There's also a Docker container for UnRAID.
  4. I'm also using multiple Raspberry Pis and Ubuntu machines, and all of them mount fine using cifs and /etc/fstab. Have a look here for reference: https://wiki.ubuntu.com/MountWindowsSharesPermanently
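     For reference, an /etc/fstab entry of the kind that wiki page describes might look like this (server name, share, mount point, and credentials path are all placeholders — adjust them to your setup):

     ```
     # /etc/fstab — example CIFS mount of an UnRAID share (all names are placeholders)
     //tower/backups  /mnt/backups  cifs  credentials=/home/user/.smbcredentials,uid=1000,gid=1000,iocharset=utf8  0  0
     ```

     The credentials file keeps the username and password out of fstab itself; `sudo mount -a` then mounts it without a reboot.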
  5. I've noticed (don't exactly know since when) that I can't stop my VMs anymore from the UnRAID GUI, only Force-Stop works. VMs that are running: Windows 10, Windows Server 2019, Ubuntu Server (x3). Same problem with all of them. What is the difference anyway between Stop and Force-Stop? Thanks in advance! ๐Ÿ™‚ homeserver-diagnostics-20210816-1056.zip
  6. This could maybe be a faulty config of the DNS or Gateway address on your UnRAID machine. From the terminal on UnRAID: cat /etc/resolv.conf (show currently used DNS servers), route -n (show Gateway address). If you have set a static IP address inside UnRAID, you could also try switching to automatic assignment and see if it works.
  7. I just wanted to swoop in on this subject, but you already have found the solution. ๐Ÿ™‚ I gotta admit I found passing through a GPU to a VM to be comparably easy (it used to be WAY more complicated in the past). Although I didn't have to do any XML editing with my 1070 FE.
  8. IOMMU enabled in BIOS? GPU IOMMU group bound to VFIO at boot? Did you pass through all parts of the GPU (audio controller, possible USB port, ...)? Post the VM's .xml file as well.
  9. Sounds like a driver thingy. Had a similar issue occur, where before all the updates had finished, I had video output on my HDMI port, but not on my DP port.
  10. To insert your logs, just drag the files into the bottom part of the editor window. Do you have any part of your server exposed to the internet?
  11. I couldn't find any info about Legacy boot support on your mobo either... Anyway, switching from Legacy to UEFI shouldn't be a problem at all and is just one click away.
  12. Oh, you might also wanna look at how you boot at the moment (Legacy/UEFI) and whether the new motherboard still supports Legacy boot, if that's currently in use.
  13. I switched from a Xeon 1230v3 to a Ryzen 9 3900x, replacing the same parts as you. Just update the BIOS settings regarding boot order, and virtualization/IOMMU if needed, and you should be good to go. It was a pretty simple process (at least for me). In any case I'd still suggest making a backup of your flash drive as well as noting your drive order, should anything go wrong unexpectedly.
  14. Yup, scrubbing is possible. And with a little script, it should be possible to automate it.
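     A sketch of what such a script could look like for a btrfs cache pool (the pool path and log location are assumptions, and it only actually runs the scrub if btrfs is installed — e.g. schedule it via the User Scripts plugin or cron):

     ```shell
     #!/bin/bash
     # Hypothetical scheduled scrub for a btrfs pool; adjust POOL to your setup.
     POOL="/mnt/cache"
     LOG="/tmp/scrub-$(date +%Y%m%d).log"
     # -B keeps the scrub in the foreground so the script waits for the result.
     CMD="btrfs scrub start -B $POOL"
     if command -v btrfs >/dev/null 2>&1; then
       $CMD > "$LOG" 2>&1
       echo "Scrub of $POOL finished, see $LOG"
     else
       echo "btrfs not found; would run: $CMD"
     fi
     ```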
  15. Try enabling "PCIe ACS override" in the advanced VM settings and have another look at the IOMMU groups and if they've split up.
 16. The first CPU core should always be left free for UnRAID system usage. Depending on how much performance you need in each of those two VMs, you might wanna go with 4 cores for the Windows VM and 3 cores for the Mac VM, or something like this. Using the same cores in multiple VMs can always lead to slowdowns or hiccups, depending on usage. What gave my Gaming VM a huge performance and stability boost was also isolating the pinned cores from the system (Settings -> CPU Pinning).
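     In the VM's XML, pinning ends up as something like this (a sketch for a 4-core Windows VM — the core numbers are just an example, and on a hyperthreaded CPU you'd usually pin a core together with its sibling thread):

     ```
     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='6'/>
       <vcpupin vcpu='3' cpuset='7'/>
     </cputune>
     ```

     The UnRAID GUI's CPU pinning page writes these entries for you; editing the XML by hand is only needed for more exotic layouts.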
 17. Looking for something like this as well (albeit for a Docker container). ๐Ÿ™‚
 18. I'm looking for a solution to connect to my offsite backup server. I want my whole LAN to have access to the backup server, but only the backup server having access to my LAN. Sort of a "server to LAN" access. Is this possible, and what would the steps be? Thanks! ๐Ÿ™‚
  19. As a first step of troubleshooting without having seen any details: Have you tried removing all passed-through devices from the VM and tested if it starts up correctly?
  20. Unfortunately, using USB devices for Array or Pool operation is not possible.
  21. Out of curiosity, what's your problem exactly with only one parity-protected array?
 22. No - you can have only 1 parity-protected array (with up to 30 or so drives and a max of 2 parity drives), but multiple RAID-0 or RAID-1 pools.
 23. Your HDDs must be placed in the main array, not in a pool. I would recommend going for at least 1 parity drive. Putting the SSDs in a single pool for caching, you'll have to choose between striping them all together into a single 2 TB pool (where if one drive fails, you lose the pool), or going for a 2x2 mirror with an effective size of 1 TB but allowing for up to 2 drives to fail. I'm personally more of a redundancy guy, so I'd choose the mirror.
 24. Hi! UnRAID boots and runs completely from a (preferably high-quality) USB drive, so there's no need for small leftover SSDs as boot devices. You can set up the HDDs with 0, 1 or 2 drives functioning as parity drives, depending on how many drive failures you want to survive. The setup of your NVMes and SSDs totally depends on your use case. I personally have 3 SSDs as well as 1 NVMe set up in 3 different pools: the NVMe and an SSD of 1 TB each in separate pools for my Gaming VM (which also could/should be passed through via Unassigned Devices for improved performance), and 2 500 GB SSDs set up in a mirror pool for my downloads, appdata and not-performance-reliant daily-use VMs.