testdasi

Everything posted by testdasi

  1. Radarr and Sonarr have a proxy option (Settings -> General). So what are the "etc"?
  2. I have the X399 Designare EX which is pretty similar. I passed through my GPU, USB controllers, NVMe etc. fine. I am even able to run with a single GPU passed through. However, I did dump my own vbios using a second GPU to boot Unraid with before switching to single-GPU. This is where the Gigabyte "Initial Display Output" setting in the BIOS comes in very useful, since I just need to plug the second GPU into any spare slot, set the BIOS and do the dump. No GPU swapping needed, and I have seen a report of a vbios dumped in one slot not working if the GPU is in a different slot. Have a look at my build log for some tips that might be useful.
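The dump itself can be done from the Unraid console via sysfs. A minimal sketch, assuming a Linux host with the card visible on the PCI bus (the PCI address in the example is made up; find yours with `lspci`, and note some cards only expose a readable ROM from certain slots, which is the slot issue mentioned above):

```shell
#!/bin/bash
# Sketch: dump a GPU vBIOS via the sysfs ROM interface while a second GPU
# drives the console. Requires root; the device address is an example.
dump_vbios() {
  local dev="$1" out="$2"
  local rom="/sys/bus/pci/devices/${dev}/rom"
  [ -e "$rom" ] || { echo "no ROM file for ${dev}" >&2; return 1; }
  echo 1 > "$rom"      # enable reading of the ROM
  cat "$rom" > "$out"  # copy the vBIOS image out
  echo 0 > "$rom"      # disable reading again
}
# Example (hypothetical address): dump_vbios 0000:0a:00.0 /boot/vbios_dump.rom
```

SpaceInvader One also has a dump script that additionally strips the Nvidia header, which is needed for some passthrough setups.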
  3. The AMD GPU reset bug is not fixed (because AMD is not fixing it). There are community-built custom kernels with the reset patch but use them at your own risk. Temp monitoring depends on the available driver so it's hard to say for sure. Other than that, all the early teething issues have been ironed out. Now with regards to a gaming VM, there is a clear advantage of Intel (single-die) over AMD. This is due to the inherent properties of the Zen architecture with its CCX/CCD design. In practice, it means a gaming VM on an Intel (single-die) CPU tends to have more consistent frame rates than on AMD. There are ways to mitigate this but (a) it requires some technical know-how and (b) it does not completely close the gap. How badly it affects you very much depends on your own tolerance level. I personally have zero problem but a friend recently pointed out to me that my (first-person shooter) game looked choppy to him. I think for your gaming VM, you are better off with an Nvidia GPU but as always, GPU pass-through-ability can never be guaranteed.
  4. The red section DEFINITELY does not look like you have turned on reconstruct write (aka Turbo Write) correctly. I would recommend you check the settings again. During write with reconstruct write, there should be zero read on the parity (only write).
  5. With an AMD GPU, your best bet at the moment is to try to pass it through to a VM and do things in the VM, reset bug and all. Otherwise, it will just be a glorified VGA display for the Unraid CLI / GUI. For an Nvidia GPU, there is the alternative of the Unraid Nvidia build, which allows hardware transcoding (e.g. for Plex) using the GPU. That's another way to use a GPU outside of a VM but obviously doesn't work with AMD. Or you can set Unraid up for dual-boot, that is, it allows you to pick at boot whether to boot into bare-metal Windows (e.g. to play games) or Unraid (for NAS and docker stuff). There's a SpaceInvader One tutorial on Youtube for that.
  6. I would +1 on this too.
  7. Your issue probably relates to the Vega reset bug, which is a completely separate issue from the discussion in this topic. You might want to start your own topic for more targeted advice.
  8. That is not the ideal way to pass through an NVMe SSD. It still requires IO to go through the host, which naturally would lag if the host is waiting for other IO to complete. The best way to pass through NVMe is to pass it through using the PCIe method (so like a GPU). You need to first stub it so that it shows up in the Other PCI Devices section of the VM template. The new (easy) method to stub a PCIe device is to install the VFIO-PCI Cfg plugin. Then Settings -> VFIO-PCI Cfg -> tick the device corresponding to your NVMe -> Build VFIO-PCI.CFG file -> Reboot -> select it in the VM template under Other PCI Devices. Note that any device in the same IOMMU group as the NVMe will also be stubbed by default (i.e. the host can't use it), so make sure the NVMe is in its own IOMMU group first and foremost. Also, you have sdc "passed through" as well, which is likewise not the right way to do it. For SATA devices, the best way is to pass by ata-id (watch the SpaceInvader One vid on that topic). Note though that depending on how the SATA disk is used in the VM, host lag may still carry over to the VM.
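For reference, the plugin just writes a small file on the flash drive listing the addresses to bind at boot; a sketch of what it produces and of the resulting VM definition (the PCI address is made up, and the exact file syntax can differ between plugin/Unraid versions, so let the plugin generate it rather than writing it by hand):

```
# /boot/config/vfio-pci.cfg (illustrative address from "System Devices")
BIND=0000:04:00.0
```

Once stubbed and selected under Other PCI Devices, the NVMe ends up in the VM's libvirt XML as an ordinary hostdev entry, roughly:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```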
  9. For work, buy an air cooler (e.g. NH-U14S TR4-SP3). An AIO will not be able to match a good air cooler once reliability is taken into account. Does each VM require its own GPU? You picked a single RTX 2070, which is massive overkill if each VM doesn't need a dedicated GPU but totally insufficient otherwise (i.e. you will need multiple GPUs). In case you wonder, Unraid does not support any kind of GPU-sharing tech. What are the peripherals? How are the peripherals connected to the VMs, e.g. USB? You will have to jump through hoops if all your VMs use the same model of peripheral, e.g. the same keyboard model.
  10. How would rclone union treat RO+RW mix? In other words, will there be CoW? 🐮
  11. How are you able to write 14TB of Parity information if your parities are 10TB? Are you doing multiple transfers at the same time or single file one at a time?
  12. IOWait lag is unavoidable; you can only try to reduce its impact. Having an SSD array is one way to do it but you don't quite need to go that far. Fast HDDs will help: those 20-year-old / 2TB HDDs are incredibly slow. Try to phase them out of your server by upgrading to fewer, larger-capacity drives. At the very least, it reduces the number of points of failure in the server. Keep the HDDs spun up (set the spin-down timer to never); waiting for drives to spin up will 100% of the time cause lag. Turn on turbo write. Pass through an (additional) NVMe SSD to your VM using the PCIe method (i.e. like a GPU) and use it as the boot drive. One thing to watch out for: drives that are on their way out will also cause lag. Also, which WD NVMe? The SN500 is DRAM-less and thus slow.
  13. In my case, the new VM will error out due to USB and/or PCIe devices that are being used by the running VM. To switch between profiles, I have to first shut down the running VM, which is easily done through a bash script.
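A minimal sketch of such a script, using libvirt's `virsh` (which Unraid's VM manager is built on); the VM names are placeholders, not my actual ones:

```shell
#!/bin/bash
# Sketch: switch between VM "profiles" that share USB/PCIe devices by
# shutting down the running one before starting the other.
switch_vm() {
  local old="$1" new="$2"
  if virsh domstate "$old" 2>/dev/null | grep -q '^running'; then
    virsh shutdown "$old"
    # wait until the guest has shut down and released its devices
    while virsh domstate "$old" 2>/dev/null | grep -q '^running'; do
      sleep 2
    done
  fi
  virsh start "$new"
}
# Example: switch_vm "Gaming VM" "Work VM"
```

`virsh shutdown` sends an ACPI shutdown request, so the guest needs to honour it (Windows guests generally do once the guest agent / power settings are sane).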
  14. 2 ways I can think of: (1) Use a custom smb config file. The UD SMB share is itself just a custom SMB config file. (Hint: by "include"-ing the file, you can edit the underlying file and restart SMB for it to take effect, instead of needing to stop the array as you do when editing Unraid's own config through the GUI.) (2) Have a cache-only share with the different permission and then create symlinks in the top folder that point to your corresponding UD mounts. I am using both methods at the moment.
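As an illustration of the include approach, the custom config might look like this (the share name, path and user are hypothetical; on Unraid, user additions to SMB live in /boot/config/smb-extra.conf):

```
# /boot/config/smb-extra.conf
include = /boot/config/custom-shares.conf

# /boot/config/custom-shares.conf (hypothetical UD mount)
[scratch]
    path = /mnt/disks/ud_scratch
    browseable = yes
    writeable = yes
    valid users = someuser
```

After editing the included file, reloading Samba (e.g. `smbcontrol all reload-config`) picks up the change without touching the array.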
  15. 1. You can actually combine find + rsync in bash for a one-liner that rsyncs anything older than a certain number of days. If you run the script daily then it would sort of achieve a FIFO-like behaviour. 2. It's not entirely the same. In the array, the SSD is parity-protected. In the cache pool, the SSD is RAID-protected (or not, depending on the RAID type). It all depends on your use case. 3. The cache pool is TRIM-enabled.
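The find + rsync combination in point 1 could look like this (a sketch; paths and the 30-day cutoff in the example are made up, and `-mtime +N` selects files whose modification time is more than N days old):

```shell
#!/bin/bash
# Sketch: move files older than $3 days from $1 to $2, keeping the
# subfolder layout. Run daily (e.g. via the User Scripts plugin) for a
# rough FIFO effect.
age_out() {
  local src="$1" dst="$2" days="$3"
  ( cd "$src" && find . -type f -mtime +"$days" -print0 \
      | rsync -a --remove-source-files --from0 --files-from=- "$src/" "$dst/" )
}
# Example (illustrative paths): age_out /mnt/cache/downloads /mnt/user0/downloads 30
```

`--files-from` implies `--relative`, which is what preserves the directory structure under the destination; `--remove-source-files` deletes each file from the source once it has transferred.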
  16. By JBOD do you mean each drive shows up as an individual drive? Or do you mean JBOD as in a volume that spans multiple drives? They are both "just a bunch of drives" but they are quite different.
  17. 8 cores at 3.7GHz is generally better than 6 cores at 3.8GHz. So I don't see why you can't decide. What makes you hesitate?
  18. Your core assignment looks wrong. If 7 + 15 is one pair then 0 + 8 is one pair, so it should be 1 + 9 and not 8 + 9 like in your config. If you are going to pin the emulator, pin it to 8 and not 0. Also, reducing it to 12 cores by removing the 4 + 12 pair may actually improve your frame rate consistency, as this spreads the load evenly across the CCXs. After reducing it to 12 cores, perhaps also try: <topology sockets='1' cores='6' threads='2'/>
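To illustrate (assuming an 8c/16t Ryzen-style layout where logical CPU n pairs with n+8 — verify your own pairs on the Unraid CPU pinning page or with `lscpu -e`), the 12-core pinning described above would look roughly like this in the VM XML:

```xml
<vcpu placement='static'>12</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='9'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='10'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='11'/>
  <vcpupin vcpu='6' cpuset='5'/>
  <vcpupin vcpu='7' cpuset='13'/>
  <vcpupin vcpu='8' cpuset='6'/>
  <vcpupin vcpu='9' cpuset='14'/>
  <vcpupin vcpu='10' cpuset='7'/>
  <vcpupin vcpu='11' cpuset='15'/>
  <emulatorpin cpuset='8'/>
</cputune>
```

Each vCPU pair sits on one physical core (4 + 12 left out, 3 cores per CCX), with the emulator on 8, the sibling of host core 0.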
  19. Are you sure you asked an Unraid question? Why would "host" be Win 10?
  20. You probably want to attach diagnostics (Tools -> Diagnostics -> attach zip file). Also, up top you said you are passing through the 970 Pro but below that you said passthrough 970 Evo Plus. Which one is passed through? What method did you use (ata-id? scsi bus? PCIe device?) 2 things you might want to try: (1) Start a new config and pass through just storage + GPU and nothing else (i.e. no USB device / controller). I have had experience with strange delays at the Tianocore stage due to USB devices not doing handshakes properly. (2) Go to your BIOS, look for any kind of power-saving settings and turn them off / set them to max performance. I haven't had X99 for a long time so things are a little vague now but I remember there was a power-saving-related setting in the BIOS that caused me quite a bit of trouble with Unraid VMs.
  21. The "moving to the cloud" is done through the upload script so if you don't run the upload script then your local will remain local forever.