testdasi

Everything posted by testdasi

  1. Don't be too ambitious. Start with just GPU + HDMI audio + NVMe. Make sure that works first. Does it?
  2. If a single file fills up the cache drive, it will throw an out of space error to your backup app, which usually will error out instead of splitting stuff out. (I'm assuming you want the backup file to be moved to the array by the mover). In your use case, I would just write directly to the array (cache = No) with turbo write turned on.
  3. Answers to your 2 questions. Q1: Sort of. The Unraid (boot) GUI can be used to access the Linux VM VNC display and show it. The "sort of" part is that the display resolution is terrible so really nobody wants to do that. Q2: Nope. Displaying an analogue clock is one of the most idiosyncratic use cases I have ever seen on this forum. 😅
  4. Nothing strange about it: you have only 1 GPU. Once passed through to the VM, Unraid doesn't have any GPU to work with. The GPU will not be returned to Unraid after being passed through. If you want to use the Unraid (boot) GUI as well as the VM then you need 2 GPUs. Alternatively, access the GUI from another computer via the network.
  5. The RTX 2080 has FOUR devices (GPU, HDMI audio, 2x USB devices, i.e. everything you see in IOMMU group 73) that must be passed through together for it to work, regardless of IOMMU grouping (i.e. regardless of ACS Override). Your xml only includes the GPU and HDMI audio and is missing the other 2 devices. Go to the app store and look for the VFIO PCI Cfg plugin and install it. Then Settings -> VFIO PCI Cfg -> tick all 4 devices in group 73 -> build VFIO-PCI.cfg -> reboot. The 2 USB devices should now show up in the Other PCI Devices section in your VM template for you to select.
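      If you want to double-check from the command line what is actually in that IOMMU group before binding, something like the below works (the group number is the one from this post; the PCI address is a made-up example, substitute the ones the first command lists):
         # list every PCI function in IOMMU group 73
         ls /sys/kernel/iommu_groups/73/devices/
         # identify one of those functions (address is an example)
         lspci -nns 0a:00.0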
  6. The con of having 2 separate connections is that your NAS access from within the server (e.g. a VM accessing the array) will also go through the router, which means it's limited to gigabit (125MB/s). You will also need complicated routing rules to separate Unraid NAS and non-NAS traffic to route them separately. A much simpler thing to set up is to have only the specific dockers that actually require VPN go through VPN. There are many of those "VPN-included" dockers on the app store. They also come with Privoxy to serve as an HTTP proxy for other dockers to route HTTP traffic through the same VPN.
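      As a rough illustration (container name, IP and image are placeholders; Privoxy listens on port 8118 by default), pointing another docker at the VPN container's proxy is just a matter of setting the usual proxy variables, provided the app inside actually honours them:
         # hypothetical example: route an app's HTTP/HTTPS traffic through the
         # Privoxy instance exposed by the VPN docker on port 8118
         docker run -d --name someapp \
           -e HTTP_PROXY="http://192.168.1.10:8118" \
           -e HTTPS_PROXY="http://192.168.1.10:8118" \
           someimage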
  7. ...but if you write to a file that exists on one of the RO locations, it will refuse to write. Radarr/Sonarr require full write capability to ALL content of the folder. Enabling full write on a mix of RO and RW content requires CoW, which is not supported by mergerfs.
  8. Possible? yes. Does it make sense? Not really. Why would you want to do that?
  9. You can't mix RO and RW like that with mergerfs. It will not let you write to any file that is on the RO source since it doesn't support CoW (copy-on-write). To use a mix of RO and RW, you need to use unionfs (which was in the older versions of the script) and of course lose the mergerfs benefits. I guess the important point is why you would need the rclone mount to be RO.
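      For context, this is roughly the difference in the mount commands (paths are placeholders): unionfs lets you flag branches RO/RW and copy-on-write handles writes to RO content, whereas mergerfs treats the branches it is given as writable:
         # unionfs: local branch writable, rclone mount read-only, CoW enabled
         unionfs -o cow,allow_other /mnt/user/local=RW:/mnt/user/mount_rclone=RO /mnt/user/mount_unionfs
         # mergerfs: no CoW, branches are pooled and written to directly
         mergerfs /mnt/user/local:/mnt/user/mount_rclone /mnt/user/mount_mergerfs -o defaults,allow_other,category.create=ff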
  10. More tinkering: I got a bit annoyed with needing to plug in 3 USB devices for my offline backup so had (another) rethink.
      Bought a 2-bay non-RAID USB enclosure for the 2x 8TB HDD to serve as my offline backup (so I only need to plug in one device). It is off most of the time at the back of the bookshelf so a bit less HDD noise on a daily basis, but the enclosure has terrible coil whine when on.
      Reused my old 15mm 2.5" USB enclosure for the 5TB Seagate 2.5" SMR HDD for "archival" storage (i.e. things that I'm only 99% sure I can delete).
      Plugged all of the SATA SSDs back into the server and use them for online backup. I originally had these in the array but then decided to use mergerfs instead. The main reasons are (a) I can trim the SSDs and (b) without the need for parity, mergerfs and Unraid shfs essentially offer the same functionality. My array, as a result, now only has the 10TB HDD, which is on a 3-hour spin-down setting.
      I now have 2 mergerfs bash scripts (sketch below). One to pool the internal SSDs to create an online backup location. One to pool the external USB HDDs when they are connected to create the offline backup location.
      While running btrfs scrubs, I noticed that disk activities do not respect my isolcpus settings. It was still using the core that I isolated (and behaved just like when one assigns an isolated core to a docker, i.e. maxes it out at 100%). So I raised a bug report and it seems to have been fixed by adding some additional options to syslinux e.g. isolcpus=32-63 nohz_full=32-63 rcu_nocbs=32-63
      Maybe placebo effect but I also noticed things to be a bit springier in my VM after the tweaks too.
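      A minimal sketch of what one of those pooling scripts can look like (mount point, branch paths and options are placeholders, not the exact script I run):
         #!/bin/bash
         # pool two unassigned SSDs into a single online backup mount point
         mkdir -p /mnt/disks/backup_pool
         mergerfs /mnt/disks/ssd1:/mnt/disks/ssd2 /mnt/disks/backup_pool \
           -o defaults,allow_other,moveonenospc=true,category.create=mfs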
  11. Have you tried adding nohz_full and rcu_nocbs to your syslinux? It seems to have worked on my server but I'd like to double check on a different config just to be sure it ain't a placebo effect.
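      For reference, the relevant stanza in syslinux.cfg (editable from the flash device page) would look something like this, with the core range adjusted to whatever you isolate:
         label Unraid OS
           menu default
           kernel /bzimage
           append isolcpus=32-63 nohz_full=32-63 rcu_nocbs=32-63 initrd=/bzroot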
  12. You need at least 1 SSD, regardless of type, in the cache pool. That is where critical stuff like the docker img, docker appdata, libvirt img (critical for VMs), vbios etc. will be saved.
      Now for your VM, there are 2 ways to do storage: either vdisk or PCIe pass-through. (1) If using a vdisk then you can use the same cache SSD above; you just need a larger SSD to cover for the vdisk size. In this scenario, NVMe would not be a bad choice but SATA has been proven to be entirely sufficient (i.e. it comes down to how much you are willing to pay). (2) If doing PCIe pass-through then you need a 2nd SSD, which must be NVMe (remember, an NVMe SSD is a PCIe device). That is how you get maximum performance. There is a 3rd method which is ata-id pass-through (and a rarer 4th method which is scsi bus pass-through) but it isn't much better than using a vdisk so I would rather simplify things to (1) or (2).
      PCIe 4.0 NVMe, other than bragging rights, does not offer any real-life benefits over PCIe 3.0 outside of incredibly niche workloads. If everything else is exactly the same, obviously you would prefer PCIe 4.0, but otherwise don't factor that into your consideration. However, you must consider these 3 things when it comes to the SSD:
      - Avoid DRAM-less SSDs like the plague.
      - Avoid QLC NVMe (e.g. Intel 660p). You want 3D TLC or V-NAND or 3D-NAND or something to that effect.
      - If you intend to pass through an NVMe SSD as a PCIe device then make sure you research the device controller before buying (see the command below). Some require special workarounds (with limitations) and some just refuse to be passed through. Ideally, you want to see what existing Unraid forumers are using (hint: check the signatures).
      You can put an SSD in the array and it would work, but (a) there's no TRIM and (b) it's not officially supported, e.g. parity may have errors.
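      Assuming the drive is already installed somewhere, a quick way to see which controller an NVMe SSD actually uses (and which driver is currently bound to it) before attempting pass-through:
         # -nn shows vendor:device IDs, -k shows the kernel driver in use
         lspci -nnk | grep -A3 -i "non-volatile memory"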
  13. Something you may try out of left field is to get some airflow over the NIC (wherever it is on your motherboard). It can overheat and cause weird issues.
  14. First and foremost, please copy-paste the xml of your VM template instead of using a screenshot - USE THE CODE FUNCTIONALITY of the forum, i.e. the </> button next to the smiley button.
      When you say the Nvidia driver "will not install", what does that mean? Was there an error? If so, what was the error? Is the RTX 2070 the only GPU? Did you dump the vbios yourself or download it from Techpowerup?
      After answering / doing all of the above, start a brand new template with Q35 + OVMF + Hyper-V On. Then add this line above </hyperv>
         <vendor_id state='on' value='0123456789ab'/>
      Then save and see if it works.
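      For reference, with a stock Q35 + OVMF + Hyper-V On template the <hyperv> section ends up looking roughly like this once the line is added (the value is arbitrary, anything up to 12 characters):
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='0123456789ab'/>
         </hyperv>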
  15. I gave up a long time ago. If it works, it works. If it doesn't, it's a fool's errand trying to fix temp sensor until 6.9.0-rc1 comes out with 5.x kernel.
  16. Start a new template with Q35 and OVMF. Don't use Seabios unless you have to.
      To resolve your boot problem is pretty simple. In the xml, remove this line:
         <boot dev='hd'/>
      Then look for the section corresponding to your NVMe (e.g. the part below - notice the "bus='0x03'" in between <source> and </source>. That bus corresponds to the "03:00.0" of your NVMe SSD; that's how you identify it in the xml):
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
      And add this below </source>:
         <boot order='1'/>
      i.e. the section will become like this:
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
           </source>
           <boot order='1'/>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
      That's it. No need to resort to Seabios.
      For your onboard soundcard, notice the green "+" sign on the left hand side of the Sound Card section of the VM template? Click on that and it will allow you to add another sound card, then just select the one on the drop-down list corresponding to your onboard audio. If it doesn't work, i.e. gives you an error when starting the VM, you need to use the VFIO-PCI.cfg plugin to tick the onboard soundcard, rebuild the VFIO-PCI.cfg and reboot (i.e. do the same thing as your NVMe, except the audio device shows up under the Sound Card section instead of the Other PCI Devices section).
      Note: the 1st sound card you select should be your GPU HDMI audio. The onboard soundcard should be the 2nd sound card. That's to ensure you don't forget the GPU HDMI audio, which must be passed through together with the GPU.
  17. Are you looking to use the NVMe exclusively for the VM? Is it showing up under Unassigned Devices on the Main page? If the answers are YES and YES then: go to the Apps page and look for the VFIO PCI Config plugin and install it. Settings -> VFIO PCI Config -> tick the box next to your NVMe (144d:a808) -> click BUILD VFIO-PCI.CFG. Reboot and then go back to the VM template; the drive should now show up under the Other PCI Devices section.
  18. Where do you begin? Start with watching SpaceInvader One tutorials on Youtube, particularly his VM playlist.
  19. Anything you access over the network through the GUI share settings will go through /mnt/user. So /mnt/disks (i.e. unassigned devices) generally will bypass shfs. But if, let's say, you create a symlink from a cache-only share to your /mnt/disks mount points and then access the mounts indirectly through the symlink via the share over SMB, then you will be accessing them through shfs.
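      To make that concrete (paths are made up):
         # /mnt/user/cache_only_share is a normal user share, i.e. served by shfs;
         # anything accessed through this symlink therefore goes through shfs,
         # even though the data physically lives on an unassigned device
         ln -s /mnt/disks/my_ud_disk/data /mnt/user/cache_only_share/data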
  20. You probably misunderstood the Service Accounts section. SAs can allow you to bypass limits, including the API limit (just switch to a remote using a different SA), but that isn't how the SAs are used in the script. SAs are basically the same as having additional client IDs + secrets (so the answer is yes to your 2nd question). Generally though, you shouldn't be maxing out API requests. The API limits are very generous, so you need to resolve the root cause first, i.e. whatever app / docker is crushing it. The first hunch is subtitle-related apps. They have been known to cause problems with API limits.
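      For illustration, a service-account remote in rclone.conf just swaps the client ID + secret + token for a key file, something like this (names, paths and the team drive ID are placeholders):
         [gdrive_sa1]
         type = drive
         scope = drive
         service_account_file = /mnt/user/appdata/other/rclone/sa/sa1.json
         team_drive = 0AbCdEfGhIjKlMnoPQ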
  21. 1. Yes. A good backup beats mirroring. And RAID-0 gives the best performance, up to a limit that is generally determined by your CPU single-core performance. It's also paramount that you do not access the pool through /mnt/user, so you can bypass shfs. 2. BTRFS allows a RAID-0 pool (you can use the ZFS plugin to do something similar with ZFS too). I don't think you can create a RAID pool with xfs.
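      On an existing multi-device btrfs pool, converting the data profile to RAID-0 (while keeping metadata mirrored) is a balance operation; a sketch, assuming you run it from the command line against the cache pool:
         # convert data chunks to raid0, keep metadata on raid1
         btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
      Then point your VM / dockers at /mnt/cache/... instead of /mnt/user/... to keep shfs out of the picture.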
  22. If you don't need parity for the secondary array, mergerfs or unionfs is a lot less cumbersome than snapraid, e.g. no need to waste resources on a VM. I'm currently using mergerfs for my backup arrays: one for my internal SSDs (so I can use trim) and one for my 2x external USB HDD offline backup (because I only switch them on and connect them on demand). I'm also using unionfs to pool my gdrive rclone mounts with local storage.
  23. Disappearing shares can be due to the USB stick dropping offline. Try using a USB 2.0 port. And do a disk check on your USB stick as well.
  24. Do a disk check on your USB stick. A stick dropping offline / getting corrupted can cause weird problems.
  25. You must connect the optical drive to the motherboard chipset controller (i.e. NOT a 3rd party controller and almost definitely NOT a Marvell controller) for it to stand a chance of working. And install the virtio SCSI driver from the virtio driver ISO.
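      One common way this ends up wired in the VM xml is as a SCSI cdrom on a virtio-scsi controller, which is why the guest needs the virtio SCSI driver; a rough sketch (the /dev/sr0 path is an assumption, match it to your drive):
         <controller type='scsi' index='0' model='virtio-scsi'/>
         <disk type='block' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source dev='/dev/sr0'/>
           <target dev='sda' bus='scsi'/>
           <readonly/>
         </disk>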