Everything posted by testdasi

  1. Restricting the docker extra param alone will not prevent BOINC from using too much memory; it would just kill a BOINC process with an OOM error when it runs above the limit. You also need to adjust the % memory parameter from within BOINC, because it apparently detects your full system memory instead of the amount you restrict the docker to. E.g. for my system with 96GB RAM, I put a limit of 10%-11% to keep memory usage to about 12GB.
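     To make it concrete, a sketch with illustrative values (the docker flag is standard; the exact BOINC menu wording may differ slightly by version):

       # Unraid docker template -> Extra Parameters: hard cap, above which the container gets OOM-killed
       --memory=12g

       # BOINC Manager -> Options -> Computing preferences -> Disk and memory:
       #   "When computer is in use, use at most 11 % of memory"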
  2. You need to pass through a controller to the VM for your use case. You can also try the Libvirt USB hotplug plugin to manually replug the devices to the VM without the need to restart the VM.
  3. Sorry, I don't understand your problem statement. Is it that you set these VMs to start automatically, but after you reboot Unraid they don't? I.e. you have to manually start them after boot?
  4. That's not true. ONE pending sector on a brand new drive was sufficient for me to claim a replacement. EU consumer protection is even stronger than the UK's, so you should be able to claim. A failing SMART test is like an imploded engine; a pending sector is like a strange noise from the engine. You don't need an imploded engine to claim warranty. Unlikely to be the cause.
  5. From the console, run "mc" to launch Midnight Commander, then navigate to /mnt/user/[share name] to delete the files, then go to the GUI to delete the share (which should now be possible as the share is empty). For a more Explorer-like interface, you can use the Dolphin or Krusader dockers; the path to delete will differ depending on how you do the docker mappings. The reason to delete from the GUI is that it also deletes any redundant SMB settings.
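     If you'd rather skip Midnight Commander, plain shell commands do the same job. A minimal sketch (the share name is a placeholder; double-check the path before running rm -r, as it is not reversible):

       ls /mnt/user/MyShare         # confirm this is the right share first
       rm -r /mnt/user/MyShare/*    # delete everything inside the share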
  6. I assume "DSM" and "SHR" is Disk Station Manager and Synology Hybrid RAID. In which case, I don't think there's any straight-forward ways to mount the SHR volume on Unraid directly. Xpenology supports btrfs so if I were you, I would boot Unraid, change default filesystem to btrfs (in Settings) then create an array out of 2x 3TB (no parity for now) and format them. Then boot back to Xpenology, create single volumes (i.e. no RAID / JBOD) out of EACH individual 2x 3TB (assuming there's no funky thing behind the scene preventing using btrfs here) and then do the copy from Xpenology to these individual 3TB volumes. Make sure you verify that the copy is done correctly. Then boot back to Unraid, you should now have array with data. Reverify the data! None of the steps are data destructive (even the format is done on the 2x 3TB which you mentioned are unused, thus no data to destroy) so your data should be safe. I would do and redo data verification very carefully though before permanently switch.
  7. RAM is always used by Linux (including Unraid, which is based on Linux) to cache writes. If the cached data is still in RAM then it is also used automatically as a read cache. There is no other functionality to use RAM as some other form of read/write cache.
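     You can see this from the console: the buff/cache column in free is the RAM currently being used for caching. The output below is illustrative, not from a real box:

       free -h
                      total   used   free   shared  buff/cache  available
       Mem:            94Gi   12Gi    8Gi    1.0Gi        74Gi        79Gi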
  8. You don't quite need all those super-advanced techniques, as they only offer marginal improvement (if any). You also have to take into account that some of those tweaks were for older-gen CPUs (e.g. NUMA tuning was only required for TR gen 1 + 2) and some were workarounds while waiting for the software to catch up with the hardware (e.g. the cpu-mode tweak to fake TR as Epyc so cache is used correctly is no longer required; use Unraid 6.9.0-beta1 for the latest 5.5.8 kernel which supposedly works better with 3rd-gen Ryzen; compile your own 6.8.3 with the 5.5.8 kernel for the same reason, etc.).
     In terms of "best practice" for a gaming VM, I have these "rules of hand" (cuz there are 5 🧐):
     1. Pick all the VM cores from the same CCX and CCD (i.e. die); this improves fps consistency (i.e. less stutter). Note: this is specific to gaming VMs, for which maximum performance is less important than consistent performance. For a workstation VM (for which max performance is paramount), VM cores should be spread evenly across as many CCX/CCD as possible, even if it means partially using a CCX/CCD.
     2. Isolate the VM cores in syslinux. The 2020 advice is to use isolcpus + nohz_full + rcu_nocbs (the old advice was just isolcpus).
     3. Pin the emulator to cores that are NOT the main VM cores. The advanced technique is to also pin iothreads; this only applies if you use vdisk / ata-id pass-through. From my own testing, iothread pinning makes no diff with NVMe PCIe pass-through.
     4. Do the MSI fix with msi_util to help with sound issues. The advanced technique is to put all devices from the GPU on the same bus with multifunction. To be honest though, I haven't found this to make any diff.
     5. Do not run parity sync or any heavy IO / CPU activity while gaming.
     In terms of where you can find these settings (there's a sketch for 2 and 3 right below this post):
     1. The 3900X has 12 cores, which is 3x4 -> every 3 cores is a CCX, every 2 CCX is a die (and your 3900X has 2 dies + an IO die).
     2. Watch the SpaceInvader One tutorial on YouTube. Just remember to do what you do with isolcpus for nohz_full + rcu_nocbs as well.
     3. Watch the SpaceInvader One tutorial on YouTube. This is a VM xml edit.
     4. Watch the SpaceInvader One tutorial on YouTube. He has a link to download msi_util.
     5. No explanation needed.
     Note that due to the inherent CCX/CCD design of Ryzen, you can never match an Intel single-die CPU when it comes to consistent performance (i.e. less stutter). And this comes from someone currently running an AMD server, not an Intel fanboy. And of course, running a VM will always introduce some variability above bare metal.
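     To make rules 2 and 3 concrete, a minimal sketch. The core numbers are purely illustrative (check Settings -> CPU Pinning for your actual core/thread pairing before copying anything):

       # Syslinux (Main -> Flash -> Syslinux Configuration): isolating cores 3-5
       # plus their hyperthread siblings 15-17 as an example
       append isolcpus=3-5,15-17 nohz_full=3-5,15-17 rcu_nocbs=3-5,15-17 initrd=/bzroot

       # VM xml: vcpus pinned to the isolated cores, emulator pinned elsewhere
       <cputune>
         <vcpupin vcpu='0' cpuset='3'/>
         <vcpupin vcpu='1' cpuset='15'/>
         <vcpupin vcpu='2' cpuset='4'/>
         <vcpupin vcpu='3' cpuset='16'/>
         <vcpupin vcpu='4' cpuset='5'/>
         <vcpupin vcpu='5' cpuset='17'/>
         <emulatorpin cpuset='0,12'/>
       </cputune>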
  9. Hi, sorry forgot to reply. I used all M.2 slots, all PCIe slots and all SATA ports at one point in the past so I can confirm. M.2 NVMe does not disable SATA ports. M.2 NVMe also does not disable any PCIe slot (and vice versa). The whole point of getting Threadripper is the number of PCIe lanes so whether I would recommend the board or not depends on how many PCIe lanes you need. For example, I have a rather unusual number of NVMe drives compared to a typical user so I do need all of those lanes.
  10. FreeNAS uses ZFS (RAID), which is fundamentally very different from Unraid, so there are really very few parallels, if any, in terms of system requirements. While any RAID-based implementation (not just ZFS but also BTRFS, which Unraid uses for its cache pool) is sensitive to memory corruption, ZFS is even more susceptible due to the difficulty (or impossibility?) of restoring from a corrupt pool. The 1GB RAM / 1TB storage rule is an old wives' tale: if you don't use the deduplication feature then you don't need that much RAM, and if you do use deduplication then, depending on your config, you may need way more than that. The 1G/1T rule is just so easy to remember that it has taken on a life of its own.
     In contrast, you can think of the Unraid array as roughly a combination of mergerfs and snapraid: it pools data from individual drives (each with its own file system) and calculates parity bits for that data. As a result, compared to ZFS there are way fewer scenarios (if any?) that would cause you to lose the entire pool and that ECC RAM would have prevented (remember, there's a limit to what ECC can correct!).
     There is an argument that ECC RAM simply helps with overall system stability. While I agree with that point from a theoretical standpoint, from my own actual experience, having used both ECC and non-ECC, I have not seen any significant diff in system stability. I think it's more likely than not that a lot of the stability diff is attributable to overclocked RAM (including certified overclock, i.e. a higher-than-stock speed that the manufacturer says your RAM is capable of running at) rather than to ECC vs non-ECC.
     Now for the Unraid server: if you indeed just use it as a NAS, 4GB (or even 2GB) of RAM is enough. Your 6700K is a massive overkill for that usage though. And you can skip the cache drive until you actually need it. Unraid cache is strictly for write caching, which was important back when reconstruct write (aka "Turbo Write") wasn't a thing; with HDDs from the last 5 years or so plus turbo write (see the console note below this post), a cache drive is very much optional for a NAS-only build. One thing to keep in mind is that Unraid will never be as fast as FreeNAS (ZFS), in both the read and write departments: reads will never be faster than the fastest drive in the array, and writes never faster than the slowest drive in the array. Btw, for your 4-drive array, having 2x parity is massive overkill for general media storage, unless your media is the only digital copy of Casablanca or something to that effect.
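     For reference, reconstruct write is a toggle in Settings -> Disk Settings (Tunable (md_write_method)). From the console, I believe the equivalent on 6.x is the mdcmd tunable below, but treat that as an assumption and double-check against your version:

       /root/mdcmd set md_write_method 1    # reconstruct write, aka "turbo write"
       /root/mdcmd set md_write_method 0    # back to the default read/modify/write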
  11. No prob, and it isn't really a rule per se. It's just that pass-through issues tend to be very idiosyncratic, so a lot of the time I have found that a small diff is not necessarily insignificant.
  12. @Fabiolander: Please create another topic for your issues. Hijacking someone else's topic is not going to help either of you due to the resulting confusion. (For example: I would not have recommended you the same vfio-pci cfg tweak I recommended to the OP.)
  13. First and foremost, trying different downloaded vbios files to see what sticks doesn't really work. If you are not 100% sure the vbios you downloaded is the right one, it's better NOT to use a vbios at all than to use the wrong one. Next, if you want to change the Hyper-V state, you need to create a new template; turning it off in the GUI doesn't really work. Not that you should turn it off: turning Hyper-V off was old advice from before the vendor_id tweak became a thing (see the snippet below this post). Also, from my experience the multifunction tweak works better with the Q35 machine type (so not i440fx). So let's start from the beginning:
     1. (Optional) Turn on Remote Desktop Protocol in your Windows VM if possible. I prefer using RDP for diagnostics because if RDP doesn't work then I know for sure Windows hasn't booted (and it's not some other issue with 3rd-party software).
     2. Edit your current template using the GUI, remove the vbios setting and save. This should return your template to an untweaked state, which is easier to help with. Please attach the resulting xml in your next response (so everyone is on the same page as to what state your VM is in).
     3. Your 7700K has an iGPU, so the first thing to try is making Unraid boot with the iGPU (i.e. leave the 1070 alone). Reboot your server to BIOS -> Advanced -> Chipset Configuration -> Primary Graphics Adapter -> can you choose the iGPU?
     4. While you are in the BIOS: Tools -> Boot Manager -> make sure your USB stick (non-UEFI) is the default / first boot device. This is legacy-mode boot, which improves your chance of passing through the primary GPU.
     5. If you can pick the iGPU in the primary graphics adapter setting, connect your monitor to the motherboard display output and see if Unraid boots with the iGPU.
     6. If successful, dump your own vbios of the 1070 (and do any required edit per the SpaceInvader One tutorial) but don't use it yet.
     7. Report back what happens. It is possible that you can now start the VM with the GPU passed through without any further tweak.
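     For reference, the vendor_id tweak lives in the <hyperv> block of the VM xml. A minimal sketch of the relevant section (the value string is arbitrary, anything that doesn't look like an Nvidia ID works; 'none' is just an example, and Unraid's template usually generates most of the rest for you):

       <features>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
         <kvm>
           <hidden state='on'/>
         </kvm>
       </features>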
  14. What is the SSD in your cache pool? It's generally a good idea to attach your diagnostics as well (Tools -> Diagnostics -> attach full zip file). It's not uncommon for a failing and/or slow drive in the cache to cause everything to grind to a halt (since docker and VM need IO from cache). The high CPU usage is a red herring in a sense, because it's not really CPU usage but rather high IO wait (think of it as your CPU core freezing while waiting for the storage device to respond). The dirty ratio tweaks and memory parameter are just ways to reduce IO and/or make IO less random and more sequential (see the sketch below this post). However, they are like putting a bandage over COVID-19 if you have a more fundamental problem with your cache drive(s). Docker -c is the same as --cpu-shares, which limits the share of CPU processing power when there are multiple competing processes. I highly doubt it would have any impact on your situation, since high IO wait isn't about processing share. In fact, the -c=1024 from the Reddit post you quoted is rather pointless, since 1024 is already the default value.
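     For context, the dirty-ratio tweaks people pass around are sysctl settings along these lines (values are illustrative, not a recommendation, and none of this fixes a failing drive):

       sysctl -w vm.dirty_background_ratio=1   # start background flushing once 1% of RAM is dirty
       sysctl -w vm.dirty_ratio=2              # block writers once 2% of RAM is dirty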
  15. Install the VFIO-PCI Config plugin, then go to Settings -> VFIO-PCI.CFG and look for your USB controller. If there's no reset-capable icon (RESET column, to the left of VENDOR ID) then the controller unfortunately can't be passed through to a VM.
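     You can also check from the console: the kernel only exposes a reset file in sysfs when it has a reset method for the device. The PCI address below is a placeholder; substitute your controller's address from Tools -> System Devices:

       ls /sys/bus/pci/devices/0000:03:00.0/reset
       # "No such file or directory" means no reset mechanism, i.e. not a good pass-through candidate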
  16. Hyper-V does contain a workaround to fix error code 43 with Nvidia GPUs, so if you need that for your VM to work then indirectly you need Hyper-V. There are also some other tunings for AMD Threadripper that sit under Hyper-V, but they are good-to-have and certainly not necessary. Otherwise, you can safely turn off Hyper-V on the host. In the guest (is that what you mean by "inside the Windows VM"?), you shouldn't turn on Hyper-V; it may prevent your VM from booting (it has happened to me). Hyper-V is also disabled by default when you install Windows, and it is certainly not needed: it exists to do virtualisation, which inside a VM would constitute nested virtualisation, and you already have KVM/qemu from Unraid for that.
  17. Assuming your onboard USB is passed through and working fine, get a USB sound card. It has been quite some time since an internal sound card was worth it; USB is just much easier to use while offering comparable if not better sound quality.
  18. Then it looks like it unfortunately can't be passed through.
  19. With that attitude, you ain't getting any help. Not on here, not in real life.
  20. You added a rom file (vbios). Was it there the last time you successfully passed through the GPU? You also added 2a:00, which is the USB device. Were you able to successfully pass it through before adding the on-board sound card?
  21. Yes you can. In fact, unless you have an actual issue (e.g. an Oculus Rift randomly disconnecting when connected through libvirt, i.e. virtual USB) and/or require true hot-plug, you can just use the virtual USB device. If you install the "libvirt usb hotplug" plugin, you can "warm plug" USB devices to the virtual USB of any VM after the VM boots. No need to reboot the VM, so not cold plug, but you still have to manually replug the device through the Unraid GUI, so not true hot plug either; hence "warm plug".
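     Under the hood this is the sort of thing you could also do by hand with virsh. A minimal sketch, assuming a device with USB ID 046d:c52b (a made-up example; get your real IDs from lsusb, and use your actual VM name):

       cat > /tmp/usb-device.xml << 'EOF'
       <hostdev mode='subsystem' type='usb'>
         <source>
           <vendor id='0x046d'/>
           <product id='0xc52b'/>
         </source>
       </hostdev>
       EOF

       virsh attach-device "Windows 10" /tmp/usb-device.xml --live    # "warm plug" in
       virsh detach-device "Windows 10" /tmp/usb-device.xml --live    # and back out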
  22. Ok, got it now. The answer is "sort of no". The VNC display feature would do what you are describing, but it doesn't work with a GPU passed through, hence "sort of no".
  23. The 6700K has an iGPU and it's powerful enough to transcode, so you don't even need a GPU for Plex unless you are doing a serious number of concurrent streams (or multiple 4K streams). ECC RAM is not necessary, so if your focus is to reuse parts then there's no need to obtain ECC. Do NOT overclock your current RAM: your memory spec is a certified overclock, but it is an overclock nonetheless. Run it at 2133MHz, in short. A server running Plex is certainly not "storage only", because Plex is not a storage functionality / app. With Plex, you should have an SSD cache for appdata and the docker image (or vdisk + libvirt image if you go the VM way). It does not have to be M.2 (NVMe); it can be SATA. Whatever SSD you get for cache, go for 3D TLC (or 3D NAND or V-NAND or terms to that effect) and avoid DRAM-less SSDs like the plague.
  24. On the other topic, you posted a short query which was answered by beemac on 12 Mar, but with no subsequent response from you. If you are really struggling with compiling your own kernel, then perhaps you can just use the pre-compiled files instead (there's a console sketch after the steps):
     1. Backup your USB stick and make sure the backup is accessible and restorable in case you need it.
     2. Download the 6.8.3 zip from Unraid.
     3. Download 6.8.3-5.5.8.zip from the post I quoted previously.
     4. Extract (2) to a folder.
     5. Extract (3) to the same folder as step 4. You should be asked to overwrite TWO files.
     6. Take what is in step 5 and copy it over to your USB stick. You should be overwriting quite a number of files.
     7. Reboot with the new USB stick. You should be booting 6.8.3 with the 5.5.8 kernel as well as the Navi patch.
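     From the console, the same steps look roughly like this. A sketch only: the zip filenames and the backup destination are assumptions (adjust to what you actually downloaded); on Unraid the flash drive is mounted at /boot:

       cp -r /boot /mnt/user/backups/flash-backup    # step 1: backup the stick first
       mkdir /tmp/unraid-683 && cd /tmp/unraid-683
       unzip ~/unRAIDServer-6.8.3-x86_64.zip         # step 4: stock 6.8.3
       unzip ~/6.8.3-5.5.8.zip                       # step 5: say yes to the TWO overwrite prompts
       cp -rf /tmp/unraid-683/* /boot/               # step 6: overwrites quite a number of files
       reboot                                        # step 7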