Everything posted by testdasi

  1. For the GPU, edit the xml directly or use the VM template. I prefer editing the xml directly since it's usually straightforward, e.g. bus 44 becomes bus 10 or something like that (see the sketch below). USB devices can be done in the xml too but it's more troublesome, so I tend to prefer the VM template for that purpose.
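     In case it helps, a minimal sketch of what that edit looks like from the console (the VM name and bus numbers here are just placeholders - check lspci for your actual values):

     # find the GPU's current PCI address, e.g. 0a:00.0 means bus 0x0a
     lspci -nn | grep -iE 'vga|nvidia'
     # open the VM xml and update the <hostdev> <address> bus to match,
     # e.g. bus='0x2c' becomes bus='0x0a'
     virsh edit "Windows 10"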
  2. Anything that has pinned cores (VMs, dockers, isolcpus) will need to be checked and adjusted accordingly. VMs with passed-through devices (e.g. GPU) will definitely need to be adjusted as the device bus will almost certainly be different. A rough checklist is sketched below.
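     A rough way to take stock of what is pinned where before the move (paths are the Unraid defaults; the container name is a placeholder):

     # cores currently isolated via syslinux
     grep -o 'isolcpus=[^ ]*' /proc/cmdline
     # vcpu pinning per VM (Unraid keeps the libvirt xml under /etc/libvirt/qemu)
     grep -H 'vcpupin' /etc/libvirt/qemu/*.xml
     # core pinning of a docker container
     docker inspect --format '{{.HostConfig.CpusetCpus}}' binhex-plexpass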
  3. Yes, it's possible. I have tried a similar thing, albeit in Windows. Three things come to mind: Snapraid has undelete functionality (almost like a partial rebuild); multiple arrays; and an SSD array with trim.
  4. You don't need dual parity with only 5 data drives. You're better off getting 1x10TB parity + 3x10TB data. That way you have fewer points of failure than your current plan. Of course, if you have terrible luck then you may wish to have dual parity, but then if you have terrible luck, dual parity won't be enough either. What will you be holding in the cache pool? Why do you think you need 2x1TB (i.e. running it in RAID-1) - in other words, what important data are you planning to store on the cache pool that requires mirror protection? You probably just need to have the 970 in the cache pool and be done with it. If you have a lot of write-heavy activities then you just need 1x860 mounted via UD and use that exclusively for write-heavy stuff (i.e. having maximum free space available by minimising static data). PS: there's no need to use the cache pool as a write cache, if that's what you wanted it for. That usage is archaic with modern high-capacity HDDs and reconstruct write (aka Turbo Write).
  5. HDD failure is probabilistic. The more drives you have, the more likely you are to soon have a failed drive. And then, based on Backblaze stats, older small-capacity drives are less reliable than newer large-capacity ones. So if I were you, I would rather spend some money on 4x16TB and save myself for meaningful things, instead of having to constantly worry at the back of my head that one of the 24x2TB drives is going to fail imminently. Linus gets paid to have those Storinators and he has people working for him who know how to deal with failed drives (poor Anthony).
  6. For the "play" part, if you are after gaming in a VM, Intel is still king (caveat: ONLY with Intel single-die chips). The CCX-CCD design of AMD Ryzen adds latency that (a) requires a lot of tweaks to minimise and (b) even with all the tweaks is still not as low as Intel. Intel single-core clocks are still the highest on the market and most games aren't optimised that well for many cores. For the "nice" part, I don't think there are any show-stopper kind of stability issues any more as the necessary tweaks are known. There are just bits and bobs that you might need to pay extra care to, e.g. BIOS version for Ryzen (some versions are known to break things) and pin issues with Threadripper (simply because of the ridiculous number of pins required and the delicate mounting mechanism). You can search the forum for people's posts about 3950X configs to get an idea of how things are playing out. Most of the issues I have seen are GPU pass-through related but they are not CPU specific. I don't think you can generalise your issue as being 2950X specific. Something isn't right with your hardware. If I have to guess from the kernel panics, you might want to check your motherboard pins and CPU mounting; they have been known to cause weird, inexplicable problems such as kernel panics. Btw, 3x Titan X on a 2950X is highly inefficient for the simple fact that the 2950X only has 2 dies. At least 1 of the 3 Titans will have to compromise (most likely 2 of the 3, since 2 of them need to share the same die).
  7. How did you pick the vbios? There are 4 different versions on Techpowerup. Booting the VM under UEFI (i.e. OVMF) should not affect your ability to pass it through. I still think you got the wrong vbios.
  8. My syslinux boot has: isolcpus=32-63 Originally I discovered that when I run btrfs scrub (Main -> diskx -> scrub), it doesn't respect the isolcpus, i.e. it uses the cores that are supposed to be isolated (see attached screenshot). After a few tries, I also found that other activities, e.g. copying files between disks (a simple cp command on the console), also do not respect isolcpus. This looks to be system-independent as I can reproduce it even on Unraid running as a VM.
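     For anyone who wants to reproduce the check, this is roughly how I confirm which cores the work actually lands on (the process names are indicative - they may differ on your system):

     # the cores the kernel reports as isolated
     cat /sys/devices/system/cpu/isolated
     # the core (PSR column) each matching process is currently running on
     ps -eo psr,pid,comm | grep -E 'btrfs| cp$'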
  9. How did you get your vbios? Your xml doesn't seem out of place so the number 1 most likely candidate is wrong vbios, particularly if downloaded from Techpowerup.
  10. I thought itimpi's explanation was very clear that the answer is: your array can have 30 of these "devices" but outside of the array you can have however many you want. However, you are completely wrong in assuming there is "a single point of failure - being the raid card itself". Every drive behind the RAID card is a point of failure too. Moreover, you also severely compromise your ability to recover your data by mixing RAID into Unraid. The whole point of using the Unraid array is that each drive has its own file system, so you will only lose all your data if all your data drives fail. Using your (terrible) scheme of 3 RAID10 groups, you can lose ALL your data with just SIX failed drives. Statistically, each parity drive can only reasonably protect a maximum of (a fraction under) 8 drives (based on my calculation off the Backblaze HDD failure stats for 8TB+ drives). So a 30-drive array is, in my opinion, pushing your luck to the limit.
  11. Probably not. Why do you need so many tabs? Must you use Chrome? Also 15 tabs at 99% CPU load is a bit much. You must be doing something quite intense with those tabs.
  12. It's an rclone option e.g. --log-file=/mnt/cache/mount_rclone/rclone/logs/upload_sa_06.log
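      For example, tacked onto an upload command it would look something like this (the remote name and source path are just placeholders):

      rclone move /mnt/user/local/gdrive gdrive_media_vfs:backup \
        --log-file=/mnt/cache/mount_rclone/rclone/logs/upload_sa_06.log \
        --log-level INFO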
  13. Something like the one below. I wrote it with inspiration from @DZMM rclone mount script so there's plenty of similarity.
      Assumptions:
      - You have a share called mergerfs (which is where things will be mounted) that has cache = only.
      - This was designed for 2 UD drives mounted under ud01 and ud02.
      - Under each UD drive, there's a folder called mergerfs, the content of which will be merged into the mergerfs mount.
      - Each UD drive also has a mountcheck file which is used to determine whether the drive has mounted.

      #!/bin/bash
      mkdir -p /mnt/cache/mergerfs/build
      mkdir -p /mnt/cache/mergerfs/mount

      # check if mergerfs already installed
      if [[ -f "/bin/mergerfs" ]]; then
          echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed"
      else
          # Build mergerfs binary
          echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
          mkdir -p /mnt/cache/mergerfs/build
          docker run -v /mnt/cache/mergerfs/build:/build --rm trapexit/mergerfs-static-build
          mv /mnt/cache/mergerfs/build/mergerfs /bin
      fi

      echo "INFO: Checking drives are mounted" | logger
      if [[ -f "/mnt/cache/mergerfs/mount/mountdisk1" ]]; then
          echo "INFO: Mergerfs mount already created, exiting." | logger
          exit
      else
          if [[ ! -f "/mnt/disks/ud01/mountcheck" ]]; then
              echo "$(date "+%d.%m.%Y %T") INFO: UD disk 1 not mounted, exiting."
              fusermount -uz /mnt/cache/mergerfs/mount
              exit
          elif [[ ! -f "/mnt/disks/ud02/mountcheck" ]]; then
              echo "$(date "+%d.%m.%Y %T") INFO: UD disk 2 not mounted, exiting."
              fusermount -uz /mnt/cache/mergerfs/mount
              exit
          else
              echo "INFO: All USB drives mounted, creating Mergerfs mountpoint" | logger
              mkdir -p /mnt/cache/mergerfs/mount
              mergerfs /mnt/disks/ud01/mergerfs:/mnt/disks/ud02/mergerfs /mnt/cache/mergerfs/mount -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=mfs,cache.files=partial,dropcacheonclose=true,minfreespace=32G
              echo "INFO: mergerfs mountpoint created" | logger
          fi
      fi
  14. @jonp: How much RAM would you need to start seeing the benefits of huge pages? Is it only applicable to people running a lot of VMs? What happens if my hugepage total doesn't match my VM RAM allocation?
  15. No need to do that. Just back up your stick (using the Unraid built-in functionality) and, if it doesn't work, restore from the backup.
  16. You need to check your numa topology (search for the SpaceInvader One vid on that). Then set your VM up appropriately for the numa node the GPU is connected to (i.e. allocate cores + RAM from the same numa node). Reading up on the work that has been done for Threadripper would also be helpful since TR is essentially multi-CPU in a single package.
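      A quick way to double-check, if you'd rather not sit through the whole vid (the PCI address below is a placeholder - take yours from lspci):

      # numa node of the GPU
      cat /sys/bus/pci/devices/0000:0b:00.0/numa_node
      # which cores belong to which numa node
      lscpu | grep -i numa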
  17. Not only in theory but also in practice. You don't even need to remove the Proxmox SSDs, just change boot order in BIOS. (implicit assumption: your PCIe SATA card works with Unraid).
  18. Same as me then. All you need is Unassigned Devices to automount the externals (and even trigger a bash script), mergerfs to create a pool of the mounted UD drives, and a bash script to install mergerfs and run the commands. It's quite straightforward actually. That's how I utilise my external USB SSDs for offline backup.
  19. Really like the new upload script - enough to completely switch to using Service Accounts now. Very easy to customise as well, e.g. I have dedicated log files for each of my 11 team drives + share 100 SAs among them, so now my big backup job can complete overnight. Also found an unexpected extra use for mergerfs - pooling multiple external USB SSDs to make up a single pool for my offline backup. The "least used space" policy is serviceable with a mixed-size pool to distribute data as evenly as possible (I prefer most free percentage but I don't think that's an option).
  20. 1) Whether it's sensible or not depends on your tech skill. However, why would you want to run Unraid as a VM under Proxmox when Unraid itself has VM functionality? Is there any particular Proxmox-only thing that you are after? If storage pooling is all that you need, you may not even need Proxmox but any Linux distro + Snapraid + Mergerfs.
      2) From my own experience, 4GB RAM, 2 cores and not much more, unless you run encryption and/or dual parity, in which case you need more processing power.
      3) It depends on whether Unraid has the dockers you want or not. The benefit is Unraid makes it super easy to use dockers, but if you can set them up manually, it doesn't quite matter.
      4) Super easy. Unplug the drives and stick from the old machine, plug them into the new machine, boot, done. That is assuming the HBA doesn't do funky truncation of disk serial numbers. If it does, you just need to make sure you manually reassign your drives correctly, but everything else should just work.
  21. If the drives can be connected via SATA (and show up as individual drives), why don't you just add all of them to the array? Then set the shares up such that the backup share uses these 4 drives exclusively and the other shares exclude these 4. That essentially sections your array into two and you don't need anything more than the GUI and built-in Unraid functionalities. If they are connected via USB then it's a different story, since it's a terrible idea to add USB drives to the array due to their tendency to drop offline for no reason. In that case, you have no choice but to use unionfs / mergerfs to combine the UD mounts into a single mount point. You can even set it up with the User Scripts plugin so the script runs and mounts it at array start. I would use a Linux VM only as a last resort, mostly because creating a pool can so easily be done without the additional VM overhead.
  22. A few potential suggestions to consider:
      - You can use python instead of bash. I have found python scripts to be a lot more powerful, especially when sorting data.
      - For better access times, you can add SSDs to the array and limit your share to just the SSDs. Since you are more interested in read than write, even QLC can be a good budget choice. Note that some SSDs may cause parity sync errors, so watch out for that. Also, SSDs in the array cannot be trimmed.
      - Alternatively, you can even create a pseudo array out of SSDs + Unassigned Devices + Mergerfs. You lose some Unraid functionalities, e.g. parity, shares etc., but then you can trim the SSDs, so I would consider that a wash. I am running one right now with 3 external USB SSDs for my offline backup job, mainly to leverage the lus (least used space) distribution policy and not have to deal with the ramifications of USB drives dropping offline.
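      For reference, the pseudo array is really just a single mergerfs mount over the UD mount points, something like this sketch (the mount points and minfreespace value are placeholders):

      mergerfs /mnt/disks/ssd01:/mnt/disks/ssd02:/mnt/disks/ssd03 /mnt/ssdpool \
        -o rw,use_ino,allow_other,category.create=lus,cache.files=partial,dropcacheonclose=true,minfreespace=16G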
  23. I understand why Main is the default page after login: after a fresh boot, that's where you start the array. However, if the system is already booted, the Dashboard page is usually a lot more useful. Even right after boot, if the array is set to autostart, most people will have to click over to the Dashboard to e.g. start dockers, VMs etc. anyway. So having the GUI pick the Dashboard if the array has started would perhaps be more logical.
  24. You can just call me out instead of making passive-aggressive comments. I wrote much stronger words originally before editing them back to something more constructive, but if you want strong words then well: FACT: there are more VM users than hardware transcoding users. FACT: as even LT admitted, getting the driver embedded may cause issues with existing VM users. So I hope LimeTech doesn't bow down to the pressure from the few vocal complainers who do nothing but demand stuff for their own needs while ignoring the implications for the larger user base who QUIETLY enjoy Unraid because it just works. Would you prefer that instead of my constructive suggestion to make it an optional setting?