Kilrah

Everything posted by Kilrah

  1. Is it running normally now? Cause if it's still estimating days there's no point. State will be a mess regardless. You should never start any parity/rebuild operation before making sure the hardware is working properly.
  2. All depends what containers they are and what your VMs will be doing. RAM's likely going to be the first limitation.
  3. Install the corefreq plugin to check, it's much more readable than powertop. Also I believe Frigate is "expensive" on resources, so "idle" may not be very idle if it's constantly running. Could try stopping things to see how much they contribute to total draw.
  4. There is one and the banner tells you about it.
  5. The binhex containers I have that were updated yesterday switched TO the vulnerable version; they were OK on 5.4.1 before that...
  6. Make sure C-states are enabled in the BIOS and that "max performance" features like MCE aren't enabled, as they often are by default on "gaming" setups. You could also underclock/power limit.
  7. There's a setting to enable mover logging; enable it when needed for troubleshooting, then disable it afterwards. The point is not to have thousands of lines of log spam when nothing's wrong.
  8. Re: the manual install page now links to a download archive page that only lists the latest release in a branch. That's kinda problematic in circumstances like now, where 6.12.9 has a serious issue with remote mounts and people might need to run an earlier release such as 6.12.8 to have a working setup. The old site/pages used to list more versions.
  9. This was directed to @SpencerJ regarding unraid releases, nothing related to you.
  10. Would slightly object to featuring something that was admittedly developed against a > 1 year old "Older / Non-Recommended Release". Why develop something and not validate it against current? Unrelated, but it seems the manual install download page only lists the latest in a branch now; that's kinda problematic in circumstances like now, where 6.12.9 has a serious issue with remote mounts.
  11. Is this a joke? A hardware failure of any kind would lead to the same result. The data's still there on the drives if anyone wanted to access it. If there was one, nobody would notice or care when it gets switched to; then a 2nd would fail and you're back to square one.
  12. Try using something like glances or netdata with persistence to see what container caused that runaway increase in RAM usage. EDIT: Seems it could be immich, try limiting the amount of RAM it's allowed to use (see the sketch after this list).
  13. If that's about the "clear an array drive" script, change references from "/dev/mdX" to "/dev/mdXp1" (illustrated after this list).
  14. Yes, and not put anything on it.
  15. Nope, the liblzma version used in unraid is safe (a quick check is sketched after this list).
  16. The downgrade option uses the "last version you were running" so if you did an upgrade from .5 straight to .9 that's what it'll offer.
  17. Well they aren't "for sale" yet, crowdfunding just started. So they also "don't exist" as it is, although it's at least an established brand this time.
  18. Downgrade back to 6.12.8, there's a known issue about remote SMB mounts in 6.12.9.
  19. The Mover Tuning plugin allows setting exceptions
  20. Single in btrfs is between RAID0 and spanning, i.e. one big "drive" made of both devices, so both are required. Should have removed one while in RAID1 (the only config that actually tolerates a drive loss), and THEN converted afterwards. Can you still add the 2nd drive back? If so it should just work again.
  21. You have tdarr and tdarr_node accessing the same folders, one changed a file while the other was being backed up. Put both into a group so they're both stopped together.
  22. Port 80 needs to be assigned to NPM as well, otherwise you won't be able to get Let's Encrypt certs with the standard method. Just put unraid somewhere else (see the sketch after this list).
  23. Someone pointed to this: https://www.mail-archive.com/[email protected]/msg02393.html
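
For item 12 above, a minimal sketch of capping a container's RAM, assuming a plain Docker setup; the container name "immich" and the 4 GiB figure are placeholders, not the actual names on that system:

```sh
# Cap an already-running container at 4 GiB so a leak can't eat the whole host.
# "immich" is a placeholder for whatever the container is actually called.
docker update --memory=4g --memory-swap=4g immich
```

On Unraid the same limit can be made persistent by adding `--memory=4g` to the container template's Extra Parameters field (advanced view).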
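
For item 13, an illustration only (not the actual "clear an array drive" script) of what the device-name change looks like: newer Unraid releases expose the array devices as /dev/mdXp1 rather than /dev/mdX, so the zeroing step needs the p1 suffix.

```sh
# Illustration only - the real script does much more (checks, marker file, etc.).
disk=3   # array slot being cleared, example value

# Old-style reference:
#   dd bs=1M if=/dev/zero of=/dev/md${disk} status=progress
# Updated reference:
dd bs=1M if=/dev/zero of=/dev/md${disk}p1 status=progress
```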
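
For items 5 and 15, a quick way to see which xz/liblzma version is actually installed (the backdoored releases were 5.6.0/5.6.1); the container name below is just an example and it's assumed the container ships the xz binary:

```sh
# On the Unraid host - prints both the xz tool and liblzma versions:
xz --version

# Inside an Arch-based binhex container ("binhex-sabnzbd" is only an example):
docker exec binhex-sabnzbd xz --version
# or ask the package manager instead:
docker exec binhex-sabnzbd pacman -Q xz
```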
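
For item 22, a sketch of the port layout, assuming the jc21 Nginx Proxy Manager image and example appdata paths; Unraid's own web UI has to be moved off ports 80/443 first (Settings > Management Access) so NPM can take them:

```sh
# Sketch only - container name and host paths are examples.
# 80  -> Let's Encrypt HTTP-01 challenges and plain HTTP traffic
# 443 -> HTTPS traffic that NPM proxies to your apps
# 81  -> NPM's own admin UI
docker run -d --name npm \
  -p 80:80 -p 443:443 -p 81:81 \
  -v /mnt/user/appdata/npm/data:/data \
  -v /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest
```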