another_hoarder

Everything posted by another_hoarder

  1. That was simpler than I expected and totally makes sense. Will follow this to the letter. Many thanks, @apandey 🙌
  2. Hi guys, my Unraid box is a hodge-podge of previously loved consumer hardware, which is totally fine for my needs: it runs about 25 dockers and gives me 49TB of usable NAS space on Unraid 6.9.2, with one SSD cache drive (docker + VMs) and one SSD unassigned device (dumping ground for downloads) (screenshots below). Everything has been working more or less perfectly for a long time (aside from the silly SMART wakeups that I've never been able to find a solution to, other than "it's fixed in 6.10+"). Everything is XFS. But as with everything else, I feel it's maintenance time after ~2 years of not touching this at all. Here's what I'm thinking:
     1. I got a brand new 18TB WD Gold, which I'd like to make my parity drive.
     2. My two semi-ancient 1.5TB drives get retired from the array entirely (-3TB storage).
     3. The old 14TB parity drive becomes a data drive. So, a net gain of 11TB.
     4. Upgrade from 6.9.2 to latest?
     Can someone help me with the best order of operations, or point me to some guides? I'd love to minimize the work, but ultimately I'd rather play it safe too - about 15TB of my 40TB used is backed up, but restoring the other 25TB, while not impossible, could take a very long time. I was thinking of transferring out the 3TB of data since I have the space for it (rough rsync sketch at the end of this post), removing the drives (I remember there's a way of doing that), then maybe adding the 18TB drive (do I still have to preclear?) as a 2nd parity drive, waiting for that to build, then removing the 14TB and removing the 2nd parity drive, allowing that to rebuild, then adding the 14TB drive as a data drive, and ... allowing that to rebuild. Or is there a better process? Also, upgrade before or after? Any comments would be much appreciated. Thanks!
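     For the data shuffle itself, I was planning something like the following from the console - a rough sketch only, with disk8 and disk9 standing in for whichever slots the two 1.5TB drives actually occupy, and disk3 as an example target with enough free space:
        # copy everything off the retiring disks onto a data disk with room
        rsync -avX /mnt/disk8/ /mnt/disk3/
        rsync -avX /mnt/disk9/ /mnt/disk3/
        # dry-run with checksums afterwards - if no files are listed, the copies match (repeat for disk9)
        rsync -avXnc /mnt/disk8/ /mnt/disk3/
     Going disk-to-disk rather than to /mnt/user should keep the files showing up under the same user shares once the old disks are pulled, since the shares are just the union of the individual disks.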
  3. For those with the issue triggered by the most recent update, THIS IS THE WAY until new certs are provided. Many thanks @nraygun for saving us the time. I still don't get how SHA256 or AES256 isn't safe enough anymore, but hopefully we'll all get new SHA512 certs soon and can replace our .ovpn configs for another decade of peace.
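     If you want to check what the CA cert in your current config is actually signed with before and after swapping, something like this does the trick (ca.crt standing in for wherever your provider's cert lives, or the <ca> block pasted into its own file):
        # print the signature algorithm of the CA cert bundled with the .ovpn config
        openssl x509 -in ca.crt -noout -text | grep "Signature Algorithm"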
  4. Hello Squid, While your plugin has been quietly doing its thing for a while and I only required 1 or 2 restores over the last three years (which have been flawless, thanks!) - I've now come back from a vacation to a system that may have gone through a few power surges while I was gone. All appeared fine except a dead Plex installation, and when I try restoring from a backup, nothing happens in the UI. The log shows:
     Aug 18 14:11:53 Tower atd[2834]: PAM unable to dlopen(/lib64/security/pam_unix.so): /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /lib64/libresolv.so.2)
     Aug 18 14:11:53 Tower atd[2834]: PAM adding faulty module: /lib64/security/pam_unix.so
     Aug 18 14:11:53 Tower atd[2834]: Module is unknown
     I'm on 6.9.2. Any thoughts?
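     For what it's worth, my fallback plan is to pull just the Plex folder out of the backup archive by hand rather than going through the plugin - a rough sketch only, assuming the backup is a tar at /mnt/user/Backups/CA_Backup.tar (the real path, name and compression depend on your plugin settings, so check the listing first and adjust):
        # stop the broken container first (use whatever your container is named)
        docker stop plex
        # confirm the Plex appdata folder is actually in the archive and note how its paths look
        tar -tf /mnt/user/Backups/CA_Backup.tar | grep -i plex | head
        # extract just that folder back into appdata (adjust -C if the archived paths carry a prefix)
        tar -xf /mnt/user/Backups/CA_Backup.tar -C /mnt/user/appdata/ --wildcards '*plex*'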
  5. Hi @Dustiebin Unfortunately, I did not. Doron's utility points out that the parent process is emhttpd, but I can't see anything upwards of that. I tried stopping all dockers as well to no avail, and have tried turning off plugins, also with no luck. I think I've combed through most solutions proposed for this problem (dynamix plugins, nerdpack tweaks, killing dockers, booting fresh), and nada. I'm very hopeful that 6.10.x will fix it since there are some changes to how smartctl works in that version, but so far I'm resisting the urge to upgrade given how many people seem to have had issues. Just don't have the time to play with it for the moment.
  6. Sweet, thanks @doron - looking forward to some debugging fun
  7. Ah, interesting. I was guided here through this post, which specifically referenced your script as potentially having the side superpower of being able to point out the parent process that calls smartctl as part of your wrapper's functionality. Unfortunately, I lack the skill to write a wrapper myself, so I was hoping to piggyback off someone else's abilities, even if it's a byproduct of their primary function. But if that other 'neat little wrapper' could be of some help in tracking my unending investigation into random spin up / read SMART / spin down sequences, I'd certainly be indebted. And if not, no worries, I very much appreciate your reply in the first place.
     May 24 15:55:38 Tower emhttpd: read SMART /dev/sde
     May 24 15:58:04 Tower emhttpd: read SMART /dev/sdj
     May 24 15:58:16 Tower emhttpd: read SMART /dev/sdm
     May 24 16:27:36 Tower emhttpd: spinning down /dev/sde
     May 24 16:28:09 Tower emhttpd: read SMART /dev/sdh
     May 24 16:33:05 Tower emhttpd: read SMART /dev/sdl
     May 24 16:34:02 Tower emhttpd: read SMART /dev/sdk
     May 24 16:35:41 Tower emhttpd: spinning down /dev/sdm
     May 24 16:35:41 Tower emhttpd: spinning down /dev/sdj
     May 24 16:49:39 Tower emhttpd: read SMART /dev/sde
     May 24 16:57:46 Tower emhttpd: spinning down /dev/sdh
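     In case it helps anyone following along, my (very rough) understanding of the wrapper idea is just: move the real binary aside and drop a small script in its place that logs who called it before handing over. A sketch of the concept only, not doron's actual code - on Unraid the binary sits at /usr/sbin/smartctl, and the swap would need redoing after every reboot since the OS runs from RAM:
        #!/bin/bash
        # hypothetical smartctl wrapper - assumes the real binary was first moved to /usr/sbin/smartctl.real
        # log the caller (parent PID and its command name) plus the arguments to syslog
        logger -t smartctl-wrapper "called by PID $PPID ($(ps -o comm= -p $PPID)) with args: $*"
        # then run the real smartctl with the same arguments
        exec /usr/sbin/smartctl.real "$@"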
  8. @doron Hi - what's the correct way of editing this plugin to enable debug output in syslog? I'd like to see what parent process is still driving spin ups / SMART reads on my stubborn box (6.9.2). Cheers.
  9. Any updates on this, OP or Staff? Just noticed my log went to 100% because of the very same issue. For me it happens every hour.
  10. Hi guys, are SMART checks super frequent now under 6.9.0, or is something else a likely culprit here? My log is pretty much this now - spin up, read SMART, spin down, wait 30 mins, rinse and repeat. Is that normal? I used to have disks sleep for a very long time.
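     For reference, this is roughly how I've been eyeballing the frequency - just shell one-liners against the syslog (sde is only an example device):
        # how many SMART reads and spin-downs have been logged since the last rotation
        grep -c "read SMART" /var/log/syslog
        grep -c "spinning down" /var/log/syslog
        # timestamps for a single disk, to gauge the interval
        grep "read SMART /dev/sde" /var/log/syslog | tail -n 20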
  11. Perfect, thanks @itimpi. I've started down the path of doing the parity swap + move parity to data as one step (e.g. making the two drives go blue, starting the 'Copy Parity' function). It's at 45% right now. I'm hoping that when it finishes I can just start the array and allow parity to rebuild the 1TB drive's data onto the old 4TB parity drive. Thus I'll have done the first two steps as really one step. When that's done, I can replace the last drive with the 2nd 14TB and again allow for a rebuild from parity. So potentially two steps all in all. Quick sanity check, does that look alright?
  12. Hi fellow un-raiders, I've been on 6.9.0-stable for several days without any issues (other than the AutoFan extension causing disks to not spin down) and I'm eager to do some disk shifts, for the first time ever. I'm on Unraid OS Plus, 10 disk array (9 + 1 parity) + 1 cache SSD + 1 unassigned SSD. My array has only had a few purpose-bought drives and a ton of leftovers ... so here's what I'd like to get done with two newly shucked 14TB Seagate Expansion drives:
     - replace my single parity drive (shucked 14TB replaces the 4TB WD Red)
     - replace another drive (4TB with a SMART error will get replaced by the above 4TB WD Red)
     - replace a super old drive (1TB drive, which will get replaced by the 2nd 14TB shucked drive)
     So - do I still need to preclear the new 14TBs? Within the array, or run them via basic DBAN? (A rough sketch of what I had in mind is below.) And any suggestions on getting the above accomplished without too many clicks or days spent on the maintenance? Thanks! (Visually - new 14TB #1 becomes Parity, new 14TB #2 becomes Disk 5, old Parity 4TB becomes Disk 6.)
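     For the preclear question, as an alternative to the preclear plugin, my thinking was to stress-test each new drive as an unassigned disk first - a rough sketch only, with /dev/sdX standing in for whichever letter the new drive gets (this is destructive, so triple-check the device name):
        # destructive write-and-verify pass over the whole disk (a day or more on 14TB);
        # the larger block size keeps the block count within badblocks' limits on big drives
        badblocks -wsv -b 8192 /dev/sdX
        # quick SMART sanity check afterwards
        smartctl -a /dev/sdX | grep -E "Reallocated|Pending|Uncorrect"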
  13. For anyone with the same issue, it looks like my little maintenance window overlapped with Binhex rolling out a potentially breaking change to fix a DNS leak. Issue solved after following his Q&A.
  14. Hello everyone, long time lurker, first time poster. I can usually troubleshoot my way out of most issues but I'm feeling stumped... My setup is pretty basic: Unraid Plus, 9 disks + single parity, 1-4TB in size, an SSD for cache, an SSD for unassigned devices. All leftover commodity hardware, humming along nicely for a couple of years with just 1 disk change. Used as a media server, *arr stuff, a couple of other small dockers, 1 VM for random lab stuff. Network is dirt simple: 192.168.3.0/24 subnet, the ISP router issues IPs to regular end clients from 192.168.3.20 upwards, and everything below that is static (Unraid, two Pi-holes, printer, etc.). It had been humming along perfectly, and then two days ago I had some spare cycles and decided to do some work on it:
     * I upgraded RAM from 8GB to 24GB (pre-tested with memtest for 24h)
     * I moved docker.img & config paths to /mnt/cache (just wanted a bit of a speed boost for Plex)
     * I created a 2GB tmpfs disk within the Plex docker container, mapped to /tmp (for Plex transcoding to RAM; I infrequently support 1-2 streams)
     * I enabled VLANs in Network settings (mostly a pre-emptive move as I'm moving in a month and slowly creating a new house network and wanted to see what VLANs feel like in Unraid, but this was supposed to be just for testing)
     Anyway, all changes went very smoothly, no errors, no unusual entries in the logs. The system felt snappier of course with 24GB RAM, given I'd often be at 88-95% RAM used. But then I started to get a ton of notifications from my *arr containers that they can't see each other. Well, I figured I messed something up, so I retraced my steps and changed everything back, but the problem persists. I then restored my appdata to a previous config just to be safe using CA Restore, updated the containers, and rebooted, but I'm still seeing the same problem. Docker containers can't see each other. They're still on the same subnet, still the same port assignments, still using Bridge mode. Oh, and I'm also using the Binhex Deluge container with Privoxy, and even though the container starts up and I can see OpenVPN obtain an IP, Privoxy doesn't see it and the watchdog just blinks in and out - again, after running without a hiccup for ~2 yrs.
     2021-02-27 13:09:31,761 DEBG 'watchdog-script' stdout output: [info] Attempting to start Privoxy...
     2021-02-27 13:09:31,760 DEBG 'watchdog-script' stdout output: [info] Privoxy not running
     2021-02-27 13:09:31,761 DEBG 'watchdog-script' stdout output: [info] Attempting to start Privoxy...
     2021-02-27 13:09:32,765 DEBG 'watchdog-script' stdout output: [info] Privoxy process started [info] Waiting for Privoxy process to start listening on port 8118...
     2021-02-27 13:09:32,768 DEBG 'watchdog-script' stdout output: [info] Privoxy process listening on port 8118
     2021-02-27 13:10:02,910 DEBG 'watchdog-script' stdout output: [info] Privoxy not running
     Any thoughts on how to troubleshoot what could've happened (and why undoing my steps didn't fix it)? Everything screams routing to me, but I've no idea where to really go next - a few of the commands I was planning to run are below.
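     A few of the things I was planning to poke at next, in case anyone can tell me whether I'm on the right track (the container name and host IP are just placeholders for my setup):
        # confirm the containers are still on the default bridge and note their IPs
        docker network ls
        docker network inspect bridge
        # test reachability from inside a container to the host's mapped ports
        # (192.168.3.2 standing in for the Unraid box; ping availability depends on the image)
        docker exec binhex-sonarr ping -c 3 192.168.3.2
        # check whether enabling VLANs left an extra interface or changed the bridge setup
        ip addr show
        ip link show type bridge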