johnny121b

  1. I wonder- Is that first licensee STILL using UnRAID? THAT would be quite the testament!
  2. +1 for the 'intermittently stops while unpacking' issue. For me, this is a longstanding problem- many months. I've gotten into the habit of deleting the download and later telling nzbget to 'download remaining files' to return it to the queue. Still, it stinks knowing that I MUST check the docker at least once a day, or things will pile up. I REALLY hope something shakes out on this situation soon, because I know I suck at configuring/understanding dockers. And I KNOW... that if I have to switch to another docker, I'm gonna have a devil of a time making it play nicely with binhex's sonarr. In case any info helps: I'm still running 6.3.5, but I DO keep my dockers up-to-date.
  3. Any plans for adding support for LTO backup? Speaking as a data hoarder, I know my server has grown to a size such that a catastrophic failure would be almost impossible to recover from, yet 1:1 backup options are limited IF you want LONG TERM storage. Hard drives don't last for decades in cold storage, and I suspect many of us are only one good power surge away from disaster. A UPS and power-conditioning transformer only protect you so far.
  4. Neither. I run it on a micro Dell (Inspiron 3050), whose only purpose is to back up my array. I figure- it uses almost no power, it saves me the headache of setting up a VM (which I've not proven very good at doing), doesn't impact my server's performance, plus it gave my 3050 a purpose...I had the thing, but couldn't decide how to use it. I have it physically atop my server, sharing a small, dedicated 5-port switch, so that mountain of data doesn't have to move across my entire network during backups. I start a backup (using remote desktop) then log in periodically.
  5. I'd like to leave my shares set to SECURE 99% of the time, with an easy way to allow myself access on-the-fly to edit filenames, move files, replace files, etc., but return the system to a secure state with a click. Ideally, I'd like something as simple as a button I could click that would open up write access on my /movie & /TV shares so I can do what I need to do, then just as easily click a button and return things to SECURE. -I- am the biggest security risk on my network, but -I- also need to frequently fix filenames and replace files with better copies. A click-ON click-OFF or toggle arrangement, preferably with a timeout feature, that returns to SECURE either after X minutes or whenever it detects no activity... (A rough sketch of the idea is below, after these posts.)
  6. Currently, my server is set up with most shares set to 'SECURE'. When I add a movie to my server, I begin the process by launching a batch file that creates the \movie folder on my cache drive (if it doesn't already exist), and I copy the movie directly to the cache drive. (A rough sketch of this copy step is below, after these posts.) The movie then gets moved into the array overnight by the mover script. This effectively isolates my array's movie files from ME. I figure IF I were to ever become infected with something nasty, only my cache drive contents could be at risk. This works well for single-level shares like \movies or \documentaries or \appz... but it's just not practical for \TV, which has countless sub-folders. I can't think of a way to handle that share with my current method, short of creating ALL my TV shows' folders on the cache drive every time, and that doesn't seem like an elegant way to handle this. Has anyone else found a better way to keep the server secure during the 99% of the time you're NOT manually copying/renaming/moving/interacting with the shares? (That doesn't involve manually toggling the shares' security via the http interface every time you need access.)
  7. Seems like a big step backwards (producing a kernel that only works with older/smaller drives), but I was inclined to believe the issue exists not between controller <-> drive, but between O/S <-> drive or between O/S <-> controller. That is, until your reassurance.
  8. I don't believe any incompatibility exists between the controller and my drives. Since last year's attempted update, three of my drives have been upgraded to 8TB models without issue. Out of curiosity, is there a specific reason you're still at 6.5.3? I do notice that my current 6.3.5 only tells me that 6.5.3 is available for update... wondering why it doesn't say 6.6.7. Coincidence, or is it some line in the sand for reasons I'm not aware of?
  9. I'm running 6.3.5.....mostly because I hit a brick wall w/ a 6.5.3 upgrade attempt last year. Back then, I learned the SAT2-MV8 had compatibility issues with the kernel. IS anyone successfully using the SAT2-MV8 (PCI-X) controllers in their 6.6.x system? Don't wanna go down that rabbit hole again if it's still a hopeless effort. Thanks!
  10. Something's still squirrely. The HTTP interface is unresponsive when I issue the command to take the array down, power down, or even click the HISTORY button. It just sits there. The "Uptime" counter continues to advance, despite the fact it's ignoring the commands. I'd attributed this to the failing drive yesterday, but that drive's disconnected. The entire time I've been typing this, I've had a blank 'Parity/Read-Check History' window atop my system's normal main screen, which I now cannot access. On a whim, I opened another browser and went to my server's IP (just in case Chrome's the real problem), and the IP address never answered. I was able to telnet into the server and issue a reboot command, but the reboot never happened. Going to my server's IPMI screen, I see it's stopped at the same place I've seen earlier; the last 3 lines onscreen, for reference:
      Sending all processes the SIGKILL signal.
      Saving random seed from /dev/urandom in /boot/config/random-seed.
      Turning off swap.
      And that's where it will apparently sit until I hit the server's RESET button. (At least that's where it's been for the last 10 minutes.)
      UPDATE: I tried twice to boot into my normal config and begin a rebuild; I never could take the array offline- it hung every time. Restarted into safe mode and started the rebuild. So I'm on a road with no turns for the next 2 days while it writes. I suspect I'll have other issues to address once the rebuild finishes. One fire at a time... Thanks for the response, johnnieblack
  11. Diagnostics attached. tower-diagnostics-20190422-0645.zip
  12. I'm running 6.3.5. Tonight a drive dropped out during a parity check. Unraid stopped the parity check and put a red X by the drive, listing something like 1000 errors. I thought this would be no big deal... nature's way of telling me which drive needed an upgrade, and I've replaced drives before. Things went south, however, because the GUI wasn't responding. I suspect the system was tied up by the failing drive, but I'm not really sure. (My experience with the WD click-of-death may not be applicable to Unraid.) Long story short:
      - Did an unclean power-down
      - Replaced the drive
      - Restarted
      - Drive didn't show "missing"
      - Drive showed "Not installed"
      This leads me to believe it's NOT going to rebuild the drive if I start the array. If I click the HISTORY button, it tells me the last parity check was cancelled with zero errors, and parity is listed as valid on the main screen. I wanna be very careful how I proceed here. My backup for the data on that drive is a month old, but is likely substantially correct. (I'd recently installed a new/larger drive in the array, so it'd been taking most of the write activity.) So I'm not panicking, but I don't want my ignorance to make things worse. I'd really much rather the system rebuild the drive's contents if at all possible. What's my best/safest course of action?
  13. Your tone implies this is a relatively straightforward step, but the thought of a hiccup leaving me with 700,000+ files to delete, strewn across hundreds of folders, is pretty intimidating. Any suggestions to help ensure this doesn't happen? Or some syntax to leisurely recover, if it does?
  14. I dunno how big the UnRAID market really is, but I doubt L/T is rolling in cash. I figure the user base here can probably make a bigger difference than L/T. I do know Slackware/UnRAID has been a significant part of my hobby life over these last several years. The notion that he's received nothing directly for the benefits I've enjoyed didn't seem right... so I wanted to do something beyond just money from one guy. Together, we could make a difference.
  15. I thought some of you might be interested to know- the author of Slackware is in a bad financial situation. It's my understanding that Slackware, one of the oldest maintained Linux distros, is the basis for UnRAID, and apparently it's almost a one-man operation. I've supported UnRAID by buying extra licenses in the past, and I think the author of its underpinnings is worthy of our gratitude as well. I have no connection with him in any way, and if this is inappropriate somehow, I won't be offended if my post is removed, but I think this is worth putting in front of everyone here. Forum discussion that introduced the issue to me. TLDR- see pages 1, 11, 31 (as of this writing). Another mention on distrowatch.
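
Regarding the click-ON / click-OFF idea in post 5: a minimal Python sketch of what I'm picturing, assuming a hypothetical set_share_security() helper (I don't know of a documented one-liner in Unraid for flipping a share's security, so that part is just a placeholder) and a 15-minute auto re-lock:

```python
# Sketch only: open write access on a share, then automatically return it to
# SECURE after a timeout. set_share_security() is a hypothetical placeholder;
# the real mechanism for changing an Unraid share's security is not shown here.
import threading
import time

def set_share_security(share: str, mode: str) -> None:
    """Hypothetical helper: apply 'secure' or 'writable' to the named share."""
    print(f"[{time.strftime('%H:%M:%S')}] {share} -> {mode}")

def unlock_temporarily(share: str, minutes: float = 15) -> threading.Timer:
    """Open write access and schedule an automatic return to SECURE."""
    set_share_security(share, "writable")
    relock = threading.Timer(minutes * 60, set_share_security, args=(share, "secure"))
    relock.daemon = True
    relock.start()
    return relock  # keep the handle so the re-lock can be cancelled or run early

if __name__ == "__main__":
    timer = unlock_temporarily("Movies", minutes=15)
    # ... rename / move / replace files here ...
    timer.cancel()                            # finished early: cancel the pending re-lock
    set_share_security("Movies", "secure")    # and go back to SECURE right away
```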
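
And for the copy-to-cache step in post 6: a rough sketch of what the batch file does, assuming the cache drive's movie folder is reachable over the network at something like \\TOWER\cache\Movies (the path and filename are made-up examples). It creates the folder if it's missing, copies the file there, and leaves the overnight mover to push it into the array:

```python
# Sketch only: stage a new movie on the cache drive; the mover migrates it to
# the array overnight. The UNC path below is an example -- adjust to your setup.
import shutil
from pathlib import Path

CACHE_MOVIES = Path(r"\\TOWER\cache\Movies")   # assumed network path to the cache folder

def stage_movie(source: str) -> Path:
    """Create the movies folder on the cache (if missing) and copy the file there."""
    src = Path(source)
    CACHE_MOVIES.mkdir(parents=True, exist_ok=True)   # same job as the batch file's "create if absent"
    dest = CACHE_MOVIES / src.name
    shutil.copy2(src, dest)                           # copy, preserving timestamps
    return dest

if __name__ == "__main__":
    staged = stage_movie(r"D:\rips\Some.Movie.2019.mkv")
    print(f"Staged to {staged}; the mover will pick it up overnight.")
```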