wheel

Everything posted by wheel

  1. Nope; they'd get lost in my flood of emails. I run dual parity, check parity monthly, keep an eye on my dashboard(s) multiple times a day, and could easily replace almost everything I have in the event of a total catastrophic data loss from missing notifications. I'm probably not the standard unraid user, but unraid has worked beautifully for my needs for about a decade now. "Easy Preclear" stress tests have absolutely been a part of that, for me.
  2. Yeah, and I vaguely remember it working for stuff when I did... but I want to say that was towards the beginning of the pandemic? I'm very much a vanilla-unraid user. I don't think I've clicked the Apps button at all since I first set it up way back when.
  3. Originally posted in General Support as directed in the "120220-fix-common-problems-more-information/page/2/?tab=comments#comment-1101084" instructions, but just got directed back here: I've been a "set and forget" unraid user for years, so the "new" (I totally know it's not new, but still) method of running stuff on top of vanilla unraid through Community Applications is something I haven't messed with much at all... until this week, at which point I'm beating my head against the wall trying to get it to connect through the internet. For some reason, my preclear option disappeared after an update applied sometime between February or March of this year and now. I'd previously been using the gfjardim plugin, and clicking preclear through the main unraid array menu. Now, I'm trying to "get preclear back" as an option (days are ticking away on purchase warranties), and it's looking like the only way to do that these days is through the Community Apps' Preclear for Unassigned Devices app. So far, so good. I cannot for the life of me get Community Apps to connect. All the servers (github, amazonaws) listed in the main tech support page for Community Apps show zero issues, so it's definitely a problem on my end. I've configured network settings to include a pair of OpenDNS servers as suggested in these forums. Not using a proxy or PiHole. [A basic DNS/HTTPS reachability check sketch follows this post list.] Diagnostics are attached. Honestly, if there's just a backdoor way of me getting the ability to preclear my disks again without needing to connect to Community Apps, I'd actually just take that route, but if the only way to get preclearing again is to get Community Apps connecting, I'd greatly appreciate any help anyone can give me in making that happen before my preclear schedule starts bumping up against return deadlines! Thanks in advance, beyond words! tower3-diagnostics-20220722-0537.zip
  4. Thanks, wgstarks! I'll post in that thread (was posting here versus there following the instructions in the link on the CA connection error page, actually). I'm 100% just using preclear for stress testing new drives, but timing-wise, I'm getting extremely close to just throwing these particular new drives at one of my Windows machines and running badblocks or whatever the best-practice software these days is instead. [A hedged badblocks example follows this post list.] Spending like a decade using preclear as my "don't need to worry as much about RMA'ing this drive in the near future versus just returning it to a retailer in the returns window" insurance policy became a mental security blanket, though, so I'd love to get the functionality back in my unraid box if at all possible. It's just nuts that I'm at an unraid version level that can't manually install the old plugin (just a hair too new), I can't connect through community applications to automatically install the new app, and I'm getting errors trying to install even the binhex-preclear docker container... seriously about to just walk away for a bit for sanity's sake. Hopefully this post (and the one I'm about to make in the CA support thread) leads to some ideas out of this maze! Thank you again for the response.
  5. I've been a "set and forget" unraid user for years, so the "new" (I totally know it's not new, but still) method of running stuff on top of vanilla unraid through Community Applications is something I haven't messed with much at all... until this week, at which point I'm beating my head against the wall trying to get it to connect through the internet. For some reason, my preclear option disappeared after an update applied sometime between February or March of this year and now. I'd previously been using the gfjardim plugin, and clicking preclear through the main unraid array menu. Now, I'm trying to "get preclear back" as an option (days are ticking away on purchase warranties), and it's looking like the only way to do that these days is through the Community Apps' Preclear for Unassigned Devices app. So far, so good. I cannot for the life of me get Community Apps to connect. All the servers (github, amazonaws) listed in the main tech support page for Community Apps show zero issues, so it's definitely a problem on my end. I've configured network settings to include a pair of OpenDNS servers as suggested in these forums. Not using a proxy or PiHole. Diagnostics are attached. Honestly, if there's just a backdoor way of me getting the ability to preclear my disks again without needing to connect to Community Apps, I'd actually just take that route, but if the only way to get preclearing again is to get Community Apps connecting, I'd greatly appreciate any help anyone can give me in making that happen before my preclear schedule starts bumping up against return deadlines! Thanks in advance, beyond words! tower3-diagnostics-20220722-0537.zip
  6. Good to know! My SMB speeds using Windows are usually a third or less of what Krusader was giving me, so the Unassigned Devices trick might be what I'm looking for, especially if I can just mount individual disks as a share with a little work (organizing setup is disk-dependent on one of the towers I regularly move new files onto).
  7. Any updates on a replacement tower-to-tower transfer alternative? Krusader finally started crapping out on me (won't even start now with a noVNC webutil.js error I can't find any details for online), and MC doesn't seem to want to see my separate server address for transfers between two separate hardware systems… [A hedged rsync-over-SSH sketch follows this post list.]
  8. So I've been using Krusader for ages without making any active changes, but I'm running into this error today: noVNC encountered an error: http://(SERVER):6080/app/webutil.js readSetting@http://(SERVER):6080/app/webutil.js:150:30 initSetting@http://(SERVER):6080/app/ui.js:710:27 initSettings@http://(SERVER):6080/app/ui.js:131:12 start@http://(SERVER):6080/app/ui.js:57:12 prime/<@http://(SERVER):6080/app/ui.js:45:27 I was running 6.8.3, so I upgraded to 6.9.3; same error. I ran a force update on binhex-krusader; same error. Has anyone else run into this? I was using it to transfer files less than a week ago, and absolutely haven't changed anything since (outside of running a successful parity check yesterday). Any ideas on ways to resolve would be greatly appreciated! [A quick container-log check sketch follows this post list.] tower3-diagnostics-20220510-1114.zip
  9. Sorry for the incredibly late response - the week totally ran away from me with work. Diagnostics attached; never enabled full disk encryption. CPU's an AMD Phenom II X4 820 @ 2800 MHz. Based on a past conversation (which I'm having an incredibly hard time finding right now), I upgraded my CPU to the best-case scenario for my motherboard, and was told it was just a band-aid improvement, as the CPU is the bottleneck and in order to jump up a level in parity-check speed, I'd need to upgrade my motherboard, too. Serious apologies for not including all of this information in the original post! EDIT: I'm thinking these are the relevant syslog sections: Nov 19 20:20:12 Tower2 kernel: raid6: sse2x1 gen() 3644 MB/s Nov 19 20:20:12 Tower2 kernel: raid6: sse2x1 xor() 3646 MB/s Nov 19 20:20:12 Tower2 kernel: raid6: sse2x2 gen() 5785 MB/s Nov 19 20:20:12 Tower2 kernel: raid6: sse2x2 xor() 6341 MB/s Nov 19 20:20:12 Tower2 kernel: raid6: sse2x4 gen() 6777 MB/s Nov 19 20:20:12 Tower2 kernel: raid6: sse2x4 xor() 3230 MB/s [A one-liner for pulling these raid6 benchmark lines from the syslog follows this post list.] tower2-diagnostics-20211123-0735.zip
  10. Theoretically a simple question, but having found tons of options (most outdated / sold out / no longer being made) through post searching, I figured I'd ask a brand new question in hopes of a November 2021 answer in time for Black Friday: I have a "save and forget" media tower which does absolutely nothing outside of holding drives. No docker, no apps, no cache disk. But the motherboard (AM3 AMD 880G SATA 6Gb/s ATX ECS A885GM-A2) doesn't support strong enough CPUs for my dual parity checks to take less than ~3 days (18 data disks spread across WD 8TBs and 12TBs, mostly even split of EMAZ and EDFZ, and two EDAZs - they're mostly connected to a pair of Genuine LSI 6Gbps SAS HBA LSI 9211-8i P20 IT Mode Low Profile cards, which I definitely don't want to replace). I'm finally ready to upgrade that motherboard, and I'm *guessing* my decade-old 2GB of RAM that's been serving my needs well (Crucial 240-Pin DDR3 SDRAM 1333 / PC3-10600) should probably be upgraded to ECC, but if I wanted to max out my parity check speeds as cheaply as possible without hunting old hardware trade boards or dealing with eBay trust issues, does anyone have any readily-retail-available bang-for-buck suggestions for upgrading that old motherboard (and necessarily the CPU, from everything I've read)? If the old-slot RAM works, all the better, but presuming my low-demand needs don't need more than 2GB anyway, a bonus ECC upgrade to match a new motherboard hopefully won't break the bank. Thanks so much in advance for any ideas or guidance on this overwhelming shopping endeavor!
  11. Following a lot of mid-pandemic work on my unRAID towers, I’ve reached a point where I’m pretty comfortable I’ve done all I can do to guard against catastrophic failure: finally converted all my ReiserFS drives to XFS, got everything protected by dual parity, resolved a bunch of temperature issues. One thing bugs me, though: two of these 21-drive towers (and one 13-drive tower) are about a decade (and about 7 years) old, and I keep reading snippets of “well, unless your PSU fails and takes out everything at once” in unrelated threads that, combined with the “capacity reduces by 10% or so yearly up to a point” adage, have me thinking I may be dancing on thin ice with all three of these PSUs currently. What gives me pause on replacing all three (or at least the pair of ~decade old ones) immediately is the weird use case of UnRAID (or maybe just mine specifically). All three of these towers were designed for their UnRAID WORM purpose, and none of their parts had any previous life. Am I being extremely paranoid, or is replacement at this point a prudent idea? There have definitely been times (months, even years) where one tower or the other has not been powered on at all, or has seen extremely minimal use (90% idle time when powered on). Could these use situations mitigate the normal “danger zone” timeline on replacing a PSU? …or not enough to ease larger concerns on something like built-up fan dust congealing and overheating the PSU regardless of how long it’s actually been in operation (and at what level of effort)? Any guidance on how concerned I should be (and how swiftly I should replace what I have) would be greatly appreciated! PSU/System Age Specifics (all drives 3.5”, 5700-7200 RPM): The 2011 21-disk tower is running on a Corsair Enthusiast Series TX650 ATX12V/EPS12V 80 Plus Bronze, purchased in 2011. The 2012 21-disk tower is running on a Corsair Enthusiast Series TX650 ATX12V/EPS12V 80 Plus Bronze, purchased in 2012. The 2015 13-disk tower is running on a Corsair RM Series 650 Watt ATX/EPS 80PLUS Gold-Certified Power Supply - CP-9020054-NA RM650, purchased in 2015.
  12. Damn, it does: all five drives are in the same 5x3 Norco SS500 hot swap rack module (from 2011, so... damn). Since the tower is four of those Norco SS500s stacked on top of each other, I'm going to need to find a basically identically-sized 5x3 hotswap cage replacement if something's dying in that one, and I'm not having any luck with a quick search this morning. Might start up a thread in the hardware section if replacement's my solution. I'm guessing with the SS500 as the most likely culprit for power issues, there's no need for me to run extended SMART tests on the 4 drives throwing up errors, but are there any other preventative measures I can take while figuring out the hotswap cage replacement situation? My gut's telling me it's best just to keep the whole thing powered down for now, but that's a massive pain for family reasons. Thank you for confirming it's likely a power issue (connections feel way less likely considering the age involved, but might try and replace the hotswap cage's cables first just to be safe). Any other suggestions to make sure I really need to replace this thing before I put the effort into trying to replace it would be really helpful!
  13. All four drives are Seagate SMR 8TB drives, which, considering the whole hard drive pricing thing going on for larger-sized drives, has me mildly concerned. It just feels like it's a cabling issue with 4 closely-related drives throwing up the same issue at the same time, but all of my drives are in 5-drive cages, so it feels weird seeing 4 vs 5 (though it could definitely be just the connection of those 4 drives to my LSI card; I've never seen this sort of multi-drive issue in almost 10 years of operation). Diagnostics attached, because I'm scared to touch a damn thing at this point until someone looks at what I've got going on. Thank you in advance for any guidance provided! Edit: just checked age on the drives, and three may have been purchased around the same time (around 2 years of power-on time), but one's less than a year old and definitely from a different purchase batch. Edit 2: Based on other threads I just checked, I went ahead and ran a short SMART test on each of the 4 affected drives. [A hedged smartctl example follows this post list.] Updated diagnostics file attached. tower-diagnostics-20210513-1436.zip
  14. Possibly a random question with a stupid easy answer for a competent Linux head, but I’ve been searching for hours with no luck: Is there an easy way in the GUI (or terminal) to determine which disks (by any unique identifier, I think I could reverse engineer the info I need from there) are being read directly through the SATA ports on my motherboard vs. the ones plugged into my LSI cards? When I initially set up this box (with 4 stacked 5-in-3 Norco hotswap cages), I wasn’t paying attention to which cable ports on the back were associated with which drives (in a left to right order), and when I compare it to another box using the same Norco hotswap cages, I’ve realized they probably changed production between my building the two boxes (both hotswap cage sets are SS-500s, but have different port layouts on the back and different light colors up front) and online instructions aren’t really helping now. So my initial plans of just tracing the motherboard-connected plugs to the hotswap cage cable port fell flat, and now I’m just trying to determine which of these swap cage trays are the ones connected directly to the board so I can use them for Parity drives specifically (as parity’s taking forever on this system and I’m following all the steps for even marginal improvement). Is there an easy way to just see which disks are on SATA1/2/3/4 (the four ports I have on the motherboard) and which are running through the LSIs (16 out of 20)? [A /dev/disk/by-path sketch follows this post list.] Thanks for any help, and sorry if this is the dumbest question I’ve ever asked on here. Always appreciate the assistance!
  15. I was kind of hoping that’d be the case, but felt like it’d be safest to check when playing with Parity on a massive array I haven’t moved to dual Parity yet. Thanks for the help!
  16. Same situation as OP, but I’m physically moving my Parity Disk to a slot currently holding a data disk. Just completed an unrelated Parity check, so timing seems perfect. Anything I need to do differently, or does swap disks / new config / re-order in GUI / trust Parity work just as simply for (single) Parity in 6.8.3? Thanks for any guidance!
  17. Yeah, I'm just reading tea leaves at this point and hoping there's something obvious I'm missing. I have at least two (could be three in a couple of days) theoretically fine 8TBs ready to roll, and the original 6tb that was throwing up errors (which may have nothing to do with the disk, now) before the rebuild. GUI shows the rebuild ("Read-Check" listed) as paused. I'm guessing my next steps without a free slot to try are going to be: Cancel rebuild ("Read Check"). Stop array, power down. Place (old 6tb? another different 8tb?) into Disk 12 slot. Try a rebuild again today (since I'm guessing unraid trying to turn the old 6tb into an 8tb but failing mid-rebuild means I can't simply re-insert the old 6tb and have unraid automatically go back to the old configuration?) Any reasons why I shouldn't other than the fact that I'm playing with fire again with another disk potentially dying while I'm doing all these rebuilds? I'm starting to think my only options are firedancing or waiting who knows how long for an appropriate hotswap cage replacement and crossing my fingers that I'll physically rebuild everything fine (and I'm almost more willing to lose a data disk's data than risk messing up my entire operation).
  18. Unfortunately not - it's an old box (first built in 2011, I want to say?), four 5-slot Norco SS-500 hotswap cages stacked on each other in the front. Nothing ever really moves around behind the cages, and the only cable movement that I can recall since I first built it was when unplugging/replugging the cage's breakout cables when replacing the Marvell cards with LSIs back in December (and these issues with disk 12 started occurring maybe a quarter of a year later). The hotswap cage containing Disk 12's slot is the second up from the bottom, and could be a massive pain to replace (presuming I can find a replacement of such an old model, or one that doesn't mess up the physical spacing of the other 3 hotswap cages). Edit 2: any chance the rebuild stopped at *exactly* 6tb could be significant? Feels like a bizarre coincidence.
  19. Soooooo something may be up with the Disk 12 slot. That 6tb couldn't finish an extended smart test, so I dropped what I was pretty sure was a fine 8TB (precleared and SMART ok after being used in another box for a couple of years) into the slot for the rebuild. Had a choice between using an SMR Seagate and a CMR WD and used the WD. Rebuild was interestingly exactly 75% complete (right at the 6tb mark) and the new 8tb in the Disk 12 slot started throwing up 1024 read errors and got disabled. My instinct's to throw another 8TB spare in the slot and try it again, but something feels weird, so here's the diagnostics. Am I reaching a point where something's likely wrong with the hotswap cage and I'm going to need to buy / replace that whole thing again? tower-diagnostics-20200605-0534.zip
  20. OK, running extended test now - hate that it's consistently throwing up errors and need to replace a 6tb soon anyway, but definitely don't want to throw out disks unnecessarily during what could be a weird economic time for getting new disks. Thanks for the quick response!
  21. The sync (vs disk) correcting parity check was a total brain fart on my end, and I'm hoping it turned out okay (no error messages but I'll go back to check the underlying data as soon as I can). I was just writing to disk 12 and the GUI threw up a read error, so I immediately pulled diagnostics to send here. I have a precleared 8tb spare ready to replace Disk 12's 6tb, and I'm leaning towards just shutting down and throwing that thing in there to start a Disk 12 rebuild/upgrade now - any reasons I shouldn't do that in terms of better-safe-than-sorry? Thanks for all the guidance! tower-diagnostics-20200604-1054.zip
  22. Weird Disk 12 happenings again. I had an unclean shutdown with someone accidentally hitting the power button on my UPS that powered two unraid boxes. One booted back up and prompted me to parity check. One (this one) weirdly gave me the option for a clean shutdown, which I took, then started back up. No visible issues, but felt paranoid, so ran a non-correcting parity check before modifying any files. ~200 read errors on Disk 12. Ran correcting parity check. Tried collecting diagnostics at every possible opportunity to help see if anything weird turned up that anyone else might notice: 5-27: right after the "unclean" / clean shutdown; 5-29: after the non-correcting parity check; 5-30: after the correcting parity check. tower-diagnostics-20200530-2053.zip tower-diagnostics-20200529-2000.zip tower-diagnostics-20200527-1017.zip
  23. Thought I'd update in case it helps anyone else searching threads: the 3.3V tape trick worked, so I'm not sure what the root problem was, but if anyone has these drives working in some SS-500s but not others, rest assured the tape trick should work on those other SS-500 cages.
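
For the Community Applications connection problem in posts 3 and 5 above: a minimal sketch of a DNS/HTTPS reachability check that can be run from the Unraid terminal. The hostnames are illustrative examples, not an official list of CA endpoints.

```bash
# Rough connectivity check; hostnames are examples, not an official
# Community Applications endpoint list.
for host in github.com raw.githubusercontent.com s3.amazonaws.com; do
    echo "== ${host} =="
    nslookup "${host}" || echo "DNS lookup failed for ${host}"
    # Print only the HTTP status line; a timeout here points at a local network/DNS issue.
    curl -sIL --max-time 10 "https://${host}" | head -n 1
done
```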
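
Post 4 mentions falling back to badblocks for stress-testing new drives. A hedged example of a destructive write-mode pass; /dev/sdX is a placeholder, and the test wipes the disk, so the device name needs to be triple-checked first.

```bash
# Destructive read/write surface test -- this WIPES /dev/sdX completely.
# The 4096-byte block size keeps badblocks under its block-count limit on large (8TB+) drives.
badblocks -wsv -b 4096 /dev/sdX
```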
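
On the tower-to-tower transfer question in post 7: one common Krusader alternative is rsync over SSH between the two Unraid boxes. The hostname and share paths below are placeholders, and SSH has to be enabled on the destination server.

```bash
# Copy a share to the other tower, preserving attributes and showing progress.
# "tower2" and the /mnt/user/Media paths are placeholders for the real host and shares.
rsync -avh --progress /mnt/user/Media/ root@tower2:/mnt/user/Media/
```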
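
For the binhex-krusader noVNC error in post 8, the container's own logs are the first place to look. A minimal sketch, assuming the container is named binhex-krusader as in the post; the supervisord log path is an assumption based on binhex's usual template.

```bash
# Show the most recent log lines from the container.
docker logs --tail 100 binhex-krusader

# binhex images typically also write a supervisord log under /config
# (path assumed; adjust if the template differs).
docker exec binhex-krusader tail -n 50 /config/supervisord.log
```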
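
The raid6 gen()/xor() lines quoted in post 9 are kernel benchmarks logged at boot; on a CPU-bound box they give a rough ceiling for parity-check throughput. To pull them straight out of the live syslog:

```bash
# List the boot-time raid6 algorithm benchmarks from the current syslog.
grep -i 'raid6' /var/log/syslog
```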
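
Post 13 mentions running short SMART tests on the four affected drives. A hedged command-line equivalent of what the GUI button does; /dev/sdX is a placeholder for each drive in question.

```bash
# Start a short self-test (usually a couple of minutes), then review the full report.
smartctl -t short /dev/sdX
# ...after the test completes:
smartctl -a /dev/sdX | less
```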
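
For the controller-mapping question in post 14: udev's by-path symlinks usually answer it without tracing any cables. A minimal sketch; it assumes the onboard ports show up as "ata" links and the LSI 9211-8i disks as SAS/PCI-addressed links, which is the typical layout.

```bash
# Disks on the motherboard's SATA ports show "-ata-N" in the path;
# disks behind the LSI HBAs show the HBA's PCI address plus "-sas-..." instead.
ls -l /dev/disk/by-path/ | grep -v part

# Tie the sdX names back to the model/serial numbers shown in the Unraid GUI.
ls -l /dev/disk/by-id/ | grep -v part
```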