wheel

Members · 206 posts

  1. Good to know! My SMB speeds using Windows are usually a third or less of what Krusader was giving me, so the Unassigned Devices trick might be what I'm looking for, especially if I can just mount individual disks as shares with a little work (my organizing setup is disk-dependent on one of the towers I regularly move new files onto). (See the mount sketch after this list.)
  2. Any updates on a replacement tower-to-tower transfer alternative? Krusader finally started crapping out on me (won't even start now with a noVNC webutil.js error I can't find any details for online, and MC doesn't seem to want to see my separate server address for transfers between two separate hardware systems…). (See the rsync sketch after this list.)
  3. So I've been using Krusader for ages without making any active changes, but I'm running into this error today:
     noVNC encountered an error:
     http://(SERVER):6080/app/webutil.js
     readSetting@http://(SERVER):6080/app/webutil.js:150:30
     initSetting@http://(SERVER):6080/app/ui.js:710:27
     initSettings@http://(SERVER):6080/app/ui.js:131:12
     start@http://(SERVER):6080/app/ui.js:57:12
     prime/<@http://(SERVER):6080/app/ui.js:45:27
     I was running 6.8.3, so I upgraded to 6.9.3; same error. I ran a force update on binhex-krusader; same error. Has anyone else run into this? I was using it to transfer files less than a week ago, and absolutely haven't changed anything since (outside of running a successful parity check yesterday). Any ideas on ways to resolve would be greatly appreciated! tower3-diagnostics-20220510-1114.zip
  4. Sorry for the incredibly late response - week totally ran away from me with work. Diagnostics attached; never enabled full disk encryption. CPU's an AMD Phenom II X4 820 @ 2800 MHz. Based on a past conversation (which I'm having an incredibly hard time finding right now), I upgraded my CPU to the best-case scenario for my motherboard, and was told it was just a band-aid improvement, as the CPU is the bottleneck and in order to jump up a level in parity-check speed, I'd need to upgrade my motherboard, too. Serious apologies for not including all of this information in the original post!
     EDIT: I'm thinking these are the relevant syslog sections:
     Nov 19 20:20:12 Tower2 kernel: raid6: sse2x1 gen() 3644 MB/s
     Nov 19 20:20:12 Tower2 kernel: raid6: sse2x1 xor() 3646 MB/s
     Nov 19 20:20:12 Tower2 kernel: raid6: sse2x2 gen() 5785 MB/s
     Nov 19 20:20:12 Tower2 kernel: raid6: sse2x2 xor() 6341 MB/s
     Nov 19 20:20:12 Tower2 kernel: raid6: sse2x4 gen() 6777 MB/s
     Nov 19 20:20:12 Tower2 kernel: raid6: sse2x4 xor() 3230 MB/s
     tower2-diagnostics-20211123-0735.zip
  5. Theoretically a simple question, but having found tons of options (most outdated / sold out / no longer being made) through post searching, I figured I'd ask a brand new question in hopes of a November 2021 answer in time for Black Friday:
     I have a "save and forget" media tower which does absolutely nothing outside of holding drives. No docker, no apps, no cache disk. But the motherboard (AM3 AMD 880G SATA 6Gb/s ATX ECS A885GM-A2) doesn't support strong enough CPUs for my dual parity checks to take less than ~3 days (18 data disks spread across WD 8TBs and 12TBs, mostly an even split of EMAZ and EDFZ, and two EDAZs - they're mostly connected to a pair of Genuine LSI 6Gbps SAS HBA LSI 9211-8i P20 IT Mode Low Profile cards, which I definitely don't want to replace).
     I'm finally ready to upgrade that motherboard, and I'm *guessing* my decade-old 2GB of RAM that's been serving my needs well (Crucial 240-Pin DDR3 SDRAM 1333 PC310600) should probably be upgraded to ECC, but if I wanted to max out my parity check speeds as cheaply as possible without hunting old hardware trade boards or dealing with eBay trust issues, does anyone have any readily-retail-available bang-for-buck suggestions for upgrading that old motherboard (and necessarily the CPU, from everything I've read)? If the old-slot RAM works, all the better, but presuming my low-demand needs don't need more than 2GB anyway, a bonus ECC upgrade to match a new motherboard hopefully won't break the bank.
     Thanks so much in advance for any ideas or guidance on this overwhelming shopping endeavor!
  6. Following a lot of mid-pandemic work on my unRAID towers, I've reached a point where I'm pretty comfortable I've done all I can do to ensure against catastrophic failure: finally converted all my ReiserFS drives to XFS, got everything protected by dual parity, resolved a bunch of temperature issues. One thing bugs me, though: two of these 21-drive towers (and one 13-drive tower) are about a decade (and about 7 years) old, and I keep reading snippets of "well, unless your PSU fails and takes out everything at once" in unrelated threads that, combined with the "capacity reduces by 10% or so yearly up to a point" adage, have me thinking I may be dancing on thin ice with all three of these PSUs currently.
     What gives me pause on replacing all three (or at least the pair of ~decade-old ones) immediately is the weird use case of unRAID (or maybe just mine specifically). All three of these towers were designed for their unRAID WORM purpose, and none of their parts had any previous life. Am I being extremely paranoid, or is replacement at this point a prudent idea? There have definitely been times (months, even years) where one tower or the other has not been powered on at all, or has seen extremely minimal use (90% idle time when powered on). Could these use situations mitigate the normal "danger zone" timeline on replacing a PSU? …or not enough to ease larger concerns on something like built-up fan dust congealing and overheating the PSU regardless of how long it's actually been in operation (and at what level of effort)?
     Any guidance on how concerned I should be (and how swiftly I should replace what I have) would be greatly appreciated!
     PSU/System Age Specifics (all drives 3.5", 5700-7200 RPM):
       • The 2011 21-disk tower is running on a Corsair Enthusiast Series TX650 ATX12V/EPS12V 80 Plus Bronze, purchased in 2011
       • The 2012 21-disk tower is running on a Corsair Enthusiast Series TX650 ATX12V/EPS12V 80 Plus Bronze, purchased in 2012
       • The 2015 13-disk tower is running on a Corsair RM Series 650 Watt ATX/EPS 80PLUS Gold-Certified Power Supply - CP-9020054-NA RM650, purchased in 2015
  7. Damn, it does: all five drives are in the same 5x3 Norco SS500 hot swap rack module (from 2011, so... damn). Since the tower is four of those Norco SS500s stacked on top of each other, I'm going to need to find a basically identically-sized 5x3 hotswap cage replacement if something's dying in that one, and I'm not having any luck with a quick search this morning. Might start up a thread in the hardware section if replacement's my solution. I'm guessing that with the SS500 as the most likely culprit for power issues, there's no need for me to run extended SMART tests on the 4 drives throwing up errors, but are there any other preventative measures I can take while figuring out the hotswap cage replacement situation? My gut's telling me it's best just to keep the whole thing powered down for now, but that's a massive pain for family reasons. Thank you for confirming it's likely a power issue (connections feel way less likely considering the age involved, but I might try replacing the hotswap cage's cables first just to be safe). Any other suggestions to make sure I really need to replace this thing before I put the effort into trying to replace it would be really helpful!
  8. All four drives are Seagate SMR 8TB drives, which, considering the whole hard drive pricing thing going on for larger-sized drives, has me mildly concerned. It just feels like it's a cabling issue with 4 closely-related drives throwing up the same issue at the same time, but all of my drives are in 5-drive cages, so it feels weird seeing 4 vs 5 (though it could definitely be just the connection of those 4 drives to my LSI card - I've never seen this sort of multi-drive issue in almost 10 years of operation). Diagnostics attached, because I'm scared to touch a damn thing at this point until someone looks at what I've got going on. Thank you in advance for any guidance provided!
     Edit: just checked age on the drives, and three may have been purchased around the same time (around 2 years of power-on time), but one's less than a year old and definitely from a different purchase batch.
     Edit 2: Based on other threads I just checked, I went ahead and ran a short SMART test on each of the 4 affected drives (see the smartctl sketch after this list). Updated diagnostics file attached. tower-diagnostics-20210513-1436.zip
  9. Possibly a random question with a stupid easy answer for a competent Linux head, but I've been searching for hours with no luck: is there an easy way in the GUI (or terminal) to determine which disks (by any unique identifier - I think I could reverse engineer the info I need from there) are being read directly through the SATA ports on my motherboard vs. the ones plugged into my LSI cards?
     When I initially set up this box (with 4 stacked 5-in-3 Norco hotswap cages), I wasn't paying attention to which cable ports on the back were associated with which drives (in a left-to-right order), and when I compare it to another box using the same Norco hotswap cages, I've realized they probably changed production between my building the two boxes (both hotswap cage sets are SS-500s, but have different port layouts on the back and different light colors up front), and online instructions aren't really helping now. So my initial plan of just tracing the motherboard-connected plugs to the hotswap cage cable ports fell flat, and now I'm just trying to determine which of these swap cage trays are the ones connected directly to the board so I can use them for Parity drives specifically (as parity's taking forever on this system and I'm following all the steps for even marginal improvement).
     Is there an easy way to just see which disks are on SATA1/2/3/4 (the four ports I have on the motherboard) and which are running through the LSIs (16 out of 20)? (See the by-path sketch after this list.) Thanks for any help, and sorry if this is the dumbest question I've ever asked on here. Always appreciate the assistance!
  10. I was kind of hoping that’d be the case, but felt like it’d be safest to check when playing with Parity on a massive array I haven’t moved to dual Parity yet. Thanks for the help!
  11. Same situation as OP, but I'm physically moving my Parity disk to a slot currently holding a data disk. Just completed an unrelated Parity check, so timing seems perfect. Anything I need to do differently, or does the swap disks / new config / re-order in GUI / trust Parity approach work just as simply for (single) Parity in 6.8.3? Thanks for any guidance!
  12. Yeah, I'm just reading tea leaves at this point and hoping there's something obvious I'm missing. I have at least two (could be three in a couple of days) theoretically fine 8TBs ready to roll, and the original 6tb that was throwing up errors (which may have nothing to do with the disk, now) before the rebuild. GUI shows the rebuild ("Read-Check" listed) as paused. I'm guessing my next steps without a free slot to try are going to be:
       • Cancel rebuild ("Read Check").
       • Stop array, power down.
       • Place (old 6tb? another different 8tb?) into Disk 12 slot.
       • Try a rebuild again today (since I'm guessing unraid trying to turn the old 6tb into an 8tb but failing mid-rebuild means I can't simply re-insert the old 6tb and have unraid automatically go back to the old configuration?).
     Any reasons why I shouldn't, other than the fact that I'm playing with fire again with another disk potentially dying while I'm doing all these rebuilds? I'm starting to think my only options are firedancing or waiting who knows how long for an appropriate hotswap cage replacement and crossing my fingers that I'll physically rebuild everything fine (and I'm almost more willing to lose a data disk's data than risk messing up my entire operation).
  13. Unfortunately not - it's an old box (first built in 2011, I want to say?), four 5-slot Norco SS-500 hotswap cages stacked on each other in the front. Nothing ever really moves around behind the cages, and the only cable movement I can recall since I first built it was unplugging/replugging the cages' breakout cables when replacing the Marvell cards with LSIs back in December (and these issues with disk 12 started occurring maybe a quarter of a year later). The hotswap cage containing Disk 12's slot is the second up from the bottom, and could be a massive pain to replace (presuming I can find a replacement of such an old model, or one that doesn't mess up the physical spacing of the other 3 hotswap cages).
     Edit 2: any chance the rebuild stopping at *exactly* 6tb could be significant? Feels like a bizarre coincidence.
  14. Soooooo something may be up with the Disk 12 slot. That 6tb couldn't finish an extended SMART test, so I dropped what I was pretty sure was a fine 8TB (precleared and SMART ok after being used in another box for a couple of years) into the slot for the rebuild. Had a choice between using an SMR Seagate and a CMR WD and used the WD. The rebuild was, interestingly, exactly 75% complete (right at the 6tb mark) when the new 8tb in the Disk 12 slot started throwing up 1024 read errors and got disabled. My instinct's to throw another 8TB spare in the slot and try it again, but something feels weird, so here's the diagnostics. Am I reaching the point where something's likely wrong with the hotswap cage and I'm going to need to buy a replacement for that whole thing? tower-diagnostics-20200605-0534.zip
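
Mount sketch (re: post 1): a rough idea of what mounting another tower's individual disk share looks like from the terminal; Unassigned Devices does essentially this through its GUI. The tower name, share name, credentials, and paths below are placeholders, not details from the posts above.

    # Assumes the other tower exports its disks as SMB disk shares (TOWER2/disk3 are placeholders).
    mkdir -p /mnt/remotes/TOWER2_disk3
    mount -t cifs -o username=me,password=secret,vers=3.0 //TOWER2/disk3 /mnt/remotes/TOWER2_disk3
    # Copy onto a specific local disk, then unmount when finished.
    rsync -a --progress /mnt/remotes/TOWER2_disk3/Media/ /mnt/disk5/Media/
    umount /mnt/remotes/TOWER2_disk3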
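
rsync sketch (re: post 2): one commonly suggested Krusader alternative for tower-to-tower transfers is plain rsync over SSH between the two boxes, no container involved. A minimal sketch, assuming SSH is enabled on the destination tower; the hostname tower2 and the paths are placeholders.

    # Run from the source tower; pushes a folder to the other box over SSH.
    rsync -avh --progress /mnt/disk4/Media/ root@tower2:/mnt/user/Media/
    # Add -n (dry run) first to preview what would be transferred:
    rsync -avhn /mnt/disk4/Media/ root@tower2:/mnt/user/Media/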
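
smartctl sketch (re: post 8): the terminal equivalent of the short SMART tests mentioned there looks roughly like this; /dev/sdX is a placeholder for each affected drive.

    # Start a short self-test on one drive (repeat per disk; typically takes a few minutes).
    smartctl -t short /dev/sdX
    # Review attributes and the self-test log afterwards.
    smartctl -a /dev/sdX
    # If a drive behind an HBA hides its SMART data, forcing SAT passthrough can help:
    smartctl -d sat -a /dev/sdX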
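
By-path sketch (re: post 9): one way to tell motherboard-SATA drives from HBA-attached ones is the /dev/disk/by-path symlinks, which encode the controller and port each drive is reached through. A sketch assuming a standard Unraid terminal; onboard AHCI ports generally appear as ...-ataN entries, while drives behind LSI SAS HBAs show ...-sas-... paths.

    # List drives by controller path (partition entries filtered out for readability).
    ls -l /dev/disk/by-path/ | grep -v part
    # Cross-reference the sdX names with the serial numbers shown in the Unraid GUI.
    ls -l /dev/disk/by-id/ | grep -v part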