jaylossless

Everything posted by jaylossless

  1. That's a good point. I'll try replacing the power supply along with this fresh array.
  2. Sorry, by swapping I meant moving the drives around physically (different bays, not reassigning them to different disk # slots in Unraid). I believe if I were to move drives to different slots, that would invalidate the config?
  3. I'm having this issue with 12GB of RAM on the destination Unraid server. I have 64GB on the source Unraid server. It's just a simple "rsync -avXP" on 3.8TB worth of data. Oddly, when I exclude (with --exclude) the files that cause the OOM error, it runs fine for a few more hours, so it seems like this is happening with specific files rather than because of a RAM limitation. Also, when I look at the destination's RAM usage, it's only at 67% when it happens. Any clues?
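     For reference, the transfer is nothing fancy; it's roughly along these lines (the share name and hostname are placeholders, not my exact paths):
         # archive mode, preserve xattrs, show progress; skip the directory whose files trigger the OOM
         rsync -avXP --exclude='Problem Folder/' /mnt/user/Media/ root@tower2:/mnt/user/Media/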
  4. So I stopped the array and tried rearranging a few of the disks into empty physical slots (trying to rule out cables). Refreshed the page; all the drives showed up and looked good. When I started the array, two more disks got disabled. Obviously at this point I'll have to recreate the array. It seems like I should've rebooted after swapping the drive slots (I've noticed you can't always trust the "Main" view in Unraid). Not sure what the problem was. Could it have been the ReiserFS? I'm rebuilding now with XFS and no issues so far. I'll just copy the data back when it's finished. But I guess, after more than a decade, it was time for a fresh array.
  5. Sorry, it was a 600W: https://www.newegg.com/silverstone-strider-essential-series-st60f-es-600w/p/N82E16817256071# So "stop" = "cancel"?
  6. So I resumed, and the read errors kept going up for disks 1, 2, and 4. Obviously, something is seriously wrong. Any idea why this happens during rebuilds but not when the array is simply started? There were no hints of any issues prior to adding parity #2. Am I able to pull drives out while paused and check which is which, to see if they're connected to the 640L? Or do I have to stop the array? If I do have to stop the array... is there a difference between stopping the array and cancelling the parity rebuild?
  7. Thank you for your quick response. They're all ReiserFS because that's how my original array was (back in 2007?), and I've just been upgrading ever since.
     That's the thing: Disk 11 wasn't in the Unassigned Devices section until I paused and restarted the rebuild. I never mounted it under Unassigned Devices. The last time Disk 11 was disabled, I formatted and rebuilt it (this was a few weeks ago), performed a filesystem check, and it seemed fine. I know for sure Disk 4 didn't have any filesystem errors prior to this... so I'm not sure why all the read errors.
     I've been running this Silverstone 700 watt power supply for a while; it was able to keep up with 15 drives for over a decade? Hm, I've used that 640L for a while with no issues, but this might be the culprit. My gut says it's hardware as well.
     I've attached an image of a directory on Disk 2. Looks like filesystem corruption... but some directories are accessible.
         bash: cd: Zootopia (2016): Permission denied
     So what would be my options at this point? Should I just let it run and hope for the best? What's kind of scary is that Disk 11 got disabled, so... I can't rebuild that disk AND build parity #2 at the same time. But... if I were to stop the array or cancel the parity rebuild... then parity is 100% invalid... and I won't be able to rebuild Disk 11... correct? Or is it not too late, and the first parity disk has enough information to rebuild Disk 11, even though parity 2 is incomplete?
     At this point, I can't even back up the shares (Permission denied, read only). I'm wondering... if I stop the array now, will these directories suddenly become accessible, or will it be more of the same story? Also, would I be able to unplug drives while the array rebuild is paused?
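     For reference, the filesystem check I mentioned was the standard read-only check with the array started in maintenance mode; from memory it was something along these lines (the md device number matches the disk slot, so it may differ on another box):
         reiserfsck --check /dev/md11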
  8. So I've been working on the server for a few weeks and finally had everything organized (I've used Unraid since the mid 2000s). I've stopped and started the array many times today for various reasons. No problems. For peace of mind, I added a second 10TB parity drive into an extra bay and everything looked good. No SMART errors, all green lights, etc.
     I hit start and all hell breaks loose. First, one of the disks (Disk 11) gets immediately disabled. Oddly, this disk got disabled the last time I did a rebuild. I did a filesystem check and repair on many of these disks prior to today; everything cleared. Secondly, Disk 1 started showing millions of read errors. I panicked and paused the rebuild. Made sure the connection was sturdy (didn't unplug). Resumed the rebuild, and now Disks 1, 2, and 4 have the same amount of read errors. And for some reason Disks 11 and 2 show up in the unassigned devices at the same time.
     When I go into the shell and try to check the disks at /mnt/disk1, disk2, disk4... some of the files have ?????? for their file permissions and can't be changed, even as root (read-only filesystem).
     At this point, I've just paused the rebuild. I'm not sure what I should do next. If I hit resume, the errors stack up, but all the lights are green besides Disk 11 (which was disabled). It could complete in 24 hours but... I have a feeling I'm just going to be left with junk? But technically all the data shouldn't be touched, since it should only be writing to the 2nd parity drive? Not sure what to do next...
     unraid-diagnostics-20201106-1916.zip
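     For context, this is the kind of check I was doing from the shell (the share name here is just an example):
         ls -l /mnt/disk1/Movies     # the broken entries list as ????????? with no owner or size
         grep disk1 /proc/mounts     # mount options show whether the disk has dropped to read-only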