jaylossless

Members
  • Content Count

    8
  • Joined

  • Last visited

Community Reputation

0 Neutral

About jaylossless

  • Rank
    Newbie


  1. That's a good point. I'll try replacing the power supply with this fresh array.
  2. Sorry, "swapping" as in moving the drives around physically (different bays, not reassigning them to different disk # slots in Unraid). I believe that if I were to move drives to different slots, that would invalidate the config?
  3. I'm having this issue with 12GB of RAM on the destination Unraid server. I have 64GB on the source Unraid server. Just a simple "rsync -avXP" on 3.8TB worth of data. Oddly, when I exclude (with --exclude) the files that cause the OOM error, it runs fine for a few more hours. So it seems like this is happening with specific files rather than because of a RAM limitation. Also, when I look at the destination's RAM usage, it's only at 67% when the error happens. Any clues? (A hedged rsync sketch follows below this post list.)
  4. So I stopped the array and tried to rearrange a few of the disks into empty physical slots (trying to rule out cables). Refreshed the page; all the drives show up and look good. When I started the array, two more disks got disabled. Obviously at this point I'll have to recreate the array. It seems like I should've rebooted after swapping the drive slots (I've noticed you can't always trust the "Main" view in Unraid). Not sure what the problem was. Could it have been the ReiserFS? I'm rebuilding now with XFS and no issues so far. I'll just copy the data back when it's f
  5. Sorry, it was a 600W: https://www.newegg.com/silverstone-strider-essential-series-st60f-es-600w/p/N82E16817256071# So "stop" = "cancel"?
  6. So I resumed, and the read errors kept going up for disks 1, 2, and 4. Obviously, something is seriously wrong. Any idea why this happens during rebuilds but not when the array is started? There were no hints of any issues prior to adding parity #2. Am I able to pull drives out and check which is which, to see if they're part of the 640L, while paused? Or do I have to stop the array? If I have to stop the array... is there a difference between stopping the array and cancelling the parity rebuild? (See the drive-identification sketch below this post list.)
  7. Thank you for your quick response. They're all ReiserFS because that's how my original array was (back in 2007?), and I've just been upgrading ever since. That's the thing: disk 11 wasn't in the unassigned disks section until I paused and restarted the rebuild. I never mounted it under unassigned disks. The last time disk 11 was disabled, I formatted and rebuilt it (this was a few weeks ago), performed a filesystem check, and it seemed fine. I know for sure disk 4 didn't have any filesystem errors prior to this... so I'm not sure why all the read errors (see the filesystem-check sketch below this post list). I've been running t
  8. So I've been working on the server for a few weeks and finally had everything organized (I've used Unraid since the mid-2000s). I've stopped and started the array many times today for various reasons. No problems. For peace of mind, I added a second 10TB parity drive into an extra bay and everything looked good. No SMART errors, all green lights, etc. I hit start and all hell breaks loose. First, one of the disks (Disk 11) gets immediately disabled. Oddly, this disk got disabled the last time I did a rebuild. I did a filesystem check and repair on many of these disks prior to today. Ev
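
Re post 3 above: a minimal sketch of how the same transfer could be run with the problem files excluded and the destination then checked for the kernel OOM killer. The exclude-file path and share paths are assumptions, not the poster's actual setup.

    # keep one pattern per line in an exclude list (the file name here is hypothetical)
    echo "Movies/problem-file.mkv" >> /boot/rsync-excludes.txt

    # same flags as in the post, plus --exclude-from to skip the files that trigger the OOM
    rsync -avXP --exclude-from=/boot/rsync-excludes.txt /mnt/user/share/ root@destination:/mnt/user/share/

    # on the destination, confirm whether the kernel OOM killer actually fired
    dmesg | grep -i "out of memory"
    free -m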
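Re post 6: one way to tell which physical drive is which without pulling anything, assuming smartctl is available on the box; the device name below is a placeholder.

    # list drives by model and serial number, skipping partition entries
    ls -l /dev/disk/by-id/ | grep -v part

    # print the model, serial, and firmware for one device (sdb is a placeholder)
    smartctl -i /dev/sdb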
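Re post 7: a sketch of a read-only ReiserFS check on an array disk, assuming the array is started in maintenance mode and that disk 4 maps to /dev/md4 (substitute the md number for the real disk slot).

    # read-only check first; nothing is written to the disk
    reiserfsck --check /dev/md4

    # only if the check reports corruptions it calls fixable
    reiserfsck --fix-fixable /dev/md4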