johnnya1306

Members
  • Posts: 17
  • Gender: Undisclosed

  1. John_M, the replacement was not successful. I got the second two red Xes when the system hung while mounting the disks. trurl, yes, I would like to replace the old 2 TB disks with new 8 TB disks, but getting the system stable is my first priority. With disks 8 and 9 assigned back to the array, I can set a new configuration, keep only the data disks, and rebuild parity, right? Assuming that I can get the array started after replacing the power supply. Thanks all.
  2. Yes, the server's been off all week. I only turned it on last night to take that screenshot. I don't think they are on the same controller; the two pairs are in separate 5in3 modules. Disks 8 and 9 were removed from the configuration when I tried to replace them. One observation common to both instances is that the system didn't mount the disks and start the array as quickly as it usually does. I agree that it seems to point to an issue with the power supply, which is probably older than the disks. I've accepted that it should be replaced. Is there currently a PSU model and rating that is favored by the community? Aside from diagnosing the cause (see the SMART-check sketch after this list), I've also accepted that with four failed drives, I'll have to rebuild the array from a new configuration. I suspect the data is intact since I didn't even get to start the array either time, so there shouldn't have been any writes. Do I reset the configuration and start copying the data? Is Krusader the recommended application? Thanks again.
  3. I didn't realize I was running an old version. It seems like I upgraded recently, but that was only after a prompt on the main page. I have two LSI 9211-8i cards flashed to IT mode. The power supply is an older Corsair HX 750W. Here's a shot of the main page with the problem drives... Disks 8 and 9 were the two I tried to replace before I had an issue with the other two. Thanks for all the help, and apologies for the difficulty.
  4. I think the first two drives to show a red X were in slots 8 and 9; they were both old 2 TB veterans. After replacing them with a pair of less-old 2 TB Samsung drives I had lying around, the second two red Xes appeared. I believe they were in slots 1 and 3. Both were very new 8 TB drives. At that point I returned and reassigned the "failed" 2 TB drives to their original slots and shut down. It's been down ever since...
  5. The diagnostics file is attached to my first post. I have 19 drives: 2 x 8 TB parity, 3 x 8 TB data, 6 x 4 TB data, and 8 x 2 TB data. I think the first two with a red X were 8 and 9? The second two were 1 and 3?
  6. OK, thanks for the replies. No, I didn't need to open the case to remove the drives. I'm using Norco 5in3 hotswap modules and all the cables are locking. I also make sure to shut down before moving any of the drives, since I learned the system doesn't like to hotswap. The startup described above was just turning it back on from a previously working and active state. Since the failures were not common to one slot or controller, I would most suspect the power supply. I'll probably replace it, as I'm considering a full rebuild. I guess my main question is: with four "failed" drives, is my only real option to rebuild the array? I'm not familiar enough with any of the tools to "repair" the drives. Am I correct in presuming that all the data is still on the drives? If I were to get a couple of new drives and set up a new configuration, can I just set each old drive as Unassigned, copy the data, and redeploy (see the copy sketch after this list)? Is there a better way? Do I have any other options? Is there any insight to glean from the diagnostics file? Sorry for all the questions, and thanks for the help!
  7. Not a good weekend for my server... I started it up this weekend to find two red-X drives. I replaced them with two other drives I had available, rebooted, and started the array to rebuild, only to see two additional red Xes in different slots. This was after a long wait for the array to mount and having to refresh the main page. The first two were old 2 TB drives, and I didn't think it was odd that they might have failed. The second pair were two new 8 TB drives for which I don't have replacements. Both times, one would show as "unmountable file system" but didn't give any result with the xfs check tool (see the filesystem-check sketch after this list). Is my only option to rebuild? Do I set a new configuration and copy all the drives back into a new array? I would like to replace the old 8 x 2 TB drives with 2 new 8 TB drives. Should I consider a different infrastructure? Is it a failing power supply (it's older, too...)? Sorry for all the questions, but I don't want to make it worse. I sincerely appreciate any help; diagnostics zip attached. Thanks, John tower-diagnostics-20180824-2045.zip
  8. OK, thanks. Inserting new disks into the 5in3 hotswap bay must be disturbing a neighboring disk. Maybe I'll try an eSATA dock... Thanks again, John
  9. Hi all, further along in my upgrade from WHS v1, I'm adding the WHS drives in an open bay to copy their data to the Unraid array. After inserting a WHS drive, two array drives are now showing a red, disabled X. Stopping and restarting the array still shows the drives disabled. How do I get the drives back? Is there a "right" way to connect the WHS drives in order to copy the data? Thanks in advance for any help, John tower-diagnostics-20170527-1908.zip
  10. OK, great, thanks! Gotta preclear the new drive first. Uncheck "write corrections to parity" before the rebuild, right?
  11. There isn't a "parity is already valid" box. Only "replacement disk installed"
  12. When I first stopped the rebuild, before doing anything, I put the original parity disk back. It wasn't recognized as the original (valid) parity disk. How do I use the original parity disk to restore the array? Won't it just effectively "uninstall" Krusader?
  13. Just installed Krusader but haven't touched the array.
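
Referenced from post 2: a minimal sketch of checking whether the red-X drives actually failed, using smartctl from the Unraid console. The /dev/sdb device name is only an example; match the real device letters against the Main page or the diagnostics before running anything.

    # Dump the SMART identity and attribute table for one suspect drive
    smartctl -a /dev/sdb

    # Optionally start a short self-test, then read the self-test log a few minutes later
    smartctl -t short /dev/sdb
    smartctl -l selftest /dev/sdb

If the attributes and self-tests come back clean on all four drives, that would tend to support the power-supply theory rather than genuine disk failures.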
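
Referenced from post 6: a minimal sketch of copying data off an old drive into the rebuilt array from the command line, as an alternative to Krusader. It assumes the old drive is mounted (ideally read-only) by the Unassigned Devices plugin under /mnt/disks/old2tb and that a share named restore already exists; both names are placeholders.

    # Archive-mode copy (preserves ownership, permissions, and timestamps) with progress output
    rsync -avh --progress /mnt/disks/old2tb/ /mnt/user/restore/

    # Optional follow-up: a checksum-based dry run that reports any files that still differ
    rsync -avhc --dry-run /mnt/disks/old2tb/ /mnt/user/restore/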
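
Referenced from post 7: a minimal sketch of the command-line equivalent of the GUI xfs check for an "unmountable file system", assuming the array is started in Maintenance mode and the affected slot is disk 1. The md device number is an assumption; substitute the actual slot number.

    # Dry run: report filesystem problems without writing any changes
    xfs_repair -n /dev/md1

    # Only after reviewing the dry-run output, run the actual repair
    xfs_repair /dev/md1

If xfs_repair complains about a dirty log, mounting and cleanly unmounting the filesystem (or, as a last resort, xfs_repair -L) is the usual next step, but that is worth confirming against the filesystem-repair documentation before zeroing anything.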