Everything posted by calypsocowboy

  1. So I think I'm going to go with option 2 and do the new config. I'll then mount disk3 on another machine, see what data I can get off it, and copy it back to the server (a rough sketch of that copy pass is below, after these posts). I believe this is the correct procedure: https://wiki.lime-technology.com/UnRAID_6_2/Storage_Management#Reset_the_array_configuration One additional question: I'm assuming that resetting the array rebuilds the parity drive, so I'm guessing it makes sense to wait to do this until I get my new, larger parity drive. Is that correct? Once that all completes, I'll preclear my old parity drive and add it back in. BTW, thanks for the help. I definitely need to get notifications set up on the server.
  2. Okay, at this point I've copied off most of what I think are my non-replaceable files from the array. I haven't checked all of them to see if they're okay. What's left on the array is mostly music and movies, about 8.8TB worth, that I could replace but would prefer not to. I've shut down the array to prevent further writes to it. My current array is a 4TB parity drive plus 4TB + 3x2TB data drives. As it sits right now I'm using 8.8TB, so I don't have room to pull the failing 2TB drive out yet (the arithmetic is spelled out below, after these posts). In a week I'll have a new motherboard and/or a new controller card, plus a new 8TB drive I was planning to use for parity. What's the best way to bring things back up? It sounds like, at this point, trusting parity to rebuild the drive wouldn't be a good idea. My initial thought was something like this: clear my 4TB parity drive, bring up the other four disks, copy the data from the failing drive over to a good one, and remove it from the array. All of this, I'm assuming, would be done with the array unprotected. Then, once the failing 2TB drive is out and the 4TB drive is in, put the 8TB drive in as parity and rebuild parity. Lastly, hope that I didn't lose too much data.
  3. I have a drive showing a red X: device is disabled. About my system: I'm running Unraid 6.2 on an older Supermicro PDSMi board with a RR1U-ELi riser card and a Supermicro AOC-SASLP-MV8 card; the drive that's having problems is connected to that card. It's a 4TB Seagate drive I got from a shucked enclosure from Costco a number of years back, and I have two of those drives in the computer. I believe my motherboard BIOS is up to date; I'm not sure what BIOS the card is running. This first happened about 3 months ago. At the time, I stopped the array, powered down the server, pulled and reseated the power and data cables, and restarted the server. The SMART report came back clean on the drive, so I went through the process of adding it back into the array, and things seemed to work well for a bit. About a week ago I noticed the same thing. I took the same steps, only this time I connected the drive to a different end of the breakout cable, since the one it had been on looked a little suspect; same process, clean SMART report, added the drive back in. And now back comes the red X, same drive. Now I'm trying to figure out what's next. I'm not sure the drive is bad, because after each reboot it comes back clean (there's a quick SMART attribute check sketched below as well). I've tried different ends on the breakout cable (Monoprice); I could order a new cable to see if that's the issue. I'm not sure if it's the expansion card, the riser, or the combination, and I don't think my motherboard supports the card without the riser. Any thoughts, or am I to the point where I need to look for a new expansion card, or maybe a motherboard that supports more SATA connections or supports the card I have directly instead of via a riser?
     Current Diagnostics - cascade-diagnostics-20170527-1408.zip
     Last Week's Diagnostics - cascade-diagnostics-20170521-0812.zip
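A few illustrative sketches follow; none of this is Unraid tooling, just hedged Python to make the steps in the posts concrete.

First, the "see what data I can get off disk3" step from post 1: a best-effort copy pass, assuming the suspect disk is mounted (ideally read-only) at a hypothetical /mnt/recovery on another Linux machine and that /mnt/backup has enough free space. Both paths and the log file name are placeholders. Files that fail to read are logged instead of aborting the whole copy.

    #!/usr/bin/env python3
    """Best-effort copy of a possibly failing disk.

    Assumes the suspect disk is mounted at SRC on another Linux machine and
    that DST has enough free space. Files that error out are logged rather
    than aborting the whole run.
    """
    import os
    import shutil

    SRC = "/mnt/recovery"     # placeholder mount point of the failing disk
    DST = "/mnt/backup"       # placeholder destination
    LOG = "failed_files.log"

    failed = []
    for root, dirs, files in os.walk(SRC):
        rel = os.path.relpath(root, SRC)
        target_dir = DST if rel == "." else os.path.join(DST, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src_path = os.path.join(root, name)
            try:
                # Copies file contents plus timestamps/permissions.
                shutil.copy2(src_path, os.path.join(target_dir, name))
            except OSError as err:  # read error on a bad sector, permissions, etc.
                failed.append(f"{src_path}: {err}")

    with open(LOG, "w") as log:
        log.write("\n".join(failed))

    print(f"Done. {len(failed)} file(s) could not be copied; see {LOG}.")

rsync would do the same job; the point is only that the copy should tolerate read errors and report exactly which files were lost.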
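Second, the capacity constraint from post 2, spelled out with the nominal sizes given there (parity holds no data, so only the data drives count toward usable space):

    # Nominal capacities from post 2, in TB.
    data_drives = [4, 2, 2, 2]            # parity (4TB) excluded: it holds no data
    used = 8.8

    usable_now = sum(data_drives)          # 10 TB of data capacity
    usable_after_removal = usable_now - 2  # 8 TB once the failing 2TB drive is pulled

    print(f"Usable now: {usable_now} TB, in use: {used} TB")
    print(f"Usable after removing the 2TB drive: {usable_after_removal} TB")
    print("Data still fits?", used <= usable_after_removal)  # False

So the 8.8TB in use only fits while all four data drives are present, which is presumably why the plan frees up the 4TB parity drive for data before the failing 2TB drive comes out.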
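Last, for post 3, a quick way to watch the SMART attributes that help separate "the drive is dying" from "the cable or controller path is flaky": reallocated, pending, and uncorrectable sector counts implicate the drive itself, while a climbing UDMA CRC error count usually points at the cable or controller. This assumes smartmontools is installed; /dev/sdX is a placeholder for the suspect drive, and the attribute names are the ones smartctl reports for typical ATA drives.

    #!/usr/bin/env python3
    """Print only the SMART attributes relevant to drive-vs-cable troubleshooting."""
    import subprocess
    import sys

    DEVICE = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"  # placeholder device

    WATCH = (
        "Reallocated_Sector_Ct",   # remapped bad sectors: drive problem
        "Current_Pending_Sector",  # sectors waiting to be remapped: drive problem
        "Offline_Uncorrectable",   # unreadable sectors: drive problem
        "UDMA_CRC_Error_Count",    # transfer errors: usually cable/controller path
    )

    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True, check=False).stdout

    for line in out.splitlines():
        if any(attr in line for attr in WATCH):
            print(line)

If the CRC count keeps climbing across the red-X events while the sector counts stay at zero, that points at the breakout cable, riser, or AOC-SASLP-MV8 path rather than the Seagate drive itself.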