
Running Preclear_disk.sh on a drive already active in an array?



"Forgive me forum users, for I have sinned.  It has been at least 72 hours since I last added a drive to my unRaid server.

 

I impatiently built an unRaid server without knowing what I was doing.  Now that I do *,  I confess my sins, and am here to amend my ways" ;D

 


 

I have several drives that are, per the syslogs, SMART reports, etc., of questionable health. These were random 500GB and 250GB drives I had lying around. I built a play unRaid that evolved into a production unRaid without a lot of thought between "a" and "b".
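Since the syslog and SMART complaints are what flagged these drives in the first place, it's worth pulling a full SMART report on each one from the console before deciding which to swap out. A minimal sketch, assuming smartctl is available on the server and using /dev/sdX as a placeholder for the suspect drive:

# Overall health assessment plus the full SMART attribute table
smartctl -a /dev/sdX

# Attributes worth watching: non-zero Reallocated_Sector_Ct, Current_Pending_Sector,
# or Offline_Uncorrectable generally mean the drive deserves its "questionable" label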

 

Now that I've made the commitment to unRaid (I'm a WHS user moving over), I'm adding some more drives to the array and running pre-clears on all of them. I see "why" now. While impatience got the best of me in the past, it's not going to get the best of me moving forward.

 

So, let's assume this.

 

1) I have existing drives that were never precleared. They are active in the array. I sinned, and some are showing up as questionable in their SMART reports, syslogs, etc.

2) I have new drives (well, new to the array) that ARE pre-cleared and reported a "PASS!" status via email, but are NOT yet added to the array.

3) I have a given active-drive that I want to swap out with a pre-cleared one.

4) I actually have 3 of these 'questionable' drives that I want to swap out with pre-cleared ones.

 


 

Q1) I think it's just a matter of pulling the 'old' drive (as if it failed), re-assigning the 'pre-cleared' one in its place, and the array will rebuild it. And I just wait. This works as long as the pre-cleared drive is as big as, or bigger than, the one it replaces (e.g. I could replace a 500GB drive with a 1TB drive).
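If you want to double-check the "as big or bigger" part before assigning the replacement, the drive sizes are easy to compare from the console. A quick sketch, with /dev/sdX and /dev/sdY as placeholders for the old drive and the pre-cleared replacement:

# Capacity of each drive in bytes; the replacement must report a value
# greater than or equal to the drive it stands in for
blockdev --getsize64 /dev/sdX
blockdev --getsize64 /dev/sdY

# Or list every attached disk with its size in one shot
fdisk -l 2>/dev/null | grep '^Disk /dev/sd'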

 

Q2) Or is there a better, safer procedure to use? (I guess my question is: if I had a second failure while rebuilding, since I have the pulled drive in hand with its data, I think I'm safe all around. It might be a pain in the arse to recover from, but data-wise I'm safe, since I'm pulling a drive with its data intact.)

 

Q3) And I should do this "one drive at a time": pull the questionable drive, insert the pre-cleared drive, rebuild the pulled drive's contents from parity. Then shampoo, rinse, and repeat.

 

Q4) *THEN*, once the old questionable drive is no longer part of the array, I can safely run preclear on it. Right? While it used to be in the array, it isn't now, so it should show up under "preclear_disk.sh -l" in the list of valid drives to pre-clear, even though it used to be in the active array.
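That's also the easy way to confirm the script agrees. A minimal sketch, assuming the script sits on the flash drive at /boot and using /dev/sdX as a placeholder for the pulled drive (check it against the -l output before running, since a preclear will erase the disk):

cd /boot    # or wherever preclear_disk.sh was copied

# List the drives the script considers eligible; the pulled drive should now
# appear here, because it is no longer assigned to the array
./preclear_disk.sh -l

# Then run the preclear on it
./preclear_disk.sh /dev/sdX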

 

Q5) I think I'm safe as long as I don't put MORE data on the unRaid server (see #2) while I'm rebuilding the simulated failure. If I get impatient again and put more data on the server during the rebuild, and that data happens to land on either the drive being rebuilt or on the hypothetical second-failure drive, that data would be lost. Right?

 

 

Anything else that I am missing in my logic?

 

 

* Well, nobody really knows everything, right? I just know a heck of a lot more now about unRaid... and how it works behind the scenes.


If you have enough space on your new, happy drives, it might be faster to:

 

Pull all the existing drives, keep them as a set with parity intact.

Create entirely new array with new drives. Don't enable the new parity drive yet.

Mount and copy from old drives, one at a time (see the sketch after this list).

Enable parity drive and let it go.
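A minimal sketch of the mount-and-copy step, assuming the old data drives are ReiserFS (the unRaid data-disk default) and using placeholder names for the old device and the target array disk:

mkdir -p /mnt/olddisk

# Mount one old data drive read-only; partition 1 is the data partition on an
# unRaid data disk, and /dev/sdX is a placeholder for that drive
mount -t reiserfs -o ro /dev/sdX1 /mnt/olddisk

# Copy its contents onto one of the new array disks, preserving attributes
# (cp -a works too if rsync isn't handy)
rsync -av /mnt/olddisk/ /mnt/disk1/

# Unmount, then repeat for the next old drive
umount /mnt/olddisk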


 

Interesting idea. Alas, I don't have enough space on the 'new happy drives' to pull it off. [The combined space of the new happy drives is less than the space I already have in the array.]

 

{But maybe for someone else, who's repented, this is a viable option}

