Replacing a drive with a hard disk that's bigger than the parity drive


BartDG


I'm currently running an unRAID system containing three drives: one 3TB for parity, one 3TB and one 2TB.  Now the 3TB (non-parity) drive has started to show SMART errors, in particular:

#    Attribute Name          Flag    Value  Worst  Threshold  Type     Updated  Failed  Raw Value
197  Current Pending Sector  0x0032  200    198    000        Old age  Always   Never   1
198  Offline Uncorrectable   0x0030  198    198    000        Old age  Offline  Never   1137

 

It also showed some errors on the main tab, but it seems those are all gone now... strange.  A parity check turned out OK.

 

I don't really know what these SMART notifications mean, but getting an orange exclamation mark instead of a green thumbs-up icon is a red flag to me.  I don't want to take any chances, so I'm going to replace the drive, and expand the array a bit while I'm at it.  I've bought three HGST 7K4000 (4TB) drives.

 

Now, since I want to replace the 3TB disk with a 4TB disk, and my parity is also only 3TB, I guess I'd best do the "swap-disable" procedure (read about that here)

 

Now for my questions (some pretty obvious, but still):

1)  do the new drives need to be precleared before replacing the old ones? I'm guessing yes?

2)  I probably don't need to change anything to the unRAID setting of the drives? (meaning parity remains parity and data drive remains data drive)?

3)  Is this "swap-disable" procedure more dangerous than normal?

4)  I'll also start using XFS instead of ReiserFS with the new drives.  The old 2TB drive will also remain in the system, so there will be both XFS and ReiserFS drives in the system.  Would that cause problems?  If the answer is yes, or "it might", then I'll also update the 2TB drive with XFS, which brings me to the question:

5) Is there an easy way to copy all the data off of a drive to a new drive? Using Midnight Commander?  I've read something about it, but I have no experience with it (little Linux experience to be honest).  I have read though that it can be slow.  How can I be absolutely sure all the data is moved from the drive before pulling/replacing it?

6) I'm still in doubt whether I should use XFS or BTRFS.  One of the main reasons for using a server is for storage and using it as a data time vault.  Bitrot is thus my worst enemy, so you'd think BTRFS would be the obvious choice.  But I'm in doubt since I have absolutely zero experience with it and I'm also not sure if it's 100% stable by now?  What would you recommend?

 

Thank you for your time.

 

 

 

Link to comment

1)  do the new drives need to be precleared before replacing the old ones? I'm guessing yes?

You do not need to preclear the new drive as you are replacing an existing drive and not adding it as a new drive.

 

However doing a preclear is a good test to make sure there is no problem with the new drive.

 

You need to decide whether doing a preclear to test the drive is more important than getting a replacement drive into the system as quickly as possible in case the failing drive gets worse.

2)  I probably don't need to change anything to the unRAID setting of the drives? (meaning parity remains parity and data drive remains data drive)?

If you are going to follow the swap-disable procedure then you will be changing the parity drive to the new bigger drive and making the existing parity drive replace the failing data drive.  This is all detailed in the swap-disable procedure, so before starting make sure you understand the steps needed to perform it and follow it carefully, ensuring you do not miss any steps.

3)  Is this "swap-disable" procedure more dangerous than normal?

I'll leave this opinion to others.    It is certainly what I would do in your situation.

 

If you had an existing 3TB drive that you could just swap for the failed drive I would recommend that.  But in your case, where you need to buy a replacement and want it to be bigger in preparation for future expansion, it is the most sensible thing to do.

4)  I'll also start using XFS instead of ReiserFS with the new drives.  The old 2TB drive will also remain in the system, so there will be both XFS and ReiserFS drives in the system.  Would that cause problems?  If the answer is yes, or "it might", then I'll also update the 2TB drive with XFS, which brings me to the question:

Leave any thoughts of changing file systems to after you have replaced and rebuilt the failed drive.

5) Is there an easy way to copy all the data off of a drive to a new drive? Using Midnight Commander?  I've read something about it, but I have no experience with it (little Linux experience to be honest).  I have read though that it can be slow.  How can I be absolutely sure all the data is moved from the drive before pulling/replacing it?

My advice is to move the data using whatever means you are most familiar with so you are less likely to make any mistakes.

6) I'm still in doubt whether I should use XFS or BTRFS.  One of the main reasons for using a server is for storage and using it as a data time vault.  Bitrot is thus my worst enemy, so you'd think BTRFS would be the obvious choice.  But I'm in doubt since I have absolutely zero experience with it and I'm also not sure if it's 100% stable by now?  What would you recommend?

Use XFS in preference to BTRFS unless you have a specific requirement that needs to use BTRFS.

 

Going by reports on this forum, the recovery options for BTRFS when things go wrong are not as good as for XFS, with more likelihood of losing all the files in such circumstances.  However this is just my opinion from reports on this forum .... I have not used BTRFS myself.

 

Link to comment

Thanks for your advice, Remotevisitor!  I feel more confident now. 

You're right, I'll do a preclear before using the new drive.  Should the 3TB drive fail before it's complete (small chance), I could still do the "swap-disable" procedure anyway.

 

I'll go with your advice and use XFS.  Disaster recovery is also very important, I agree.  Hmmm, maybe ZFS with unRAID will be an option in time, now that Ubuntu will implement it by default in their upcoming LTS release. :)

Link to comment

Ah, come to think of this, one final question: if I do the swap-disable procedure, and basically replace my parity drive with the 4TB drive and use my "old" 3TB drive as the new data drive, I'm guessing that drive will still be ReiserFS, right?  So if I want to change everything to XFS, that would take me another step of adding another 4TB drive to the array and actually copying everything over, correct?

Link to comment

... Now, since I want to replace the 3TB disk with a 4TB disk, and my parity is also only 3TB, I guess I'd best do the "swap-disable" procedure (read about that here)

 

I wouldn't do that, for several reasons, which I'll expand on as I answer your questions.

 

 

1)  do the new drives need to be precleared before replacing the old ones? I'm guessing yes?

 

Technically no; but it's a good idea to do it to provide a good initial test of the new drives before putting them into service.  And for the new drives that you're going to add to the array (as opposed to the one you're simply going to replace parity with) it will make the process of adding the drive MUCH faster.

 

 

2)  I probably don't need to change anything to the unRAID setting of the drives? (meaning parity remains parity and data drive remains data drive)?

 

Correct.

 

 

3)  Is this "swap-disable" procedure more dangerous than normal?

 

It's a bit more dangerous, as there are points in it where a failure would result in data loss.    Since you've had a good parity check, I'd not use it.    I'd first replace the parity drive -- keeping the old parity drive untouched until the new drive has completed its parity sync and you've done a parity check afterwards.    Then I would ADD a new drive to the array (one of the other 4TB drives); and then copy all of the data off of the "failing" drive to the new 4TB drive [be CERTAIN you do NOT do anything in this copy process that will result in the "user share copy bug" -- this would cause massive data loss => if you're not sure what this means, ASK].    There are then a couple ways to remove the "failing" drive -- but what I'd do is a New Config that does not include the "failing" drive but DOES include the other new 4TB drive and the old parity drive.    When you then Start the array the old parity drive will show as "unmountable" -- just leave it that way until the parity sync has completed; and then format it and it will be mounted fine.

 

 

4)  I'll also start using XFS instead of ReiserFS with the new drives.  The old 2TB drive will also remain in the system, so there will be both XFS and ReiserFS drives in the system.  Would that cause problems?  If the answer is yes, or "it might", then I'll also update the 2TB drive with XFS, which brings me to the question:

 

There's no problem mixing file system types.  You can simply leave this as is;  or you could copy all of the data from this drive to one of the 4TB drives and then reformat the 2TB drive as XFS.  Same caution as above r.e. the "user share copy bug."

 

 

5) Is there an easy way to copy all the data off of a drive to a new drive? Using Midnight Commander?  I've read something about it, but I have no experience with it (little Linux experience to be honest).  I have read though that it can be slow.  How can I be absolutely sure all the data is moved from the drive before pulling/replacing it?

 

I'd just do it from the network using your client PC (Windows or Mac).    No need to mess with Linux.

 

 

6) I'm still in doubt whether I should use XFS or BTRFS.

 

Use XFS.    I don't think BTRFS is "ready for prime time" usage for the array.

 

Link to comment

I'd like to add a few points:

 

1) While it is true that you don't need to preclear the drive, I feel it is advisable, especially if you are not using some other disk-testing utility. Remember that while preclearing is in essence about preparing the drive for deployment so the system doesn't have to clear the drive for you (and thus make the array unresponsive during that time), it also serves as a means of giving your disks a VERY good workout and exposing any issues there might be with the disk.

 

Infant mortality rates can be very high with some HDD batches. The preclear script (a minimum of 3 cycles), along with a short SMART test before and a LONG SMART test after, will indicate whether the disk has pending sectors, reallocated sectors or other increasing counts on the nasty SMART attributes.
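For reference, those SMART tests can be driven from the unRAID console with smartctl (from smartmontools). A hedged sketch: /dev/sdX is a placeholder for your actual drive, and the parsing step below is demonstrated on a sample line shaped like the attribute table in the first post rather than on a live disk:

```shell
# Self-tests around a preclear (placeholder device; run on the server console):
#   smartctl -t short /dev/sdX     # quick test before the preclear cycles
#   smartctl -t long  /dev/sdX     # extended test after the preclear cycles
#   smartctl -A /dev/sdX           # review the attribute table when they finish
#
# The attributes worth watching are 5 (Reallocated_Sector_Ct),
# 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable).
# Pulling their raw values out of `smartctl -A` output with awk,
# shown here against a sample line matching the table in the first post:
printf '197 Current_Pending_Sector 0x0032 200 198 000 Old_age Always - 1\n' |
  awk '$1==5 || $1==197 || $1==198 {print $2 "=" $NF}'
```

Any of those raw values increasing between the "before" and "after" tests is a reason to return the disk.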

 

Do the clears and the tests and make sure the disk is OK before you use it. If you take the risk and the disk is bad, then you ARE going to have issues soon! Plus I'd say returning a new disk sooner rather than later is also less hassle.

 

2) No settings need to be changed.

 

3) Backup Your Data before you do anything. Then you don't have to worry about Data Loss in doing any procedure.

 

4) unRAID can run mixed filesystems; XFS and RFS running together shouldn't be an issue. Although I'd use this time, while you are maintaining your system, to move all your disks to XFS if you have the time.

 

5) Use a tool that does a "verify copy" to make sure the files have copied successfully (hopefully as part of backing up), e.g. SyncBackFree (Windows), TeraCopy (Windows), rsync (Linux), etc.
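On the Linux side, rsync's `-c`/`--checksum` option forces checksum comparison rather than the default size/mtime check. A manual equivalent is to generate a checksum manifest on the source and verify it on the destination; a minimal sketch using coreutils, with temp directories standing in for the real /mnt/diskX paths:

```shell
# Demo: copy a file, then verify the copy by checksum.
# Temp dirs stand in for real disk paths like /mnt/disk1 and /mnt/disk2.
src=$(mktemp -d); dst=$(mktemp -d)
echo "movie data" > "$src/film.mkv"

cp -a "$src/film.mkv" "$dst/"                 # the copy (rsync -a would do the same)

(cd "$src" && md5sum film.mkv > /tmp/manifest.md5)   # manifest from the source
(cd "$dst" && md5sum -c /tmp/manifest.md5)           # prints "film.mkv: OK" on success
```

`md5sum -c` exits non-zero on any mismatch, so it can gate an automated "safe to delete the source" step.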

 

6) I don't know of ANY use case where you would want to use BTRFS on your data drives. BTRFS on your cache drives allows for a pool, but for data, like I said, I don't know of one. I would NOT recommend using BTRFS due to reported filesystem instability (corruption issues), and very little has been documented on how to recover from such issues (in comparison to the awesome reiserfsck tool, used in the past to great acclaim by those running versions of unRAID when the FS was RFS). XFS recovery options are much, much better and the filesystem is much more stable and less prone to issues (based on reports, or the lack thereof).

 

Remember - these drives hold your data - you want things as stable and issue free as possible.

Link to comment

IMO the parity swap is no more dangerous than any other disk replacement/upgrade. If anything goes wrong with the parity copy you can go back to using the current parity, and after the parity copy completes the disk rebuild is the same as any other rebuild, with the same risks.

 

Just make sure you follow the instructions carefully and take a screenshot of your current assignments.

Link to comment

The Reiser/XFS issue is why I suggested in #3 that you copy all of the data off the "failing" drive instead of doing a replacement.    This lets you move the data to an XFS disk, whereas if you did a drive rebuild it would still be RFS and you'd still have to do the copy.    Since your drive has NOT actually failed, but simply has a single pending sector, there's no need to rebuild it.

 

There IS another fairly simple process you could follow for this case, however.

 

(1 - Optional but recommended) -- Backup all of the data on the "failing" drive to another location [this could be one of the new 4TB drives attached to a client PC's SATA port for example]

 

(2)  Do a "New Config" with the remaining two new 4TB drives (one assigned as parity), the old parity drive assigned as a data drive, and the current 2TB drive assigned as data.  Note:  If you had somewhere to backup the "failing" 3TB drive that didn't require you to use the other 4TB drive, you can also include it as a data drive.    Start the array and let it do a parity sync.

 

(3)  Copy the data from the 3TB drive to the array over your network.

 

(4)  Now you can add the other 4TB drive to the array if necessary.

 

Link to comment

The Reiser/XFS issue is why I suggested in #3 that you copy all of the data off the "failing" drive instead of doing a replacement.    This lets you move the data to an XFS disk, whereas if you did a drive rebuild it would still be RFS and you'd still have to do the copy.    Since your drive has NOT actually failed, but simply has a single pending sector, there's no need to rebuild it.

 

There IS another fairly simple process you could follow for this case, however.

 

(1 - Optional but recommended) -- Backup all of the data on the "failing" drive to another location [this could be one of the new 4TB drives attached to a client PC's SATA port for example]

 

(2)  Do a "New Config" with the remaining two new 4TB drives (one assigned as parity), the old parity drive assigned as a data drive, and the current 2TB drive assigned as data.  Note:  If you had somewhere to backup the "failing" 3TB drive that didn't require you to use the other 4TB drive, you can also include it as a data drive.    Start the array and let it do a parity sync.

 

(3)  Copy the data from the 3TB drive to the array over your network.

 

(4)  Now you can add the other 4TB drive to the array if necessary.

 

+1 (Except I'd back up the lot to a location that is not part of the current setup).

 

You know you're safe then. And you can also migrate all your disks to XFS without issue (if you are so inclined) and if ANYTHING goes wrong anywhere in any of the processes you don't need to sweat on your data being at risk!

Link to comment

IMO the parity swap is no more dangerous than any other disk replacement/upgrade. If anything goes wrong with the parity copy you can go back to using the current parity, and after the parity copy completes the disk rebuild is the same as any other rebuild, with the same risks.

 

Just make sure you follow the instructions carefully and take a screenshot of your current assignments.

 

Here's why I consider it best to do the two-step process ...

 

=>  The parity swap first copies the parity drive to the new (larger) drive, and then starts a rebuild of the "failed" drive onto the old parity drive.    There is NO validation of the parity copy onto the new drive, so if anything went awry in that copy then both parity and the rebuilt drive will be incorrect.    NOT a high likelihood scenario, but it CAN happen.    [And Mr Murphy tends to bite at the worst possible times  :) ]

 

=>  If you do a parity replacement first [Doing what I suggested:  "... keeping the old parity drive untouched until the new drive has completed its parity sync and you've done a parity check afterwards."], you now KNOW that you have good parity.    So you can NOW do the rebuild, and if anything goes awry in that process you can repeat it.

 

Link to comment

3)  Is this "swap-disable" procedure more dangerous than normal?

 

It's a bit more dangerous, as there are points in it where a failure would result in data loss.    Since you've had a good parity check, I'd not use it.    I'd first replace the parity drive -- keeping the old parity drive untouched until the new drive has completed its parity sync and you've done a parity check afterwards.    Then I would ADD a new drive to the array (one of the other 4TB drives); and then copy all of the data off of the "failing" drive to the new 4TB drive [be CERTAIN you do NOT do anything in this copy process that will result in the "user share copy bug" -- this would cause massive data loss => if you're not sure what this means, ASK].    There are then a couple ways to remove the "failing" drive -- but what I'd do is a New Config that does not include the "failing" drive but DOES include the other new 4TB drive and the old parity drive.    When you then Start the array the old parity drive will show as "unmountable" -- just leave it that way until the parity sync has completed; and then format it and it will be mounted fine.

Thanks Gary, this is good advice.  Somehow I suspected the "swap-disable" procedure was a bit more dangerous. Don't know why.  Gut feeling, I guess.

 

I don't know what the "user share copy" bug is, sorry. I'll have a search for it before I do anything.

 

I must admit I don't really understand what you mean by "do a new config".  Do you mean I could create a secondary array next to the first one, with its own parity drive and two disks?  (Meaning I would have two "arrays", both consisting of one parity drive and two data disks.)  If yes, this would be an excellent solution.  I could then simply set the second array up and then copy everything from the one array to the other.  Or is this not what you mean?

 

Edit: I'm now busy copying all the data off of the server onto external disks, for backup.  So should anything go wrong, I'd be safe.

 

Also: thank you Danioj for your suggestion of using a copy utility which can verify the copy.  I always use TeraCopy, but I don't always enable the "verify copy" option, because it slows things down enormously.  I guess I WILL do it now. :)

Link to comment

A New Config means you're telling UnRAID to "forget" about the current configuration and re-defining the array.    It's an option on the Tools menu [I presume you're running v6].

 

It is NOT a way to run two different arrays on the same system.

 

It would, by the way, be an excellent solution if you had two different systems and could simply set up a new array on the 2nd system [you could use a free trial key for that system -- at least initially, while you only had the three 4TB drives assigned] ... and then copy all the data from the old system to the new one.    Then you could use your "real" UnRAID key for the new system -- or you could simply "move" your key to that flash drive (easy to do from within the Web GUI).

 

The "user share copy bug" is conceptually fairly easy to understand.    You do NOT want to copy or move a file from a user share to the same user share.    i.e. if you have a share called "Movies" and want to move a folder called "Titanic" to a different disk, be CERTAIN that you do it by referencing the disks -- NOT the share.

 

e.g. you could move it from \\Tower\disk1\movies\Titanic  to \\Tower\disk2\movies  ... but you do NOT want to move it from \\Tower\disk1\movies\Titanic to \\Tower\movies.    That 2nd method would result in a complete loss of the Titanic folder.
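The rule can be mechanised: never mix a disk path (/mnt/diskX, or \\Tower\diskX over the network) with the user-share view (/mnt/user, or \\Tower\sharename) in a single copy or move. A hypothetical guard function (not part of unRAID; purely illustrative) showing the check on console-side paths:

```shell
# Hypothetical helper: refuse any copy that mixes a disk path with the
# user-share view, which is what triggers the "user share copy bug".
safe_to_copy() {
  case "$1:$2" in
    /mnt/disk*:/mnt/user/*|/mnt/user/*:/mnt/disk*)
      echo "UNSAFE: mixes disk and user-share paths"; return 1 ;;
    *)
      echo "ok" ;;
  esac
}

safe_to_copy /mnt/disk1/movies/Titanic /mnt/user/movies    # the dangerous combination
safe_to_copy /mnt/disk1/movies/Titanic /mnt/disk2/movies   # disk-to-disk is fine
```

Disk-to-disk and user-share-to-different-user-share copies both pass; only the mixed case is refused.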

 

 

Link to comment

Thanks Gary!

Yes, I'm running v6.  Unfortunately, I don't have a second system at my disposal, so that's sadly enough not an option.  I don't believe I really understand what you're trying to say with the whole "have UnRAID "forget" about the current configuration and re-defining the array", but I guess that will become clear when I look into the Tools menu. :)

 

Edit: I think I understand it now, you want me to:

 

- Pull all the drives from the old config

- set up a new config (one parity, one data) with two of the new 4TB drives, add the 2TB drive as an existing data drive, and the old 3TB parity drive to the array (isn't this about the same as the "swap-disable" procedure?)

- let the parity run until it's finished

- then add the failing 3TB drive and copy over its data?  (When do I add it then? I suppose not to the array?  Or can I add it as an extra drive somehow, not part of the array?)

 

Is this what you mean?

 

 

With regards to the "user share copy bug", I believe what you're saying is to copy from disk to disk, but not using any shares.  Heh, come to think of it, that would have been easier with v5, because I could then see the disks themselves under my "network" in my windows explorer (which I always thought was useless, and glad to see gone in v6).  Alas now I cannot. 

To completely get rid of this risk, wouldn't it be better to use midnight commander and move the files on the server itself from disk to disk?  That way no shares are used.  I don't even see how I could do it from windows otherwise? Am I not always using shares to copy over the network?

 

Thanks for being so patient with me. ;)

Link to comment

With regards to the "user share copy bug", I believe what you're saying is to copy from disk to disk, but not using any shares.

You can use either disk to disk or user share to (different) user share.  What you must avoid is mixing a user share and a disk share in the same copy command as this can trigger the bug.

Heh, come to think of it, that would have been easier with v5, because I could then see the disks themselves under my "network" in my windows explorer (which I always thought was useless, and glad to see gone in v6).  Alas now I cannot.

You can still see disk shares with v6 if you enable that option under Settings-Global Share Settings.  By default they are disabled to reduce the chance of users inadvertently running into the "User Share copy bug".

To completely get rid of this risk, wouldn't it be better to use midnight commander and move the files on the server itself from disk to disk?  That way no shares are used.

You still have to be careful with midnight commander as user shares are visible under /mnt/user.  What you must not do in midnight commander is copy from a path of the form /mnt/diskY/XXXX to one of the form /mnt/user/XXXX.

I don't even see how I could do it from windows otherwise? Am I not always using shares to copy over the network?

From Windows you can (potentially) see both disk shares (if they are enabled) and user shares.    It is mixing the two types in a copy/move that causes problems.  If you only have one type visible in Windows then you cannot encounter the bug.

Link to comment

... Edit: I think I understand it now, you want me to:

 

- Pull all the drives from the old config

- set up a new config (one parity, one data) with two of the new 4TB drives, add the 2TB drive as an existing data drive, and the old 3TB parity drive to the array (isn't this about the same as the "swap-disable" procedure?)

- let the parity run until it's finished

- then add the failing 3TB drive and copy over its data?  (When do I add it then? I suppose not to the array?  Or can I add it as an extra drive somehow, not part of the array?)

 

Is this what you mean?

 

NO.  What I suggested was that you FIRST completely back up the "failing" 3TB drive.    Clearly this requires that you have some other location to save all of its data.

 

Once you have that data backed up, you can create a New Config with all of the disks you want to have in your array assigned to it -- the old 2TB drive; the old 3TB parity drive; and your new 4TB drives.  Assign ALL of the drives BEFORE you Start the array, as you can NOT add a drive with existing data to the array after you've done a parity sync.    Once everything's assigned, Start the array and let it do the parity sync.    When that's done, do a parity check to confirm all went well.

 

NOW you can copy the data from the failed 3TB drive to your array from wherever you saved it.

 

I agree with Daniel that your copies should all be done with a utility that verifies the copies.

 

 

 

Link to comment

Thanks itimpi!

To be honest, I don't know the difference between user shares and disk shares.  Looking at my "shares" tab, I only have user shares.  What actually are disk shares?

 

What you must not do in midnight commander is copy from a path of the form /mnt/diskY/XXXX to one of the form /mnt/user/XXXX.

This is very clear, thanks for that!!

 

NO.  What I suggested was that you FIRST completely back up the "failing" 3TB drive.    Clearly this requires that you have some other location to save all of its data.

Ah!  That's what I didn't get.  Sorry, I'm new to all of this.

 

Once you have that data backed up, you can create a New Config with all of the disks you want to have in your array assigned to it -- the old 2TB drive; the old 3TB parity drive; and your new 4TB drives.  Assign ALL of the drives BEFORE you Start the array, as you can NOT add a drive with existing data to the array after you've done a parity sync.    Once everything's assigned, Start the array and let it do the parity sync.    When that's done, do a parity check to confirm all went well.

 

NOW you can copy the data from the failed 3TB drive to your array from wherever you saved it.

Ah, gotcha!  Thanks!!!!!

 

Link to comment

Thanks itimpi!

To be honest, I don't know the difference between user shares and disk shares.  Looking at my "shares" tab, I only have user shares.  What actually are disk shares?

A disk share is a share that directly corresponds to a physical disk, so you can see all files on a given physical disk.  When you look in a disk share you can see the top-level folders corresponding to user shares.

 

If you have enabled disk Shares under Settings->Global Share settings then you get an extra tab appearing under the Shares tab for managing the visibility/security of Disk Shares.

Link to comment

Ah! Thanks itimpi!  I can understand this can be dangerous, but on the other hand, this seems like exactly what I need to move all the files from a certain disk.

 

Disk shares are now set to "auto" but none show up under "shares".  I cannot alter this at the moment because I need to take the array offline for that and I'm currently busy making the backups.

 

I'll have a look when the copying is done, but will make sure to be extra careful!

Link to comment

Ah! Thanks itimpi!  I can understand this can be dangerous, but on the other hand, this seems like exactly what I need to move all the files from a certain disk.

It is if you want to do the copying across the network from Windows.  You will find that the disk shares correspond to the names you would see in midnight commander of the form /mnt/diskX (and /mnt/cache which is also a disk share in this context), while the User shares correspond to what you see under /mnt/user.

 

Disk shares are now set to "auto" but none show up under "shares".  I cannot alter this at the moment because I need to take the array offline for that and I'm currently busy making the backups.

You are right that you cannot change the setting with the array in use.  It is worth switching on the unRAID Help to get a good description of the meaning of the different settings.  I tend to run with Disk Shares enabled but their visibility set to "Hidden" so I can get to them if I explicitly use their name but will not accidentally click on them.  If you do not use them on a regular basis it can be a good idea to switch off the disk shares again when you have finished your housekeeping.

 

I'll have a look when the copying is done, but will make sure to be extra careful!
This is one of those issues that is reasonably easy to avoid once you are aware of it.  It is, unfortunately, inherent in the fact that you have two different views of the same data (user shares/disk shares) and Linux is not always aware of this, so it is quite likely that this issue will never be removed.
Link to comment

IMO the parity swap is no more dangerous than any other disk replacement/upgrade. If anything goes wrong with the parity copy you can go back to using the current parity, and after the parity copy completes the disk rebuild is the same as any other rebuild, with the same risks.

 

Just make sure you follow the instructions carefully and take a screenshot of your current assignments.

 

Here's why I consider it best to do the two-step process ...

 

=>  The parity swap first copies the parity drive to the new (larger) drive, and then starts a rebuild of the "failed" drive onto the old parity drive.    There is NO validation of the parity copy onto the new drive, so if anything went awry in that copy then both parity and the rebuilt drive will be incorrect.    NOT a high likelihood scenario, but it CAN happen.    [And Mr Murphy tends to bite at the worst possible times  :) ]

 

=>  If you do a parity replacement first [Doing what I suggested:  "... keeping the old parity drive untouched until the new drive has completed its parity sync and you've done a parity check afterwards."], you now KNOW that you have good parity.    So you can NOW do the rebuild, and if anything goes awry in that process you can repeat it.

 

Q: Isn't doing a parity sync with a drive that you know has pending sectors a bad idea, as in the parity will be incorrect? Isn't that one of the major reasons why we are so cautious about pending sectors?

Link to comment

IMO the parity swap is no more dangerous than any other disk replacement/upgrade. If anything goes wrong with the parity copy you can go back to using the current parity, and after the parity copy completes the disk rebuild is the same as any other rebuild, with the same risks.

 

Just make sure you follow the instructions carefully and take a screenshot of your current assignments.

 

Here's why I consider it best to do the two-step process ...

 

=>  The parity swap first copies the parity drive to the new (larger) drive, and then starts a rebuild of the "failed" drive onto the old parity drive.    There is NO validation of the parity copy onto the new drive, so if anything went awry in that copy then both parity and the rebuilt drive will be incorrect.    NOT a high likelihood scenario, but it CAN happen.    [And Mr Murphy tends to bite at the worst possible times  :) ]

 

=>  If you do a parity replacement first [Doing what I suggested:  "... keeping the old parity drive untouched until the new drive has completed its parity sync and you've done a parity check afterwards."], you now KNOW that you have good parity.    So you can NOW do the rebuild, and if anything goes awry in that process you can repeat it.

 

Q: Isn't doing a parity sync with a drive that you know has pending sectors a bad idea, as in the parity will be incorrect? Isn't that one of the major reasons why we are so cautious about pending sectors?

 

Agreed; in this case I would only consider it safer to replace the bad disk first and then replace parity, but he does not have a suitable spare.

 

Replacing parity first can be OK, but I would not consider it unless the bad disk passes an extended SMART test first.

 

Link to comment

IMO the parity swap is no more dangerous than any other disk replacement/upgrade. If anything goes wrong with the parity copy you can go back to using the current parity, and after the parity copy completes the disk rebuild is the same as any other rebuild, with the same risks.

 

Just make sure you follow the instructions carefully and take a screenshot of your current assignments.

 

Here's why I consider it best to do the two-step process ...

 

=>  The parity swap first copies the parity drive to the new (larger) drive, and then starts a rebuild of the "failed" drive onto the old parity drive.    There is NO validation of the parity copy onto the new drive, so if anything went awry in that copy then both parity and the rebuilt drive will be incorrect.    NOT a high likelihood scenario, but it CAN happen.    [And Mr Murphy tends to bite at the worst possible times  :) ]

 

=>  If you do a parity replacement first [Doing what I suggested:  "... keeping the old parity drive untouched until the new drive has completed its parity sync and you've done a parity check afterwards."], you now KNOW that you have good parity.    So you can NOW do the rebuild, and if anything goes awry in that process you can repeat it.

 

Q: Isn't doing a parity sync with a drive that you know has pending sectors a bad idea, as in the parity will be incorrect? Isn't that one of the major reasons why we are so cautious about pending sectors?

 

The disk in question here has NOT failed ... it simply has a pending sector.    This is NOT the same as a failure ... it simply means the S.M.A.R.T. system wasn't able to reallocate the sector in question yet.  It may or may not ultimately be able to do that.  But as long as it's not causing parity issues (which clearly it is not, since a parity check passed with no problems) then clearly the data is readable with no problem ... so a parity sync will work just fine.

 

Link to comment
