Steps to move data to bigger drive and remove old drive



I have three 1 TB drives that I want to remove from my server.  I have read two different methods for doing this and I am still confused about the exact steps needed.

 

My current setup: Parity (3 TB) + 10 disks, three of which are 1 TB drives.  I want to copy the data from the three 1 TB drives (disk6, disk7 and disk8) to an empty 3 TB drive (disk10) and then remove the three 1 TB drives.

 

I have two user shares: "Videos" and "Backup1"

 

Videos is set up as:

Allocation method: High Water

Split level: 3

Included disk(s): "blank"

Excluded disk(s): disk4

 

Backup1 share is set up as:

Allocation method: High Water

Split level:

Included disk(s):  disk4

Excluded disk(s): "blank"

 

The first method I found is here:  http://lime-technology.com/wiki/index.php/FAQ_remove_drive

Basically, it said, copy the data from the source drive to a target drive, then click "New Config" in the Utils tab, then run initconfig.

 

Then I found this: http://lime-technology.com/wiki/index.php/Shrink_array

This references the older FAQ, so I assume it is more up to date, or perhaps just more detailed?

 

The steps in this one are more complicated, involving renaming all the primary share folders (adding underscores).  Assuming this writeup is correct for version 5.0.2, here is the list of steps, with my questions after each one.

 

"In this example, Disk 2 is to be removed, change the number to the appropriate drive for your situation"

 

[*]On the UnRAID Web Management page for configuring User Shares, remove Disk 2 from inclusion in any shares (e.g. change disk1,disk2,disk4 to disk1,disk4). Disk 2 should not be listed in any inclusions or exclusions, because it won't exist any more once we are done.  In my case this would mean changing the "Videos" share from "Excluded disk(s): disk4" to "Included disk(s): disk1, disk2, disk3, disk9 and disk10".  That way Backup1 (disk4) is excluded and so are all the 1 TB drives, right?

 

[*]At the server console or in a Telnet console, start MC, and browse to the /mnt/disk2 folder. Rename all of the primary Share folders (eg. movies, series, music to movies_, series_, music_, etc).  So for me, this means rename /mnt/disk6/Videos to /mnt/disk6/Videos_ and the same for disk7 and disk8, right?  What is the point of this and why didn't the previous FAQ include this step?

 

[*]Once done, open the renamed folder on the left, and a selected destination disk and share folder on the right, and move the files inside each Disk 2 folder over to the other using F6. Repeat as necessary for all Share folders. You should not encounter any duplicates any more.

[*]Verify that all files have been moved and that the drive is essentially empty.  So here, I would really not like to "move" the files.  Instead, I would copy them using "cp -r -v -p /mnt/disk6/* /mnt/disk10" to preserve the timestamps (see the copy sketch after this list).

 

[*]Either take a screen capture of your drive list on the Web Management page or write them down. You need a list of each of your drives with at least the last 3 or 4 characters of their serial numbers, plus what its assignment was, such as the Parity drive, the Cache drive, or a specific data drive number. You will need this later in order to correctly reassign each drive.  This confuses me.  After I remove the 1 TB disks, won't the other disks be renumbered?  I mean, there will be a disk6, etc, but they will be different drives.  This sentence implies that after you take out disk2, the other disks keep their old numbers?

 

[*]On the Web Management Utils page, reset the array configuration (Utils -> New config).  OK.  What, exactly, does this do?  Is this where the disks are numbered in increasing numerical order?  Does it affect the definition of the Shares?  Also, the other writeup ("FAQ remove drive") said that you had to execute the "initconfig" script after this, but this writeup does not mention it.  Is this an omission?

 

[*]Now if necessary, reassign all of the drives, except Disk 2, making absolutely sure that the Parity drive is correctly assigned. Double-check your drive list!  By "reassign all drives" I assume they mean "use the drop-down selection on the main page of the web GUI to assign /dev/sda1 to disk1", etc.?  So here is where the old disk2 gets a new disk assigned to it?

 

[*]Now you can restart the array, and let parity rebuild (without Disk 2). unRAID will start a Parity-sync, and once complete, you should be good to go.  Seems like they missed a step here.  Don't we have to re-define the Shares at this point?  How else does UnRaid know which disks are included in which shares?  And can you now physically remove the old disks?  And can you physically move the remaining disks around and onto different SATA ports?
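For reference, here is the copy I have in mind, plus an rsync equivalent (the rsync line is just an alternative I'm considering, not something from either writeup):

	cp -r -v -p /mnt/disk6/* /mnt/disk10     # recursive, verbose, preserves mode/ownership/timestamps
	rsync -av /mnt/disk6/ /mnt/disk10/       # archive mode preserves the same attributes and also picks up hidden files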

 

Sorry this is so long but it seems very easy to screw this up and I really want to get it right.  In my experience, the devil is in the details.

Thanks for any help.

 

 

 

 

Link to comment

Maybe this will help you.  Below is the procedure I follow to remove a drive from my server without losing parity or access to my array for long periods of time. 

 

Some notes about the procedure:

 

- Be sure to record your drive assignments before performing this procedure (take a screen shot of the Main Unraid screen)

- In step #2, make a subdirectory on the target drive to prevent creating a share with duplicates

- Steps #1-5 are done while the array is Started.  You don't stop the array until step #6

- Step #5 can take a very long time, depending upon the size of the drive being zeroed, but all the while your array is available.

- I have not yet used this procedure on v6, but I don't see why there would be any issues.

- This procedure has been tested on v6b12.

- This is the procedure I use.  I cannot guarantee results for others.

- If you plan to write zeros to your drive from a remote console, I suggest using screen (see the command sketch after step 16 below).

 

UPDATE 1

- Added alternate dd syntax in Step 5 to show a progress bar, speed, total time and ETA.

 

1) Start the array
2) Empty the drive of all contents (move all files to another drive)
3) Telnet to the server
4) Unmount the drive to be removed with the following command (where ? is the Unraid disk number)
  4.1) umount /dev/md?
    4.1.1) If the command fails, issue the 'lsof /dev/md?' command to see which processes have a hold on the drive.  Stop these processes.
    4.1.2) Some processes (e.g. smbd [Samba file sharing]) require a restart to stop.
    4.1.3) Try 'umount /dev/md?' again.
  4.2) At this point the drive will show 0 (zero) Free Space in the web GUI.
  
5) Write zeros to the drive with the dd command.
  5.1) dd if=/dev/zero bs=2048k of=/dev/md?
    5.1.1) Alternatively, you can put a progress bar on your dd operation by piping it through the pv command:
      5.1.1.1) dd if=/dev/zero bs=2048k | pv -s 4000G | dd of=/dev/md? bs=2048k
        5.1.1.1.1) The '4000G' in this example represents a target drive size of 4 TB
  5.2) Wait for a very long time until the process is finished.
  5.3) When finished, a report similar to the following will display in the console:

	dd: writing `/dev/md?': No space left on device
	476935+0 records in
	476934+0 records out
	1000204853248 bytes (1.0 TB) copied, 48128.1 s, 20.8 MB/s
  
6) Stop the array
7) From the Devices tab, remove the device
  7.1) At this point the drive will show as 'Missing' on the Main tab.
  
8) From the 'Tools' menu choose the 'New Config' option in the 'UnRAID OS' section.
  8.1) this is equivalent to issuing an 'initconfig' command in the console.
  8.2) This will rename super.dat to super.bak, effectively clearing array configuration.
  8.3) All drives will be unassigned
9) Reassign all drives to their original slots.  Leave the drive to be removed unassigned.
  9.1) At this point all drives will be BLUE and Unraid is waiting for you to press the Start Button.  
    9.1.1) Assuming all is good, check the "Parity is valid" box and press the 'Start' button.

10) At this point, provided all is OK, all drives should be GREEN and a parity-check will start.  
If everything does appear to be OK (0 Sync Errors) then parity-check can be cancelled.
11) Stop the array
12) Shut down the server
13) Remove the drive
14) Power up the server
15) Restart the array
16) Maybe a good idea to complete a parity-check.
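To tie the screen note above to steps 4 and 5, here is a minimal command sketch.  The session name, disk number (disk 3) and drive size (1 TB) are made-up placeholders, not part of the procedure itself:

	screen -S zero-disk3      # start a named screen session; reattach later with 'screen -r zero-disk3'
	umount /dev/md3           # step 4 (run 'lsof /dev/md3' first if this fails)
	dd if=/dev/zero bs=2048k | pv -s 1000G | dd of=/dev/md3 bs=2048k    # step 5, with a progress bar
	# Detach with Ctrl-a d; the zeroing keeps running even if the telnet window is closed.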

 

Jarod

 

Link to comment

JarDo, thanks for the message.  I do have some questions, though.

1. How did you come to use this procedure?  I mean, did you find it somewhere on the web, or did you create it yourself?

2. What is the purpose of writing zeros to the to-be-removed drive?  After copying the data to the new drive, I would like to retain the data on the old drive in case something goes wrong.

3.

Maybe this will help you.  Below is the procedure I follow to remove a drive from my server without losing parity or access to my array for long periods of time. 

4. I did not think you could unmount a drive while the array was "started".  Doesn't this cause a problem?

5. In step 9, you say "Reassign all drives to their original slots."  How can you do this when you are now missing one drive?  Won't the remaining drives with numbers larger than the removed drive be renumbered?  I mean, there will be a new disk2 (assuming disk2 is the drive removed), right?

 

Should this be this hard to do?  :-)

 

 


Link to comment

If you wish to keep the data intact on the removed drive, you will need to invalidate parity and recalculate from the remaining drives. This can be done with the "new config" setting, and you can remove multiple drives and put new drives in all at the same time, because you have to recalculate parity anyway.

 

When you do the new config, none of your original drives are assigned any more, so you must correctly reassign all the drives in their new slots. If you forget which drive is which and put a data drive into the parity slot, you will erase it. That's why you need to keep a record of which drive is assigned where by serial number when you do a new configuration. Manually issuing the initconfig command is no longer necessary for current releases.
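If you want a second record besides the screenshot, a quick way to capture the serial numbers at the console is the standard Linux by-id listing (a generic suggestion, not an unRAID-specific tool):

	ls -l /dev/disk/by-id/ | grep -v part    # lists each drive's model and serial and the /dev/sd? device it currently maps to

It won't tell you the unRAID slot numbers, so you still want the screenshot for the slot-to-serial mapping.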

 

Every root level folder on all your disks is automatically a share when you enable user shares. Inclusions, exclusions, split levels, permissions and such may have to be adjusted if you assign your drives to different slots. In your case if you make a note of all your current drives and assign them identically you shouldn't have to change anything, since you aren't changing disk4. After you are done copying the data and ready to reassign drives you could put the current slot 9 drive into slot 6 and slot 10 into slot 7 if you wanted so there wouldn't be any blank slots. Drive slots have no correlation to physical ports, they are just logical assignments. Some people choose to correlate them to physical positioning, but it's not necessary.

Link to comment

JarDo, thanks for the message.  I do have some questions, though.

1. How did you come to use this procedure?  I mean, did you find it somewhere on the web, or did you create it yourself?

2. What is the purpose of writing zeros to the to-be-removed drive?  After copying the data to the new drive, I would like to retain the data on the old drive in case something goes wrong.

 

1) I stitched this procedure together from various posts from this forum.  I've performed this procedure many times, and actually plan to do it again tonight.

 

2) Exactly as trurl posted...  The purpose of writing zeros to the to-be-removed drive is to bring parity to a state that will allow me to remove the drive from the array without breaking parity.  The point is to perform the drive swap without losing access to my array for long periods of time OR breaking parity.  Of course (I forgot to mention) this all assumes that the new drive is Pre-Cleared prior to adding it to the array. 
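To make the parity point concrete (a toy example of my own, not part of the write-up): with single parity, each parity bit is the XOR of the corresponding bits on the data drives.  Say three data drives hold 1, 0 and 1 at some position, so parity holds 1 XOR 0 XOR 1 = 0.  Writing zeros to the first drive through /dev/md? updates parity as it goes, so afterwards parity holds 0 XOR 0 XOR 1 = 1.  An all-zero drive contributes nothing to the XOR, so that drive can now be dropped and parity still matches the remaining drives (0 XOR 1 = 1), which is why no rebuild is needed.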

 

As far as retaining the data on the old drive, this concern is addressed in Step 2.  Maybe the procedure should say 'Copy' instead of 'Move' the data.  I usually copy the root of the drive to a directory on another drive (e.g. rsync -av --stats --progress /mnt/disk1/* /mnt/disk9/disk_1). 
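One follow-up check I would add (my own habit, not a required step): re-run the same rsync with -n for a dry run before zeroing the old drive.  If nothing is listed between the header and the summary, the copy already matches the source by size and modification time:

	rsync -avn --stats /mnt/disk1/* /mnt/disk9/disk_1    # dry run: lists only files that would still need copying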

 

There is very little need to retain data on the old drive after you've safely copied this data to a fault tolerant location elsewhere on your array.  (It's time to move on.  Really...  let it go.)

 

A couple additional notes on this procedure I forgot to mention:

- During Step 5 there is NO feedback.  It will look as if nothing is happening until it is finished.  And, as I already mentioned, this step will take a long time.  Don't stare at the screen waiting for it to finish.  It will win the staring contest.

- If it is an option for you, perform Step 5  directly at your server (not in a telnet window from another PC).  If you start the zeroing process from another PC in a telnet session then if that telnet window is closed for any reason you will have no way of knowing when the zeroing process completed.

Link to comment

...- If it is an option for you, perform Step 5  directly at your server (not in a telnet window from another PC).  If you start the zeroing process from another PC in a telnet session then if that telnet window is closed for any reason you will have no way of knowing when the zeroing process completed.

Or use the screen command to perform Step 5 in a Linux screen session. Don't know why they don't just include screen in the unRAID distribution, but if you don't have it you can install it using unMenu Pkg Manager, or just do this for 64-bit (v6) or this for 32-bit (v5).
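For what it's worth, the workflow is short (the session name here is made up):

	screen -S zero     # start a named session, then run the dd command inside it
	# Ctrl-a d detaches; it is then safe to close the telnet window
	screen -ls         # from a later login, list running sessions
	screen -r zero     # reattach and see whether dd has finished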
Link to comment

Ok.  Now I've got a question for you all.

 

I am currently following the procedure I posted above and something occurred that has me puzzled.

 

I have a drive umount'd and currently being zeroed.  We'll call this drive disk1. 

 

Overnight, a cron job I forgot to disable performed a backup of my flash drive to disk1.  Now disk1 is visible as a mapped drive with the contents of the backup job.

 

When I discovered this I stopped and restarted the zeroing process.  I'm not sure what else I can do.  I'm assuming these files don't actually exist on disk1 but do exist in parity.

 

Does this behavior make sense?  I had assumed that any attempt to write to an unmounted drive would have failed.  Is this behavior normal or is this an issue with unRAID v6?  This is the first time I've tried to zero a drive with version 6.

 

Another thing I noticed with v6 is that after unmounting the drive the webgui did not show the drive with zero free space.  unMenu does show zero free space, but not the stock webgui.

 

Any thoughts?

Link to comment

JarDo,

 

Thanks again.  I think I now understand, but I don't see how I can use your method.

Correct me if I am wrong, but it seems as if you use your method to *replace* a single drive with a larger one?

This means that if, for instance, disk2 is a 1 TB drive, you copy its data to a new drive (say disk6), zero disk2 to preserve parity, and then re-assign disk6 to be disk2.  Is this right?  If so, then I can't use it, because I intend to replace 3 drives at the same time.  This means I *must* re-assign all the drives.  If I am wrong, then I still don't understand exactly what you are doing.

 

Thanks again.

 


Link to comment

Did the cron job target disk1 explicitly or was it a write to a share that included disk1?  If it is the latter case, then I don't see how your method would allow use of the array while you are "zeroing" the to-be-removed disk.

 

 


Link to comment

JarDo,

 

Thanks again.  I think I now understand, but I don't see how I can use your method.

Correct me if I am wrong, but it seems as if you use your method to *replace* a single drive with a larger one?

This means that if, for instance, disk2 is a 1 TB drive, you copy its data to a new drive (say disk6), zero disk2 to preserve parity, and then re-assign disk6 to be disk2.  Is this right?  If so, then I can't use it, because I intend to replace 3 drives at the same time.  This means I *must* re-assign all the drives.  If I am wrong, then I still don't understand exactly what you are doing.

 

Thanks again.

 

Ok...  First off... unless it were some kind of emergency I would never replace 3 drives at the same time.  But that's just me.  My procedure does assume that you are working on 1 drive at a time.

 

Second... I don't mean to say that you would re-assign disk6 to be disk2.  The idea is to move (copy... whatever) disk2 data to disk6 so that disk2 can be zeroed.  If you plan to replace (i.e. upgrade) disk2 with new hardware then you could then move the data you (temporarily) moved to disk6 back onto disk2 after the new hardware is installed.  If you never plan to replace the old disk2 with new hardware then you are done.  No need to reassign any disks.

 

In a nutshell what you are doing is this:

 

1) Moving your data off of one drive to another drive so that the first drive can be decommissioned.

2) Decommissioning the drive

3) Removing the drive from the server

4) Starting the server and telling unRAID that the old drive is no longer part of the array.

 

You can stop here if you like, but if your purpose is to upgrade a drive then...

 

5) Add new hard drive to the system

6) Move your data back onto the new hard drive.

Link to comment

Did the cron job target disk1 explicitly or was it a write to a share that included disk1?  If it is the latter case, then I don't see how your method would allow use of the array while you are "zeroing" the to-be-removed disk.

 

That's a good question.  I checked and the job writes to a share that is exclusive to disk1.  In this case the write is to /mnt/user/backup/, where '/backup/' is the share.

 

Link to comment

UPDATE1:

 

As an update to my original question above...

 

I restarted the zeroing process and let it finish.  After it was finished, disk1 still contained a share that I could access from client PCs even though the disk was completely overwritten with zeros!!

 

I'm guessing that this share must exist entirely in parity because there is no way it could exist on the physical disk. 

 

Faced with no other option other than proceeding with the steps in the procedure I continued with steps 6 through 9.  The only deviation from the procedure as written is that I had to power down the server and restart because it hung when I tried to stop the array.  At this moment the drive has been removed as an array device, the array is started, and a parity check is in progress.  Currently at 10% with 1 corrected sync error.

 

One thing to note about unRAID v6b12 that I never saw before... Now that disk1 has been removed there is a red Alert box in the top right corner informing me that Disk 1 is in an Error State.  I find this very odd since I performed a 'New Config' operation after removing Disk1 from the array.

 

So far, so good.  I will let the parity check finish and then I'll move on to step 11 and finish up the steps to physically remove the old drive from the server.

 


Link to comment

Hmm.  So that would mean that you cannot write to the array during the zeroing process, which is what I thought the advantage of your method was in the first place.  Did you get the same behavior on version 5?

One more question: why did you use umount /dev/md? instead of umount /mnt/disk? to unmount the disk to be removed?

 


Link to comment

Hmm.  So that would mean that you cannot write to the array during the zeroing process, which is what I thought the advantage of your method was in the first place.  Did you get the same behavior on version 5?

One more question: why did you use umount /dev/md? instead of umount /mnt/disk? to unmount the disk to be removed?

 

You can write to the array during the zeroing process.  This is exactly one of the benefits of this procedure.  What happened in this case was that while disk1 was unmounted from the array and being zeroed, a cron job that I had forgotten to disable ran and wrote to a share that was excluded on all disks and only included on disk1.  In my mind, that would mean that the write would fail because disk1 was unmounted.

 

I use umount /dev/md? instead of umount /mnt/disk? because that is what another more knowledgeable user suggested to me.  I never tried /mnt/disk?.  Maybe that would work too.
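For the record, umount accepts either a device or its mount point (per its man page), so both forms below should behave the same when the disk (number 3 here is just an example) is mounted in the usual place:

	umount /dev/md3      # unmount by device node
	umount /mnt/disk3    # unmount by mount point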

Link to comment


 

Epic. Thanks for posting, and thanks for coming back to update it. I added it to the Wiki lime-technology.com/wiki/index.php/Make_unRAID_Trust_the_Parity_Drive,_Avoid_Rebuilding_Parity_Unnecessarily

Link to comment

So, I did this procedure and copied all the data from a 1 TB drive (disk7) to one of my 3 TB drives, used the dd trick to write zeroes to the old drive, then selected "New Config" etc.  Now this is my status: all disks are green except disk7, which is grey and whose "Identification" is "Not installed".  See the picture below.  Question is: how do I get rid of the empty disk slot?  I.e., I want parity, disk1, disk2, ..., disk9 to show in the Device column.  I thought that the New Config would have done this?

[screenshot: 6_after_start_array_with_ne.jpg, showing disk7 as "Not installed"]

Link to comment

When you did the New Config you should have assigned the old disk8 to disk7; the old disk9 to disk8; and the old disk10 to disk9.    You apparently assigned everything to the same slots they were in before ... so they still have the same designations.

 

It's perfectly okay to have unused slots in your assignments -- it's only a cosmetic issue to have the unused slots.

 

Link to comment

When you did the New Config you should have assigned the old disk8 to disk7; the old disk9 to disk8; and the old disk10 to disk9.    You apparently assigned everything to the same slots they were in before ... so they still have the same designations.

 

It's perfectly okay to have unused slots in your assignments -- it's only a cosmetic issue to have the unused slots.

 

OK.  Can I do another "New Config" now and re-assign the disks?  Yeah, I know I'm a bit OCD (but probably not the only one around here).  Also, this step was not mentioned in the procedure description.

Link to comment

Sure -- just do a New Config and assign them to the slots you want.  Since you're changing NOTHING on the drives, you can "trust parity" as well  :)

 

OK.  Just to be sure:

 

1. Stop array

2. Click on "Utils->NewConfig"

3. Reassign the disk assigned to disk8 to disk7, the disk assigned to disk9 to disk8, and disk10 will have no assignment

3. click "Trust Parity"

 

Is this the right order?

 

Link to comment

Yes, but don't forget to assign the disk that's currently #10 to #9  :)

Thanks, I really appreciate it.

I really want to understand how this works in detail. So pardon me if I am somewhat pedantic.

 

My understanding:

 

the "Device" column on the Main page lists the "logical" devices and the "Identification" column lists the currently assigned physical device (hard disk).  The SATA ports to which the hard disk are actually connected are completely irrelevant, correct?  This allows you to move the hard disks around on the SATA ports freely as long as you maintain the correct logical-to-physical mapping by selecting the correct physical disk (via the serial number) in the dropdown menu under "Identification" while the array is stopped.

 

However, to maintain the parity, the unRaid system MUST know which device is the parity disk and which physical devices are assigned to the logical disks.  This set of information is called the "config", right?  Once a config is set, you can't change it without doing a "New Config" operation (used to be initconfig on console).  If you have valid parity before the "New config" operation, then you can additionally click "Parity is valid".  If not, the system will re-calculate the parity (very long operation) and the array is UNPROTECTED while this parity rebuild operation is going on.

 

Is my understanding correct and complete?

Link to comment

Yes, you understand it quite well.

 

One clarification:  It not only makes no difference which SATA ports each of the drives are connected to (this was NOT true in v4.7, but has been true since v5);  but it also makes no difference which "slot" you assign the drives to, except that the parity drive and (if you have one) the cache drive (or drives if you're using a pool) must be assigned correctly.

 

i.e. in your case the drives that are assigned as disk1 through disk9 can be randomly assigned to whichever slot you want and everything will still work fine (and parity will still be valid).    No particular reason to change them ... although some like to assign the disks in the physical order they're mounted in the case, for example.

 

Link to comment
