Shrink Array question


Adrian


Can someone explain what exactly is going on in the final steps for clearing a drive? Specifically steps 11-14.

 

https://wiki.lime-technology.com/Shrink_array

 

The "Clear Drive Then Remove Drive" Method

 

10. Go to Tools then New Config

11. Click on the Retain current configuration box (says None at first), click on the box for All, then click on close

12. Click on the box for Yes I want to do this, then click Apply then Done

13. Return to the Main page, and check all assignments. If any are missing, correct them. Unassign the drive(s) you are removing. Double check all of the assignments, especially the parity drive(s)!

14. Click the check box for Parity is already valid, make sure it is checked!

15. Start the array! Click the Start button, then the Proceed button on the warning popup that appears

16. Parity should still be valid; however, it's highly recommended to do a Parity Check

Link to comment

I was trying to understand more of what each step is doing.

 

Quote

Click on the Retain current configuration box (says None at first), click on the box for All, then click on close

At this point, the drive is unmounted (by the script), but still assigned. How does this prepare unraid to allow me to unassign it at step 13 without causing unraid to complain about a missing disk? If I unassign it without running this, it complains that it's missing, but if I run this, it doesn't. This is the step I'm most unclear on and want to understand better.

 

Quote

Click the check box for Parity is already valid, make sure it is checked!

 

This one I understand: since the drive was zeroed out, it doesn't affect parity. Telling it that parity is valid (which it is) is just a confirmation telling it that you know you removed a drive that doesn't affect parity so it can accept the parity drive as it is?

 

Link to comment
3 minutes ago, Adrian said:

At this point, the drive is unmounted (by the script), but still assigned. How does this prepare unraid to allow me to unassign it at step 13 without causing unraid to complain about a missing disk? 

Because you'll be doing a new config, and that resets all assignments.

 

4 minutes ago, Adrian said:

is just a confirmation telling it that you know you removed a drive that doesn't affect parity so it can accept the parity drive as it is?

It's not because you removed a drive; it's because you'll be doing a new config with an already valid parity. If this isn't checked, a parity sync would begin instead.

 

Link to comment

Twice I've tried this method and twice it hangs on step 9: "When the clearing is complete, stop the array."

 

How else can I easily remove a drive that no longer has any files on it? Before the steps it mentions the following:

  • One quick way to clean a drive is to reformat it! To format an array drive, you stop the array and then on the Main page click on the link for the drive and change the file system type to something different than it currently is, then restart the array. You will then be presented with an option to format it. Formatting a drive removes all of its data, and the parity drive is updated accordingly, so the data cannot be easily recovered.

Could I do this instead of running the script and then continue from step 9? Because whatever this script is doing, Unraid doesn't like it and puts it in a state where it gets stuck stopping the array and I can't even reboot/shut it down cleanly. Removing an empty drive shouldn't be this difficult.

Edited by Adrian
Link to comment

I never used the script; it may have issues with the latest unRAID. You can still do it manually (the array will be inaccessible during the clear):

 

1. If disabled, enable reconstruct write (aka turbo write): Settings -> Disk Settings -> Tunable (md_write_method)

2. Start array in Maintenance mode. (array will not be accessible during the clearing)

3. Identify which disk you're removing

4. For Unraid <6.12 type in the CLI:

dd bs=1M if=/dev/zero of=/dev/mdX status=progress

For Unraid 6.12+ type in the CLI:

dd bs=1M if=/dev/zero of=/dev/mdXp1 status=progress

replace X with the correct disk number (a worked example follows step 8 below)

 

5. Wait, this will take a long time, about 2 to 3 hours per TB.

6. When the command completes, Stop array, go to Tools -> New Config -> Retain current configuration: All -> Apply

7. Go back to Main page, unassign the cleared device. * with dual parity, disk order has to be maintained, including empty slot(s) *

8. Click checkbox "Parity is already valid.", and start the array
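As a worked example of step 4 — a sketch only, assuming the disk being removed is Disk 3 on a 6.12-or-later release (on older versions drop the p1 suffix), and assuming blockdev as an optional sanity check rather than part of the procedure:

# Double-check the target first -- this is destructive, and /dev/md3p1 is only an example.
blockdev --getsize64 /dev/md3p1   # should report the full size of the disk being removed

# Zero the whole device; dd normally finishes with "No space left on device"
# once it reaches the end of the disk.
dd bs=1M if=/dev/zero of=/dev/md3p1 status=progress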

 

 

Link to comment
3 hours ago, johnnie.black said:

I never used the script; it may have issues with the latest unRAID. You can still do it manually (the array will be inaccessible during the clear):

 


Great, thank you!

Edited by Adrian
Link to comment
  • 5 years later...

I am also busy clearing a disk (Disk 5 in the array) for removal without having to rebuild parity.

 

Disk 5 is being cleared by the dd command, but does it matter that the parity is updated in the meantime? Does it affect clearing speed or something? Or strain the system unnecessarily?

Link to comment
2 minutes ago, Ymetro said:

Disk 5 is being cleared by the dd command, but does it matter that the parity is updated in the meantime?

The whole point of this technique is to update parity as you go, to reflect the fact that sectors on the drive to be removed have been zeroed. That is why you can later remove the drive without affecting parity: zeroed sectors have no effect on parity.
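A tiny illustration of why that works (made-up byte values; Unraid's first parity disk is essentially a byte-wise XOR across the data disks):

# Parity byte with two data disks plus one fully zeroed disk:
echo $(( 0xA5 ^ 0x3C ^ 0x00 ))   # -> 153
# Parity byte after the zeroed disk is removed from the array:
echo $(( 0xA5 ^ 0x3C ))          # -> 153 (unchanged, so parity stays valid)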

Link to comment

Thanks for your answer @itimpi!

Learning all the time. 


It might be wise to use nohup or screen because it's a long process, so it won't get stopped if the PC you're logged in from crashes or anything.
I stopped the dd process with Ctrl + C and created a screen session for it just to be sure. 
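For anyone following along later, a rough sketch of both approaches (the session name, log file name, and /dev/md3p1 device are only examples):

# Option 1: run the clear inside a detachable screen session.
# This opens a new shell; run the dd command inside it, then detach with Ctrl-a d.
screen -S clear_disk3
dd bs=1M if=/dev/zero of=/dev/md3p1 status=progress
# Reattach later with: screen -r clear_disk3

# Option 2: run it under nohup so it survives the SSH session dropping.
# status=progress writes to stderr, which is redirected into the log file here.
nohup dd bs=1M if=/dev/zero of=/dev/md3p1 status=progress > /root/clear_disk3.log 2>&1 &
tail -f /root/clear_disk3.log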

Edited by Ymetro
Link to comment
1 hour ago, Ymetro said:

It might be wise to use nohup or screen because it's a long process, so it won't get stopped if the PC you're logged in from crashes or anything.

I agree. I might issue a pull request to the documentation (it is on GitHub) to add this as a suggestion. It needs updating anyway to say that starting with the Unraid 6.12 release the partition is now part of the device name (e.g. /dev/md1p1 instead of just /dev/md1). I 'think' this change has been made as part of future plans to support ZFS format drives brought across from other systems which have the ZFS partition on a partition other than the first.
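If it isn't obvious which naming scheme a given release uses, one quick (unofficial) check is to list the md devices while the array is started:

# On 6.12+ the data-disk devices carry a partition suffix, e.g. /dev/md1p1, /dev/md2p1 ...
# Older releases show plain /dev/md1, /dev/md2 ...
ls /dev/md*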

Link to comment

Does this method still work?

 

I get this error

 

dd bs=1M if=/dev/zero of=/dev/md3 status=progress
dd: error writing '/dev/md3': No space left on device
9+0 records in
8+0 records out
8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.00396821 s, 2.1 GB/s

 

It's a 12TB blank drive. Maybe I can remove it if it's already zeroed?
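One way to check whether the device really reads back as all zeros before pulling it — a sketch, assuming the 6.12+ device name /dev/md3p1; note this reads the whole disk, so it takes roughly as long as a full read pass:

# Compare the entire device against a stream of zeros.
# Prints "all zeros" only if every byte is zero; otherwise cmp reports the first difference.
size=$(blockdev --getsize64 /dev/md3p1)
cmp -n "$size" /dev/zero /dev/md3p1 && echo "all zeros"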

Edited by dopeytree
Link to comment
  • 1 month later...
On 12/3/2017 at 3:48 AM, JorgeB said:

7. Go back to Main page, unassign the cleared device. * with dual parity, disk order has to be maintained, including empty slot(s) *

Can someone clarify this item? I have 12 hard drives in my server, 2 parity and 10 data. Let's say I am taking out Data Drive #1.

  • I use unBalance to transfer all of the files off of the drive to other data drives in the array.
  • Enable reconstruct write (aka turbo write): Settings -> Disk Settings -> Tunable (md_write_method)

  • Start array in Maintenance mode

  • Identify which disk you're removing (it's Disk 1, so that's /dev/md1)

  • I am on Unraid 6.11.3 so my command is "dd bs=1M if=/dev/zero of=/dev/md1 status=progress"

  • When the command completes, Stop array, go to Tools -> New Config -> Retain current configuration: All -> Apply (I have a screenshot of my disk layout just in case.)

Is step 7 saying that when I create the new config I need to leave the slot for Drive 1 as unassigned in order for the parity to remain valid?

Link to comment
37 minutes ago, rdagitz said:

Is step 7 saying that when I create the new config I need to leave the slot for Drive 1 as unassigned in order for the parity to remain valid?

Yes. After the array has been started once and the config has been committed, you can later put a new drive into the disk1 slot and Unraid will clear it to keep parity valid.

 

But if you plan to replace the drive, why bother zeroing it out? Just do a normal drive replacement.

Link to comment
Quote

But if you plan to replace the drive, why bother zeroing it out? Just do a normal drive replacement.

 

I am in the process of replacing 8 drives, 2 parity and 6 data drives. I have already successfully replaced the parity. Now I am shuffling the data around with unBalance. I am trying to offload all the data off of Data Drive 1, replace it with a 10TB drive and move the data from as many drives as I can replace at one time. I don't want to have to rebuild parity after each data disk is removed, but I want some kind of "safety net" in the rebuild process. It would be a lot of data to lose if something goes wrong.

 

My current problem is that unBalance is not moving all of the data, even though the target drives have over 100GB more space available for Data Disk 1's contents. Trying to figure out why.

Link to comment
  • 3 weeks later...

This looks like it reached the end of the disk.  Is "no space left on device" normal output?  

 

root@Tower:~# dd bs=1M if=/dev/zero of=/dev/md2p1 status=progress
2000357949440 bytes (2.0 TB, 1.8 TiB) copied, 23160 s, 86.4 MB/s 
dd: error writing '/dev/md2p1': No space left on device
1907730+0 records in
1907729+0 records out
2000398901248 bytes (2.0 TB, 1.8 TiB) copied, 23160.8 s, 86.4 MB/s
root@Tower:~# 
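If it helps to double-check, the total bytes dd reports as copied should match the size of the md device whenever the clear ran all the way to the end of the disk — a quick comparison (sketch; device name taken from the output above):

# Should print the same figure dd reported as "bytes copied" above (2000398901248):
blockdev --getsize64 /dev/md2p1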

 

Link to comment
