Adrian Posted November 23, 2017

Can someone explain what exactly is going on in the final steps for clearing a drive? Specifically steps 11-14 of https://wiki.lime-technology.com/Shrink_array

The "Clear Drive Then Remove Drive" method:

10. Go to Tools, then New Config.
11. Click on the Retain current configuration box (says None at first), click on the box for All, then click on Close.
12. Click on the box for Yes I want to do this, then click Apply, then Done.
13. Return to the Main page and check all assignments. If any are missing, correct them. Unassign the drive(s) you are removing. Double-check all of the assignments, especially the parity drive(s)!
14. Click the check box for Parity is already valid; make sure it is checked!
15. Start the array! Click the Start button, then the Proceed button on the warning popup.
16. Parity should still be valid, but it's highly recommended to run a parity check.
JorgeB Posted November 23, 2017

Since the disk you're removing has been cleared, it can be removed from the array without affecting parity, so parity remains valid without it.
Fireball3 Posted November 23, 2017

https://en.wikipedia.org/wiki/Parity_bit

A cleared drive does not affect parity, because it contains all zeros.
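Fireball3's point can be sketched in the shell with single bytes standing in for whole disks (the byte values are arbitrary examples, not from any real array): XOR-ing in an all-zero disk changes nothing, so the parity computed over the remaining disks is identical.

```shell
# Two surviving data "disks" and one cleared disk, modeled as single bytes.
d1=$((0xA5)); d2=$((0x3C)); cleared=0
parity=$(( d1 ^ d2 ^ cleared ))
# XOR with zero is a no-op, so parity over just the remaining disks matches:
echo $(( parity == (d1 ^ d2) ))
```

That final expression prints 1, which is why the cleared drive can simply be dropped from the array.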
Adrian Posted November 23, 2017

I was trying to understand more of what each step is doing.

Quote: "Click on the Retain current configuration box (says None at first), click on the box for All, then click on close"

At this point, the drive is unmounted (by the script), but still assigned. How does this prepare Unraid to allow me to unassign it at step 13 without causing Unraid to complain about a missing disk? If I unassign it without running this, it complains that it's missing, but if I run this, it doesn't. This is the step I'm most unclear on and want to understand better.

Quote: "Click the check box for Parity is already valid, make sure it is checked!"

This one I understand: since the drive was zeroed out, it doesn't affect parity. Telling Unraid that parity is valid (which it is) is just a confirmation that you know you removed a drive that doesn't affect parity, so it can accept the parity drive as it is?
JorgeB Posted November 23, 2017

3 minutes ago, Adrian said: "At this point, the drive is unmounted (by the script), but still assigned. How does this prepare unraid to allow me to unassign it at step 13 without causing unraid to complain about a missing disk?"

Because you'll be doing a new config, and that resets all assignments.

4 minutes ago, Adrian said: "is just a confirmation telling it that you know you removed a drive that doesn't affect parity so it can accept the parity drive as it is?"

It's not because you removed a drive; it's because you'll be doing a new config with an already valid parity. If this isn't checked, a parity sync would begin instead.
Adrian Posted November 27, 2017

Twice I've tried this method, and twice it hangs on step 9: "When the clearing is complete, stop the array."

How else can I easily remove a drive that no longer has any files on it? Before the steps, the wiki mentions the following:

"One quick way to clean a drive is to reformat it! To format an array drive, you stop the array, and then on the Main page click on the link for the drive and change the file system type to something different than it currently is, then restart the array. You will then be presented with an option to format it. Formatting a drive removes all of its data, and the parity drive is updated accordingly, so the data cannot be easily recovered."

Could I do this instead of running the script, and then continue from step 9? Because whatever this script is doing, Unraid doesn't like it: it ends up in a state where it gets stuck stopping the array, and I can't even reboot or shut it down cleanly. Removing an empty drive shouldn't be this difficult.
Adrian Posted December 3, 2017

Any thoughts on why Unraid gets stuck when I stop the array after using the drive-clear method? I have another drive I want to remove; I've already moved all the files off it to another drive.
JorgeB Posted December 3, 2017

I never used the script; it may have issues with the latest unRAID. You can still do it manually (the array will be inaccessible during the clear):

1. If disabled, enable reconstruct write (aka turbo write): Settings -> Disk Settings -> Tunable (md_write_method)
2. Start the array in Maintenance mode.
3. Identify which disk you're removing.
4. For Unraid <6.12, type in the CLI:
dd bs=1M if=/dev/zero of=/dev/mdX status=progress
For Unraid 6.12+, type in the CLI:
dd bs=1M if=/dev/zero of=/dev/mdXp1 status=progress
Replace X with the correct disk number.
5. Wait; this will take a long time, about 2 to 3 hours per TB.
6. When the command completes, stop the array and go to Tools -> New Config -> Retain current configuration: All -> Apply.
7. Go back to the Main page and unassign the cleared device. *With dual parity, disk order has to be maintained, including empty slot(s).*
8. Click the checkbox "Parity is already valid." and start the array.
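The version-dependent device naming in step 4 can be sketched as a small shell helper (the version string and slot number here are example values, not taken from any real system; on a live server you would substitute your own):

```shell
# Sketch: build the dd target from the Unraid version and the disk slot number.
version="6.12.4"   # example version string
slot=3             # example disk number

major=${version%%.*}          # "6"
rest=${version#*.}            # "12.4"
minor=${rest%%.*}             # "12"

if [ "$major" -gt 6 ] || { [ "$major" -eq 6 ] && [ "$minor" -ge 12 ]; }; then
  dev="/dev/md${slot}p1"      # 6.12+ includes the partition in the device name
else
  dev="/dev/md${slot}"        # pre-6.12 naming
fi

echo "dd bs=1M if=/dev/zero of=${dev} status=progress"
```

Running it with the example values prints the 6.12-style command targeting /dev/md3p1.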
Adrian Posted December 3, 2017

3 hours ago, johnnie.black said: "I never used the script, it may have issues with latest unRAID, you can still do it manually [...]"

Great, thank you!
Adrian Posted December 4, 2017

19 hours ago, johnnie.black said: "dd bs=1M if=/dev/zero of=/dev/mdX status=progress"

So if this is the disk I'm clearing, I'd be running the following?

dd bs=1M if=/dev/zero of=/dev/md16 status=progress
Adrian Posted December 4, 2017

15 hours ago, johnnie.black said: "Yes"

Worked perfectly, no issues. Thanks!
Ymetro Posted June 20, 2023

I am also busy clearing a disk (Disk 5 in the array) for removal without having to rebuild parity. Disk 5 is being cleared by the dd command, but does it matter that parity is updated in the meantime? Does it affect clearing speed or something? Or strain the system unnecessarily?
itimpi Posted June 20, 2023

2 minutes ago, Ymetro said: "Disk 5 is being cleared by the dd command, but does it matter that the parity is updated in the meantime?"

The whole point of this technique is to update parity as you go, to reflect the fact that sectors on the drive to be removed have been zeroed. That is why you can later remove the drive without affecting parity: zeroed sectors have no effect on the parity calculation.
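The per-sector update itimpi describes can be sketched with single bytes in the shell (values are arbitrary examples): zeroing a sector re-XORs parity with that sector's old and new contents, so parity ends up equal to the XOR of the surviving data.

```shell
# One "sector" per disk, modeled as bytes.
d1=$((0x5A)); d2=$((0xC3))
parity=$(( d1 ^ d2 ))        # initial parity over both disks

# Zero the sector on disk 2: parity is XORed with the old and new values.
new=0
parity=$(( parity ^ d2 ^ new ))
d2=$new

echo $(( parity == d1 ))     # parity now equals the XOR of the surviving data
```

After the whole drive has been processed this way, parity is already correct for an array without it, which is why no rebuild is needed.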
Ymetro Posted June 20, 2023

Thanks for your answer @itimpi! Learning all the time. It might be wise to use nohup or screen, since this is a long process that shouldn't get stopped if the PC you're logged in from crashes. I stopped the dd process with Ctrl + C and created a screen session for it, just to be sure.
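A minimal sketch of the nohup pattern, with a short sleep-and-echo standing in for the long-running dd and /tmp/clear.log as an example log path (both are placeholders, not the real clearing command):

```shell
# nohup detaches the job from the terminal so a dropped SSH session
# won't kill it; 'sleep 2; echo ...' stands in for the real dd command.
nohup sh -c 'sleep 2; echo "clear finished"' > /tmp/clear.log 2>&1 &
pid=$!

# ...the login session could drop here without stopping the job...

wait "$pid"
grep "clear finished" /tmp/clear.log
```

With screen instead, the equivalent would be starting a detached session (screen -dmS clear <command>) and reattaching later with screen -r clear, which also lets you watch dd's progress output live.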
itimpi Posted June 20, 2023

1 hour ago, Ymetro said: "It might be wise to use nohup or screen because it's a long process that shouldn't get stopped if the PC you're logged in from crashes."

I agree. I might issue a pull request to the documentation (it is on GitHub) to add this as a suggestion. It needs updating anyway to say that, starting with the Unraid 6.12 release, the partition is now part of the device name (e.g. /dev/md1p1 instead of just /dev/md1). I 'think' this change has been made as part of future plans to support ZFS-format drives brought across from other systems, which can have the ZFS data on a partition other than the first.
dopeytree Posted June 26, 2023

Does this method still work? I get this error:

dd bs=1M if=/dev/zero of=/dev/md3 status=progress
dd: error writing '/dev/md3': No space left on device
9+0 records in
8+0 records out
8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.00396821 s, 2.1 GB/s

It's a 12TB blank drive. Maybe I can remove it if it's already zeroed?
itimpi Posted June 26, 2023

With the 6.12 release you need to include the partition number (i.e. /dev/md3p1 instead of /dev/md3).
dopeytree Posted June 26, 2023

Ok, thanks. I guess there is only ever the one partition? The drive was already blank (zeroed) and never used, so I was able to remove it by doing a New Config. Quick and easy.
itimpi Posted June 26, 2023

7 hours ago, dopeytree said: "Ok thanks and guess there is only ever the 1 partition?"

There is on the current release, but this is likely to change later to allow for ZFS file systems on partition 2.
dopeytree Posted June 26, 2023

Ah, interesting, thanks.
rdagitz Posted August 2, 2023

On 12/3/2017 at 3:48 AM, JorgeB said: "7. Go back to Main page, unassign the cleared device. * with dual parity disk order has to be maintained, including empty slot(s) *"

Can someone clarify this item? I have 12 hard drives in my server: 2 parity and 10 data. Let's say I am taking out Data Drive #1.

1. I use unBalance to transfer all of the files off of the drive to other data drives in the array.
2. Enable reconstruct write (aka turbo write): Settings -> Disk Settings -> Tunable (md_write_method).
3. Start the array in Maintenance mode.
4. Identify which disk I'm removing (it's Disk 1, so that's /dev/md1).
5. I am on Unraid 6.11.3, so my command is "dd bs=1M if=/dev/zero of=/dev/md1 status=progress".
6. When the command completes, stop the array and go to Tools -> New Config -> Retain current configuration: All -> Apply. (I have a screenshot of my disk layout just in case.)

Is step 7 saying that when I create the new config I need to leave the slot for Drive 1 unassigned in order for parity to remain valid?
JonathanM Posted August 2, 2023

37 minutes ago, rdagitz said: "Is step 7 saying that when I create the new config I need to leave the slot for Drive 1 as unassigned in order for the parity to remain valid?"

Yes. After the array has been started once and the config has been committed, you can later put a new drive into the disk 1 slot and Unraid will clear it to keep parity valid. But if you plan to replace the drive, why bother zeroing it out? Just do a normal drive replacement.
rdagitz Posted August 3, 2023

Quote: "But if you plan to replace the drive, why bother zeroing it out, just do a normal drive replacement."

I am in the process of replacing 8 drives: 2 parity and 6 data. I have already successfully replaced the parity drives. Now I am shuffling the data around with unBalance. I am trying to offload all the data off of Data Drive 1, replace it with a 10TB drive, and then move onto it the data from as many drives as I can replace at one time. I don't want to have to rebuild parity after each data disk is removed, but I want some kind of "safety net" in the rebuild process; it would be a lot of data to lose if something goes wrong.

My current problem is that unBalance is not moving all of the data, even though the target drives have over 100GB more space available than Data Drive 1's contents. Trying to figure out why.
dboonthego Posted August 23, 2023

This looks like it reached the end of the disk. Is "No space left on device" normal output?

root@Tower:~# dd bs=1M if=/dev/zero of=/dev/md2p1 status=progress
2000357949440 bytes (2.0 TB, 1.8 TiB) copied, 23160 s, 86.4 MB/s
dd: error writing '/dev/md2p1': No space left on device
1907730+0 records in
1907729+0 records out
2000398901248 bytes (2.0 TB, 1.8 TiB) copied, 23160.8 s, 86.4 MB/s
root@Tower:~#
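Yes, that error is the expected way a whole-device dd finishes: the write simply hit the end of the fixed-size block device. The same ENOSPC message can be reproduced harmlessly against Linux's /dev/full pseudo-device, which always reports a full disk on write:

```shell
# /dev/full returns ENOSPC on every write -- the same error dd prints when
# it reaches the end of a real disk, so the message just means "done".
msg=$(dd if=/dev/zero of=/dev/full bs=512 count=1 2>&1 \
      | grep -o 'No space left on device' | head -n1)
echo "$msg"
```

On a real clear you can also sanity-check the "bytes copied" figure against the device size from blockdev --getsize64; they should match when the clear ran to completion.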