starcat Posted July 9, 2020 (edited)
Hey guys, hope you are well. Any ideas on how to phase out old disks from a running array (with a single parity disk) without stopping the entire array and rebuilding parity from scratch? The data from the old disks has already been migrated to a new, larger disk that is already part of the array, so I just need to remove the older (smaller) disks. Rebuilding onto the new disk was not possible because I switched from ReiserFS on the old disks to XFS on the new drive, and I also consolidated two 4TB disks onto a single new 8TB drive. Any hints are highly appreciated. It can be something tricky, as I am comfortable on the command line. Kind regards
JonathanM Posted July 10, 2020
It's quicker to rebuild parity, but the general idea is to completely fill the drives you want to remove with binary zeros, then do a New Config with the remaining drives, select that parity is already valid, and then run a parity check to make sure everything went well.
starcat Posted July 10, 2020 (edited)
Something like dd if=/dev/zero of=/dev/sda for the sda drive to be removed? And I do this while the array is running, right? Thanks very much, highly appreciated!
JorgeB Posted July 10, 2020
4 hours ago, starcat said: Something like dd if=/dev/zero of=/dev/sda
You need to use the mdX device: https://forums.lime-technology.com/topic/61614-shrink-array-question/?tab=comments#comment-606335
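The point of the correction above is that writes must go through Unraid's parity-protected /dev/mdX device so parity is updated as the disk is zeroed; writing to the raw /dev/sdX would bypass parity entirely. A minimal sketch, assuming disk 3 is the one being removed (the disk number is an example only, and zero_cmd is our own hypothetical helper, not an Unraid command; always verify the md-to-disk mapping in the GUI before running anything destructive):

```shell
# Hypothetical helper: build the dd command that zeroes Unraid array
# disk N through its parity-protected /dev/mdN device. The array must
# be started and the disk's data already migrated off before running it.
zero_cmd() {
  echo "dd if=/dev/zero of=/dev/md$1 bs=1M status=progress"
}

# Print the command first; run it only once you are sure of the mapping.
zero_cmd 3
```

Printing the command before executing it is a cheap safety check, since zeroing the wrong device is unrecoverable.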
starcat Posted July 10, 2020 (edited)
Thanks much! Can I leave Settings -> Disk Settings -> Tunable (md_write_method) permanently set to "reconstruct write", or is it just for this process, after which I should put it back to auto (my current setting)?
JorgeB Posted July 10, 2020
8 minutes ago, starcat said: Can I leave Settings -> Disk Settings -> Tunable (md_write_method) forever set to "reconstruct write"
Whatever you prefer; reconstruct write is faster at the expense of all disks spinning up for writes, more info here.
starcat Posted July 10, 2020
Thanks, all is clear now. I suppose I can sequentially fill more than one old drive with zeros and then, in a single step, make a New Config with the remaining drives and select that parity is already valid, right?
JonathanM Posted July 11, 2020
3 hours ago, starcat said: I suppose I can sequentially fill up more than a single old drive with zeros and then in a single step make a new array config with the remaining drives and select that parity is already valid, right?
Sure. I'm still not sure what you are gaining by doing it this way: it takes much longer than just building parity with the new config, and you are basically writing the full capacity of the parity drive X times (where X is the number of drives to remove) versus just once.
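Zeroing several disks one after the other can be sketched as a simple loop. This is an illustration only, assuming disks 3 and 4 are the ones being removed; the destructive dd line is deliberately commented out as a safety catch, and the md numbers must be checked against your own array first:

```shell
#!/bin/sh
# Sketch: zero several old array disks sequentially before the New Config
# step. Disk numbers 3 and 4 are examples only -- verify the md-to-disk
# mapping in the Unraid GUI before uncommenting the dd line.
for DISK in 3 4; do
  echo "would zero /dev/md${DISK}"
  # dd if=/dev/zero of="/dev/md${DISK}" bs=1M status=progress
done
```

Running the disks sequentially (not in parallel) keeps the parity drive from thrashing between two concurrent full-capacity write streams.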
starcat Posted July 11, 2020 (edited)
I am basically recalculating parity while running the array in protected mode! As I have only one parity drive (12TB) in a huge array, I am removing two 4TB drives (already copied over to a new 8TB drive) without running unprotected for hours while the new parity is calculated. I don't care how often the parity drive is written or how long it takes to zero the two old 4TB drives. I would, however, avoid running unprotected for some 24-36 hours while rebuilding parity from scratch, whereas if I zero the two old drives manually beforehand, it only takes an array restart.