thestraycat Posted October 17, 2022

Quick one: I have 10 x 6TB disks and 10 x 2TB disks, and I'd like to remove ALL the 2TB disks from my array, as they are now surplus. First, though, I need to get the data off the 2TB disks and over to the 6TB disks.

I'm using unBALANCE to move the contents of the 2TB disks over to the newer 6TB disks, but each disk looks to be taking around 12 hours and I have 10 to do. It seems unBALANCE is locked to doing only one disk at a time... is there a faster way for me to get all these disks going simultaneously, to save on time that my array parity is down?

I thought about just opening up 10 terminal sessions and manually copying the contents over simultaneously, but was wondering if there's a nicer solution? I'd assume I'd probably saturate my parity disks' bandwidth if I had multiple whole-disk copies going at the same time? Would turbo write help in this situation?
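For what it's worth, the sort of thing I had in mind with the 10 terminal sessions is roughly the sketch below. The disk numbers are made-up examples and I haven't actually run it, so treat it as a rough idea rather than a recipe:

    #!/bin/bash
    # Rough sketch only - run one rsync per old 2TB disk, each in the background.
    # Disk numbers are examples; adjust the pairs to match the real array slots,
    # and make sure each destination has enough free space first.
    pairs=(
      "disk1:disk11"
      "disk2:disk12"
      "disk3:disk13"
    )

    for pair in "${pairs[@]}"; do
      src="/mnt/${pair%%:*}"
      dst="/mnt/${pair##*:}"
      # -a preserves permissions/timestamps; add --dry-run on a first pass to check.
      rsync -a "$src/" "$dst/" &
    done

    wait  # block until every background copy has finished
    echo "All copies finished - verify the data before removing the 2TB disks."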
trurl Posted October 17, 2022

52 minutes ago, thestraycat said:
"save on time that my array parity is down"

Don't know what you mean by that. Parity is maintained whenever a disk in the array is updated, whether with unBALANCE, at the command line, or however you do it.

Quicker to copy instead of move. 12 hours seems excessive for only 2TB, though. Do you have anything in the Errors column on Main - Array Devices?
thestraycat Posted October 18, 2022

@trurl I'm following the "remove drives then rebuild parity" method for removing multiple disks. As I can only run unBALANCE on a single disk at a time and have 10 to do, I was initially assuming my parity would be invalid until I'd finished the last disk and rebuilt it. However, after a re-read, I think the only time my parity is at risk is at the end of the process, when I run 'New Config' to finish removing the drives and Unraid rebuilds parity once the disks are finally removed.

Regarding the disks: I've tried both moving and copying, and both transfer at around 54 MB/s with unBALANCE. I'm wondering whether it's because the 2TB disks are 99% full. The disks are very old (2010), but the SMART reports for all the 2TB disks pass, so no issue with the disks themselves. I have the Disk Speed plugin and it shows no bottlenecks with the disk or controller config. I'm currently copying from Disk 8 > Disk 12, and as per the screenshot there's a lot more bandwidth between the two disks than unBALANCE is using. I know I'll lose some from running dual parity, but it does seem a little slow regardless.

[Screenshot: current unBALANCE copy, Disk 8 > Disk 12]
[Screenshot: Disk Speed plugin showing Disk 8 and Disk 12 and the bandwidth available]

Any idea on how to speed it up?
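For reference, a quick raw-read test outside of unBALANCE would be something like the following; sdX and sdY are just placeholders for whatever the actual 2TB source and 6TB destination devices are on the Main page:

    # Sequential read benchmark per device, best run while the array is otherwise idle.
    # sdX and sdY are placeholders for the actual source and destination disks.
    hdparm -t /dev/sdX
    hdparm -t /dev/sdY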
JorgeB Posted October 18, 2022

1 hour ago, thestraycat said:
"Any idea on how to speed it up?"

A parity-protected array-to-array copy will always be slow. Since you're going to rebuild parity anyway, you could disable it now so the data moves much faster; of course, the array will not be protected once you do that.
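If you'd rather keep parity assigned while the copies run, turbo write (reconstruct write) also speeds up array writes, although it will still be slower than running without parity. It's the md_write_method tunable under Settings > Disk Settings; from the console it can, if I remember right, be toggled with something like the below, but confirm the tunable name on your Unraid version first:

    # Assumed tunable name - confirm under Settings > Disk Settings before relying on it.
    /usr/local/sbin/mdcmd set md_write_method 1   # reconstruct ("turbo") write
    /usr/local/sbin/mdcmd set md_write_method 0   # back to the default read/modify/write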
thestraycat Posted October 18, 2022

Thanks for this - how can I disable parity temporarily?
trurl Posted October 18, 2022

Just now, thestraycat said:
"How can I disable parity temporarily?"

Stop the array, unassign parity, start the array.
thestraycat Posted October 18, 2022

I thought that might be it.
trurl Posted October 18, 2022

10 hours ago, thestraycat said:
"@trurl"

Just a note about using this feature of the forum: you have to start typing and then actually make a selection from the list to get it to work. Just typing it all won't work. Instead of this @thestraycat (plain text) you want this @thestraycat (a linked mention).
thestraycat Posted October 19, 2022

@trurl Thanks.
thestraycat Posted October 24, 2022

@trurl @JorgeB I'm just about to remove 10 x 2TB disks from my array, which will leave the disk numbering all over the place. In essence it'll go from this:

[Screenshot: current disk assignments]

To this:

[Screenshot: assignments after removal, with gaps in the slot numbering]

Is it possible for me to assign my 10 remaining 6TB disks to disk slots 1-10 without losing any data? (Obviously I would reassign the new disk slot numbers in my shares if they were previously explicitly set.) And obviously I'd re-assign parity1 and parity2 back to the same slots they were in before. Would that work? I want it to look like this after reassignment:

[Screenshot: desired disk assignments, slots 1-10]
JorgeB Posted October 24, 2022

Yes, after the new config you can assign the disks in any order you want; just make sure you don't assign a previous data disk as parity.
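One way to double-check which physical drive is which before reassigning is to list the drives by ID from the console and match the serial numbers against the old assignments (the device names will differ on your system):

    # List whole disks by ID/serial; the "part" entries are partitions, so filter them out.
    ls -l /dev/disk/by-id/ | grep -v part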
thestraycat Posted October 24, 2022

@JorgeB Sweet. Obviously I'll just need to remap my shares with the new explicit disk names, right? If, for example, my share /user was explicitly mapped to disk 19, and disk 19 is now disk 10, I'll need to go into each share and re-map the new disk number, right?
JorgeB Posted October 24, 2022

/user is not a share, but yes, if you were including or excluding disks for a specific share you should re-adjust.
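If it helps, the per-share include/exclude settings live on the flash drive, I believe under /boot/config/shares, so something like this should list which shares have explicit disk assignments to re-adjust (the path and key names are from memory, so verify them on your own system):

    # List any explicit include/exclude disk settings per share.
    # Path and key names are assumptions - check them on your own flash drive.
    grep -HiE 'shareInclude|shareExclude' /boot/config/shares/*.cfg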
thestraycat Posted October 24, 2022

@JorgeB Sorry, it was just a bad example of a made-up Unraid share (I didn't mean to reference the /user share of Unraid! My bad!). /work or similar would have been more fitting!