Moving data to another Unraid installation, shrink array & remove empty drives, sanity check



Hello.

 

I have some questions regarding my scenario here. I have 2 x Unraid installations, and certain files are being moved from Unraid 1 to Unraid 2 via SMB. (Please note I am MOVING data, so each file is automatically removed from Unraid 1 once its transfer is confirmed.) Parity should be updated as content is moved (correct?).

 

Once I have moved all the data, I should have 13 disks that are 100% empty. (I will confirm this before doing anything else; if any files remain, I will move them via the CLI to another drive that will stay in the array.)
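A quick way to confirm the candidate disks really are empty before going any further (the disk numbers below are placeholders, substitute your 13):

```shell
# Count remaining files per array disk; anything non-zero still needs moving.
for d in 5 6 7 8 9 10 11 12 13 14 15 16 17; do
    n=$(find "/mnt/disk$d" -type f 2>/dev/null | wc -l)
    echo "disk$d: $n files"
done
```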

 

Once I have 13 empty disks in my 24-drive array, I should be able to uncheck those 13 disks from all the shares they are part of, then simply shut down the array, remove the drives, start back up, unassign the 13 drives, and start the array with "Parity is valid" checked.

 

This way I don't have to rebuild my parity, and I never lose parity protection. Am I correct in my thinking here? I really don't want to burden the array with unnecessary rebuilds. Since I am moving all the files, my thought is that parity is updated as the move occurs, so I don't understand why I would need to rebuild parity again after that.

4 minutes ago, je82 said:

Once i have 13 disks empty in my 24 drive array, i should be able to uncheck all the 13 disks that are empty from all the shares they are a part of, then simply shut down the array, remove the drives, start up and then unassign these 13 drives and start the array while checking "parity is valid".

No, parity would only be valid if the disks were cleared, i.e., zeros written to the entire surface.

14 minutes ago, JorgeB said:

No, parity would only be valid if the disks were cleared, i.e., zeros written to the entire surface.

Alright, reading the FAQ this seems like trickery to me, but is it the best way to go about it?

Quote

One quick way to clean a drive is to reformat it! To format an array drive, you stop the array, and then on the Main page click on the link for the drive and change the file system type to something different than it currently is, then restart the array. You will then be presented with an option to format it. Formatting a drive removes all of its data, and the parity drive is updated accordingly, so the data cannot be easily recovered.

So once I have the 13 drives completely empty, I stop the array, change the file system for the 13 drives, then start the array and hit format? Will this work correctly with parity? If I remember correctly, formatting a drive is quick, while a full clear takes as long as a parity rebuild (nearly 24 hours for a 12 TB parity drive).

 

Having to put 13 x 24 hours of non-stop full load on the parity drives is a bad idea, so I really want to do this the best way possible, without losing parity protection.

 

EDIT:

Otherwise it's a choice between 13 x 24-hour parity writes with parity protection, or one full parity rebuild without parity protection, I guess?
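Rough numbers behind that trade-off, assuming 8 TB data drives and roughly 150 MB/s sustained sequential writes (both figures are assumptions, adjust for your hardware):

```shell
# Back-of-the-envelope: time to zero one disk vs. clearing all 13 in sequence.
bytes=$(( 8 * 1000**4 ))        # 8 TB in bytes
rate=$(( 150 * 1000**2 ))       # ~150 MB/s sustained write (assumed)
hours_per_disk=$(( bytes / rate / 3600 ))
echo "~${hours_per_disk}h to zero one 8TB disk"
echo "~$(( hours_per_disk * 13 ))h to clear all 13 sequentially"
```

Even at full speed that is well over half a day per disk, versus a single parity re-sync taking roughly one day.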

3 minutes ago, je82 said:

So once i have the 13 drives completely empty, i shut down array, change filesystem for the 13 drives then mount the array and hit format them?

Format isn't the same as clearing; as mentioned, you'd need to clear all the disks. It's much faster to just re-sync parity with the remaining disks.


So the recommendation is (for my own sanity here):

1. Write down the current array disk assignments.

2. Move all the data off the 13 drives we want to remove.

3. Remove the 13 disks from any shares they are part of.

4. Stop the array.

5. Tools > New Config > Retain Current Configuration > All > Close > Yes I Want To Do This > Apply > Done.

6. Main page > check assignments and unassign the 13 drives I want to remove from the array. (Double-check that the assignments are correct, especially the parity drives.)

7. Make sure that "Parity is valid" is NOT checked.

8. Start the array and let parity re-sync (and pray none of the drives decides to crash during the re-sync).

9. Once the parity re-sync is done, stop the array, shut down Unraid, and remove the 13 drives physically.
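Step 1 can be a screenshot of the Main page, or done from the CLI; the disks.ini path below is what current Unraid builds use, and is an assumption for 6.7.x:

```shell
# Snapshot the slot -> device -> serial mapping to the flash drive,
# so the assignments survive independently of the web UI.
grep -E '^\[|device=|id=' /var/local/emhttp/disks.ini \
  > "/boot/disk-assignments-$(date +%F).txt"
```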

 

I do have dual parity drives, which seems like a rare configuration; I hope this doesn't change anything?

 

Is my sanity check correct here? Thanks for the guidance!

Thanks for the help. Sorry to bother you with one additional question.

 

My disk order will become somewhat messy once I remove the 13 x 8 TB drives from the array.

 

What would be the best way to go about re-arranging the disk order? As I understand it, if I don't want gaps in the disk order I have to do a full parity re-sync first with the old disk order, and only then re-arrange the drives to fix the order; but since I have dual parity, I would then have to do a full re-sync once again?

 

As I understand it, the second parity drive will not be valid if I change the disk order, but can I rebuild only the second parity drive, so I keep the protection of at least one parity drive during the disk order reassignment?
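Background on why only parity 2 cares about order: parity 1 (P) is a plain XOR across the data disks, which is order-independent, while parity 2 (Q) weights each disk by its slot number (Reed-Solomon over GF(2^8)), so moving a disk to a different slot changes Q. A toy demo with plain integers (not the real GF(2^8) arithmetic) shows the difference:

```shell
# a, b, c stand in for the same byte offset on three data disks.
a=5; b=9; c=12

# P-style parity: XOR only - commutative, so slot order is irrelevant.
p1=$(( a ^ b ^ c ))
p2=$(( c ^ a ^ b ))
echo "P unchanged by reorder: $(( p1 == p2 ))"   # prints 1

# Q-style parity: each slot gets its own coefficient (toy values here),
# so shuffling disks between slots changes the result.
q1=$(( 1*a ^ 2*b ^ 4*c ))
q2=$(( 1*c ^ 2*a ^ 4*b ))
echo "Q unchanged by reorder: $(( q1 == q2 ))"   # prints 0
```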

 

This is obviously not super important, but it would be nice to keep things tidy, as I don't plan to grow the array further once this operation is done, and having big gaps in the order would look messy.

20 minutes ago, JorgeB said:

You are already going to do a parity sync, just assign the disks in the positions you want before doing it.

So I can assign the disks as I see fit directly at step 6? Only the parity assignments need to be correct, so that parity is written to the correct drives. I think I understand; thanks for all the information.


Sorry for bumping this thread again. I thought of something I really don't feel comfortable doing without testing it first on a test build, but I cannot find the zip package for Unraid version 6.7.2. Is there any chance someone knows where I can find this package?

 

The thing I want to test is what happens to a share on the server when I change the disk locations. Are the disks included in a share tracked on a per-slot basis, or by each disk's unique identifier? If it is per slot, what happens when I change the order of the disks and the share now points at paths that no longer exist?

 

The included-disk selections would no longer match if I change the order; what happens to this share?

 

This is probably easier for me to try on my own than for anyone to answer, because it's a very strange use case, so the TL;DR is: where can I find 6.7.2? This is the version on my Unraid production server.

 

Cheers!

 

EDIT: Found version 6.7.1 via the Wayback Machine. For whatever reason, 6.7.2 seems never to have been posted to the official releases page (it went from 6.7.1 to 6.8.x), but 6.7.1 is better than nothing. I'll do my own tests to see how Unraid behaves with an existing share when the drive order is swapped around.

17 minutes ago, itimpi said:

Disks are allocated using their serial numbers and where they are connected is not relevant (unless it somehow results in presenting a different serial number to UnRaid)

Okay, I am surprised by the "where they are connected" part; are we talking physically?

 

I intend to move things physically too; I never thought this mattered at all.

 

EDIT: Ignore my question. I am out of coffee and somehow read that it mattered where the drives were connected physically; I have no idea why, sorry. As long as disks are tracked by their unique IDs, all should be fine!
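For the record, the "unique ID" Unraid keys each array slot to is the serial embedded in the /dev/disk/by-id name, not the /dev/sdX letter or the physical port. The device name below is invented for illustration:

```shell
# A by-id style name ends with the serial; this is what stays stable
# across reboots and port changes (example name is made up).
name="ata-WDC_WD120EMFZ-11A6JA0_X1Y2Z3A4"
serial="${name##*_}"            # strip everything up to the last '_'
echo "$serial"                  # prints X1Y2Z3A4
```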


Oh baby, you're embarking on a fun project. I did the same kind of consolidation between April and May on my primary server: 23 drives down to 7. It took a week or so shuffling data between the old and new drives, but I came out unscathed in the end, and you will too. Measure twice, cut once!

 

Be sure to sell your old drives after properly clearing them. I recovered all my upgrade costs and then some, thanks to drive shortages and Chia demand... a nice bonus for the effort.
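A sketch of that clear-and-verify step, demonstrated on a 1 MiB scratch file rather than real hardware; to wipe an actual removed drive you would point dev at something like /dev/sdX, which is destructive, so triple-check the device name first:

```shell
# Zero-fill plus verification, on a file-backed stand-in for safety.
dev=/tmp/fake-disk.img
dd if=/dev/urandom of="$dev" bs=1M count=1 status=none   # stand-in for old data
dd if=/dev/zero   of="$dev" bs=1M count=1 conv=notrunc status=none
cmp -n $((1024*1024)) /dev/zero "$dev" && echo "drive is all zeros"
```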

