Advice on reducing total disks

Could someone please point me towards some documentation on consolidating multiple smaller disks into one bigger one?


As prices creep down, I'd like to move from multiple 1/2TB greens to a smaller number of 6/8TB reds. I'm curious how to go about this without data loss, and without backing up, recreating, and then restoring my whole 30TB array.


Am I correct in thinking that I need to use MC (Midnight Commander) to move files from disk to disk within the array, and then complete the shrink process outlined here: http://lime-technology.com/wiki/index.php/Shrink_array


Will moving files in this manner break parity, or is parity updated as the files move?


How do I stop unRAID writing new files to a disk as I free up its space? Or do I need to do this while the array is not mounted, with a long period of downtime?


Alternatively, could I add the new disk, shrink a few 1TB disks out of the array, and then use Unassigned Devices to mount them and copy the files back into place?




Edited by helin0x


Hi -


I'd start by upgrading your parity and one data drive via the usual rebuild process.  That should give you a data drive with lots of room to spare on it.  Then, you can look at the unBalance Plugin or the diskmv/consld8 scripts to consolidate your data.  Parity remains intact with this strategy.  Once you have a few 8TB drives with lots of data on them and a bunch of unneeded 1 and 2TB drives, it's time to shrink the array.


To prevent unRAID from writing data to the old drive, you may want to temporarily (or permanently) set the Include parameter for your user shares to use only the new disk(s).
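If memory serves, the GUI's "Included disk(s)" field for a share is stored on the flash drive in that share's .cfg file, so you can sanity-check the setting there. A hypothetical excerpt (share name and disk number are placeholders):

```
# /boot/config/shares/Movies.cfg -- "Movies" is an example share
shareInclude="disk5"
shareExclude=""
```

Make the change through the GUI rather than by hand where possible, so the running shfs picks it up immediately.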
