Move HD to New Servers?


KcWeBBy


I'm sure someone has already covered this, but I'm moving three servers' worth of data to my new giant server.

 

I have new drives and a new array. Is it possible to move the drives out of the parity protection of the old servers, drop them into the new unRAID server, and move the data over to my new drives? Otherwise, I'm going to have to invest in 10-gigabit network infrastructure. I have three servers with roughly 64 TB split between them.

 

Your response and assistance are appreciated.

 

Thanks!!


I don't think this is possible, but I'll repeat your question back to you to make sure we are on the same page.

 

You want to consolidate the drives, with data on them, from three physical servers onto one server running unRAID?

 

Why don't you just start copying the data from each of the servers to the new server? I realize it may take days, or even a week, but I don't see any other way.


How many of the old drives can you physically put into the new server at the same time?

 

Unassigned Devices is probably the best way to handle this: mount the old drives one at a time in the new server and copy the data (copy, not move) onto the array. You can copy to user shares in this situation, which will allow normal space allocation and split levels. It will go faster if you don't implement parity on the new array until all the data has been copied onto it.

 

If you have lots of drive slots in the new system and want to copy more than one disk at a time a more detailed discussion is needed.


Tdallen, thanks for the reply. Yes, I have 60 bays in my new server: a 45Drives XL60.

 

I was thinking of putting the old drives in, using Unassigned Devices to mount them, and then using rsync to duplicate the data to the new user shares. The no-parity tip is a smart add; I would have missed that. I am already skipping the cache setup for the shares until this copy is completely done, then enabling it and transferring the Dockers and VMs that access the data. Hopefully that part becomes pretty transparent.

 

 

My new array is 10x 10 TB drives, with two for parity, and 10x 128 GB SSDs for cache.

 

 

Thoughts?

8 hours ago, KcWeBBy said:

I imagine rsync will try to delete files

 

Do you have the same file name in the same path structure on more than one of your existing servers? If so, you'll have to manually move, edit, or rename files to ensure they all get copied without being overwritten. If some are backups, then I'd write from the oldest backup first with overwrite on, so older files are always being overwritten by newer ones.

 

Other than in those situations, I'm not entirely sure why the "second drive" would be any different from the first, or the third, or the ninth...


Thanks guys...

 

A couple of things that may not have been clear...

 

My source for the copy is three different unRAID servers, all with the same share names (not the smartest, I know)...

Each of the servers has at least 8 drives; the biggest has 24. My new server has 10x 10 TB drives, which makes it the highest capacity (the others are 16 TB and 48 TB total). The 16 TB server is full; the 48 TB one is about 30% full.

 

In correcting my "ways," I'm looking for the fastest way to copy the data from the user shares to the new server during some downtime on the old servers.

 

I shied away from rsync and have been using rclone instead.

I installed the 8 drives from server 1 into server 3 (the new one) and mounted them using Unassigned Devices.

 

I have eight copies of rclone running in screen (one against each of the drives), copying to the new /mnt/user target after creating the shares.
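For anyone repeating this, a sketch of launching one detached screen session per old disk. The mount labels (old_disk1..8) and the share name are assumptions; this version only prints the commands to a file so they can be reviewed before anything is executed:

```shell
#!/bin/sh
# Sketch: one detached screen session running rclone per old disk.
# Mount points (UD labels old_disk1..8) and the share name "media" are
# assumptions; commands are printed, not executed, for review.
for n in 1 2 3 4 5 6 7 8; do
  src="/mnt/disks/old_disk$n"   # Unassigned Devices mount (assumed label)
  dst="/mnt/user/media"         # destination user share (assumed name)
  # screen -dmS starts a named, detached session so each copy survives logout
  echo "screen -dmS copy$n rclone copy $src $dst --transfers 4"
done > /tmp/copy_commands.txt
cat /tmp/copy_commands.txt
```

Dropping the `echo` (or piping the file through `sh`) would actually start the eight sessions; `screen -r copyN` reattaches to any of them to check progress.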

I'm getting about 200 MB/sec overall throughput, which is way lower than I was hoping for. I have cache and parity shut off for all of the new shares.

Each time I started a new instance of rclone against another one of the drives (no two instances share a source, but all are hitting the same destinations), I would see a 40-80 MB/sec jump in throughput, so I don't think it's an I/O limit.

 

I have four HBA controllers in the server, and I have spread the disks out so I currently have 5 disks on each controller: 3x new 10 TB drives (user share) and 2x 2 TB drives from the old server (Unassigned Devices).

 

Any recommendations on how I could speed them up? 12 hours in, and I'm only 7 TB into the copy... I probably won't interrupt this copy, but I'll have to do it again for the other server, which has 24x 2 TB drives (which I can load into the server all at once).

 

CPU, memory, and of course network I/O are flat idle on this server.

Here's a 24-hour graph, attached... You'll notice 26 disks because of the 10 SSDs installed for cache (but not in use yet).

 

[attached image: 24-hour disk throughput graph]

1 minute ago, johnnie.black said:

Do you mean they are all copying to the same user share? If yes, that's pretty good; the only way to get more would be to copy directly to different disks, bypassing the user share.

Yes... the user share, but different folders underneath, and all shares span all drives... so I would expect "striping"-type performance, which would be much higher, perhaps even in the GB/sec range.

 

Thanks for your response.

6 minutes ago, johnnie.black said:

 

unRAID doesn't stripe data across data disks.

True... stripe-like, in that for each file a different drive could be selected, especially since I'm using the "most-free" allocation method and all the drives were empty when I started.

 

 

I have 4x 12 Gb/s-capable LSI 9305 controllers in the box; I'm just surprised I'm not getting faster disk-to-disk copy speeds...

Enterprise drives all the way around (the new are WD Gold, the old are older WD Golds).

Edited by KcWeBBy
1 minute ago, KcWeBBy said:

like in that for each file, a different drive could be selected...    

 

You may be able to get a little speed bump if you select "most free" as that share's allocation method. Sometimes, but not always, files will go to different disks. This is only advantageous when running parityless; with parity it would result in overlapping writes to parity, making it even slower than normal.

1 minute ago, johnnie.black said:

Yes, but like I said, it will only work sometimes: when, at the beginning of a transfer, the current target disk is no longer the one with the most free space.

That is consistent with what I'm seeing... not "exactly" the same free space, but depending on the free space at the beginning of each file operation, it's selecting a drive.
[attached image: screenshot]

 

 

Thanks again for the help

