
Multi Homing / Multi NICs

3 posts in this topic


I need a cache drive for my backups: all 30+ TB of them, every day.

I will be building a server with a 4C/8T Intel CPU, 64 GB of RAM, nine 8 TB data drives, five 1 TB cache drives, and six 1 Gb NICs.

So I'll put all the cache drives in a JBOD config.
8 of the 9 data drives as data (1 cold spare).
4 of the NICs for multihoming.
1 NIC for admin.
1 NIC just for looking at.

With good multihoming, I'm thinking I should be able to move about 24 TB in a 12-hour span.

Is my math wrong?

Can I multihome this thing?
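A quick back-of-the-envelope check of that number (a rough sketch, assuming ~125 MB/s of usable line rate per 1 Gb NIC and ignoring protocol overhead):

```python
# Rough aggregate throughput of 4 multihomed 1 GbE links over 12 hours.
# Assumption: ~125 MB/s per 1 Gb/s link at line rate; real-world
# TCP/SMB overhead will typically shave off 5-10%.
nics = 4
mb_per_sec_per_nic = 125          # 1 Gb/s = 125 MB/s
window_sec = 12 * 3600

total_tb = nics * mb_per_sec_per_nic * window_sec / 1_000_000
print(f"{total_tb:.1f} TB in 12 hours")   # about 21.6 TB
```

So four 1 GbE links max out at roughly 21.6 TB per 12 hours, a bit under the 24 TB target before any overhead.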


If the data disks write at 150 MB/s (assuming turbo write), they can take about 500 GB/hour, so about 6 TB in 12 hours. It doesn't matter how much you can write to the cache in 12 hours if you don't have the speed to offload from cache to array within the next 12 hours.
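That offload arithmetic, spelled out (a rough model that ignores filesystem overhead):

```python
# One data disk absorbing writes at 150 MB/s (turbo write).
disk_mb_per_sec = 150
gb_per_hour = disk_mb_per_sec * 3600 / 1000    # 540 GB/hour
tb_per_12h = gb_per_hour * 12 / 1000           # ~6.5 TB in 12 hours
print(gb_per_hour, tb_per_12h)
```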


But a backup normally copies only changed files. How much of the 30 TB actually changes and needs backing up each day? With 8x8 TB data disks you only have room for 64 TB of backup files, which means at most two generations of every file if the original data set is around 30 TB.
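The capacity side of that, spelled out:

```python
# Usable array capacity vs. full backup generations.
array_tb = 8 * 8          # eight 8 TB data disks in service
dataset_tb = 30           # approximate size of the original data
generations = array_tb // dataset_tb
print(generations)        # 2 full generations fit
```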


Are you planning to use unRAID for this? Is the purpose of the caching server to hold your backups while they get written to tape or some other long-term storage?


Assuming you have lots of servers, each creating backups on different networks, you can multihome your NICs so that specific servers connect to your caching server on specific NICs (presumably at staggered times during the day) and get lots of throughput directly to the hard drives. If you're using unRAID, the setup gets a little complex, though, since by default unRAID writes to just one HDD at a time.


You could set things up so each server sends its files to specific shares confined to specific HDDs, forcing unRAID to write to multiple HDDs at the same time to get the throughput you want. If, however, you are using parity with unRAID, that becomes a bottleneck due to the single-drive nature of parity in unRAID.
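Rough numbers for how many disks would need to accept writes concurrently to keep up with four saturated 1 GbE links (a sketch, assuming ~150 MB/s sustained sequential write per HDD):

```python
import math

inbound_mb_per_sec = 4 * 125      # four 1 GbE links at line rate
per_disk_mb_per_sec = 150         # sustained sequential write per HDD

disks_needed = math.ceil(inbound_mb_per_sec / per_disk_mb_per_sec)
print(disks_needed)               # 4 disks writing in parallel
```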


I'd suggest that a slightly easier approach on the Ethernet side might be a top-of-rack switch with a free 10 Gb port connecting to a 10GbE NIC on your caching server. You can configure all the VLAN stuff on the switch and just tell each of your servers to send their backups to the caching server. Theoretically this means ten Gb-connected servers could run simultaneously into the caching server at full speed. You'd still need to ensure your hard drive subsystem could sustain 1 GB/s writes, though.
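The 10GbE arithmetic behind that, for reference:

```python
# A single 10 Gb/s link vs. a 1 GB/s disk-subsystem write target.
link_gbit = 10
usable_gb_per_sec = link_gbit / 8          # 1.25 GB/s at line rate
hdd_target_gb_per_sec = 1.0                # sustained write target
print(usable_gb_per_sec >= hdd_target_gb_per_sec)   # True
```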



