Multipath TCP


uldise


Hi,

I just want to post some findings about an interesting problem - how to get more out of your 1GbE network :) I hope some of you have similar experience and can add more suggestions..

My hardware setup is quite simple:

ESXi 5.5 production server -- HP 1810-24G switch -- ESXi 5.5 backup server. Both ESXi hosts run unRAID guests too, and more computers are connected to the same switch.

Both ESXi servers have 4-port Intel NICs, connected to the switch with 4 Ethernet cables each. I set up NIC teaming on both ESXi hosts with the "Route based on IP hash" load-balancing policy, and made two static trunks on the HP switch. Everything runs as expected - if I have more than one connection to my unRAID server (for example, from my desktop computer and from another ESXi host), I can saturate more than 1GbE to my unRAID.

 

My next goal is to try to achieve more than 1GbE for a single pair of hosts, because ESXi/HP NIC teaming/trunking balance traffic per IP pair - even with more than one active connection between two hosts, everything hashes to the same physical link, so you can only saturate 1GbE.
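A toy sketch of why that is (this is not VMware's exact hash, just an illustration that any deterministic hash of the two addresses pins a given host pair to one uplink):

```python
# Illustration of IP-hash teaming: the uplink choice depends only on the
# source/destination addresses, so all traffic between one pair of hosts
# shares a single physical 1GbE port, no matter how many TCP connections.
import ipaddress

def pick_uplink(src: str, dst: str, num_uplinks: int = 4) -> int:
    """Deterministically map an IP pair to one uplink index (toy hash)."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % num_uplinks

print(pick_uplink("192.168.1.10", "192.168.1.20"))  # -> 2, every time
print(pick_uplink("192.168.1.10", "192.168.1.20"))  # -> 2 again
# A different client may land on a different port, which is why several
# hosts together can exceed 1GbE while a single pair cannot.
print(pick_uplink("192.168.1.11", "192.168.1.20"))  # -> 3
```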

I found Multipath TCP for that (this was discussed on the Limetech forums too, but only very briefly). I set up a test Ubuntu guest on each of my ESXi hosts, added 4 virtual NICs to each one, installed a new Ubuntu kernel with Multipath TCP (via an apt repository - the sad news is that you still need a custom kernel..), and measured network performance - I got a maximum of 3 Gbit/s with iperf3. The only downside I've found so far is that it's a huge CPU eater on the sender's side, because you are adding new software at the transport layer...
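For reference, the out-of-tree multipath-tcp.org kernel used above applies MPTCP to every TCP socket automatically, so no application changes were needed for iperf3. On mainline kernels (5.6 and later) MPTCP is built in instead and an application opts in per socket - a minimal sketch, assuming a Linux guest with CONFIG_MPTCP enabled and a made-up peer address:

```python
# Minimal MPTCP opt-in on mainline Linux 5.6+; the connection falls back to
# plain TCP if the peer does not negotiate MPTCP.
import socket

# IPPROTO_MPTCP is 262 in linux/in.h; older Python builds may not expose
# the constant, so resolve it defensively.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_connect(host: str, port: int) -> socket.socket:
    """Open a socket that requests MPTCP from the kernel."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    s.connect((host, port))
    return s

if __name__ == "__main__":
    # Hypothetical receiver on the backup server (e.g. an iperf3-style sink).
    conn = mptcp_connect("192.168.1.20", 5201)
    conn.sendall(b"data that may be spread over several subflows\n")
    conn.close()
```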

 

if you have any suggestion about this or have similar experience, please share :)

 

Thanks in advance,

Uldis

Link to comment

You don't need multipath.  You can do balance-rr (mode 0) bonding and get above a single-link connection.... as long as it is between 2 Linux boxen (which you seem to be doing).  Windows however does not support balance-rr.
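For anyone who wants to try this on a bare-metal Linux box, a rough sketch of the setup using iproute2 (the interface names, the address, and shelling out from Python are just illustrative; run as root, and either group the switch ports into a static trunk or cable the two boxes directly):

```python
# Rough sketch: build a balance-rr (mode 0) bond from four NICs with
# iproute2. Interface names and the address below are assumptions.
import subprocess

SLAVES = ["eth1", "eth2", "eth3", "eth4"]     # hypothetical NIC names

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

run("ip link add bond0 type bond mode balance-rr")
for nic in SLAVES:
    run(f"ip link set {nic} down")            # a NIC must be down to enslave it
    run(f"ip link set {nic} master bond0")
run("ip addr add 192.168.1.10/24 dev bond0")  # example address
run("ip link set bond0 up")
```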

 

Personally, I went to 10GbE cards, and wham... problem solved without any kludges or special scripting.  Now LAN performance is no longer a consideration... I can concentrate on optimizing disk I/O and RAM caching.

Link to comment

You don't need multipath.  You can do balance-rr (mode 0) bonding and get above a single-link connection.... as long as it is between 2 Linux boxen (which you seem to be doing).  Windows however does not support balance-rr.

 

Personally, I went to 10GbE cards, and wham... problem solved without any kludges or special scripting.  Now LAN performance is no longer a consideration... I can concentrate on optimizing disk I/O and RAM caching.

 

Are you using 10GbE on unRAID? If so, which card?

Link to comment

You don't need multipath.  You can do balance-rr (mode 0) bonding and get above a single-link connection.... as long as it is between 2 Linux boxen (which you seem to be doing).  Windows however does not support balance-rr.

 

Personally, I went to 10GbE cards, and wham... problem solved without any kludges or special scripting.  Now LAN performance is no longer a consideration... I can concentrate on optimizing disk I/O and RAM caching.

 

Thanks for the answer. :)

I could use bonding as you suggested if my unRAID boxes were on bare metal. But as I wrote, there are two ESXi servers in between, and ESXi supports only Cisco EtherChannel/HP trunking - and those are one-to-one connections between two IPs by design..

 

As for Windows, I read somewhere that the new Windows Server 2012/Windows 8 supports NIC teaming.. but I can't confirm this, I've never tried it..

 

And yes, please do share your 10GbE setup :)

Link to comment

That's a switch, not a card.

 

Yes, I'm well aware of that.  I need a switch before I can start buying cards.

 

Not really.  I don't use any 10GbE switches.

 

I have 5, possibly 6 boxes that I want to upgrade, so a switch is necessary. I also want to do some testing accessing my ESXi datastores via 10GbE. I'm currently using 4Gb/s fiber.

Link to comment

I have 5, possibly 6 boxes that I want to upgrade, so a switch is necessary. I also want to do some testing accessing my ESXi datastores via 10GbE. I'm currently using 4Gb/s fiber.

 

I have no experience with fiber networking.. Can you share any info on where to start? :)

If I understand correctly, you have a fiber switch too?

Link to comment

I have 5, possibly 6 boxes that I want to upgrade, so a switch is necessary. I also want to do some testing accessing my ESXi datastores via 10GbE. I'm currently using 4Gb/s fiber.

 

I have no experience with fiber networking.. Can you share any info on where to start? :)

If I understand correctly, you have a fiber switch too?

 

I'm currently using Emulex 4Gb fiber cards in each of my ESXi servers. Those all connect to a Dell Brocade Silkworm 200e 4Gb/s Fibre Channel switch that I picked up on eBay. Also connected to that switch is an OpenFiler box that runs as a SAN, presenting my storage LUNs to the ESXi hosts.

Link to comment

I'm currently using Emulex 4Gb fiber cards in each of my ESXi servers. Those all connect to a Dell Brocade Silkworm 200e 4Gb/s Fibre Channel switch that I picked up on eBay. Also connected to that switch is an OpenFiler box that runs as a SAN, presenting my storage LUNs to the ESXi hosts.

Thanks for sharing :)

And how do you connect them to the WAN? Or do you have an incoming internet connection over FC too?

Link to comment

I'm currently using Emulex 4Gb fiber cards in each of my ESXi servers. Those all connect to a Dell Brocade Silkworm 200e 4Gb/s Fibre Channel switch that I picked up on eBay. Also connected to that switch is an OpenFiler box that runs as a SAN, presenting my storage LUNs to the ESXi hosts.

Thanks for sharing :)

And how do you connect them to the WAN? Or do you have an incoming internet connection over FC too?

 

This is just for shared storage.

Link to comment

While it's neat to have the higher network speeds, I have to wonder what you really gain with unRAID as the storage server. Most streaming video doesn't come close to Gb bandwidth ... it takes quite a few simultaneous streams to hit that limit (more than most are likely to do unless they're all Blu-ray streams). Simultaneous data transfers would clearly benefit, since even one transfer can saturate a Gb connection.

 

But in general, I'd think with 10Gb you'd want a much faster storage array => perhaps a RAID-6 with 10 or more disks ... which can hit data rates of 1000MB/s or so -- nearly saturating a 10Gb link (assuming you have a target that can handle incoming data at that rate).
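Rough numbers behind that claim, with an assumed sequential rate for typical 7200rpm drives:

```python
# Back-of-the-envelope check: can a 10-disk RAID-6 keep a 10GbE link busy?
DISKS = 10
PARITY = 2                  # RAID-6 carries two disks' worth of parity
PER_DISK_MB_S = 120         # assumed sequential rate of one spinner

array_mb_s = (DISKS - PARITY) * PER_DISK_MB_S   # striped data disks
link_mb_s = 10_000 / 8                          # 10 Gb/s ~= 1250 MB/s raw

print(f"RAID-6 array ~{array_mb_s} MB/s")       # ~960 MB/s
print(f"10GbE link   ~{link_mb_s:.0f} MB/s line rate")
```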

 

 

Link to comment

While it's neat to have the higher network speeds, I have to wonder what you really gain with unRAID as the storage server. Most streaming video doesn't come close to Gb bandwidth ... it takes quite a few simultaneous streams to hit that limit (more than most are likely to do unless they're all Blu-ray streams). Simultaneous data transfers would clearly benefit, since even one transfer can saturate a Gb connection.

 

But in general, I'd think with 10Gb you'd want a much faster storage array => perhaps a RAID-6 with 10 or more disks ... which can hit data rates of 1000MB/s or so -- nearly saturating a 10Gb link (assuming you have a target that can handle incoming data at that rate).

 

If you go the virtualization route with unRAID (I do, but with ESXi) and have many virtual machines doing many nice things which also need resources from outside, then you hit the 1GbE limit very fast. On the other hand, if you make a single virtual server for every task, then there are no problems internally - guests on the same host talk to each other at 10GbE through the virtual switch.

Link to comment

While it's neat to have the higher network speeds, I have to wonder what you really gain with unRAID as the storage server.

 

If you have a lot of RAM, writes to unRAID can fly, as long as you let the RAM cache your writes (and they fit in free RAM). It can also depend on what you are doing... I have a lot of LARGE (>100GB) sequential file I/O.

 

Using a RAID-0 for cache can get you up to 10GbE speeds easily.  I have a 1TB RAID-0 with 5x256GB Vertex 4 SSDs, and another RAID-0 drive with 5x512GB Samsung 850s.  I get over 2GBytes/sec with large file I/O to/from the SSD RAIDs.

 

Even using a RAID-0 of 4 spinners (or even a single good SSD) will get you 3 times Gigabit speeds, which makes the "user experience" of copying stuff to unRAID much nicer.
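The arithmetic behind that, with a conservative assumed per-spinner rate:

```python
# Why even a small spinner RAID-0 outruns gigabit Ethernet.
SPINNERS = 4
PER_DISK_MB_S = 100          # conservative assumption for one HDD
GBE_MB_S = 1_000 / 8         # 1 Gb/s ~= 125 MB/s raw (less after overhead)

raid0_mb_s = SPINNERS * PER_DISK_MB_S
print(f"RAID-0 ~{raid0_mb_s} MB/s, about {raid0_mb_s / GBE_MB_S:.1f}x gigabit line rate")
```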

Link to comment

While it's neat to have the higher network speeds, I have to wonder what you really gain with unRAID as the storage server. Most streaming video doesn't come close to Gb bandwidth ... it takes quite a few simultaneous streams to hit that limit (more than most are likely to do unless they're all Blu-ray streams). Simultaneous data transfers would clearly benefit, since even one transfer can saturate a Gb connection.

 

But in general, I'd think with 10Gb you'd want a much faster storage array => perhaps a RAID-6 with 10 or more disks ... which can hit data rates of 1000MB/s or so -- nearly saturating a 10Gb link (assuming you have a target that can handle incoming data at that rate).

 

 

 

I do a lot of stuff just because I can! :)

Link to comment

This is just for shared storage.

 

OK, this means you have a separate SAN and another switch/router for the WAN? If yes, how do you combine them?

Sorry for the dumb questions, but I only have experience with common Ethernet.. I want to learn something new ;)

Link to comment

This is just for shared storage.

 

OK, this means you have a separate SAN and another switch/router for the WAN? If yes, how do you combine them?

Sorry for the dumb questions, but I only have experience with common Ethernet.. I want to learn something new ;)

 

Yes, a Storage Area Network is a separate network that presents shared storage to multiple servers. Using SAN-based datastores in VMware lets you use vMotion, allowing for high availability.

 

Usually, SANs run on Fibre Channel networks. Mine at home runs at 4Gb/s, connecting three ESXi hosts to two RAID10 arrays on an OpenFiler server.
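To make the block-level nature concrete (the question below touches on it): each ESXi host sees a LUN as a raw disk and formats it with its own filesystem (VMFS). A tiny sketch, with a hypothetical device path:

```python
# A SAN LUN appears to the initiator as a raw block device: the host reads
# and writes fixed-size blocks and runs its own filesystem (VMFS for ESXi)
# on top. Device path is a made-up example; reading it needs root.
SECTOR = 512

with open("/dev/sdb", "rb") as lun:          # the LUN as the host sees it
    first_sector = lun.read(SECTOR)          # raw bytes, no file semantics
print(first_sector[:16].hex())
```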

 

Link to comment

Yes, a Storage Area Network is a separate network that presents shared storage to multiple servers. Using SAN-based datastores in VMware lets you use vMotion, allowing for high availability.

Usually, SANs run on Fibre Channel networks. Mine at home runs at 4Gb/s, connecting three ESXi hosts to two RAID10 arrays on an OpenFiler server.

 

OK, thanks for clarifying.. If I understand correctly, SANs operate at the block level, and I'm wondering whether you can use unRAID somewhere as part of this configuration? :)

Link to comment

Yes, a Storage Area Network is a separate network that presents shared storage to multiple servers. Using SAN-based datastores in VMware lets you use vMotion, allowing for high availability.

Usually, SANs run on Fibre Channel networks. Mine at home runs at 4Gb/s, connecting three ESXi hosts to two RAID10 arrays on an OpenFiler server.

 

OK, thanks for clarifying.. If I understand correctly, SANs operate at the block level, and I'm wondering whether you can use unRAID somewhere as part of this configuration? :)

 

No, unRAID doesn't play into any of this at all, except that my current unRAID runs as a guest on ESXi. I'm a firm fan of maxing out motherboards on RAM. I didn't need 32GB of RAM for unRAID, so I use the excess capacity for failover, etc. in my ESXi cluster.

Link to comment
