
Connecting 2 Unraid servers via dedicated NIC


Solved by ljm42


Hi, I'm in the process of moving a failed read-only ZFS pool over to another Unraid server via SMB. The problem is that while my main server is on a 10Gb network, my other server is only on 1Gb. As for guides, this is the only thing I've come across. I'll be connecting two X550-T2 10Gb NICs, one in each server, to link them together directly, bypassing the home network. My question is: are there any guides or info on how to do this? My main goal is speeding up the transfer, since I'll have to transfer three times and being capped at 1Gb network speeds is going to be painfully slow.


Theoretically it should just be a matter of assigning the 2 NICs unique IPs in the same subnet, then mapping the remote server using UD.

 

However...

I've never done this exact scenario, and something in the back of my mind is telling me you probably aren't going to improve transfer rates much, if at all, due to a multitude of factors: read and write speed, network card retry overhead due to lack of intelligent port management, etc.

 

I'll be watching to see how it goes, hopefully I'm wrong and it speeds things up considerably.


Thanks, I'll post up once I get them configured. From my understanding they should be able to hit 500MB/s, but unless doing some kind of RAID-to-RAID transfer the cap should be hard drive speed. 1Gb throttles at 110MB/s, but if I can get up to 300, which the drives should do, I think I'd be very happy.
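For reference, rough arithmetic behind those caps (the ~94% efficiency figure is an assumption for typical Ethernet/IP/TCP framing overhead; SMB adds a bit more on top, and in practice the disks usually become the limit well below the 10GbE ceiling):

# Rough line-rate ceilings for the links being compared (not a benchmark,
# just the arithmetic behind "1Gb throttles at ~110MB/s").
def usable_mb_per_s(link_gbps, efficiency=0.94):
    """Raw link speed converted to MB/s, scaled by an assumed
    protocol-overhead efficiency (Ethernet/IP/TCP headers, ~94%)."""
    raw_mb_per_s = link_gbps * 1000 / 8   # Gb/s -> MB/s (decimal MB)
    return raw_mb_per_s * efficiency

print(f"1 GbE  ceiling: ~{usable_mb_per_s(1):.0f} MB/s")    # ~118 MB/s
print(f"10 GbE ceiling: ~{usable_mb_per_s(10):.0f} MB/s")   # ~1175 MB/s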

  • Solution

This can definitely work. So eth0 on each server should be connected to your normal switch. If eth0 on both is 10g and you have a 10g switch then I wouldn't bother with the steps below. But if one or both servers can't connect to the main network at 10g then the steps below will let the two servers talk to each other at 10g while still using eth0 to talk to everything else.

 

Direct connect the 10g nics on each server and statically assign them to a new subnet with unique IP addresses and no gateway.

 

In my case, my eth0 IPs were:

server1: 192.168.10.50
server2: 192.168.10.160

 

So I made my direct-connected 10g eth1 IPs:

server1: 192.168.11.50
server2: 192.168.11.160

so it would be easy to keep them straight.
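If it helps to sanity-check an addressing plan like this, here is a small sketch using Python's ipaddress module. The /24 masks are an assumption; use whatever netmask you actually set in the Unraid network settings.

import ipaddress

# Assumed /24 netmasks; adjust to match your Unraid network settings.
main_lan = ipaddress.ip_network("192.168.10.0/24")
direct_link = ipaddress.ip_network("192.168.11.0/24")

eth1_ips = [ipaddress.ip_address("192.168.11.50"),
            ipaddress.ip_address("192.168.11.160")]

# The two subnets must not overlap, or traffic may not take the 10g link.
assert not main_lan.overlaps(direct_link)

# Both directly connected NICs must sit inside the new subnet.
for ip in eth1_ips:
    assert ip in direct_link

print("Addressing plan looks consistent:", direct_link)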

 

I generally say to avoid jumbo frames, but since this subnet will *only* have these two computers on it, as long as you change the MTU of both to 9000 then jumbo frames should give a small speed boost. It is critical that every computer on this network have the same MTU.
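A rough way to see why the boost is only small: the per-frame header overhead barely changes the payload fraction. A quick estimate, assuming plain 20-byte IP and 20-byte TCP headers plus standard Ethernet framing (14-byte header, 4-byte FCS, 8-byte preamble, 12-byte inter-frame gap):

def payload_efficiency(mtu, ip_tcp_headers=40, ethernet_overhead=38):
    """Fraction of on-wire bytes that are actual payload for a full-size frame."""
    payload = mtu - ip_tcp_headers      # TCP payload per frame
    on_wire = mtu + ethernet_overhead   # frame plus preamble, FCS and gap
    return payload / on_wire

std = payload_efficiency(1500)      # ~0.949
jumbo = payload_efficiency(9000)    # ~0.991
print(f"MTU 1500: {std:.1%}  MTU 9000: {jumbo:.1%}  gain: {jumbo/std - 1:.1%}")

In other words, roughly a 4-5% higher ceiling at best, plus whatever you save in per-packet CPU overhead.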

 

When connecting between the systems it is *important* to use the IP addresses on this subnet and not a name like "tower". You want to be sure the servers actually talk over this network.
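One way to confirm traffic is really taking the direct 10g path (and not eth0) is a quick raw TCP throughput test against the 192.168.11.x address. iperf3 is the usual tool, but if you'd rather not install anything, here is a minimal Python sketch. A single Python stream may not fully saturate 10GbE, but it will clearly tell a ~110MB/s 1g path apart from a multi-hundred-MB/s 10g path. The port is arbitrary and the IP is just the example value from above.

# Minimal TCP throughput check. Run "server" on one box, then
# "client <server_ip>" on the other, using the 192.168.11.x address.
import socket, sys, time

PORT = 5201        # arbitrary test port
CHUNK = 1 << 20    # 1 MiB send buffer
SECONDS = 10

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            total, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            secs = time.time() - start
            print(f"received {total / 1e6:.0f} MB in {secs:.1f}s "
                  f"= {total / 1e6 / secs:.0f} MB/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + SECONDS
        while time.time() < end:
            conn.sendall(payload)

if __name__ == "__main__":
    client(sys.argv[2]) if sys.argv[1] == "client" else server()

For example: "python3 nettest.py server" on server2, then "python3 nettest.py client 192.168.11.160" on server1.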

 



Thanks for the info. My network is on a 10.98.150.xx subnet; I set the new NICs to 20.98.150.xx. After initial testing I went from 107MB/s to a stable 188MB/s before I read your post. I just enabled the 9000 MTU on both NICs and will see how much it improves things. I'm not sure if there is a way to go any faster, or what my current bottleneck is, since I'm transferring off a RAID to a single disk, but from my research this disk should be able to sustain write speeds of around 240MB/s. At 9000 it seems to cap at 165, so I'll play with those settings and see how it affects speeds.
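To figure out whether that 165 is the destination disk rather than the network, it can help to time a large sequential write directly on the receiving server, outside of SMB. A small sketch; the target path is just a placeholder for wherever that single disk is mounted.

# Time a sequential write on the destination disk to separate
# "disk is the bottleneck" from "network/SMB is the bottleneck".
import os, time

TARGET = "/mnt/disk1/speedtest.bin"   # placeholder: path on the destination disk
SIZE_GB = 4
CHUNK = b"\0" * (1 << 20)             # 1 MiB blocks

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_GB * 1024):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())              # make sure the data actually hit the disk
secs = time.time() - start
print(f"{SIZE_GB * 1024 / secs:.0f} MiB/s sequential write")
os.remove(TARGET)

If this comes in well above 165, the bottleneck is more likely SMB or network tuning; if it lands close to 165, the single disk is simply the limit.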
